DOI: 10.1016/s0001-8708(03)00069-0 · arXiv: math/0204301 · https://arxiv.org/pdf/math/0204301v2.pdf
OPERS AND THETA FUNCTIONS
28 Nov 2002
David Ben-Zvi
Indranil Biswas
We construct natural maps (the Klein and Wirtinger maps) from moduli spaces of vector bundles on an algebraic curve X to affine spaces, as quotients of the nonabelian theta linear series. We prove a finiteness result for these maps over generalized Kummer varieties (moduli of torus bundles), leading us to conjecture that the maps are finite in general. The conjecture provides canonical explicit coordinates on the moduli space. The finiteness results give low-dimensional parametrizations of Jacobians (in P^{3g−3} for generic curves), described by 2Θ functions or second logarithmic derivatives of theta. We interpret the Klein and Wirtinger maps in terms of opers on X. Opers are generalizations of projective structures, and can be considered as differential operators, kernel functions or special bundles with connection. Matrix opers (the analogues of opers for matrix differential operators) combine the structures of flat vector bundle and projective connection, and map to opers via generalized Hitchin maps. For vector bundles off the theta divisor, the Szegö kernel gives a natural construction of a matrix oper. The Wirtinger map from bundles off the theta divisor to the affine space of opers is then defined as the determinant of the Szegö kernel. This generalizes the Wirtinger projective connections associated to theta characteristics, and the associated Klein bidifferentials.
Introduction.
Let X be a compact connected Riemann surface (or equivalently, a connected smooth projective algebraic curve over C). Let M_X(n) denote the moduli space of semistable vector bundles over X of rank n and Euler characteristic zero (hence of degree n(g−1)), and let N_X(n) ⊂ M_X(n) denote the moduli space of vector bundles with fixed determinant Ω_X^{n/2} (for a fixed theta characteristic Ω_X^{1/2} on X). Let Θ ⊂ M_X(n) denote the canonical theta divisor; its complement M_X(n) \ Θ is an affine variety, parametrizing rank n vector bundles with vanishing cohomology.
The theory of nonabelian theta functions provides an embedding of the (n^2−1)(g−1)-dimensional affine variety N_X(n) \ Θ into an affine space of dimension n^g. Specifically, by restricting the canonical theta function to the image of the Jacobian Jac_X in M_X(n) obtained by translating a given E ∈ N_X(n) \ Θ by line bundles, we obtain elements of the nΘ linear series on the Jacobian.
It is tempting to look for lower-dimensional parametrizations of N_X(n) \ Θ which come closer to giving explicit coordinates on the moduli space. Optimistically, one can hope for a natural finite map of N_X(n) \ Θ to an affine space of the same dimension. In this paper we give a construction of such a map to affine space, which we conjecture is finite, and explain its relations to theta functions, projective structures, and differential operators on the Riemann surface.
Our construction assigns special differential operators, or opers, on X to a vector bundle E with vanishing cohomology. We define a map, the Wirtinger map W, from N_X(n) \ Θ to the space of all opers, which is an affine space for the Hitchin base space of X. The dimension of the space of opers is the same as that of the moduli space (namely, (n^2−1)(g−1)). By realizing the opers as kernel functions on X × X we define the Klein map K, sending the moduli space to a (somewhat bigger) affine space of bidifferentials. Our main result establishes the finiteness of the Klein map (for all X) and the Wirtinger map (for generic X) when restricted to the moduli space of torus bundles, the generalized Kummer variety K_X(n) ⊂ N_X(n).
The case n = 2 provides new finite parametrizations of Jac_X \ Θ (factoring through the Kummer K_X(2) = Jac_X/{L ∼ L^*}) in affine spaces of dimensions g^2 and 3g−3 (that is, quadratic and linear in the genus g), improving on the parametrization given by the 2Θ linear series (which requires exponential dependence 2^g on the genus to embed the Kummer). As a side note we obtain that the collection of second logarithmic derivatives of the theta function (considered in [Mu1]) suffices to give a (generically) finite parametrization of the Jacobian, and hence of a generic abelian variety. Our proof uses the behavior of the (abelian) Szegö kernel near the theta divisor (in fact near blowups of Brill-Noether loci) to show that the maps are proper, hence finite, on the affine varieties K_X(n) \ Θ (giving finite extensions of the Gauss map of the theta divisor).
The Klein and Wirtinger maps may be defined either in terms of restrictions of theta functions, or in terms of determinants of nonabelian Szegö kernels. From the point of view of theta functions, the maps appear as certain quotients of the theta linear series, obtained by restricting the theta function first from N_X(n) to Jac_X, then to X × X (via the difference map that sends (x, y) ∈ X × X to O_X(x−y)) and further to the nth order infinitesimal neighborhood of the diagonal. The theta function thereby defines kernel functions, sections on X × X of certain sheaves of differentials. Such kernel functions, expanded near the diagonal, are naturally interpreted as differential operators acting between different line bundles on X. On the nth order infinitesimal neighborhood of the diagonal, we obtain monic differential operators with vanishing subprincipal symbol, which we interpret as SL_n-opers on X.
Opers (for a reductive group G) are special principal bundles with connection, which play a central rôle in integrable systems and the representation theory of loop algebras. They were introduced in [BD] in the context of the geometric Langlands program, providing a coordinate-free expression for the connections which appeared first in [DS] as the phase space of the generalized Korteweg-de Vries hierarchies. Opers form an affine space, modeled on the vector space which is the base of Hitchin's integrable system on the cotangent bundle of the moduli space of bundles. For G a classical group, opers are identified with certain differential operators acting between line bundles on X. In the case G = SL_2, opers are identified (after the choice of a theta characteristic Ω_X^{1/2}, which we fix) with projective connections (or projective structures) on X.
By writing opers in terms of their kernel functions, we obtain explicit constructions of opers, generalizing the constructions of projective structures from theta functions due to Klein and Wirtinger ([Ty]). This helps clarify some constructions of differential operators on Riemann surfaces with projective structure ([BR]).
Another point of view on the Klein and Wirtinger maps is given by matrix opers and the Szegö kernel. We define matrix opers by applying the oper interpretation of differential operators to matrix differential operators. Matrix opers combine the structures of connection on a vector bundle and oper in a natural way (they play the same rôle for multicomponent soliton equations that opers play for KdV). A special class of matrix opers, the extended connections (combining connections with projective structures), appears in [BS] (implicitly) and [BB] (explicitly) as twisted cotangent spaces to the universal moduli space of vector bundles on Riemann surfaces. In analogy with the Hitchin system, we may apply invariant polynomials to matrix opers and obtain (scalar) opers. For extended connections, we show the determinant map in fact gives a deformation of the quadratic Hitchin map, which appears in the theory of Virasoro-Kac-Moody algebras and isomonodromic deformation ([BF]).
To every vector bundle E off the theta divisor, there is a canonical matrix oper on E, defined by the nonabelian Szegö kernel of Fay ([Fa2, Fa3, BB]). Applying the determinant map to the Szegö kernel we recover the pullback of the theta function, and thus the Klein and Wirtinger maps. This point of view is motivated by conformal field theory, where this map arises from taking correlation functions associated to W-algebra symmetries of current algebras. We hope to describe this point of view in future work, and expect it to facilitate the precise description of the Wirtinger map and the proof of our finiteness conjecture.
Since we believe the point of view provided by opers is important in understanding the rôle of the Klein and Wirtinger maps, we describe their structure in some detail in the first section. However, we recommend that readers first jump ahead to the last two sections (which can be read largely independently), where the maps are described in elementary terms. The paper is organized as follows: in § 2 we review the description of differential operators as kernel functions, review some basics of opers, and describe matrix opers, extended connections and their analogue of the Hitchin map. In § 3 we introduce the Klein and Wirtinger maps via the Szegö kernel, and prove the finiteness theorem for Kummers, Theorem 3.1.7. Finally in § 4 we explain the relation with classical constructions with theta functions, and draw conclusions about 2Θ functions and logarithmic derivatives on Jacobians.
2. Differential operators and kernels
2.1. Notations. Let X be a compact connected Riemann surface of genus g (a connected smooth complex projective curve; unless explicitly noted, all constructions will be algebraic). Let p_i : X × X → X, i = 1, 2, be the projection to the i-th factor. The diagonal divisor on X × X will be denoted by ∆. The involution on X × X given by interchange of factors will be denoted by σ, so σ(x, y) = (y, x). Given holomorphic vector bundles V and W on X, we denote vector bundles on X × X by
V ⊠ W := p_1^* V ⊗ p_2^* W,   V ⊠ W(n∆) := p_1^* V ⊗ p_2^* W ⊗ O_{X×X}(n∆).
In particular p_1^* V = V ⊠ O and p_2^* W = O ⊠ W.
For a vector bundle V on X we denote by V^∨ = V^* ⊗ Ω_X the Serre dual vector bundle, where Ω_X is the holomorphic cotangent bundle of X. For a sheaf W, we will write Γ(W) = H^0(X, W) and h^i(W) = dim H^i(X, W).
Given a holomorphic vector bundle V over a complex manifold M, a torsor, or affine bundle, for V over M is a submersion of complex manifolds π : A → M with a simply transitive, holomorphic action of the sheaf of sections of V on the sections of A. So the map A ×_M V → A ×_M A defined by (a, v) ↦ (a, a + v) is an isomorphism. In particular, for x ∈ M the fiber A_x is an affine space over the vector space V_x.
2.2. Fix a theta characteristic Ω_X^{1/2} on X. For ν ∈ Z, let M_ν denote the line bundle (Ω_X^{ν/2} ⊠ Ω_X^{ν/2})(ν∆) on X × X, and let µ_ν denote the canonical section of M_ν near the diagonal given by adjunction; µ_d denotes the de Rham kernel. (Recall that σ : X × X → X × X is the interchange of factors.) In particular note that µ_d = µ_2 and µ_ν = (µ_1)^{⊗ν}. For a vector bundle E, denote by M_ν(E) the vector bundle
M_ν(E) = E ⊠ E^* ⊗ M_ν = (E ⊗ Ω_X^{ν/2}) ⊠ (E^* ⊗ Ω_X^{ν/2})(ν∆)
on X × X.
2.2.1. Consider the space Conn(E) of holomorphic connections on E. A connection is given, following Grothendieck, by an isomorphism between the two pullbacks p_1^* E = E ⊠ O and p_2^* E = O ⊠ E over 2∆ (the first-order infinitesimal neighborhood of the diagonal), which restricts to the identity automorphism of E on the diagonal. In other words, a connection is determined by a section of M_0(E) = E ⊠ E^* on 2∆ with "symbol" the identity map Id_E on the diagonal.
A connection on E may also be described as a first-order differential operator ∇ : E → E ⊗ Ω_X whose symbol is the identity map Id_E. Thus ∇ gives rise to a section of
M_2(E) = (E ⊗ Ω_X) ⊠ (E^* ⊗ Ω_X)(2∆)
on 2∆ with biresidue the identity. These two formulations are related by tensoring with the de Rham kernel µ_2 = µ_d trivializing M_2 on 2∆. Similarly we can identify connections with sections of M_ν(E)|_{2∆} for any ν. Note also that the difference between any two connection kernels is a section of Ω_X ⊗ End E. Thus Conn(E) is an affine space for the space H^0(X, Ω_X ⊗ End E) of endomorphism-valued one-forms, or Higgs fields, on E. Any holomorphic connection on a Riemann surface is flat (since there are no nonzero holomorphic two-forms on X). This means that the identification between nearby fibers of E can be uniquely extended to an isomorphism p_1^* E → p_2^* E to any order along ∆ (in fact to local trivializations in the analytic topology). Equivalently, there is a canonical extension from sections of E ⊠ E^*|_{2∆} which are the identity on the diagonal to sections κ_ν on ν∆ for any ν > 0, which in terms of a local flat basis of sections {e_i} with dual basis {e_i^*} is given by κ_n = Σ_i e_i ⊠ e_i^* ∈ Γ((E ⊠ E^*)|_{n∆}). In particular we obtain an isomorphism E ⊠ E^*|_{n∆} ≅ p_1^* End E|_{n∆}. In the language of kernels this map may be described as the composition
E ⊠ E^*  −(⊗ κ_n^t)→  End E ⊠ End E  −(tr_E ⊗ Id)→  O ⊠ End E.
Here κ_n^t = σ^* κ_n ∈ Γ(E^* ⊠ E|_{n∆}) is the transpose of κ_n, and tr_E is the trace divided by the rank of E.
Note that this extension is nonlinear with respect to the affine structure on Conn(E): it involves solving the differential equation defining flat sections.
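The basis-independence of the kernel κ_n = Σ_i e_i ⊠ e_i^* can be checked pointwise by a small linear-algebra computation (an illustration added here, not part of the original text): changing the flat basis by an invertible matrix S replaces the basis vectors and their dual basis compatibly, and the sum of outer products is always the identity in End E ≅ E ⊗ E^*.

```python
import numpy as np

# Illustration (a hypothetical finite-dimensional model, not from the paper):
# kappa = sum_i e_i ⊗ e_i^* is independent of the choice of basis {e_i}.
# If the new basis vectors are the columns of an invertible S, the dual
# basis vectors are the columns of S^{-T}, and the sum of outer products
# recovers the identity matrix.
rng = np.random.default_rng(0)
n = 4
S = rng.normal(size=(n, n)) + n * np.eye(n)   # a generically invertible change of basis
dual = np.linalg.inv(S).T                     # columns: dual basis of the columns of S
kappa = sum(np.outer(S[:, i], dual[:, i]) for i in range(n))
print(np.allclose(kappa, np.eye(n)))          # True
```

This is exactly the statement that κ_n restricted to the diagonal is Id_E, independently of the local flat trivialization used to write it.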
2.3. Opers and kernel functions. We would like to consider monic nth order differential operators
L = ∂_t^n − q_1 ∂_t^{n−1} − q_2 ∂_t^{n−2} − ⋯ − q_{n−1} ∂_t − q_n
on a Riemann surface X. To make this notion coordinate-independent, we take L : A → A′ to be an nth order operator between two holomorphic line bundles, whose principal symbol is an isomorphism. Since the symbol is a section of Hom(A, A′ ⊗ Ω_X^{−n}), we must have A′ ≅ A ⊗ Ω_X^{⊗n}. It is convenient to label the differential operator L not by the line bundle A but by its twist L = A ⊗ Ω_X^{(n−1)/2}:
2.3.1. Definition. A GL_n-oper on X consists of the data of a line bundle L and a monic nth order differential operator
L ∈ Γ(Diff^n(A, A ⊗ Ω_X^n)) = Γ(Diff^n(L ⊗ Ω_X^{(1−n)/2}, L ⊗ Ω_X^{(1+n)/2}))
over X, where A = L ⊗ Ω_X^{(1−n)/2}. The space of all GL_n-opers on X is denoted by Op_n, and opers for given L by Op_n(L).
2.3.2. It follows from the differential operator-kernel dictionary that GL_n-opers for given L correspond to kernel functions in M_{n+1}(L) on (n+1)∆, whose restriction to the diagonal is the constant 1 (by the trivialization defined using adjunction).
Moreover, note that restricting the kernel function to 2∆ we obtain a section of M_{n+1}(L)|_{2∆}, which by § 2.2.1 defines a connection on L. (This is the reason for labeling opers by L rather than by A.) Thus we have a morphism Op_n(L) → Conn(L). In particular, for L = O, we can look for opers which induce the trivial connection on O, so that the associated kernel on 2∆ is the de Rham kernel µ_{n+1}. In terms of differential operators, the induced connection (restriction to 2∆) is determined by the subprincipal symbol q_1. Thus we are considering differential operators L of the form
L = ∂_t^n − q_2 ∂_t^{n−2} − ⋯ − q_n.
(Conversely, the vanishing of the subprincipal symbol forces L and the associated connection to be trivial.)
2.3.3. Definition. An SL_n-oper on X (for fixed theta characteristic Ω_X^{1/2}) is a monic nth order differential operator L ∈ Γ(Diff^n(Ω_X^{(1−n)/2}, Ω_X^{(1+n)/2})) with vanishing subprincipal symbol. Equivalently, L is defined by a section of M_{n+1} on (n+1)∆, whose restriction to 2∆ agrees with µ_{n+1}. The space of SL_n-opers (for fixed Ω_X^{1/2}) is denoted by Op•_n.
2.3.4. Remark. The restriction of the bundles M_n to any neighborhood k∆ is independent of the choice of theta characteristic Ω_X^{1/2}. This follows from the fact that the ratio of two theta characteristics is a line bundle of order two, L^{⊗2} = O_X, and so carries a canonical flat connection (inducing the trivial connection on O_X), which gives rise to a trivialization of L ⊠ L^* on n∆ for any n. This may also be seen from the universal form of the transition functions defining Ω_X^{ν/2} ⊠ Ω_X^{ν/2}|_{(n+1)∆}; in fact these transition functions make sense for an arbitrary complex number ν, since the Taylor expansion of an expression (dz_1^ν ⊠ dz_2^ν)/(z_1 − z_2)^{2ν} in terms of a new coordinate w = w(z) has coefficients which are polynomials (with integer coefficients) in ν. In other words, all of these bundles are attached to natural representations of the group of formal changes of coordinates on X (see [FB, 7.2]).
Thus the spaces of opers for different choices of Ω_X^{1/2} are all isomorphic. Alternatively, one can define PSL_n-opers and then identify SL_n-opers with pairs consisting of a PSL_n-oper and a theta characteristic ([BD]).
2.3.5. Example. On P^1 there is a unique SL_n-oper for every n (here Ω_{P^1}^{1/2} = O_{P^1}(−1) is the unique theta characteristic). It is defined by the kernel function
(2.3.1)   γ_{n+1} = (dz_1^{(n+1)/2} ⊠ dz_2^{(n+1)/2})/(z_1 − z_2)^{n+1}
on (n+1)∆, where z is the natural coordinate function on C ⊂ C ∪ {∞} = P^1. This γ_{n+1} is a holomorphic section over P^1 × P^1, and it is invariant under the diagonal action of PSL_2 on P^1 × P^1.
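The PSL_2-invariance can be verified symbolically in the simplest case, the de Rham kernel dz_1 ⊠ dz_2/(z_1 − z_2)^2 (a check added here for illustration; the kernels γ_{n+1} are powers of the analogous half-form kernel, so the same cancellation of factors cz_i + d applies):

```python
import sympy as sp

# Check (illustrative, not from the paper): the kernel
#   dz1 dz2 / (z1 - z2)^2
# on P^1 is invariant under a Mobius map w = (a z + b)/(c z + d).
z1, z2, a, b, c, d = sp.symbols('z1 z2 a b c d')

def mobius(z):
    return (a * z + b) / (c * z + d)

w1, w2 = mobius(z1), mobius(z2)
# dw_i = w'(z_i) dz_i, so the ratio of the transformed kernel to the
# original one should simplify to 1: the factors (c z_i + d) and the
# determinant a d - b c all cancel.
ratio = sp.diff(w1, z1) * sp.diff(w2, z2) * (z1 - z2)**2 / (w1 - w2)**2
print(sp.simplify(ratio))  # 1
```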
2.3.6. Lemma. There is a canonical isomorphism Op_n(L) = Conn(L) × Op•_n.
2.3.7. Proof. An oper L ∈ Op_n(L) defines a connection on L as above. Solving the connection defines a trivialization κ_{n+1} of L ⊠ L^* on (n+1)∆. This trivialization gives an isomorphism M_{n+1}(L) ≅ M_{n+1}, which sends L to an SL_n-oper L′. The kernel of L′ is explicitly given by κ_{n+1}^{−1} times the kernel of L, from which it is obvious that the restriction to 2∆ is indeed µ_{n+1}.
2.3.8. Projective Structures. A projective structure on a Riemann surface X (see [Gu], [De]) is an equivalence class of atlases {U_α, ϕ_α}_{α∈I} on X, where ϕ_α is a holomorphic embedding of the open set U_α in P^1, so that the transition maps ϕ_β ∘ ϕ_α^{−1} are Möbius (or fractional linear) transformations (elements of PSL_2 C). The space of projective structures will be denoted Proj. A projective structure on X allows us to pull back any PSL_2-invariant construction from P^1 to X. In particular, we may pull back the SL_n-opers on P^1 (and their kernel functions γ_{n+1}) to define SL_n-opers for every n, or equivalently monic differential operators D_n with vanishing subprincipal symbol. The symbol of each such operator is the constant function 1. The operator D_0 is the identity automorphism of Ω_X^{1/2}. The operator D_1 is the exterior derivative d : O_X → Ω_X. The operator D_2 ∈ Γ(Diff^2(Ω_X^{−1/2}, Ω_X^{3/2})) over X is the Sturm-Liouville operator, or projective connection, associated with a projective structure. Thus D_2 is the differential operator which in projective local coordinates has the form ∂_t^2. The projective structure can be recovered from the associated projective connection, setting up a bijection Proj ≅ Op•_2: the projective atlases are defined by the ratios of any two local linearly independent solutions of the Sturm-Liouville operator D_2.
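The classical computation behind this bijection can be spelled out in a local coordinate (added here for the reader's convenience; the sign depends on the convention D_2 = ∂_t^2 − q):

```latex
% Write the projective connection locally as D_2 = \partial_t^2 - q, and let
% y_1, y_2 be linearly independent local solutions of D_2 y = 0, with constant
% Wronskian W = y_1' y_2 - y_1 y_2'. For the ratio f = y_1 / y_2 one computes
f' = \frac{W}{y_2^{\,2}}, \qquad \frac{f''}{f'} = -2\,\frac{y_2'}{y_2},
% and hence, using y_2'' = q\, y_2,
\{f, t\} \;:=\; \frac{f'''}{f'} - \frac{3}{2}\Bigl(\frac{f''}{f'}\Bigr)^{2} \;=\; -2q .
% The Schwarzian derivative \{f, t\} vanishes exactly on Mobius maps, so the
% local ratios f are well defined up to PSL_2(C) and glue to a projective atlas.
```

Conversely, the Schwarzian of the charts of a projective atlas recovers the coefficient q, so the two directions are inverse to each other.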
2.4. Opers as connections.
Opers have an interpretation in terms of vector bundles with connection, which also enables the generalization from GL_n to an arbitrary reductive group. This observation and its current formulation are due to Drinfeld-Sokolov [DS] and Beilinson-Drinfeld [BD], respectively. Recall that the study of the differential operator
L = ∂_t^n − q_1 ∂_t^{n−1} − q_2 ∂_t^{n−2} − ⋯ − q_n
is equivalent to that of the system of n first-order equations which can be written in terms of the first-order matrix operator
∂_t −
[ q_1  q_2  q_3  ⋯  q_n
  1    0    0    ⋯  0
  0    1    0    ⋯  0
  ⋮              ⋱  ⋮
  0    0    ⋯    1  0 ] .
Now suppose L is a GL_n-oper on X for the line bundle A. It is not hard to see that the above first-order systems patch together to define a connection ∇ : F → F ⊗ Ω_X on a rank n vector bundle F, which carries a filtration 0 = F_0 ⊂ F_1 ⊂ ⋯ ⊂ F_n = F, with F_1 ≅ A.
The key features of the above matrix system are the appearance of zeros beneath the subdiagonal (Griffiths transversality) and 1s on the subdiagonal (nondegeneracy). Locally, the bundle and flag (F, F_•) admit a unique trivialization so that the connection has the above form. Moreover, for the connection to be an SL_n-connection (so that the determinant line bundle and its connection are trivial) the subprincipal symbol q_1 must vanish, so that we obtain SL_n-opers.
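For constant coefficients the passage from L to the first-order system can be tested numerically (an illustration with made-up values, not from the paper): the companion matrix above has characteristic polynomial t^n − q_1 t^{n−1} − ⋯ − q_n, so exponential solutions of L y = 0 correspond to eigenvalues of the matrix system.

```python
import numpy as np

# Illustration (constant coefficients, n = 3, arbitrary values): the
# companion matrix of L = d^3 - q1 d^2 - q2 d - q3 has characteristic
# polynomial t^3 - q1 t^2 - q2 t - q3, so the scalar operator and the
# first-order system ∂_t - M have the same solution space.
q1, q2, q3 = 2.0, -1.0, 3.0
M = np.array([[q1,  q2,  q3],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

# np.poly(M) returns the coefficients of the monic characteristic
# polynomial of the matrix M, here (1, -q1, -q2, -q3).
print(np.round(np.poly(M), 10))
```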
2.4.1. Proposition. ([BD]) GL_n-opers on X correspond canonically to the data of a rank n vector bundle F, equipped with a flag 0 ⊂ F_1 ⊂ ⋯ ⊂ F_{n−1} ⊂ F_n = F, and a connection ∇, satisfying
• ∇(F_i) ⊂ F_{i+1} ⊗ Ω_X;
• the induced maps F_i/F_{i−1} → (F_{i+1}/F_i) ⊗ Ω_X are isomorphisms for all i.
SL_n-opers are GL_n-opers for which the determinant line bundle of the flat vector bundle (F, ∇) is trivial.
2.4.2. In fact, the transversality condition on the connection is sufficiently rigid to force the underlying vector bundle F to be the (n−1)st jet bundle F ≅ J^{n−1}(F/F_{n−1}), with its canonical filtration. Note that from the connection point of view, the extension of projective connections to nth order operators is simply the operation of inducing an SL_n-oper from an SL_2-oper by taking the associated bundle for the (n−1)st symmetric power representation of SL_2 into SL_n.
2.5. The Hitchin base. An important non-obvious feature of opers (for fixed L) is that they form an affine space over the Hitchin base space of X ([Hi]),
Hitch_n(X) = Γ(Ω_X) ⊕ Γ(Ω_X^2) ⊕ ⋯ ⊕ Γ(Ω_X^n).
This generalizes the statement that projective structures form an affine space over the quadratic differentials Γ(Ω_X^2).
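The affine-space statement is consistent with a direct dimension count (added here for illustration; by Riemann-Roch on a curve of genus g ≥ 2 one has h^0(Ω_X^i) = (2i−1)(g−1) for i ≥ 2, while h^0(Ω_X) = g):

```latex
\dim \operatorname{Hitch}_n(X)
  \;=\; g + \sum_{i=2}^{n} (2i-1)(g-1)
  \;=\; g + (n^2 - 1)(g-1),
```

which agrees with the dimensions of Op_n and Op•_n recorded in Remark 2.5.3.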
2.5.1. Proposition. ([BD]) There is a canonical isomorphism Conn(L) × Proj × Hitch_n^{>2} → Op_n(L).
2.5.2. Remarks on proof. Geometrically, the proposition is an expression of the fact that the tangent bundle to P^{n−1} restricted to the rational normal curve splits canonically (i.e., PSL_2-equivariantly) into a sum of line bundles. Namely, the stabilizer in PSL_2 of a point on the rational normal curve (which is isomorphic to the upper triangular matrices B_0) acts on the tangent space at that point through its C^×-quotient. In fact, since an SL_n-oper naturally gives rise to a projective structure (on restriction to 3∆), the proposition reduces to this fact, since any infinitesimal (or complex-local) statement on P^1 which is PSL_2-equivariant generalizes to any Riemann surface with projective structure. In particular, given an oper F induced from an SL_2-oper, one identifies a subspace V ≅ ⊕_{i=1}^{n} Γ(Ω_X^i) inside the Higgs fields Γ(End F ⊗ Ω_X). Addition of sections from V acts transitively on oper connections on F; in particular the action of Γ(Ω_X) changes the connection, while that of Γ(Ω_X^2) changes the projective structure.
2.5.3. Remark. It follows from the proposition that the dimensions of Op_n and Op•_n on a compact Riemann surface X of genus g are (g−1)(n^2−1) + g and (g−1)(n^2−1), respectively.
2.6. Shifted opers. The projection from Op•_n to Proj may be described conveniently using kernels. Given an SL_n-oper with kernel s ∈ Γ(M_{n+1}|_{(n+1)∆}), its restriction to 3∆ defines an element of the space
Proj(k) = {s ∈ Γ(M_k|_{3∆}) | s|_{2∆} = µ_k}
with k = n+1.
These "shifted" projective kernels are however naturally identified with projective structures. Note that the difference between any two sections of Proj(k) vanishes on 2∆, and so may be identified with a section of M_k(−2∆)|_∆ ≅ Ω_X^{⊗2}, that is, a quadratic differential on X. It follows that Proj(k) is a torsor for the quadratic differentials Γ(Ω_X^{⊗2}) on X. Recall that we may rescale the torsor structure on a fixed affine bundle by any scalar λ ∈ C^×, keeping the manifold π : A → M the same but making v ∈ V act by λ · v.
2.6.1. Lemma. The spaces Proj(k) for k ≠ 0 are all isomorphic (with rescaled torsor structure over quadratic differentials).
2.6.2. Proof. The k-th power map ρ ↦ ρ^{⊗k}|_{3∆} identifies the sheaves Proj(1) and Proj(k). On affine structures, this has the effect of rescaling by k:
(ρ + q)^{⊗k}|_{3∆} = (ρ^{⊗k} + k · q)|_{3∆}
(since all higher terms vanish on 3∆). Note that multiplication by k sends Proj(1) isomorphically to sections of M_1|_{3∆} whose restriction to 2∆ is kµ_1, and has the same effect on torsor structures.
2.6.3. It follows that the restriction of an SL_n-oper to 3∆ may be naturally identified with a projective structure on X. (For GL_n-opers, we must twist by the connection on L given by the restriction to 2∆.)
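The identity used in the proof is just the binomial expansion, keeping track of vanishing orders (spelled out here for convenience; q vanishes on 2∆, so any term containing two or more factors of q vanishes on 3∆):

```latex
(\rho + q)^{\otimes k}
  \;=\; \rho^{\otimes k}
  \;+\; k\,\rho^{\otimes (k-1)} \otimes q
  \;+\; \binom{k}{2}\,\rho^{\otimes (k-2)} \otimes q^{\otimes 2}
  \;+\; \cdots ,
% Restricted to 3\Delta only the first two terms survive; identifying
% \rho^{\otimes(k-1)} \otimes q with q via the trivialization by \rho gives
% (\rho + q)^{\otimes k}|_{3\Delta} = (\rho^{\otimes k} + k \cdot q)|_{3\Delta}.
```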
One naturally encounters other kernel realizations of the spaces of opers:
2.6.4. Definition. The space of k-shifted opers on L is defined to be the space
Op_n(L)(k) = {s ∈ Γ(M_k(L)|_{(n+1)∆}) | s|_∆ = 1}.
2.6.5. (Note that for k > n shifted opers form a quotient of Op•_{k−1} by differential operators of order k − n − 2.) The restriction of a shifted oper to 3∆ defines an element of Proj(k), hence a projective structure on X. It follows that we may identify all shifted opers canonically with honest opers. Explicitly, this is done by tensoring with the kernels γ_{n+1−k} obtained from the projective structure by (2.3.1), which trivialize M_{n+1−k} to any order near the diagonal.
2.7. Projective kernels and projective connections. Many projective connections in Riemann surface theory arise naturally from projective kernels, or global bidifferentials of the second kind, with biresidue one:
2.7.1. Definition.
(1) An SL_n-oper kernel on X is a global section s ∈ H^0(X × X, M_n) with s|_{2∆} = µ_n. The space of oper kernels is denoted by Kern_n.
(2) A projective kernel on X is a symmetric SL_2-oper kernel; in other words, a bidifferential ω ∈ H^0(X × X, Ω_X ⊠ Ω_X(2∆))^{sym} with biresidue one on X × X. The space of projective kernels is denoted by Kern_2^{sym} ⊂ Kern_2. By H^0(X × X, Ω_X ⊠ Ω_X(2∆))^{sym} we mean sections invariant under the involution σ.
2.7.2. The difference between two projective kernels is a holomorphic symmetric bidifferential on X, so that Kern_2^{sym} is an affine space for Sym^2 H^0(X, Ω_X), of dimension g^2. Restriction to 3∆ defines a map Kern_2^{sym} → Proj(2), which is surjective for X non-hyperelliptic.
A key rôle of the spaces Proj and Kern_2^{sym} is in relation to moduli spaces. Namely, H^0(X, Ω_X^{⊗2}) is the cotangent space to the moduli space of curves (or Teichmüller space) at X, and Sym^2 H^0(X, Ω_X) is the cotangent space to the moduli of abelian varieties (or Siegel upper half space) at the Jacobian Jac_X of X. The spaces Proj and Kern_2^{sym} are naturally identified as the fibers, at X and Jac_X respectively, of the spaces of connections on natural (Hodge or theta) line bundles on the respective moduli spaces (in other words, the fibers of appropriate twisted cotangent bundles). (See [Ty, BB].)
An important example of a projective kernel is the Bergman kernel ω_B. Let ω_i (i = 1, …, g) be the normalized basis of holomorphic differentials on X, with respect to a normalized homology basis A_i, B_j, and ∂/∂z_i the dual basis of vector fields on the Jacobian. The Bergman kernel is characterized by having vanishing A-periods, and the forms ω_i as its B-periods.
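For concreteness (standard facts about the Bergman kernel, e.g. from Fay's book, added here rather than stated in the original text): in a local coordinate z it has biresidue one on the diagonal, and the period normalization can be written as

```latex
\omega_B(z_1, z_2) \;=\; \Bigl( \frac{1}{(z_1 - z_2)^2} + O(1) \Bigr)\, dz_1\, dz_2 ,
\qquad
\oint_{A_i} \omega_B(\,\cdot\,, y) = 0 , \qquad
\oint_{B_j} \omega_B(\,\cdot\,, y) = 2\pi i\, \omega_j(y)
% (the 2\pi i factor depends on the chosen normalization convention).
```

In particular ω_B is a projective kernel in the sense of Definition 2.7.1: symmetric, holomorphic off the diagonal, with biresidue one.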
2.8. Matrix opers. In this section we describe a matrix version of opers. Thus we consider nth order differential operators with matrix coefficients,
L = ∂_t^n − q_1 ∂_t^{n−1} − q_2 ∂_t^{n−2} − ⋯ − q_n
where the q_i are now k by k matrices, acting on C^k. Let L : E_1 → E_2 be an nth order differential operator between vector bundles E_1, E_2 on X, whose symbol is an isomorphism, so that E_2 ≅ E_1 ⊗ Ω_X^{⊗n}.
2.8.1. Definition. An nth order matrix oper on E is an nth order differential operator L ∈ Γ(Diff^n(E ⊗ Ω_X^{(1−n)/2}, E ⊗ Ω_X^{(1+n)/2})) over X with principal symbol the identity Id_E. The space of matrix opers on E is denoted by MOp_n(E).
2.8.2. We may follow the same procedure as in § 2.4 to describe nth order matrix opers L by first-order matrix systems, now of rank nk. The resulting connections were called coupled connections in [Bi]. We have the following statement (see [Bi] for more details):
2.8.3. Proposition. Let E be a vector bundle. There is a natural identification between matrix opers L : E → E ⊗ Ω_X^{⊗n} of order n, and vector bundles F equipped with a filtration 0 = F_0 ⊂ F_1 ⊂ ⋯ ⊂ F_n = F, with F_1 ≅ E ⊗ Ω_X^{(1−n)/2}, and a connection ∇ : F → F ⊗ Ω_X satisfying the two conditions
• ∇ : F_ν → F_{ν+1} ⊗ Ω_X (Griffiths transversality), and
• the homomorphism F_ν/F_{ν−1} → F_{ν+1}/F_ν ⊗ Ω_X induced by ∇ is an isomorphism for all ν.
2.8.4. Proof. Recall that an nth order operator L ∈ Γ(Diff^n(E_1, E_1 ⊗ Ω_X^n)) over X is a homomorphism from J^n(E_1) to E_1 ⊗ Ω_X^n. This is equivalent to a splitting of the jet sequence 0 → E_1 ⊗ Ω_X^n → J^n(E_1) → J^{n−1}(E_1) → 0, and thus to a lift of J^{n−1}(E_1) to J^n(E_1). However, there is a natural homomorphism J^n(E_1) → J^1(J^{n−1}(E_1)) for any bundle. Thus we have constructed a lifting from J^{n−1}(E_1) to its sheaf of 1-jets, in other words a connection on J^{n−1}(E_1). The strict Griffiths transversality with respect to the natural filtration on J^{n−1}(E_1) follows automatically.
In the reverse direction, given a filtered vector bundle F and a connection ∇ on F as above, consider the homomorphism
ψ_k : F → J^k(F) → J^k(F/F_{n−1})
where the first arrow is the flat extension map given by the connection and the second is the projection. The transversality condition ensures that ψ_{n−1} is an isomorphism. Therefore, ψ_n ∘ ψ_{n−1}^{−1} : J^{n−1}(F/F_{n−1}) → J^n(F/F_{n−1}) gives a splitting of the jet sequence as above, in other words, a differential operator as desired.
2.8.5. Developing maps. A geometric description of coupled connections ∇ as in Proposition 2.8.3, generalizing the description of projective structures via period maps, is given in [Bi]. Namely, consider the Grassmannian bundle G_k(F) of k-dimensional subspaces of F. This inherits a connection from ∇ and a section from F_1, which is nowhere flat. It follows that on simply connected open sets (or on the universal cover of X) we obtain period maps to G_k(C^{nk}), using the connection to trivialize G_k(F) and the section to map. These period maps satisfy natural nondegeneracy conditions. Conversely, such nondegenerate period maps with transitions coming from the action of GL_{nk} on G_k give rise to coupled connections.
2.8.6. Decomposition of matrix opers. Matrix opers of order n on E correspond to kernel functions in M_{n+1}(E)|_{(n+1)∆}, that is, sections of M_{n+1}(E) over (n+1)∆ whose restriction to the diagonal is
Id_E ∈ End E ≅ M_{n+1}(E)|_∆.
For example, if E = L is a line bundle, then matrix opers and GL n -opers for L are the same. It follows that by restriction to 2∆, a matrix oper defines a section of M n+1 (E)| 2∆ with residue Id E and thus a flat connection on E ( § 2.2.1). So there is a canonical projection MOp n (E) → Conn(E).
The induced connection on E allows us to identify M n+1 (E) with M n+1 ⊗ p * 1 End E to any order near ∆. Thus, if p ∈ C[gl n ] GL n is an invariant polynomial on matrices (i.e., a coefficient of the characteristic polynomial) we obtain a map p * : MOp n (E) −→ Op • n by applying p to End E and identifying the resulting shifted oper with an oper.
Together with Proposition 2.5.1, this gives a very simple description of matrix opers. Let
Hitch >1 n (E) • = n i=2 Γ(End • E ⊗ Ω ⊗i X ),
the space of traceless End E-valued polydifferentials.
2.8.7. Proposition. There is a canonical isomorphism
Conn(E) × Op • n × Hitch >1 n (E) • −→ MOp n (E) .
2.8.8. Proof. We describe the isomorphism in the languages of kernels and of coupled connections. We define the map Conn(E) × Op • n → MOp n (E) using the tensor decomposition of M n (E) by taking the tensor product of sections. It follows that the decomposition (Proposition 2.5.1) of sections of M n+1 gives rise to a direct sum decomposition of sections of this tensor product. We identify Op • n with the scalar endomorphisms, thereby obtaining the proposition. The projection back to SL n -opers is given by the induced map tr * for p(A) = tr(A)/ rk(E) above.
Viewing an oper through the corresponding flat bundle (F, F • , ∇), where F • is a filtration of subbundles of F , we may take the tensor product of vector bundles E ⊗ F , with its induced filtration and connection. The result is a coupled connection, which we consider as a matrix oper on E. Inside the space of Higgs fields on E ⊗ F compatible with the filtration we find the tensor product of End E with the space V of Higgs fields from Proposition 2.5.1, so we can modify the coupled connection by End E-valued polydifferentials. Again one checks this gives a bijective parametrization of coupled connections.
To see the compatibility of the constructions, note that a vector bundle with connection is canonically trivialized to any order near a point x ∈ X, up to a constant matrix (the change of trivialization of its fiber at x). Hence the compatibility reduces to the (equivariant) compatibility in the case of the trivial bundle, which is obvious.
2.8.9. The determinant. The determinant map for matrix opers may also be described directly, without solving the associated connection. Let s be a section of (E 1 ⊠ E 2 ) ⊗ L, where E 1 , E 2 are vector bundles on X of the same rank k and L is a line bundle on X×X. Then we may define the determinant section det s = ∧ k s of (det E 1 ⊠det E 2 )⊗L k (e.g. consider s as a homomorphism from p * 1 E 1 to p * 2 E 2 ⊗ L of rank k vector bundles and take its determinant).
If s ∈ E ⊠ E * | 2∆ is a connection on E, then det s ∈ Γ(det E ⊠ det E * | 2∆ ) is the determinant connection on det E. More generally, the determinant defines a canonical map

det : MOp n (E) −→ Op n (det E).

Namely, the determinant of s ∈ Γ(M n (E)| k∆ ) defines a section det s ∈ Γ(M n rk E (det E)| k∆ ) which is the identity on the diagonal, i.e., a shifted oper, and which we identify with an (unshifted) oper as in § 2.6. There is a commutative diagram
MOp n (E)    −→  Conn(E) × Γ(M n ⊗ p * 1 End E)
    ↓ det                 ↓ det
Op n (det E) −→  Conn(det E) × Op • n
where the horizontal arrows are given by trivializing E, det E using the connection, and the vertical arrows are the determinant maps on kernels and on endomorphisms. This identifies the determinant map for matrix opers above with the determinant of the associated kernel.
2.9. Extended Connections. The splitting in Proposition 2.8.7 picks out a particularly interesting subspace Conn(E) × Proj of matrix opers on E, the extended connections on E. In fact extended connections most naturally appear as a quotient of matrix opers. Their rôle is as an affine space for the cotangent space of the moduli of the pair (X, E). As such they do not split as a product: the splitting in Proposition 2.8.7 is nonlinear (since it involves solving the connection to some order), and is in fact a deformation of the quadratic part of the Hitchin map.

2.9.1. Definition. The space ExConn n+1 (E) of extended connections on E is the space of monic sections of the quotient of M n+1 (E)| 3∆ by the subsheaf of sections vanishing on 2∆ and with vanishing trace on 3∆.

2.9.2. Here trace refers to the composition

M n+1 (E)(−2∆) −→ Ω ⊗2 X ⊗ End E −→ Ω 2 X ,

and monic sections are sections restricting to Id E on the diagonal. Thus we have modified M n+1 (E)| 3∆ by forgetting all but the trace of the lowest-order term.
It follows that restriction to 2∆ makes ExConn n+1 (E) an affine bundle for quadratic differentials H 0 (X, Ω 2 X ) over Conn(E). Consider the space of extended Higgs fields
ExHiggs(E) = {s ∈ Γ(M n+1 (E)| 2∆ ) | s| ∆ = 0}/Γ(M n+1 (E) • | 2∆ ) .
Note that this space ExHiggs(E) is independent of n + 1 since M n+1 | 2∆ is canonically trivialized. The space of extended connections is clearly a torsor over extended Higgs fields. The importance of the latter is as the cotangent space at (X, E) to the moduli of pairs of Riemann surfaces and vector bundles. They form an extension
0 −→ H 0 (X, Ω 2 X ) −→ ExHiggs(E) −→ H 0 (X, End E ⊗ Ω X ) −→ 0
of Higgs fields on E by quadratic differentials. It is proven in [BB] that the torsors ExConn n+1 (E) over ExHiggs(E) for varying X, E form a twisted cotangent bundle over the moduli space: in particular there is an isomorphism
ExConn 1 ∼ = Conn(Θ)
with the affine bundle of connections on the theta line bundle over the moduli space.
2.9.3. Lemma. For every n ∈ Z, the map
Conn(E) × Proj n+1 −→ Γ(M n+1 (E)| 3∆ ) −→ Γ(M n+1 | 3∆ /M n+1 (E) • )
defines an isomorphism Conn × Proj → ExConn n+1 (E) and thereby lifts the latter to MOp 2 (E)(n + 1) (and hence MOp n for every n).
2.9.4. The deformed quadratic Hitchin map. The projection ExConn n+1 (E) −→ Proj of extended connections back to projective structures may be described in several ways. Following § 2.8.6, it is given by sending M n+1 (E)| 3∆ → M n+1 | 3∆ ⊗ End E via the connection and thence to Proj(n + 1) via the trace of sections. Alternatively, it can be deduced from the determinant map
det : M n+1 (E)| 3∆ −→ M k(n+1) (det E)| 3∆ .
We identify the resulting GL 2 -oper with an element of Proj(k(n+1)) by tensoring with the connection of det E as in Lemma 2.3.6, and thence with an element of Proj(n + 1) by Lemma 2.6.1. (This agrees with the trace map since we are restricting to 3∆, thereby keeping only the leading term of the determinant.)
Another description of the projection is given by taking the trace of the square of the kernel. More precisely, for s ∈ Γ(M n+1 (E)| 3∆ ), its transpose s t = σ * s ∈ M n+1 (E * )| 3∆ , so that the tensor product lives in
s ⊗ s t ∈ (End E ⊠ End E) ⊗ M 2(n+1)
over 3∆. We apply trace to both factors, obtaining
S(s) = tr E ⊠ tr E (s ⊗ s t ) ∈ Γ(M 2(n+1) | 3∆ )
which is monic if s is. To compare this with the other constructions, suppose ρ ∈ M n+1 | 3∆ is a projective structure, ∇ is a connection and κ ∈ E ⊠ E * | 3∆ is the corresponding kernel function giving the isomorphism p * 2 E → p * 1 E. Note that
(Id ⊗ tr E )(κ ⊗ κ t ) = Id E ⊠1 ∈ End E ⊠ O ,
simply expressing the fact that κ t is the flat kernel for the inverse map p * 1 E → p * 2 E. It follows that S(ρ ⊗ κ) = ρ, so that S is indeed the projection back on projective structures.
This description of the determinant map for extended connections presents it as a deformation of the quadratic Hitchin map. Namely let
ExConn λ n+1 (E) = {s ∈ Γ(M n+1 (E)| 3∆ ) | s| ∆ = λ Id}/Γ(M n+1 (E) • | 3∆ )
be the family deforming extended connections to extended Higgs fields.
2.9.5. Proposition. The determinant map ExConn n+1 (E) → Proj(n + 1) deforms to a map ExConn λ n+1 (E) −→ Proj(λ(n + 1)) (for λ ∈ C), which for λ = 0 factors through the quadratic Hitchin map
ExHiggs(E) −→ Γ(Ω X ⊗ End E) −→ Γ(Ω 2 X ) = Proj(0) , sending η ∈ Γ(Ω X ⊗ End E) to tr E (η 2 ).
2.9.6. Proof. If s| ∆ = λ Id E then S(s)| 2∆ = λ 2 µ 2(n+1) (by symmetry with respect to transposition of factors). For λ = 0 the space of such kernels is isomorphic (by rescaling and taking square-root, Lemma 2.6.1) with projective structures. In fact the resulting map ExConn λ n+1 (E) → Proj(λ(n + 1)) is a morphism of torsors for quadratic differentials (the square root Proj(2λ(n + 1)) → Proj(λ(n + 1)) compensates for the quadratic expression s ⊗ s t .) This map clearly descends to ExConn λ n+1 (E). On the other hand, for λ = 0, we obtain a quadratic differential, realized as a section of M n+1 | 3∆ vanishing on 2∆. This quadratic differential depends only on the Higgs field η underlying the extended Higgs field s, and equals tr E (η 2 ) (the first trace squares η by contracting indices, while the other trace takes trace of the resulting matrix).
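For instance, in rank 2 the quadratic expression tr E (η 2 ) recovers the determinant component of the Hitchin map up to a factor; a routine check (not spelled out in the text, stated for a traceless Higgs field η):

```latex
% \eta traceless of rank 2, with eigenvalues \lambda and -\lambda:
\operatorname{tr}_E(\eta^{2}) \;=\; \lambda^{2} + (-\lambda)^{2} \;=\; 2\lambda^{2},
\qquad
\det \eta \;=\; \lambda\cdot(-\lambda) \;=\; -\lambda^{2},
```

so tr E (η 2 ) = −2 det η, identifying the λ = 0 fiber of the deformation with the usual quadratic Hitchin map up to the factor −2.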
3. The Klein and Wirtinger maps.
Let M X (n) denote the moduli space of semistable vector bundles over X of rank n and Euler characteristic 0. It is known that M X (n) is an irreducible normal projective variety of dimension (g − 1)(n 2 − 1) + g. In particular, M X (1) = Pic g−1 X , the moduli of degree g − 1 line bundles. Let M X (n) 0 denote the moduli space of semistable vector bundles of rank n and degree 0. The chosen theta characteristic Ω 1/2 X gives an isomorphism

M X (n) −→ M X (n) 0 , E −→ E 0 = E ⊗ Ω −1/2 X

(since tensoring by a line bundle preserves semistability). The determinant map E → det E sends M X (n) to Pic n(g−1) X . We may identify a closed subvariety

N X (n) = det −1 ({Ω n/2 X }) ⊂ M X (n)

which is isomorphic, via E → E 0 = E ⊗ Ω −1/2 X , to the moduli of semistable SL n -bundles. The dimension of N X (n) is (g − 1)(n 2 − 1).
The subvariety

Θ := {V ∈ M X (n) | H 0 (X, V ) ≠ 0}

is a (reduced) divisor, the generalized theta divisor, which gives the ample generator of the Picard group Pic(N X (n)) [DN]. Note that for any E in M X (n) we have h 0 (E) = h 1 (E). The condition h 0 (E) = h 1 (E) = 0 also guarantees that E is semistable. Indeed, if a subbundle F of E contradicted semistability, that is, if µ(F ) > µ(E) = g − 1, then by Riemann-Roch χ(F ) = deg F − rk F (g − 1) > 0, so h 0 (F ) > 0, contradicting the condition h 0 (E) = 0. The smooth locus of the theta divisor Θ is precisely the subvariety Θ • of vector bundles E with h 0 (E) = h 1 (E) = 1.

Let K X (n) ⊂ N X (n) denote the subvariety consisting of vector bundles which are isomorphic to a direct sum of line bundles. Thus for n = 2, K X (2) consists of vector bundles of the form L ⊕ L ∨ ∼= L ∨ ⊕ L, so that K X (2) is isomorphic to the Kummer variety K X (2) = Pic g−1 X /{L ∼ L ∨ }.

3.1. The Szegö kernel. For E ∈ M X (n), with E 0 = E ⊗ Ω −1/2 X ∈ M X (n) 0 , denote by M(E) the sheaf M(E) = M 1 (E 0 ) ∼= E ⊠ E ∨ (∆). (By Remark 2.3.4, M(E)| n∆ is independent of Ω 1/2 X .) Let M(E) • denote the subsheaf M(E) • = {s ∈ M(E) : s| ∆ = λ Id E (λ ∈ C)}.
When E ∈ M X (n) \ Θ, there is a canonical kernel function associated to E, the nonabelian Szegö kernel of Fay [Fa2,Fa3] (see also [BB]). In particular we will use the following characterization of the Szegö kernel:
3.1.1. Proposition. ([BB]) (1) If h 0 (E) = h 1 (E) = 0, then H 0 (X × X, M(E) • ) = C · s E , where s E , the Szegö kernel of E, is the unique section with s E | ∆ = Id E . (2) Otherwise, the inclusion H 0 (X, E) ⊗ H 0 (X, E ∨ ) ∼ = H 0 (X × X, E ⊠ E ∨ ) ֒→ H 0 (X × X, M(E) • )
is an isomorphism. In other words, all global sections of M(E) • vanish on ∆.
3.1.2. Thus s E | k∆ ∈ MOp k (E 0 )(1) is a canonical (shifted) matrix oper on E 0 ( § 2.8). The proposition follows from Serre duality and the long exact sequence of cohomologies of E ⊠ E ∨ with poles along the diagonal.
We would like to apply the determinant map to the Szegö kernel:
det s E ∈ Γ(M n (det E 0 )) .
Restricting to k∆ defines a (shifted) GL k -oper for the line bundle det E 0 . (We will identify shifted opers with opers, using § 2.6.)
3.1.3. Definition.
(1) The Wirtinger oper associated to a bundle
E ∈ M X (n) \ Θ is the GL n -oper det s E | (n+1)∆ ∈ Γ(M n (det E 0 )| (n+1)∆ ). The resulting map W : M X (n) \ Θ −→ Op n W : N X (n) \ Θ −→ Op • n
is the Wirtinger map (of rank n).
(2) The Klein oper kernel associated to a bundle E ∈ M X (n) \ Θ is the kernel det s E ∈ H 0 (X × X, M n (det E)). The resulting map K : N X (n) \ Θ −→ Kern n is the Klein map (of rank n).
3.1.4. Note that the dimensions of M X (n) and Op n agree, as do those of N X (n) and Op • n . Thus if we knew W to be a finite map, it would give a canonical system of étale coordinates on an open subvariety of the moduli space. This leads us to conjecture:

3.1.5. Conjecture.
(1) The Klein map is finite onto its image for all X.
(2) The Wirtinger map is finite for generic X.
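To make the dimension agreement in 3.1.4 explicit, here is a sketch of the count (not spelled out in the text; assume g ≥ 2 and use Riemann-Roch on X):

```latex
\dim \mathrm{Op}^{\bullet}_{n}
  = \sum_{i=2}^{n} \dim H^{0}(X,\Omega_{X}^{\otimes i})
  = \sum_{i=2}^{n} (2i-1)(g-1)
  = (n^{2}-1)(g-1)
  = \dim \mathcal{N}_{X}(n),
```

and adding the g abelian directions on both sides (connections on the determinant line bundle on one side, the Jacobian direction of M X (n) on the other) gives dim Op n = (n 2 − 1)(g − 1) + g = dim M X (n).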
3.1.6. We will prove the conjecture in the case of torus bundles, i.e., along K X (n) ⊂ N X (n). We first describe the Szegö kernel and its determinant for torus bundles. Suppose E ∼= L 1 ⊕ L 2 ⊕ · · · ⊕ L n . Then E ∈ M X (n) \ Θ if and only if each L i ∈ Pic g−1 X \ Θ. Moreover in this case s E = s L 1 ⊕ · · · ⊕ s L n , and det s E = s L 1 · · · s L n . If E ∈ N X (n) then we have in addition L 1 ⊗ · · · ⊗ L n ∼= Ω n/2 X . For example, if n = 2, E = L ⊕ L ∨ and det s E = s L s L ∨ = s L s t L . Recall the Petri map

H 0 (X, L) ⊗ H 0 (X, L ∨ ) −→ H 0 (X, Ω)

obtained by tensoring sections [ACGH, p. 127]. Under the Künneth isomorphism
obtained by tensoring of sections [ACGH,p. 127]. Under the Künneth isomorphism
H 0 (X × X, L ⊠ L ∨ ) = H 0 (X, L) ⊗ H 0 (X, L ∨ ) ,
the Petri map is identified with the restriction to the diagonal
H 0 (X × X, L ⊠ L ∨ ) −→ Γ(L ⊠ L ∨ | ∆ ) = H 0 (X, Ω) .
Thus injectivity of the Petri map implies that global sections of L ⊠ L ∨ are determined by their restriction to the diagonal. The curve X is called Brill-Noether general if the Petri map is injective for every line bundle L. By the Petri conjecture (Lazarsfeld's Theorem), this condition is satisfied by a generic curve of genus g.
We then have the following result in the direction of the finiteness conjecture:
3.1.7. Theorem.
(1) The Klein map for Kummers K : K X (n) \ Θ → Kern n is finite onto its image for all X.
(2) The Wirtinger map for Kummers W : K X (n) \ Θ → Op • n is finite onto its image for Brill-Noether general X.
3.1.8. Proof of (1). Consider the subvariety of (Pic g−1 X ) n of line bundles (L 1 , · · · , L n ) with L 1 ⊗ · · · ⊗ L n ∼= Ω n/2 X . (We identify this with (Pic g−1 X ) n−1 through the first n − 1 bundles L i .) For (1), it clearly suffices to show that the map from (Pic g−1 X ) n−1 to Kern n given by

(L 1 , . . . , L n−1 ) −→ s L 1 · · · s L n

is finite.
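The map just displayed multiplies n scalar kernels; equivalently, it takes the pointwise determinant of the diagonal matrix kernel diag(s L 1 , . . . , s L n ), as in the identity det s E = s L 1 · · · s L n above. A trivial numerical sanity check, with hypothetical sample values of the kernels at a single point (x, y):

```python
# hypothetical values s_{L_i}(x, y) of three scalar Szego kernels at a point (x, y)
s = [2.0, -0.5, 3.0]

# the direct-sum kernel s_E(x, y) is the diagonal matrix diag(s_{L_1}, ..., s_{L_n});
# for a diagonal matrix the determinant is the product of the diagonal entries,
# so det s_E(x, y) = s_{L_1}(x, y) * ... * s_{L_n}(x, y)
det_sE = 1.0
for si in s:
    det_sE *= si

print(det_sE)  # -3.0
```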
To do so we consider Kern n as a subvariety of PH 0 (X × X, M n ) (contained in the affine open of sections with nonzero trace on the diagonal), and complete K to a morphism

K : (P g−1 ) n−1 −→ PH 0 (X × X, M n )

from a partial resolution of the singular locus of Θ. Here P g−1 → Pic g−1 X is a projective morphism which is an isomorphism off the singular part of the theta divisor. (In fact P g−1 will be the union, for X Brill-Noether general, of the projectivized conormal bundles to the Brill-Noether loci W g−1,i ⊂ Pic g−1 X .) Hence the extended map is automatically proper, and a closer examination shows it remains proper when restricted to (Pic g−1 X \ Θ) n−1 , and hence finite.

We construct P g−1 as the moduli of pairs (L, s) consisting of a line bundle L ∈ Pic g−1 X and a nonzero section s of M(L), up to scalar (i.e., a divisor in the complete linear series |M(L)| on X × X). This is a projective variety mapping to Pic g−1 X , with fibers the projective spaces PH 0 (X × X, L ⊠ L ∨ (∆)). The construction of this projective variety follows from that of the Hilbert scheme of divisors, of the same degree as M(L), on the surface X × X. This Hilbert scheme fibers over the Picard group of X × X, and we pull it back to Pic g−1 X along the morphism Pic g−1 X → Pic(X × X) sending L to M(L).

It follows from Proposition 3.1.1 that over Pic g−1 X \ Θ the projection P g−1 → Pic g−1 X is an isomorphism, since the Szegö kernel is the unique section of M(L) up to scalars. In fact, the morphism remains an isomorphism on the smooth locus of Θ, since for h 0 (L) = 1 we have h 0 (M(L)) = h 0 (L)h 0 (L ∨ ) = 1. Since by Proposition 3.1.1 every section of M(L) for L ∈ Θ defines a section of L and one of L ∨ , it follows that the inverse image in P g−1 over Θ (for the projection of P g−1 to Pic g−1 X ) is given by
P g−1 | Θ ∼ = Sym g−1 X × Pic g−1 X i * Sym g−1 X ,
where i : L → L ∨ ; in other words, the inverse image is the space of pairs of divisors for L and L ∨ . (Thus P g−1 restricts, for X Brill-Noether general, to the union of blowups of the Brill-Noether loci in Pic g−1 X .) We now extend the morphism K from (Pic g−1 X ) n−1 to P g−1 n , the inverse image of (Pic g−1 X ) n−1 ⊂ (Pic g−1 X ) n in (P g−1 ) n ; i.e., P g−1 n parametrizes tuples (L 1 , s 1 ; . . . ; L n , s n ) where the L i add up to Ω n/2 X . To such a tuple we assign the line
[s 1 ⊗ · · · ⊗ s n ] in (π X×X ) * M(L 1 ) ⊗ · · · ⊗ (π X×X ) * M(L n ) = (π X×X ) * M(Ω n/2 X ) ,
where s i are the tautological sections of M(L i ) given by the ith point of P g−1 (taken up to scalar). The right hand side is the vector space H 0 (X × X, M n ), independently of the L i , so we have constructed the desired extension
K : P g−1 n −→ PH 0 (X × X, M n ) .
The completed morphism K is a morphism of projective varieties, hence proper. We claim its restriction to (Pic g−1 X \ Θ) n−1 is also proper. Consider the hyperplane of PH 0 (X × X, M n ) consisting of sections vanishing on the diagonal. By Proposition 3.1.1, for L ∈ Θ all sections of M(L) automatically vanish on the diagonal, while for L ∈ Pic g−1 X \ Θ all nonzero sections give nonzero constant functions on the diagonal. Hence the preimage of the complement of this hyperplane is precisely (Pic g−1 X \ Θ) n−1 . We obtain that the morphism K from the affine variety (Pic g−1 X \ Θ) n−1 is proper, hence finite.
3.1.9. Proof of (2). We embed the affine space Op • n in the projective space PΓ(M n | (n+1)∆ ). Thus W gives rise to a map W : (Pic g−1 X \ Θ) n−1 −→ PΓ(M n | (n+1)∆ ). In order to prove finiteness of W, we would like to extend it to P g−1 , whenever possible.

Let L ∈ Θ. Then by Proposition 3.1.1, global sections of L ⊠ L ∨ (∆) vanish on ∆. If the Petri map of L is injective, however, such sections are determined by their restriction to 2∆. So we take X to be Brill-Noether general. It follows that for a collection of nonzero sections s i ∈ H 0 (X × X, M(L i )), the restriction (s 1 · · · s n )| (n+1)∆ is also nonzero. Thus the s i define a point in PΓ(M n | (n+1)∆ ), and we have completed W to a map

W : (P g−1 ) n−1 −→ PΓ(M n | (n+1)∆ ).

Again the inverse image of the hyperplane of sections vanishing on the diagonal is precisely the inverse image of the theta divisor, so the map remains proper off Θ, implying finiteness as before.
4. Relations with theta functions.
4.1. The theta linear series. The Klein and Wirtinger maps have natural interpretations as quotients of the theta linear series on M X (n) and N X (n). For E ∈ M X (n), consider the sequence of maps
((n + 1)∆) ֒→ X × X δ −→ Jac X τ E −→ M X (n) .
Here
τ E : Jac X := Pic 0 (X) −→ M X (n), τ E (L) = E ⊗ L
is the translation map, δ(x, y) = y − x and the composition τ E • δ is the difference map
δ E : X × X −→ M X (n), δ E (x, y) = E(y − x) .
It is well-known that for E ∈ N X (n), the pullback satisfies τ * E [O M X (n) (Θ)] = O Jac X (nΘ), so that pullbacks of nonabelian theta functions are weight n abelian theta functions. Moreover the resulting map
τ * : N X (n) −→ PH 0 (Jac X , O Jac X (nΘ))
is an embedding (see [Be]). (Note that we have fixed a theta characteristic Ω 1 2 X , which allows us to principally polarize the Jacobian and pass from line bundles L of degree n(g − 1) to L 0 of degree 0.) Pulling back further to X × X or (n + 1)∆, we obtain sections of the pullback δ * E O M X (n) (Θ) = M n ⊗ Θ| E , the tensor of the line bundle M n by the complex line Θ| E , the fiber of Θ. (See e.g. [BB].)
It follows that we have a sequence of pullback maps
H 0 (Jac X , O Jac X (nΘ)) −→ H 0 (X × X, M n ) −→ Γ(M n | (n+1)∆ ) ,
and consequently rational maps on the corresponding projective spaces. Composing these with τ * we obtain rational maps from N X (n) (if the image of τ * is not contained in the kernels of the projections). We will use the following description of the Szegö kernel:
4.1.1. Theorem. ([BB], see also [GP, Po]) det s E = δ * E θ/θ(E).

4.1.2. Corollary. The Klein and Wirtinger maps

K : N X (n) \ Θ −→ PH 0 (X × X, M n )
W : N X (n) \ Θ −→ PΓ(M n | (n+1)∆ )

are equal to the composition of the theta linear series τ * with the restrictions to X × X and (n + 1)∆, respectively.
4.2. The linear series |2Θ|. Let us consider the case n = 2. (Our reference for 2Θ functions is [Do].) The map τ * : N X (n) → PH 0 (Jac X , O(2Θ)) restricts on the Kummer variety Jac X ։ K X (2) ⊂ N X (2) to the map

Jac X ∋ e −→ Θ e + Θ −e

(where Θ e denotes the translate of Θ by e). The Riemann quadratic identity and the Kummer identification theorem provide a natural isomorphism between this map and the 2Θ linear series

|2Θ| * : Jac X −→ PH 0 (Jac X , O(2Θ)) *

which naturally maps to the dual projective space. By the symmetry properties of 2Θ it follows that the image of H 0 (Jac X , O(2Θ)) in H 0 (X × X, M 2 ) consists of symmetric bidifferentials. In fact there is a short exact sequence

0 −→ Γ 00 −→ H 0 (Jac X , O(2Θ)) −→ H 0 (X × X, Ω ⊠ Ω(2∆)) sym −→ 0

(the middle arrow being δ * ), where the kernel Γ 00 can be characterized as the subspace of 2Θ-functions vanishing to fourth order at 0. The right hand side is a vector space of dimension g 2 + 1. Its projective space, isomorphic to P^{g^2}, contains as an affine open the space Kern sym 2 of symmetric projective kernels. This vector space has a further quotient Γ(Ω X ⊠ Ω X (2∆)| 3∆ ) sym , obtained by restricting kernels to 3∆. Its projective space, isomorphic to P 3g−3 , contains as an affine open the space Proj of projective structures. Note that the image of K : N X (2) \ Θ → Kern 2 lies in Kern sym 2 , while W defines a map W : N X (2) \ Θ → Proj. We may thus reinterpret the finiteness theorem as follows:

4.2.1. Corollary.
(1) The rational map Jac X → P^{g^2} defined by the composition of |2Θ| * with the projection by Γ 00 is a finite morphism on Jac X \ Θ.
(2) For X generic, the further projection Jac X → P 3g−3 remains finite on Jac X \Θ.
4.2.2. Formulas. The explicit description of the Szegö kernel for line bundles is

(4.2.1) s L (x, y) = θ(y − x + L 0 ) / ( θ(L 0 ) E(x, y) ) ,
where E(x, y) is the prime form (this is the rank one case of Theorem 4.1.1.) Thus the Klein map on the Kummer K X (2) becomes
(4.2.2) K(L ⊕ L ∨ ) = s L s L ∨ = θ(y − x + L 0 ) θ(y − x − L 0 ) / ( θ(L 0 ) 2 E(x, y) 2 ) .
The relation to 2Θ is easily seen explicitly. Let θ 2 [α β](e) be the generating vector of the second order theta functions with characteristics, and let

→θ : C g −→ H 0 (Jac X , O(2Θ)) *

be defined by →θ(e) = (θ 2 [α β](e)) α,β∈Jac X [2] . By Riemann's quadratic identity ([Mu1]), we may rewrite the expression K(L ⊕ L ∨ ) of (4.2.2) as follows:

(4.2.3) θ(y − x + L 0 ) θ(y − x − L 0 ) / ( θ(L 0 ) 2 E(x, y) 2 ) = →θ(y − x) · →θ(L 0 ) / ( θ(L 0 ) 2 E(x, y) 2 ) .
4.3. The Gauss map. Let Θ • ⊂ Pic g−1 X denote the smooth part of the theta divisor. The Gauss map for the theta divisor is

G : Θ • −→ PH 0 (X, Ω) .
Since H 0 (X, L) = C · l is one dimensional for L ∈ Θ • , the image l ⊗ l ∨ of the Petri map for L also defines a line in H 0 (X, Ω), which is known to agree with the Gauss line for L. On the other hand the extension of W to Θ • ⊂ P g−1 sends
L −→ (l ⊠ l ∨ ) ⊗ (l ∨ ⊠ l)| ∆ = (l ⊗ l ∨ ) ⊗2 ,
which defines a line in H 0 (X, Ω ⊗2 ), i.e., a point of PH 0 (X, Ω ⊗2 ) ⊂ Proj. Thus the tensor square of the Gauss map agrees with the morphism W:

4.3.1. Corollary. For a Brill-Noether general curve, the square of the Gauss map G ⊗2 : Θ • −→ PH 0 (X, Ω ⊗2 ) extends to a finite morphism W : Pic g−1 X \ Θ sing −→ Proj .
4.3.2. Remark. It is interesting to note that this relation of the Klein map to the theta divisor fails completely in higher rank. Namely, for E ∈ Θ • we still have H 0 (X, E) = C · s. It follows that the Higgs field s ⊗ s ∨ ∈ End E ⊗ Ω = (E ⊠ E ∨ )| ∆ is nilpotent. In fact as E varies over Θ • we obtain this way an irreducible component of the global nilpotent cone in the moduli of Higgs bundles. Thus the "Hitchin-Gauss" map, applying characteristic polynomials to this canonical line of Higgs bundles along Θ • , is identically zero. In particular the determinant det(s ⊠ s ∨ ) = 0 vanishes identically on X × X, so we cannot use this to extend the Klein map across the theta divisor.
4.4. Logarithmic derivatives of theta. In [Mu1], Mumford cites three general techniques for constructing meromorphic functions on Jacobians out of theta functions, of which the third is that of taking second logarithmic derivatives. Namely, there is a collection of g 2 meromorphic functions ∂ 2 log θ/∂z i ∂z j
on the Jacobian, or more invariantly a rational map

(4.4.1) e −→ Σ i,j=1,...,g (∂ 2 log θ/∂z i ∂z j )(e) ω i (x) ω j (y) ,

from the Jacobian to holomorphic symmetric bidifferentials on X, H 0 (X × X, Ω ⊠ Ω) sym . By translating these holomorphic bidifferentials by the Bergman kernel ω B (§ 2.7), we obtain the Klein projective kernels ω e ∈ Kern sym 2 ([Ty]). Here the point e ∈ Jac X \ Θ. Classically e is taken to be a two-torsion point, so that ω e is written in terms of theta functions with characteristics. The corresponding projective connections ω e | 3∆ ∈ Proj are known ([Ty]) as the Wirtinger connections. The relation of these classical kernels with our Klein and Wirtinger maps is provided by the "second corollary to the trisecant identity" of J. Fay ([Fa1], Corollary 2.12; also [Mu2]):

4.4.1. Corollary. The second logarithmic derivatives of θ provide a finite parametrization of the complement of the theta divisor in the Jacobian in an affine space of dimension g 2 . Namely, the holomorphic map Jac X \ Θ −→ H 0 (X × X, Ω ⊠ Ω) sym of (4.4.1) is finite onto its image.

4.4.2. Remark. It also follows from Corollary 4.4.1 that the second logarithmic derivative map is generically finite for generic abelian varieties, since it is finite on the Jacobian locus.

Acknowledgments: We would like to thank Ron Donagi and Matthew Emerton for useful discussions. We are especially grateful to Mohan Ramachandran for suggesting the use of properness to establish finiteness.
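As a purely numerical illustration of second logarithmic derivatives of theta (a sketch, not from the text): in genus 1 the Riemann theta function is θ(z|τ) = Σ n exp(πi n 2 τ + 2πi n z), and ∂ 2 log θ/∂z 2 can be approximated by finite differences. The truncation N and step h below are arbitrary choices; the checks rely on the evenness and the z → z + 1 periodicity of θ.

```python
import cmath

def theta(z, tau, N=10):
    # genus-1 Riemann theta function, truncated to |n| <= N
    return sum(cmath.exp(cmath.pi * 1j * (n * n * tau + 2 * n * z))
               for n in range(-N, N + 1))

def d2_log_theta(z, tau, h=1e-4):
    # centered second difference approximating (d^2/dz^2) log theta(z | tau)
    return (cmath.log(theta(z + h, tau))
            - 2 * cmath.log(theta(z, tau))
            + cmath.log(theta(z - h, tau))) / h**2

tau = 1j  # a point of the upper half plane
# theta is even and periodic under z -> z + 1, so its second logarithmic
# derivative is even and periodic as well:
print(abs(d2_log_theta(0.3, tau) - d2_log_theta(-0.3, tau)))  # ~ 0 (finite-difference error only)
print(abs(d2_log_theta(0.3, tau) - d2_log_theta(1.3, tau)))   # ~ 0 (finite-difference error only)
```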
References

[ACGH] E. Arbarello, M. Cornalba, P. A. Griffiths and J. Harris: Geometry of algebraic curves, Vol. I. Grundlehren der Mathematischen Wissenschaften 267, Springer-Verlag, New York, 1985.
[Be] A. Beauville: Vector bundles on curves and generalized theta functions: recent results and open problems. In: Current topics in complex algebraic geometry (Berkeley, CA, 1992/93), 17-33, Math. Sci. Res. Inst. Publ. 28, Cambridge Univ. Press, Cambridge, 1995.
[BD] A. A. Beilinson and V. G. Drinfeld: Quantization of Hitchin's Hamiltonians and Hecke eigensheaves. Preprint, available at www.math.uchicago.edu/~benzvi.
[BS] A. A. Beilinson and V. Schechtman: Determinant bundles and Virasoro algebras. Comm. Math. Phys. 118 (1988), 651-701.
[BF] D. Ben-Zvi and E. Frenkel: Geometrization of the Sugawara construction. Preprint, 2001.
[BB] D. Ben-Zvi and I. Biswas: Szegö kernels and theta functions. Preprint, math.AG/0211441.
[Bi] I. Biswas: Coupled connections on a compact Riemann surface. Jour. Math. Pures Appl. (to appear).
[BR] I. Biswas and A. K. Raina: Projective structures on a Riemann surface, II. Int. Math. Res. Not. 13 (1999), 685-716.
[De] P. Deligne: Equations différentielles à points singuliers réguliers. Lecture Notes in Math. 163, Springer-Verlag, Berlin, 1970.
[Do] R. Donagi: The Schottky problem. In: Theory of moduli (Montecatini Terme, 1985), 84-137, Lecture Notes in Math. 1337, Springer, Berlin, 1988.
[DN] J.-M. Drezet and M. S. Narasimhan: Groupe de Picard des variétés de modules de fibrés semi-stables sur les courbes algébriques. Inv. Math. 97 (1989), 53-94.
[DS] V. G. Drinfeld and V. Sokolov: Lie algebras and equations of Korteweg-de Vries type. Journal of Soviet Mathematics 30 (1985), 1975-2035.
[Fa1] J. D. Fay: Theta functions on Riemann surfaces. Lecture Notes in Math. 352, Springer, Berlin-Heidelberg-New York, 1973.
[Fa2] J. D. Fay: Kernel functions, analytic torsion, and moduli spaces. Memoirs of the Amer. Math. Soc. 464 (1992).
[Fa3] J. D. Fay: The non-abelian Szegö kernel and theta-divisor. In: Curves, Jacobians, and abelian varieties (Amherst, MA, 1990), 171-183, Contemp. Math. 136, Amer. Math. Soc., Providence, RI, 1992.
[FB] E. Frenkel and D. Ben-Zvi: Vertex algebras and algebraic curves. Mathematical Surveys and Monographs 88, American Mathematical Society, 2001.
[GP] E. Gómez Gonzàlez and F. J. Plaza Martín: Addition formulae for non-abelian theta functions and applications. Journal of Geometry and Physics (to appear).
[Gu] R. C. Gunning: Lectures on Riemann surfaces. Mathematical Notes 2, Princeton University Press, Princeton, New Jersey, 1966.
[Hi] N. J. Hitchin: Stable bundles and integrable systems. Duke Math. Jour. 54 (1987), 91-114.
[Mu1] D. Mumford: Tata lectures on theta, I. Progress in Mathematics 28, Birkhäuser, Boston, MA, 1983.
[Mu2] D. Mumford: Tata lectures on theta, II. Jacobian theta functions and differential equations. Progress in Mathematics 43, Birkhäuser, Boston, MA, 1984.
[Po] A. Polishchuk: Triple Massey products on curves, Fay's trisecant identity and tangents to the canonical embedding. e-print math.AG/0107194.
[Ty] A. N. Tyurin: On periods of quadratic differentials. Russian Math. Surveys 33:6 (1978), 169-221.
Expanding perfect fluid generalizations of the C-metric

Lode Wylleman* and David Beke†

Faculty of Applied Sciences TW16, Ghent University, Galglaan 2, 9000 Gent, Belgium
Centre de Physique Théorique, Campus de Luminy, 13288 Marseille, France

arXiv:1001.5263v2, 26 Mar 2010 (Dated: March 29, 2010)
doi: 10.1103/PhysRevD.81.104038
PACS numbers: 04.20.-q, 04.20.Jb, 04.40.Nr

Petrov type D gravitational fields, generated by a perfect fluid with spatially homogeneous energy density and with flow lines which form a non-shearing and non-rotating timelike congruence, are re-examined. It turns out that the anisotropic such spacetimes, which comprise the vacuum C-metric as a limit case, can have non-zero expansion, contrary to the conclusion in the original investigation by Barnes [1]. Apart from the static members, this class consists of cosmological models with precisely one symmetry. The general line element is constructed and some important properties are discussed. It is also shown that purely electric Petrov type D vacuum spacetimes admit shearfree normal timelike congruences everywhere, even in the non-static regions. This result incited to deduce intrinsic, easily testable criteria regarding shearfree normality and staticity of Petrov type D spacetimes in general, which are added in an appendix.
INTRODUCTION
The C-metric is a well-known exact solution of Einstein's vacuum equation with zero cosmological constant. The static region of the corresponding spacetime was first described by Weyl [2]. At about the same time Levi-Civita [3] constructed its line element in closed form, arriving at essentially one cubic polynomial with two parameters as the metric structure function. The C-metric is a Petrov type D solution for which at each spacetime point both Weyl principal null directions (PNDs) are geodesic, non-shearing, non-twisting but diverging; it thus belongs to the Robinson-Trautman class of solutions and was rediscovered as such [4]. The label 'C' derives from the invariant classification of static degenerate Petrov type D vacuum spacetimes by Ehlers and Kundt [5]. As summarized by Kinnersley and Walker [6], the importance of this solution is threefold. First, the C-metric describes a spacetime with only two independent Killing vector fields (KVFs) which can be fully analyzed. Next, it is an 'example of almost everything'; most notably it describes a radiative, locally asymptotically flat spacetime, whilst containing a static region. The C-metric is contained in the class of boost-rotation-symmetric spacetimes [7,8], which are the only axially symmetric, radiative and asymptotically flat spacetimes with two Killing vectors. Finally, the solution has a clear physical interpretation as the anisotropic gravitational field of two Schwarzschild black holes being uniformly accelerated in opposite directions by a cosmic string or strut, provided that mα < 1/√27, where the mass m and acceleration α are equivalents of the two essential parameters of Levi-Civita [6,9] (see, however, the end of § 2.3 for a comment).

* Supported by a BOF Research Fund (UGent). E-mail: [email protected]
† Ph.D. Fellow of the Research Foundation - Flanders (FWO). E-mail: [email protected]
Generalizations of the C-metric have been widely considered. Adding a cosmological constant Λ is straightforward, and we will henceforth refer with 'C-metric' to such Einstein spaces. Incorporating electromagnetic charge |q|² ≡ e² + g² is equally natural and leads to quartic structure functions [6]. Recently, the question of how to include rotation for the holes received a new answer [10,11], avoiding the NUT-like behavior of the previously considered 'spinning C-metric' [12,13]. All these generalizations fit in the well-established class D of Petrov type D Einstein-Maxwell solutions with a non-null electromagnetic field possessing geodesic and non-shearing null directions aligned with the PNDs [14,15], which reduces for zero electromagnetic field to the subclass D_0 of Petrov type D Einstein spaces and which contains all well-known 4D black hole metrics. In fact, all D-metrics can be derived by performing 'limiting contractions' [16] from the most general member, the Plebański-Demiański line element [17], which exhibits two quartic structure functions with six essential parameters m, α, |q|², Λ, NUT parameter l [18] and angular momentum a. A physically comprehensive and simplified treatment can be found in [19], also surveying recent work in this direction.
In this paper we present a new family of Petrov type D, expanding and anisotropic perfect fluid (PF) generalizations of the C-metric. The direct motivation and background for this work is the following.
According to the Goldberg-Sachs theorem [20] the two PNDs of any member of D 0 are precisely those null directions which are geodesic and non-shearing. Such a member is purely electric (PE, cf. appendix B) precisely when both PNDs, as well as the complex null directions orthogonal to them, are non-twisting (non-rotating or hypersurface-orthogonal (HO)). This is in particular the case for the C-metric. As we will show, it implies the existence of an umbilical synchronization (US), i.e., a non-shearing and non-rotating unit timelike vector field (tangent to a congruence of observers). The importance of USs in cosmology was stressed in [21]. If a congruence of observers measuring isotropic radiation admits orthogonal hypersurfaces, an US exists. Only small deviations from isotropy are seen in the cosmic microwave background, and scalar perturbations of a Friedmann-Lemaître-Robertson-Walker universe preserve the existence of an US [22]. In general, spacetimes admitting an US have zero magnetic part of the Weyl tensor wrt it [23] and thus are either of Petrov type O, or PE and of type D or I [16]. Conformally flat spacetimes always admit USs (see e.g. (6.15) in [16]). Trümper showed that algebraically general vacua with an US are static [24]. Motivated by this result and by his own work [25] on static PFs, Barnes [1] studied PF spacetimes with an US tangent to their flow lines. He was able to generalize Trümper's result to Petrov type I such PFs and recovered Stephani's results on conformally flat PF solutions which are either of generalized Schwarzschild type or of generalized Friedmann type (so called Stephani universes) [26]. The type D solutions were integrated and invariantly partitioned, based on the direction of the gradient of the energy density relative to the PNDs and the flow vector at each point. 
Class I, characterized by the energy density being constant on the hypersurfaces orthogonal to the flow lines and thus the only class containing Einstein spaces as limit cases, was further subdivided using the gradient of Ψ 2 (cf. § 2.2 for details). By solving the field equations, Barnes concludes that class ID, consisting of the anisotropic class I models, solely contains non-expanding solutions. Hence, these PF solutions would not be viable as a cosmological model. However, based on an integrability analysis of class I in the Geroch-Held-Penrose (GHP) formalism [27], we found that this conclusion cannot be valid and this led to a detailed reinvestigation.
In this article we construct the general line element of the full ID class, comprising both the known non-expanding perfect fluid models and the new expanding ones, and discuss some elementary properties. We want to stress the following point. The full class represents a PF generalization of the C-metric in the sense that the C-metric is contained as the Einstein space limit. The physical interpretation of this fact is however not established. This would require exhibiting this solution for small masses as a perturbation of a known PF solution, just as the C-metric interpretation of small accelerating black holes has been established in a flat or (anti-)de Sitter background [6, 28-31].
However, the mathematical relation with the C-metric is useful. As already deduced in [1], the PF solution is, just as the C-metric, conformally related to the direct sum of two 2D metrics. The fact that one part is equal for the PF solution and the C-metric is helpful in the analysis, e.g. we will show that (a part of) the axis of symmetry can readily be identified as a conical singularity, analogous to the defect of the cosmic string present in the C-metric. The non-static spacetimes presented are exact perfect fluid solutions with only this symmetry, and the analysis appears to be within reach. For the expanding ID PF models both the matter density w(t) and the expansion scalar θ(t) can be arbitrary functions. This freedom is displayed explicitly in the metric form, and makes the solutions more attractive as a cosmological model.
The paper is organized as follows. In section 2 we present the GHP approach to class I. We derive a closed set of equations, construct suitable scalar invariants, interpret the invariant subclassification of [1] and start the integration. At the end we provide alternative characterizations for the Einstein space members and identify their static regions and USs. In section 3 we finish the construction of the general ID line element in a transparent way, and correct the calculative error of [1] in the original approach. Then we deduce basic properties of the ID perfect fluid models. In section 4 we summarize the main results and indicate points of further research. The work greatly benefited from the use of the GHP formalism, which at the same time elucidates the deviation from the C-metric. In appendix A we provide a pragmatic survey of this formalism for the non-expert reader. In appendix B, finally, we present criteria for deciding when a Petrov type D spacetime admits a (rigid) US or is static.
Notation. For spacetimes (M, g ab ) we take (+ + + -) as the metric signature and use geometrized units 8πG = c = 1, where G is the gravitational coupling constant and c the speed of light. Λ denotes the cosmological constant. We make consistent use of the abstract Latin index notation for tensor fields, as advocated in [32]. Round (square) brackets denote (anti-)symmetrization, η abcd is the spacetime alternating pseudo-tensor and ∇ c T ab... (L X T ab... ) designates the Levi-Civita covariant derivative (Lie derivative wrt X a ) of the tensor field T ab... . One has
d_a f = ∇_a f,  d_b Y_a = ∇_[b Y_a]

for the exterior derivative of a scalar field f, resp. a one-form field Y_a, and

X(f) ≡ X^a d_a f

denotes the Leibniz action of a vector field X^a on f; when X^a is the x^i-coordinate vector field ∂_{x^i}^a we write ∂_{x^i} f or f_{,x^i}, and a prime denotes ordinary differentiation for functions of one variable, f′(x) ≡ ∂_x f(x). However, we use index-free notation in line elements ds² = g_{ij} dx^i dx^j. The specific GHP notation is introduced in appendix A.
GHP APPROACH TO CLASS I
Definition and integrability
We consider Barnes' class I [1], consisting of spacetimes (M, g ab ) with the following properties:
(i) the spacetime admits a unit timelike vector field u a (u a u a = −1) which is non-shearing and nonrotating, i.e., its covariant derivative is of the form
∇_b u_a = (θ/3) h_ab − u̇_a u_b,  h_ab ≡ g_ab + u_a u_b,  (1)

where the acceleration u̇_a = u^b ∇_b u_a and expansion scalar θ = ∇_a u^a are the remaining kinematic quantities of u^a;
(ii) the Einstein tensor has the structure
G_ab = S u_a u_b + p g_ab = w u_a u_b + p h_ab,  (2)
D_a w ≡ h_a{}^b ∇_b w = 0,  (3)

i.e., the spacetime represents the gravitational field of either a perfect fluid with shearfree normal four-velocity u^a, pressure p + Λ and spatially homogeneous energy density w − Λ (case S ≡ w + p ≠ 0) or a vacuum (Einstein space case S = 0, where w = −p may be identified with Λ);
(iii) the Weyl tensor C abcd is degenerate but non-zero, i.e., the spacetime is algebraically special but not conformally flat.
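The two decompositions of G_ab in condition (ii) agree identically once h_ab = g_ab + u_a u_b and S = w + p are used. A minimal sympy sketch, with a particular unit timelike u^a in Minkowski form chosen purely for illustration:

```python
import sympy as sp

w, p = sp.symbols('w p')
g = sp.diag(1, 1, 1, -1)          # signature (+ + + -)
u = sp.Matrix([0, 0, 0, 1])       # a unit timelike vector: u^a u_a = -1
u_l = g*u                         # index lowered: u_a
S = w + p
h = g + u_l*u_l.T                 # projector h_ab = g_ab + u_a u_b
G1 = S*u_l*u_l.T + p*g            # S u_a u_b + p g_ab
G2 = w*u_l*u_l.T + p*h            # w u_a u_b + p h_ab
assert sp.simplify(G1 - G2) == sp.zeros(4, 4)
```

Indeed, w u_a u_b + p(g_ab + u_a u_b) = (w + p) u_a u_b + p g_ab, which is the first form.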
Choose null vector fields k a and l a , subject to the normalization condition k a l a = −1, such that
u^a = (1/√(2q)) (q k^a + l^a),  q > 0.  (4)
Within the GHP formalism (cf. appendix A) based on the complex null tetrad (k^a, l^a, m^a, m̄^a), q is (−2,−2)-weighted and the conditions (i) and (ii) translate into

π + τ̄ = qκ̄ + q⁻¹ν,  λ = qσ̄,  µ − µ̄ = q(ρ − ρ̄),  (5)
Þ′q − qÞq = −2q(µ − qρ),  ðq = ð′q = 0  (6)

and

Φ_01 = Φ_12 = Φ_02 = 0,  (7)
Φ_11 = S/8,  Φ_00 = S/(4q),  Φ_22 = qS/4,  (8)
R ≡ 24Π = w − 3p = 4w − 3S,  (9)
ðw = ð′w = 0,  Þ′w − qÞw = 0,  (10)
respectively. By virtue of condition (i) the magnetic part H_ab ≡ ½ η_{ac}{}^{mn} C_{mnbd} u^c u^d of the Weyl tensor wrt u^a vanishes [23]. In combination with condition (iii) it follows that the Weyl tensor is purely electric wrt u^a, E_ab ≡ C_{acbd} u^c u^d ≠ 0, the Weyl-Petrov type is D, and at each point u^a lies in the plane Σ spanned by the Weyl PNDs (cf. appendix B for a GHP proof of these well-known facts). Hence, choosing k^a and l^a along the PNDs, (k^a, l^a, m^a, m̄^a) is a Weyl principal null tetrad (WPNT) and we have
Ψ_0 = Ψ_1 = Ψ_3 = Ψ_4 = 0,  (11)
Ψ = Ψ̄ ≠ 0,  Ψ ≡ 2Ψ_2.  (12)
Under the restrictions (7) and (11), the GHP Bianchi equations are given by (A31)-(A36) and their prime duals. Combining these with the other equations in (5)- (12) results in
κ = ν = 0,  σ = λ = 0,  (13)
ρ = ρ̄,  µ = µ̄,  π = −τ̄,  (14)
ÞΨ = 3ρΨ,  Þ′Ψ = −3µΨ,  (15)
ðΨ = 3τΨ,  ð′Ψ = −3πΨ,  (16)
Þ′S − qÞS = S(Þq − µ + qρ),  (17)
ðS = τS,  ð′S = τ̄S,  (18)
Þ′w = qÞw = −3S(µ − qρ)/2.  (19)
With (7)-(9) and (11)- (14) the Ricci equations, given by (A25)-(A30) and their prime duals, reduce to
Þµ = −Þ′ρ  (20)
  = −ð′τ + µρ + ττ̄ + Ψ/2 + w/3 − S/4,  (21)
Þ′µ = −µ² − qS/4,  ðµ = ð′µ = 0,  (22)
Þρ = ρ² + S/(4q),  ðρ = ð′ρ = 0,  (23)
Þτ = Þ′τ = 0,  ðτ = τ²,  (24)
−ðπ = ð′τ = ðτ̄ ≡ H/2  (25)
and the complex conjugates of (24), while the commutator relations applied to a (w p , w q )-weighted scalar η become
[Þ, Þ′]η = (w_p + w_q)(ττ̄ − Ψ/2 + w/6 − S/4)η,  (26)
[ð, ð′]η = (w_p − w_q)(−µρ + Ψ/2 − w/6)η,  (27)
[Þ, ð]η = (−τÞ + ρð + w_q ρτ)η,  (28)
[Þ, ð′]η = (−τ̄Þ + ρð′ + w_p ρτ̄)η,  (29)
[Þ′, ð]η = (−τÞ′ − µð + w_p µτ)η,  (30)
[Þ′, ð′]η = (−τ̄Þ′ − µð′ + w_q µτ̄)η.  (31)

The weighted derivatives of the invariant H ≡ 2ð′τ (cf. (25)) are given by

ðH = 2τ(H + Ψ − G),  ð′H = 2τ̄(H + Ψ − G),
ÞH = ρ(H + F),  Þ′H = −µ(H + F),  (32)
where
F ≡ 2ττ̄,  G ≡ 2µρ + w/3.  (33)
One checks that the integrability conditions for the system (6)-(33) of partial differential equations (PDEs) are identically satisfied, indicating that corresponding solutions exist. Those for which u a is non-expanding additionally satisfy
θ ∼ µ − qρ = 0(34)
(cf. (96) and (100) below). However, (34) does not follow as a consequence of the ansätze; this implies the existence of expanding anisotropic perfect fluid models in class I ( § 3). Also, the scalar invariant µρ may be strictly negative, which is incompatible with (34); as a consequence, the class I Einstein spaces are not necessarily static ( § 2.3).
Metric structure and subclassification
The first, second and last parts of (13)-(14) precisely account for the hypersurface-orthogonality of k^a, l^a and m^a ↔ m̄^a, respectively. Thus real scalar fields u, v (zero-weighted) and U, V ((1,1)- resp. (−1,−1)-weighted), and complex scalar fields ζ (zero-weighted) and Z ((1,−1)-weighted) exist such that

d_a u = Ψ^{1/3} U⁻¹ k_a,  d_a v = Ψ^{1/3} V⁻¹ l_a,  d_a ζ = Ψ^{1/3} Z⁻¹ m_a.  (35)

By (A16) this is equivalent to

Þ′u = −Ψ^{1/3}/U,  Þu = ðu = ð′u = 0,  (36)
Þv = −Ψ^{1/3}/V,  Þ′v = ðv = ð′v = 0,  (37)
ð′ζ = Ψ^{1/3}/Z,  Þζ = Þ′ζ = ðζ = 0,  (38)
ðζ̄ = Ψ^{1/3}/Z̄,  Þζ̄ = Þ′ζ̄ = ð′ζ̄ = 0.  (39)

The commutator relations (28)-(31) applied to u, v, ζ and ζ̄ then yield

ðU = ð′U = ðV = ð′V = 0,  (40)
ÞZ = Þ′Z = ÞZ̄ = Þ′Z̄ = 0.  (41)
Hence, when we take these fields as coordinates, (35)-(41) imply that the zero-weighted fields UV and ZZ̄ only depend on (u,v), resp. (ζ,ζ̄), such that all class I metrics are conformally related to direct sums of metrics on two-spaces:

g_ab = Ψ^{−2/3}(g⊥_ab ⊕ gΣ_ab),  (42)
g⊥_ab ≡ 2Ψ^{2/3} m_(a m̄_b) = 2ZZ̄(ζ,ζ̄) d_(aζ d_b)ζ̄,  (43)
gΣ_ab ≡ −2Ψ^{2/3} k_(a l_b) = −2UV(u,v) d_(au d_b)v.  (44)
The line elements of g ⊥ ab and g Σ ab will be denoted by ds 2 ⊥ , resp. ds 2 Σ . In the case where such a two-space is not of constant curvature, however, we will construct more suitable coordinates in the sequel. Inspired by the GHP manipulations of [33] for type D vacua [57], we start this construction by deducing suitable combinations of the scalar invariants F , G, H and Ψ. From (A16), (10) and (15)-(33) it is found that
d_aF = 3Ψ^{1/3} ϕ α_a,  d_aG = 3Ψ^{1/3} γ β_a,  (45)
d_aϕ = 2Ψ^{1/3} x α_a,  d_aγ = 2Ψ^{1/3} y β_a,  (46)
d_ax = Ψ^{1/3} α_a,  d_ay = Ψ^{1/3} β_a,  (47)

where

α_a ≡ τ m̄_a + τ̄ m_a,  β_a ≡ µ k_a − ρ l_a  (48)

are invariantly-defined one-forms and

ϕ ≡ (H + F)/(3Ψ^{1/3}),  γ ≡ (−H + Ψ + F + 2G)/(3Ψ^{1/3}),  (49)
x ≡ (H + Ψ − G)/(3Ψ^{2/3}),  y ≡ (−H + 2Ψ + G)/(3Ψ^{2/3}).  (50)

Consequently, the scalar invariants

C ≡ 3(ϕ − x²) = 3(γ − y²),  (51)
D ≡ −x³ − Cx + F = y³ + Cy − G  (52)

are constant (d_aC = d_aD = 0). From (50) and (52) it follows that F, G, H and Ψ are biunivocally related to x, y, C and D, where

2ττ̄ ≡ F = x³ + Cx + D,  (53)
2µρ ≡ G − w/3 = y³ + Cy − D − w/3,  (54)
2ð′τ ≡ H = 2x³ + 3x²y + Cy − D,  (55)
Ψ = (x + y)³ ≠ 0.  (56)
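The algebraic consistency of (50) with (53)-(56) can be verified directly: treating x, y, C, D as free variables and defining F, G, H, Ψ by (53)-(56), the combinations in (50) return 3x(x+y)² and 3y(x+y)², i.e. x and y times 3Ψ^{2/3}. A short sympy check:

```python
import sympy as sp

x, y, C, D = sp.symbols('x y C D')
F   = x**3 + C*x + D                 # (53)
G   = y**3 + C*y - D                 # (54), with the w/3 term separated off
Psi = (x + y)**3                     # (56)
H   = 2*x**3 + 3*x**2*y + C*y - D    # (55)
# (50): x = (H + Psi - G)/(3 Psi^(2/3)),  y = (-H + 2 Psi + G)/(3 Psi^(2/3)),
# with Psi^(2/3) = (x + y)^2:
assert sp.expand((H + Psi - G) - 3*x*(x + y)**2) == 0
assert sp.expand((-H + 2*Psi + G) - 3*y*(x + y)**2) == 0
```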
Barnes [1] partitioned class I according to the position of the gradient ∇_aΨ relative to Σ and Σ⊥. This relates to the vanishing of the invariants ττ̄ = −πτ or µρ, maximal symmetry of g⊥_ab or gΣ_ab, and spatial rotation or boost isotropy of g_ab, as follows.
First assume τ = 0. In this case (25) and the first parts of (33) and (49)-(52) imply

H = F = ϕ = 0,  Ψ − G = 3xΨ^{2/3},  C = −3x²,  D = 2x³,  (57)

such that x is constant. In combination with the last part of (14), (16) and the first parts of (47)-(48) one gets

ττ̄ = 0 ⇔ π = τ = 0 ⇔ x = const ⇔ ∇_aΨ ∈ Σ.  (58)
The [ð, ð′] commutator relation applied to ζ, ζ̄ and Z implies ðZ = ð′Z̄ = 0 and ðð′Z = 3xΨ^{2/3}Z. Herewith the Gaussian curvature of the two-space with metric g⊥_ab becomes

K⊥ = −(ZZ̄)⁻¹ (ln(ZZ̄))_{,ζζ̄} = −Ψ^{−2/3} ðð′(ln ZZ̄) = −Ψ^{−2/3} ð(ð′Z/Z) = −3x,  (59)

where the dual of (35) was used in the calculation. In conjunction with the results of Goode and Wainwright [34], we conclude that (58) yields the class I solutions which are locally rotationally symmetric (LRS) of label II in the Stewart-Ellis classification [35], characterized by g⊥_ab having constant curvature K⊥ = −3x. As well known (see e.g. the appendix of [36]) the coordinates ζ and ζ̄ may then be adapted such that ZZ̄(ζ,ζ̄) = (1 + K⊥ζζ̄/2)⁻² in (43), or an alternative form may be taken:

ds²⊥ = 2dζdζ̄/(1 + (K⊥/2)ζζ̄)² = Y⊥²(dx₁² + cos²(√k⊥ x₁) dx₂²),  K⊥ = k⊥Y⊥⁻²,  k⊥ ∈ {−1, 0, 1}.  (60)
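The curvature formula used in (59) can be checked against the constant-curvature form of (60). The sketch below assumes the conformal factor (1 + Kζζ̄/2)⁻² (only this power yields constant curvature K) and works in real coordinates ζ = u + iv, where ∂ζ∂ζ̄ = ¼(∂u² + ∂v²):

```python
import sympy as sp

# zeta = u + i v, so d_zeta d_zetabar = (1/4)(d_u^2 + d_v^2) on real functions
u, v, K = sp.symbols('u v K', real=True)
s = K*(u**2 + v**2)/2            # (K/2) zeta zetabar
F = (1 + s)**(-2)                # conformal factor of ds^2 = 2 F dzeta dzetabar
lnF = sp.log(F)
lap = sp.diff(lnF, u, 2) + sp.diff(lnF, v, 2)
K_gauss = sp.simplify(-(1/F)*lap/4)   # curvature formula as in (59)
assert sp.simplify(K_gauss - K) == 0
```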
Now assume µρ = 0. It follows from (20)-(23), (33), (55) and the second parts of (49)-(52) that

S = γ = 0,  G = w/3 ≡ Λ/3,  −H + 2Ψ + Λ/3 = 3yΨ^{2/3},  C = −3y²,  D = −2y³ − Λ/3,  (61)

such that y is constant. In combination with (15) and the second parts of (47)-(48) this implies

µρ = 0 ⇔ µ = ρ = 0 ⇔ y = const ⇔ ∇_aΨ ∈ Σ⊥.  (62)

By a similar reasoning as in the case τ = 0 one concludes that (62) yields the locally boost-isotropic Einstein spaces of Petrov type D, characterized by gΣ_ab having constant curvature

K_Σ = −3y,  (63)

such that in this case one may take UV(u,v) = (1 − K_Σ uv/2)⁻² in (44) and

ds²_Σ = −2dudv/(1 − (K_Σ/2)uv)² = Y_Σ²(dx₃² − cos²(√k_Σ x₃) dx₄²),  K_Σ = k_Σ Y_Σ⁻²,  k_Σ ∈ {−1, 0, 1}.  (64)
With (42) and ds²_Σ written in the second form, it is clear that

∂_{x₄ a} = −Ψ^{−2/3} Y_Σ² cos²(√k_Σ x₃) d_a x₄  (65)

is a HO timelike Killing vector field. Four subclasses of class I thus arise, which were labeled by Barnes as follows:

IA: τ = 0 = µρ,  IB: τ = 0 ≠ µρ,  IC: τ ≠ 0 = µρ,  ID: τ ≠ 0 ≠ µρ.  (66)
We proceed with the respective integrations. Notice that in the joint case µρτ = 0 one has

2(ττ̄ + µρ) = (x + y)³ + K(x + y)² − w/3,  (67)

with K = K⊥ for τ = 0 and K = K_Σ for µρ = 0. When τ ≠ 0 or µρ ≠ 0 we may take x, resp. y as a coordinate, where (47)-(48) and (56) imply

(x + y)(τ m̄_a + τ̄ m_a) = d_a x,  (68)
(x + y)(µ k_a − ρ l_a) = d_a y.  (69)
In view of (42)-(44) and (56) it then remains to determine suitable complementary coordinates for x in g⊥_ab or y in gΣ_ab.

For τ ≠ 0, Frobenius' theorem and (68) suggest to examine whether zero-weighted functions φ and f exist such that

i (x + y)/(2ττ̄) (τ m̄_a − τ̄ m_a) = f d_aφ.  (70)

By (A16) this amounts to calculating the integrability conditions of the system

Þφ = Þ′φ = 0,  τ̄ ðφ = −τ ð′φ = i(x + y)/(2f),  (71)

which turn out to be

Þf = Þ′f = 0,  α^a ∇_a f = 0.  (72)
These last equations have the trivial solution f = 1, for which a solution φ of (71) is determined up to an irrelevant constant. Herewith the invariantly-defined one-form on the left-hand side in (70) is exact, and we take φ as the coordinate complementary to x. On solving (68) and (70) with f = 1 for m_a and m̄_a and using (53) we conclude from (43) that

ds²⊥ = dx²/(2ττ̄) + 2ττ̄ dφ²,  2ττ̄ = x³ + Cx + D  (73)

for classes IC and ID. Clearly, the metric solutions should be restricted to spacetime regions where x³ + Cx + D > 0 for consistency, while

∂_{φ a} = i(τ m̄_a − τ̄ m_a)/(x + y) = [2ττ̄/(x + y)²] d_aφ  (74)

is a HO spacelike Killing vector field (KVF).
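As a cross-check, the 2-metric (73) has Gaussian curvature −3x even when x is non-constant, matching the value K⊥ = −3x found in (59) for the LRS case. A sympy sketch using the standard curvature formula for a metric E dx² + G dφ² with coefficients depending on x only:

```python
import sympy as sp

x, C, D = sp.symbols('x C D')
F = x**3 + C*x + D               # 2 tau taubar, cf. (53) and (73)
E, G = 1/F, F                    # ds^2 = dx^2/F + F dphi^2
root = sp.sqrt(E*G)              # equals 1 here
# Liouville-type formula for x-dependent coefficients:
# K = -(1/(2 sqrt(EG))) d/dx ( G'(x)/sqrt(EG) )
Kc = sp.simplify(-(1/(2*root))*sp.diff(sp.diff(G, x)/root, x))
assert sp.expand(Kc + 3*x) == 0  # Gaussian curvature is -3x
```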
For µρ ≠ 0 one analogously considers

ðψ = ð′ψ = 0,  µ Þψ = ρ Þ′ψ = (x + y)/(2g),  (75)

but the integrability conditions of this system are now

ðg = ð′g = 0,  β^a ∇_a g = −gS (µ² + q²ρ²)/(qµρ).  (76)
So g = 1 is only a solution in the Einstein subcase S = 0, for which we then get

ds²_Σ = dy²/(2µρ) − 2µρ dψ²,  2µρ = y³ + Cy − D − Λ/3  (77)

from (44), (54), (69) and (75), with KVF

∂_{ψ a} = (µ k_a + ρ l_a)/(x + y) = −[2µρ/(x + y)²] d_aψ,  (78)
which is timelike for µρ > 0 and spacelike for µρ < 0. In general, the second vector field in (78) is always HO: the integrability conditions of (76) are checked to be identically satisfied, such that solutions g and a corresponding solution ψ of (75) exist. However, taking ψ as a complementary coordinate of y eventually leads to a very complicated system of coupled partial differential equations for g = g(y, ψ), which is impossible to solve explicitly. We shall remedy this in section 3.1 but now discuss characterizing features of the Einstein space limit cases.
Characterizations of PE Petrov type D Einstein spaces
Petrov type D Einstein spaces constitute the class D_0 (cf. the introduction) and are all explicitly known. The line elements are obtained by putting the electromagnetic charge parameter Φ_0 or e² + g² equal to zero in the D-metrics given by Debever et al. [14], resp. García [15]. These coordinate forms generalize and streamline those found by Kinnersley [37] in the Λ = 0 case.
Recently, a manifestly invariant treatment of D 0 , making use of the GHP formalism, was presented [33]. Within GHP, D 0 -metrics are characterized by the existence of a complex null tetrad wrt which (11) and Φ ij = 0 hold (i.e., the tetrad is a WPNT and (7)-(8) with S = 0 hold). According to the Goldberg-Sachs theorem [20], (13) holds and characterizes WPNTs as well. The scalar invariant identities (see [33,38])
µρ̄ = µ̄ρ,  ππ̄ = ττ̄,  (79)
just as (15)-(16), (20)- (21) and the first equation of (25) are also valid in general. From these relations it follows that
(12) ⇔ (14),(80)
i.e., a Petrov type D Einstein space is PE if and only if the WPNT directions are HO. In fact, it can readily be shown by a more detailed analysis than in [33] that if the spacetime belongs to Kundt's class, i.e., if one of the PNDs is moreover non-diverging, one has
µ = 0 ⇒ ρ = 0 or ρ − ρ̄ = 0 = π + τ̄.  (81)

Equations (4), (5) and (31) in [33] then imply

ρ = ρ̄ = 0 ⇒ µ = µ̄ = 0 = π + τ̄,  (82)
π = −τ̄ ≠ 0 ⇒ µ = µ̄,  ρ = ρ̄.  (83)
One concludes that the Kundt and Robinson-Trautman subclasses of D_0 have empty intersection, and that the latter consists of PE spacetimes for which both PNDs are non-twisting but diverging. These results, which remain valid for the electrovac class D, just as the two theorems below, are implicit in [14], where the PE metrics concerned form the Einstein space subclasses of the classes labeled by
C00: τ = 0 = µρ,  C0+: τ = 0 ≠ µρ,  C0−: τ ≠ 0 = µρ,  C*: τ ≠ 0 ≠ µρ.  (84)
By the Einstein space specifications S = 0 and w = Λ = const, the boost-field q disappears from the equations (7)-(33) and is not determined by the geometry, in contrast to the situation for perfect fluids S = 0 (cf. § 3.1). Moreover, the set (5)-(6), i.e. the requirement that an US given by (4) exists, is decoupled from (7)-(33) and is not needed to derive (15)-(33) from (7)- (14). From the integrability of the complete set (6)-(33), (66) and the above we conclude:
Theorem 2.1
The closed set (7)-(33) characterizes the class D 0 of PE Petrov type D Einstein spaces, which are precisely those Einstein spaces for which the WPNT directions are HO or, alternatively, those which belong to Barnes' class I, all admitting a one-degree freedom of USs in all regions of spacetime. Barnes' boost-isotropic Kundt classes IA and IC coincide with C 00 , resp. C 0 − , while the Robinson-Trautman members of D 0 constitute C 0 + and C * , form the Einstein space subclasses of IB, resp. ID, and possess non-twisting but diverging PNDs at each point.
The result is in agreement with proposition B.1, which provides criteria for deciding when a Petrov type D spacetime allows for an US, regardless of the structure of the energy-momentum tensor. The hypersurfaceorthogonality (13)- (14) of the WPNT directions corresponds to criterion 5 and is actually equivalent to a onedegree freedom of USs. It is worth to mention that all LRS II spacetimes, i.e. those exhibiting (pseudo-) spherical or plane symmetry, share this property with the D 0and D-metrics. On the other hand, certainly not all PE spacetimes admit an US. For instance, the Gödel solution is an LRS I PE perfect fluid of Petrov type D, described in GHP by (7)- (13) and
S/2 = w = p = −3Ψ/2 = −2µρ = const > 0,  (85)
π = τ = 0,  µ = qρ,  µ̄ = −µ,  ρ̄ = −ρ,  (86)
q > 0 being annihilated by all weighted GHP-derivatives; hence the invariant (µ − µ̄)(ρ − ρ̄) = 4µρ = 4qρ² appearing in criterion 2 of proposition B.1 is strictly negative, and it follows that the Gödel solution does not admit an US. As another example, the spatially-homogeneous Λ = 0 vacuum metrics
ds² = t^{2p₁} dx² + t^{2p₂} dy² + t^{2p₃} dz² − dt²,  (87)
p₁ + p₂ + p₃ = p₁² + p₂² + p₃² = 1,  p₁p₂p₃ ≠ 0,  (88)

attributed to Kasner [39] are PE [58]. If the p_i are all different, there is a complete group G₃ I of isometries and the Petrov type is I. In this case ∂_t^a is the up to reflection unique Weyl principal vector field and hence the only possible US-candidate; however, its shear tensor has the non-zero eigenvalues (1/3 − p_i)/t and hence the spacetime does not admit an US. On the other hand, if two p_i's are equal it follows that p₂ = p₃ = −2p₁ = 2/3 (without loss of generality). Then the line element represents a Petrov type D, non-stationary, plane-symmetric vacuum which, according to theorem 2.1, admits a one-degree freedom of USs (cf. the end of this section).
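The two claims about the Kasner exponents are elementary algebra: with two equal exponents the constraints (88) force (p₁, p₂, p₃) = (−1/3, 2/3, 2/3) once the flat branch p₂ = p₃ = 0 is discarded, and the shear eigenvalues of the ∂_t-congruence are (p_i − 1/3)/t up to an overall sign convention. A sympy check:

```python
import sympy as sp

p1, p2, p3, t = sp.symbols('p1 p2 p3 t')
constraints = [p1 + p2 + p3 - 1, p1**2 + p2**2 + p3**2 - 1, p2 - p3]
sols = sp.solve(constraints, [p1, p2, p3], dict=True)
sols = [s for s in sols if s[p2] != 0]   # drop the flat branch (1, 0, 0)
assert sols == [{p1: -sp.Rational(1, 3), p2: sp.Rational(2, 3), p3: sp.Rational(2, 3)}]
# shear eigenvalues of the t-congruence (up to sign convention)
shear = [(p - sp.Rational(1, 3))/t for p in (p1, p2, p3)]
vals = [sp.simplify(sh.subs(sols[0])) for sh in shear]
assert all(sp.simplify(a - b) == 0
           for a, b in zip(vals, [-sp.Rational(2, 3)/t, sp.Rational(1, 3)/t,
                                  sp.Rational(1, 3)/t]))
```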
In theorem 3 of [1] it is claimed that all vacuum spacetimes admitting an US are static, which would generalize Trümper's result [24] by including Petrov type D. However, this conclusion only holds when µρ ≥ 0. Indeed, a static member of D_0 necessarily admits a rigid (i.e. non-expanding) US, such that µρ ≥ 0, cf. (34). Conversely, when µρ = 0 or µρ > 0 for a PE member, it admits the HO timelike KVF (65), resp. (78), and is thus static. This is in agreement with proposition B.3: regarding µ = ρ = 0, criterion 6" tells that in fact all boost-isotropic spacetimes, with π = −τ̄ wrt a WPNT, are static, while for µρ > 0 one checks that criterion 2" is satisfied by virtue of (11)-(33). In appendix B the freedom of the rigid USs and HO timelike KVF directions (static observers) in these cases is also specified, which is in accordance with a result by Wahlquist and Estabrook [40]. In summary we have: Theorem 2.2 A Petrov type D Einstein space is static if it admits a rigid US. This is precisely the case when the spacetime is PE and has a positive or zero scalar invariant µρ, being the product of the divergences of (non-twisting) Weyl principal null vectors k^a and l^a subject to k^a l_a = −1. For µρ > 0 there is an up to reflection unique rigid US, defined from the geometry by (4) and q = µ/ρ, and parallel to the unique HO timelike KVF direction. For µρ = 0 ⇒ µ = ρ = 0 (classes IA and IC) all USs are rigid USs and have a one-degree freedom, while the HO timelike KVF directions are parametrized by two constants.
For completeness we display standard coordinate forms of the PE Petrov type D Einstein space metrics, as recovered here by (42), (56) and (60), (64), (73), (77).
C00 corresponds to (60) and (64). From (57), (59), (61) and (63) one deduces that

K⊥ = −3x = −3y = K_Σ,  Ψ/2 = −Λ/3 = 4x³ ≠ 0.

Rescaling ζ, u and v by a factor (2x)⁻¹ one arrives at

ds² = 2dζdζ̄/(1 + (Λ/2)ζζ̄)² − 2dudv/(1 − (Λ/2)uv)²,  1 + (Λ/2)ζζ̄ > 0,  Λ ≠ 0.
This represents the Einstein space limit Φ_0 = 0 of Bertotti's static and homogeneous electrovac family with cosmological constant [41,42], exhibiting spatial rotation and boost isotropy (complete group G₆ of isometries). The Λ = 0 limit yields flat Minkowski spacetime.

C0+ and C0− correspond to (60) and (77), resp. (64) and (73). Making use of (67), replacing in the C0+ (C0−) case the coordinate y (x) by r = −(2m)^{1/3}/(x + y), rescaling the remaining coordinates by a factor (2m)^{−1/3} and writing Y ≡ (2m)^{1/3} > 0 one finds

ds² = r²(dξ² + δ cos²(√k ξ) dη²) + dr²/g_k(r) − δ g_k(r) dχ²,
g_k(r) = k − 2m/r − (Λ/3)r²,  k = KY² = K(2m)^{2/3} ∈ {−1, 0, 1},

with δ = 1 for C0+ (δ = −1 for C0−). These solutions have a complete group G₄ of isometries acting on spacelike (timelike) three-dimensional orbits, and for Λ = 0 correspond to Kinnersley's case I (IV) with l = 0. The static region of C0+ (g_k(r) > 0) yields class A in the classification of static Petrov type D vacua by Ehlers and Kundt [5]; C0− is static everywhere and corresponds to class B. Regarding C0+, the subcase k = 1 reproduces after ξ → π/2 − ξ the well-known forms of the spherically symmetric Schwarzschild-Kottler interior and exterior metrics [43,44]; the subcase k = Λ = 0, r > 0 (g_k(r) < 0) gives another form of the plane-symmetric Kasner metrics (cf. supra).
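That g_k(r) with k = 1 (after ξ → π/2 − ξ, so cos → sin) is the Schwarzschild-Kottler metric can be confirmed by checking the Einstein-space condition R_ab = Λ g_ab directly. A minimal verification sketch with hand-rolled Christoffel symbols in sympy, using the standard curvature conventions:

```python
import sympy as sp

th, ph, r, t, m, L = sp.symbols('theta phi r t m Lambda', positive=True)
X = [th, ph, r, t]
gf = 1 - 2*m/r - L*r**2/3
g = sp.diag(r**2, r**2*sp.sin(th)**2, 1/gf, -gf)
gi = g.inv()
n = 4

def Gam(a, b, c):
    # Christoffel symbols Gamma^a_{bc}
    return sum(gi[a, d]*(sp.diff(g[d, b], X[c]) + sp.diff(g[d, c], X[b])
                         - sp.diff(g[b, c], X[d])) for d in range(n))/2

Ga = [[[sp.simplify(Gam(a, b, c)) for c in range(n)]
       for b in range(n)] for a in range(n)]

def Ricci(a, b):
    # R_ab = d_c G^c_ab - d_b G^c_ac + G^c_cd G^d_ab - G^c_ad G^d_cb
    return sp.simplify(
        sum(sp.diff(Ga[c][a][b], X[c]) - sp.diff(Ga[c][a][c], X[b])
            for c in range(n))
        + sum(Ga[c][c][d]*Ga[d][a][b] - Ga[c][a][d]*Ga[d][c][b]
              for c in range(n) for d in range(n)))

Ric = sp.Matrix(n, n, Ricci)
# Einstein-space condition R_ab = Lambda g_ab
assert sp.simplify(Ric - L*g) == sp.zeros(n, n)
```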
C* corresponds to (73) and (77), which gives the line element

ds² = (x + y)^{−2}[dx²/f(x) + f(x)dφ² + dy²/g(y) − g(y)dψ²],  f(x) = x³ + Cx + D > 0,  g(y) = −f(−y) − Λ/3.  (89)
The KVFs ∂_φ{}^a and ∂_ψ{}^a generate the complete, abelian group G2 of isometries. For Λ = 0, (89) is the form of the C-metric obtained by Levi-Civita and recovered by Ehlers and Kundt, and corresponds to Kinnersley's case IIIA. It is generally assumed, as suggested in the original paper [6], that the Kinnersley-Walker form
ds² = (α(ξ + η))^{−2}[dξ²/h(ξ) + h(ξ)dφ² + dη²/k(η) − k(η)dψ²],  h(ξ) = 1 − ξ² − 2mαξ³ > 0,  k(η) = −h(−η)  (90)
equivalently describes the gravitational field of the Λ = 0 C-metric. However, this is not entirely correct. Equating the Lorentz invariants appearing in the right hand sides of the equations in (50)-(52), calculated for the metrics (89) and (90), yields
x = −(2m)^{1/3}(αξ + 1/(6m)),  y = −(2m)^{1/3}(αη − 1/(6m)),  C = −1/(3(2m)^{4/3}),  D = α² − 1/(54m²).  (91)
Hence (90) only covers the range C < 0, D > −2(−C/3)^{3/2}, whereas in general the constant scalar invariants C and D are allowed to take any real value. Yet, the cubic f(x) has discriminant −4C³ − 27D²; thus it has three distinct real roots if and only if

C < −3(D/2)^{2/3} ⇔ C < 0, |D| < 2(−C/3)^{3/2}.  (92)
Thus (91) is consistent precisely in this case, and by further rescaling φ and ψ with a factor α(2m)^{−1/3} one arrives at (90); (92) is equivalent to mα < 1/√27, leading to the physical interpretation of two uniformly accelerating masses. Recently, Hong and Teo [45] introduced a normalized factored form for this situation, which greatly simplifies certain analyses of the C-metric. A further coordinate transformation can be made such that the Schwarzschild metric is comprised as the subcase α = 0. This was further exploited for the full D-class in [19].
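The equivalence in (92) between the sign of the discriminant −4C³ − 27D² and the stated bounds on C and D can be checked numerically; the following sketch (plain Python, function names are ours) compares the two conditions on random samples.

```python
import random

def three_distinct_real_roots(C, D):
    # the cubic x^3 + C x + D has three distinct real roots
    # iff its discriminant -4C^3 - 27D^2 is positive
    return -4*C**3 - 27*D**2 > 0

def condition_92(C, D):
    # condition (92): C < 0 and |D| < 2(-C/3)^(3/2)
    return C < 0 and abs(D) < 2*(-C/3)**1.5

random.seed(0)
for _ in range(10000):
    C = random.uniform(-5, 5)
    D = random.uniform(-5, 5)
    assert three_distinct_real_roots(C, D) == condition_92(C, D)
```

The agreement on generic samples reflects the algebraic identity 4(−C)³ > 27D² ⇔ |D| < 2(−C/3)^{3/2} for C < 0.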
Finally, we write down the equations which determine all USs for a member of C*, in the coordinates y and ψ of (89). Let

U^a = −((x + y)/√(g(y))) ∂_ψ{}^a,  V^a = −(x + y)√(g(y)) ∂_y{}^a

in the static region and

U^a = −(x + y)√(−g(y)) ∂_y{}^a,  V^a = ((x + y)/√(−g(y))) ∂_ψ{}^a

in the non-static region, and gauge-fix k^a = (U^a + V^a)/√2, l^a = (U^a − V^a)/√2. The unit timelike field (4) is an US if and only if (6) holds; this translates to q = q(y, ψ) and

g(y)(q ± 1)q_{,y} + (q ∓ 1)(q_{,ψ} + g′(y)q) = 0.  (93)
Here and below the upper (lower) signs should be taken in the static (non-static) region. For solutions q = q(y), i.e. q_{,ψ} = 0, direct integration of (93) yields

g(y)(q(y) ∓ 1)² = E± q(y),  (94)
with E₊ > 0 and E₋ < 0 constants of integration. Notice that in the static region the solution q(y) = 1 yields the unique static observer. In the case q_{,ψ} ≠ 0 the solutions get implicitly determined by an equation of the form ψ = ψ(y, q), on applying the method of characteristics for first-order PDEs (see e.g. [46]). In the subcase where C = D = Λ = 0, g(y) reduces to y³ and this equation reads

ψ = −[q^{1/3}/(y(q ∓ 1)^{2/3})]² ∫ (q ∓ 1)^{4/3}/(3q^{5/3}) dq + Z(y(q ∓ 1)^{2/3}/q^{1/3}).  (95)

Here Z is a free function of its argument, making the one-degree freedom of USs more explicit. Replacing (73) by (60) does not alter these equations, i.e., the above remains valid for C0+. Then C = D = 0 is equivalent to x = K_⊥ = 0, cf. (57) and (59); for Λ = 0 the non-static region y < 0 corresponds to the plane-symmetric Kasner vacuum metrics, where y = −(3t/2)^{−2/3} and a rescaling of the other coordinates recovers (87), the USs being determined by (94) and (95) with the lower sign.
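That the first integral (94) indeed solves (93) for ψ-independent q can be verified symbolically. The sketch below (sympy assumed available; upper-sign, static case) differentiates the implicit relation (94) and substitutes back into (93) with q_{,ψ} = 0.

```python
import sympy as sp

y = sp.symbols('y')
E = sp.symbols('E', positive=True)      # constant of integration E_+
g = sp.Function('g')(y)
q = sp.Function('q')(y)

# first integral (94), upper sign: g(y) (q - 1)^2 = E q
dq = sp.solve(sp.diff(g*(q - 1)**2 - E*q, y), sp.diff(q, y))[0]

# ODE (93) with q_psi = 0, upper sign: g (q + 1) q_y + (q - 1) g' q = 0
residual = g*(q + 1)*dq + (q - 1)*sp.diff(g, y)*q

# eliminate E using (94): E = g (q - 1)^2 / q
residual = residual.subs(E, g*(q - 1)**2/q)
assert sp.simplify(residual) == 0
```

The lower-sign (non-static) case works the same way with (q − 1) replaced by (q + 1) and E₋ < 0.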
PERFECT FLUID GENERALIZATIONS OF THE C-METRIC
Line element
We resume the integration of class I started at the end of § 2.3. We thereby focus on the subclass ID characterized by τ ≠ 0 ≠ µρ. Let us first summarize what we did so far. We started off with the closed set (6)-(33) of first-order GHP equations in the seven (weighted) real variables Ψ, S, w, µ, ρ, q, ð′τ and the complex variable τ. These variables are equivalent to two dimensionless spin and boost gauge fields, e.g. τ̄/τ and µ/ρ, and seven real scalar invariants. The boost and spin gauge fields could serve to invariantly fix the tetrad (the ID members being therefore anisotropic), but can be further ignored. For the C*-Einstein spaces, S = 0 and w = Λ = const, and we remarked that q is not a part of the intrinsic describing set of variables. Hence we end up with four real scalar invariants in this subcase. These invariants are equivalent to the two constants C and D and two independent functions x and y, which we took as coordinates and in terms of which, on adding two coordinates φ and ψ related to the symmetries, the corresponding C*-metric can be expressed. In the perfect fluid case S ≠ 0, (8) gives the boost field q which, starting from an arbitrary gauge (k^a, l^a), turns u^a given by (4) into the invariantly-defined fluid four-velocity. The four invariants and their use persist, just as the coordinate φ. However, the scalar invariants w and S are no longer constant and ψ is no longer a suitable coordinate. Thus we need one more scalar invariant for our description and one remaining coordinate complementary to y.
For the first purpose it is natural to look at the kinematics of the fluid, which are fully determined by

b ≡ 2∇_{(a}u_{c)}m^a m̄^c = ∇_a u_c v^a v^c = θ/3,  (96)
u̇ ≡ v^a u̇_a,  u̇⊥_a ≡ 2m_{(a}m̄_{c)}u̇^c.  (97)
Here

v^a ≡ (1/√(2q))(qk^a − l^a)  (98)

is the intrinsic spacelike vector field, which determines at each point the up to reflection unique normalized vector orthogonal to u^a and lying in the PND plane Σ, whilst u̇ and u̇⊥_a are the component along v^a, resp. the projection onto Σ⊥, of the acceleration u̇_a. In analogy with (96) we define the invariant
b̃ ≡ 2∇_{(a}v_{c)}m^a m̄^c.  (99)

The relation with GHP quantities is

b = (µ − qρ)/√(2q),  b̃ = −(µ + qρ)/√(2q),  (100)
u̇ = (2q)^{−3/2}(Þ′q + qÞq) = Þq/√(2q) − b,  (101)
−u̇⊥_a = τm̄_a + τ̄m_a ≡ α_a = d_a x/(x + y).  (102)
Notice that (100) is equivalent to

b u_a − b̃ v_a = µk_a − ρl_a ≡ β_a = d_a y/(x + y).  (103)

In combination with (53)-(54), (102) and (103) imply

2ττ̄ = u̇⊥_a u̇⊥^a = x³ + Cx + D,  (104)
2µρ = b̃² − b² = y³ + Cy − D − w/3.  (105)
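The middle equality of (105) follows directly from (100); a quick symbolic check (sympy, with µ, ρ, q treated as positive reals for simplicity) confirms b̃² − b² = 2µρ.

```python
import sympy as sp

mu, rho, q = sp.symbols('mu rho q', positive=True)

# (100): b = (mu - q rho)/sqrt(2q),  btilde = -(mu + q rho)/sqrt(2q)
b = (mu - q*rho)/sp.sqrt(2*q)
btilde = -(mu + q*rho)/sp.sqrt(2*q)

# (105): btilde^2 - b^2 = 2 mu rho
assert sp.simplify(btilde**2 - b**2 - 2*mu*rho) == 0
```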
We choose b as the final describing invariant and use b̃ and u̇ as auxiliary variables. In view of (100)-(102) one deduces that the differential information for S, w and b comprised in (6)-(33) is precisely

D_a S = −S u̇_a,  (106)
d_a w = −u(w)u_a,  u(w) = −3bS,  (107)
d_a b = −u(b)u_a,  u(b) = −v(b̃) + b̃(u̇ − b̃) − S/2,  (108)
v(b̃) = −((x + y)/2)(3y² + C).  (109)
From (109) it follows that b̃ is non-constant, such that we may see the second part of (108) as a definition of u̇. (107) is nothing but the energy resp. momentum conservation equations for a perfect fluid subject to D_a w = 0. The first part of equation (108) confirms that D_a θ = 0 [1,23], whilst the second implies again that the expansion scalar does not vanish in general (cf. the end of § 2.1 and below). For the second purpose we rely on the hypersurface-orthogonality of u^a by assumption: zero-weighted real scalar fields t and I exist such that

d_a t = I u_a.  (110)
The integrability condition hereof is

D_a I = −I u̇_a = −I(u̇⊥_a + u̇ v_a),  (111)

which is equivalent to

ðI = τI,  v(I) = −u̇ I.  (112)
From b̃ ≠ 0, (103) and (110) it follows that t is functionally independent of y (and of x and φ) and we take it as the fourth coordinate. With the aid of (110)-(111), (106) and the first parts of (107) and (108) precisely tell that A ≡ S/(2I), w and b only depend on t. Hence b̃ = b̃(y, t) from (105). On using (102)-(103), the first part of (112) is equivalent to J = J(y, t), where J ≡ (x + y)/(I b̃). Eliminating u̇ between the second parts of (108) and (112), and using v(x + y) = −b̃(x + y) implied by (102)-(103), yields

b̃² v(J) = b̃ J u(b) + A(x + y).  (113)
Inverting (103) and (110) we get

(x + y)u_a = b̃J d_a t,  (x + y)v_a = bJ d_a t − d_a y/b̃,  (114)

or dually

−u^a/(x + y) = ∂_t{}^a/(b̃J) + b ∂_y{}^a,  −v^a/(x + y) = b̃ ∂_y{}^a.  (115)
Thus in the chosen coordinates (113) reads

∂_y J(y, t) = b̃(y, t)^{−3}[b′(t) − A(t)].  (116)

From (42), (73), ds²_Σ = (x + y)²(v_a v_b − u_a u_b)dx^a dx^b and the only remaining equation (107) we obtain the line element

ds² = (x + y)^{−2}[ds²_⊥ + ds²_Σ],  (117)
ds²_⊥ = dx²/(2ττ̄) + 2ττ̄ dφ²,  2ττ̄ = x³ + Cx + D,  (118)
ds²_Σ = (bJ dt − dy/b̃)² − (b̃J dt)²,  (119)

where

b = b(t),  w = w(t),  A ≡ b̃JS/(2(x + y)) = A(t),  (120)
w′(t) = 6b(t)A(t) ⇔ d_a w = 6bA d_a t = 3bS u_a,  (121)
b̃ = b̃(y, t) = [y³ + Cy − D + b(t)² − w(t)/3]^{1/2},  (122)
J = J(y, t) = [b′(t) − A(t)] ∫ dy/b̃(y, t)³ + L(t),  (123)
with L(t) a free function of integration. The solutions are defined and regular in the coordinate regions

2ττ̄ ≡ x³ + Cx + D > 0,  (124)
b̃(y, t)² ≡ y³ + Cy − D + b(t)² − w(t)/3 > 0.  (125)
Notice that we nowhere used S ≠ 0 explicitly in the above integration procedure. Therefore, the above line element describes the complete class ID, including the C*-vacuum limits which correspond to w(t) = Λ and A(t) = 0, cf. (120). In this case the coordinate transformation (t, y, x, φ) → (ψ, y, x, φ), which connects (117)-(123) to the original form (89), eliminates b(t) and L(t) and follows from (4), (78), (98), (100) and (114), giving
d_a ψ = −((x + y)/(2µρ))(µk_a + ρl_a) = ((x + y)/(2µρ))(b̃ u_a − b v_a) = J d_a t + (b/(b̃(b̃² − b²))) d_a y.  (126)

Hence, ψ = ψ(y, t) and it is the solution of the consistent system

∂_t ψ = J,  ∂_y ψ = b/(b̃(b̃² − b²)),  (127)

the integrability condition hereof being precisely (116) with A(t) = 0. The transformation is singular at degenerate roots of b̃² and at the union of the black hole and acceleration horizons [19,47]

b̃² − b² ≡ −f(−y) − Λ/3 = 0,
which separate the static from the non-static regions. Let us emphasize that the b(t)-freedom is essentially a freedom in the choice of coordinates. The form (89) describes the full C-metric manifold; y can take any value, and the sign of −f(−y) − Λ/3 is positive in the static region and negative in the non-static region. In the form (117)-(119) y is always spacelike and we have constructed t as a synchronized timelike coordinate corresponding to an US u^a, with associated expansion rate θ(t) = 3b(t); for fixed b(t) the range of y is constrained by (125) and only this subregion of the manifold is described by the coordinates. E.g. (117)-(123) with A(t) = 0, w(t) = Λ, b(t) = 0 and L(t) = 1 (which formally reduces to (89) on putting t = ψ additionally) only describes the static part of the C-metric, the vector field u^a then lying along the unique HO timelike KVF direction. However, in the neighborhood of any point with coordinate label y, the metric can be described by (117)-(123), by choosing b(t)² > f(−y) + Λ/3.
Neither did we use τ ≠ 0. This implies that the line element of the complete class IB, characterized by µρ ≠ 0 = τ and constituted by all LRS II Einstein spaces and shear-free perfect fluids with D_a w = 0, is described by (117)-(123), with (118) replaced by (60). This class was first described by Kustaanheimo [48] and rediscovered by Barnes [1], both using different coordinates (see also (16.49), (16.51) in [16]).
Of course, the result (117)-(123) could have been obtained without referring to GHP calculus. Barnes [1] showed that the metric can be written in the form

ds² = (x + Y)^{−2}[f^{−1}dx² + f dφ² + dz² − e^{2Z}dt²],  (128)

with f = f(x). Indeed, from (114), (116) and (122) it follows that (x + y)v_a is exact: (x + y)v_a = d_a z; z is used as a coordinate instead of y, and one puts Jb̃ ≡ e^Z, Z = Z(y, t). Notice from (103) that now

y = Y(z, t),  Y_{,z} = −b̃,  θ = 3Y_{,t}e^{−Z}.  (129)
Let us directly attack the field equations in these coordinates, thereby correcting [1]. One can check that only four of the field equations are not identically satisfied (the indices 1 to 4 label the Weyl principal tetrad vectors naturally associated with (128)):

G_{34} = 0 = −Y_{,tz} + Y_{,t}Z_{,z},  (130)
G_{11} − G_{33} = 0 = f′ − 2Y_{,zz} + (x + Y)(Z_{,z}² + Z_{,zz} − f″/2),  (131)
G_{11} = p = 2[e^{−2Z}(Y_{,tt} − Y_{,t}Z_{,t}) − Y_{,z}Z_{,z} − f′/2](x + Y) + (Z_{,zz} + Z_{,z}²)(x + Y)² + 3(Y_{,z}² + f − Y_{,t}²e^{−2Z}),  (132)
G_{33} + G_{44} = S = 2(x + Y)[Y_{,zz} − Y_{,z}Z_{,z} + e^{−2Z}(Y_{,tt} − Y_{,t}Z_{,t})].  (133)
Hence, if supplemented with θ ∼ Y_{,t} = 0, these equations are the ones obtained in Barnes [1]: equation (130) ≡ v(θ) = 0 was missed out, and both equations (132) and (133) differ from equations (4.23), resp. (4.24) in [1] by a term 2(x + Y)Y_{,tt}e^{−2Z}. Thus, it is clear that with these differences a correct non-expanding solution can be found, but the analysis of expanding solutions will be incorrect.
Differentiating (132) twice wrt x yields d⁴f(x)/dx⁴ = 0, whence

f(x) = ax³ + bx² + cx + d.  (134)
Substituting this in equation (132), and equating coefficients of powers of x, leads to
Z_{,zz}(z, t) + Z_{,z}(z, t)² = 3aY(z, t) − b,  (135)
Y_{,zz}(z, t) = c/2 − bY(z, t) + (3/2)aY(z, t)².  (136)
The solutions Y(z, t) of the last equation are defined by

∫^{Y(z,t)} dr/√(ar³ − br² + cr + f₁(t)) − z + f₂(t) = 0,  (137)
which can be solved for z in terms of Y. This eventually suggests transforming coordinates from (z, t) into (y, t), with y = Y(z, t). Rescaling and translating coordinates allows us to set a = 1 and b = 0. One can check that the remaining equations lead exactly to equations (116) and (121), recovering solution (117)-(123).
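That (137) provides a first integral of (136) can be verified symbolically: multiplying (136) by 2Y_{,z} and integrating once gives Y_{,z}² = aY³ − bY² + cY + f₁(t), whose inversion is (137). A sympy sketch of this step (at fixed t, so f₁ is treated as a constant):

```python
import sympy as sp

z = sp.symbols('z')
a, b, c, f1 = sp.symbols('a b c f_1')
Y = sp.Function('Y')(z)

# second-order ODE (136): Y'' = c/2 - b Y + (3/2) a Y^2
ode136 = sp.diff(Y, z, 2) - (c/2 - b*Y + sp.Rational(3, 2)*a*Y**2)

# candidate first integral: Y'^2 - (a Y^3 - b Y^2 + c Y + f1) = const
first_integral = sp.diff(Y, z)**2 - (a*Y**3 - b*Y**2 + c*Y + f1)

# its z-derivative is exactly 2 Y' times the ODE, hence it is conserved
check = sp.simplify(sp.diff(first_integral, z) - 2*sp.diff(Y, z)*ode136)
assert check == 0
```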
Properties
Consider the metric (117)-(123), for which we assume henceforth that it describes a perfect fluid (A(t) ≠ 0). In contrast to the Einstein subcase, u^a is now the unique invariantly-defined fluid velocity, and the expansion rate θ(t) and energy density w(t) − Λ of the fluid are scalar invariants. Expressions for the pressure p + Λ and the components u̇⊥a and u̇ of the acceleration follow from (102), (108)-(109), (115) and (120):

p = 2(x + y)A(t)/(b̃J)(y, t) − w(t),  u̇⊥a = −(x + y)f(x)∂_x{}^a,
u̇ = b̃(y, t) − ((x + y)/b̃(y, t))[(3y² + C)/2 + (b′(t) − A(t))/(b̃J)(y, t)].
The fluid is non-shearing and non-rotating, i.e. u^a is an US. Because of (13)-(14) criterion 5 of proposition B.1 is satisfied, such that there is a one-degree freedom of USs. These can be found by taking q = 1 in (4) and (98), hereby fixing the (k^a, l^a)-gauge geometrically, and solving (6) with q replaced by Q, the USs then being (Qk^a + l^a)/√(2Q). This yields Q = Q(y, t) and

b̃J[b̃(Q + 1) + b(Q − 1)]Q_{,y} + (Q − 1)Q_{,t} = −2Q(Q − 1)b̃(b̃J)_{,y} = −(Q(Q − 1)/b̃)[(3y² + C)b̃J + 2(b′ − A)].
If the class is to be used as a cosmological model, it is interesting to discuss the intrinsic freedom. By (121) and (114) we have that 2A(t)d_a t = Su_a and J(y, t)d_a t = (x + y)u_a/b̃ are invariantly-defined one-forms, and hence so is L(t)d_a t because of (123). It follows that (L/A)(t) is a scalar invariant. Moreover, as A(t)d_a t is exact we may remove the only remaining coordinate freedom on t by putting A(t) = 1, such that the conservation of energy equation (121) can be considered as a definition θ(t) = w′(t)/2. Hence, in this most general picture for S ≠ 0, the scalar constants C, D and invariants (L/A)(t), w(t) characterize the model within the class. Notice that the presence of two invariantly-defined, distinguishing free functions could have been predicted, since after elimination of u̇, there are two scalar invariants u(b) and u(S) remaining unprescribed in the system of equations (106)-(109).
In this fashion however, the physical implications remain obscure: it would be nice to have a free function with a clear physical interpretation, instead of L/A. Spacetimes with L(t) = 0 have w(t) as the only free function. If L(t) ≠ 0, L(t) can alternatively be fixed to 1 by a t-coordinate transformation. In this case the metric structure functions display the expansion scalar, the energy density and the pressure (since 2A(t)d_a t = (w + p)u_a); these are related by energy conservation (121), where w(t) and A(t) can be chosen freely. Alternatively, one can subdivide further into θ = 0 and θ ≠ 0. In the case θ = 0, the energy density w − Λ is constant because of (121) and can be chosen freely, just as A(t). In the most interesting case θ ≠ 0, w(t) and θ(t) can be chosen freely, determining Su_a via (121). Thus class ID provides a class of anisotropic cosmological models with arbitrary evolution of matter density and (non-zero) expansion.
Regarding symmetry, all perfect fluid ID models admit at least one KVF ∂_φ{}^a given by (74), which at each point yields an invariantly-defined spacelike vector orthogonal to u̇⊥a and lying in Σ⊥. If φ is chosen to be a periodic coordinate, with range given by [−πE, πE[, the spacetime is cyclically symmetric. We will then refer to the region F ≡ {f(x) = 0}, where the norm of ∂_φ{}^a vanishes, as the axis of symmetry [47, 59]. Finding the complete group of isometries and their nature is trivial in our approach. The functions x, y, w and L/A are invariant scalars, such that K^a d_a x = K^a d_a y = K^a d_a w = K^a d_a(L/A) = 0 for any KVF K^a. As the ID models are anisotropic, it follows that the complete isometry group is at most G2, and if it is G2, both w and L/A are constant. Conversely, when w and L/A are constant we have θ ≡ 3b = 0 from (121), b̃ = b̃(y) from (122) and J(y, t) = −A(t)F₂(y) from (123). By redefining the time coordinate such that A(t) = 1 one sees from (117)-(123) that ∂_t{}^a is a HO timelike KVF. We conclude that the ID perfect fluid models have at least one spacelike KVF ∂_φ{}^a, which may be interpreted as the generator of cyclic symmetry. They admit a second independent KVF if and only if both scalar invariants w and L/A are constant, in which case the spacetimes are static and the complete group of isometries is abelian G2, generated by ∂_φ{}^a and ∂_t{}^a.
Consider the case where f(x) has 3 real non-degenerate roots x_i, i.e. (92) holds. If x₁ < x₂ < x₃ then f(x) > 0 for all x ∈ ]x₁, x₂[. Furthermore, we let φ be a periodic coordinate. The ratio between circumference and radius of a small circle around the axis, x = x₁ or x = x₂, is given by
lim_{x→x₂⁻} 2πE√(f(x)) / ∫_x^{x₂} f^{−1/2}dx = −πE(3x₂² + C),  (138)

respectively

lim_{x→x₁⁺} 2πE√(f(x)) / ∫_{x₁}^{x} f^{−1/2}dx = πE(3x₁² + C).  (139)
It is only possible to choose the parameter E such that the complete axis is regular if 3x₁² + C = −(3x₂² + C). However, eliminating C and D between this equation and f(x₁) = f(x₂) = 0 implies x₁ = x₂.
Consequently, if f (x) has three real non-degenerate roots, the spacetime contains a conical singularity. This echoes the properties of the C-metric [6,47], and suggests the presence of a cosmic string.
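The elimination step can be verified symbolically: substituting the regularity condition and f(x₁) = 0 into f(x₂) = 0 leaves −(x₂ − x₁)³/2 = 0, forcing the two roots to coincide. A sympy sketch (symbol names are ours):

```python
import sympy as sp

x1, x2 = sp.symbols('x_1 x_2', real=True)

# regularity condition 3 x1^2 + C = -(3 x2^2 + C)  =>  C = -3(x1^2 + x2^2)/2
C = -sp.Rational(3, 2)*(x1**2 + x2**2)
# f(x1) = 0  =>  D = -x1^3 - C x1
D = -x1**3 - C*x1

# the remaining condition f(x2) = 0 then reduces to -(x2 - x1)^3/2 = 0
residual = sp.expand(x2**3 + C*x2 + D)
assert sp.simplify(residual + (x2 - x1)**3/2) == 0
```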
CONCLUSIONS AND DISCUSSION
A new class of Petrov type D exact solutions of Einstein's field equation in a perfect fluid with spatially homogeneous energy density has been presented. It consists of all anisotropic such fluids with shearfree normal four-velocity, and generalizes a previously found class to include non-zero expansion. The analysis and integration was rooted in the 2+2 structure of the metric and use of invariant quantities. This approach clarified the link with the vacuum C-metric limit, and certain properties of the vacuum case are inherited. However, the presence of the perfect fluid defines generically two extra invariants. For the expanding solutions, this translates into an evolution of energy density and expansion which can be chosen freely. This subclass contains only one (potentially cyclic) symmetry.
The viability of these solutions as a low-symmetry class of cosmological models is subject to further research. In particular, it should be clarified whether a thermodynamic interpretation of the perfect fluid can be made [50]; it is certainly not possible to prescribe a barotropic equation of state p = p(w). The relation with the C-metric also suggests to further examine the arising coordinate ranges, properties of horizons, and whether an interpretation as a perturbation for small masses of a known PF solution exists for certain members.

Appendix A: The GHP formalism

The GHP formalism [16,27] is a complex, scalar formalism, which is a 'weighted' version of the Newman-Penrose (NP) tetrad formalism. Use is made of a complex null tetrad (e₁{}^a, e₂{}^a, e₃{}^a, e₄{}^a) ≡ (m^a, m̄^a, l^a, k^a), where

k^a l_a = −1,  m^a m̄_a = 1  (A1)

and all other inner products vanish. To put it in other words, at each point one takes a timelike plane, two vectors k^a and l^a lying along its real null directions, and two vectors m^a and m̄^a lying along the complex conjugate null directions of the orthogonal spacelike plane, these pairs of vectors satisfying the normalization conditions (A1). We use the labels â, b̂ etc. for the tetrad indices. The basic variables of the formalism are the spin coefficients (Γ_{âb̂ĉ} ≡ e_{â}{}^{a}∇_c(e_{b̂})_a e_{ĉ}{}^{c} = −Γ_{b̂âĉ})
κ = Γ_{414},  τ = Γ_{413},  σ = Γ_{411},  ρ = Γ_{412},  (A2)
ν = Γ_{233},  π = Γ_{234},  λ = Γ_{232},  µ = Γ_{231},  (A3)

the 9 independent components of the traceless part of the Ricci tensor S_ab = R_ab − (1/4)Rg_ab,

Φ₀₀ = (1/2)S_ab k^a k^b,  Φ₂₂ = (1/2)S_ab l^a l^b,  (A4)
Φ₀₁ = (1/2)S_ab k^a m^b,  Φ₁₂ = (1/2)S_ab l^a m^b,  (A5)
Φ₀₂ = (1/2)S_ab m^a m^b,  Φ₁₁ = (1/2)S_ab(k^a l^b + m^a m̄^b),  (A6)

with Φ_{ji} = Φ̄_{ij}, the multiple

Π ≡ R/24  (A7)

of the Ricci scalar, and the 10 independent components of the Weyl tensor C_abcd,

Ψ₀ = C_abcd k^a m^b k^c m^d,  Ψ₄ = C_abcd l^a m̄^b l^c m̄^d,  (A8)
Ψ₁ = C_abcd k^a l^b k^c m^d,  Ψ₃ = C_abcd l^a k^b l^c m̄^d,  (A9)
Ψ₂ = C_abcd k^a m^b m̄^c l^d.  (A10)
Changes of the tetrad leaving the null directions spanned by k^a, l^a, m^a and m̄^a invariant, and at the same time preserving the normalization conditions (A1), consist of boosts

k^a → A k^a,  l^a → A^{−1} l^a  (A11)

and spatial rotations

m^a → e^{iθ} m^a.  (A12)

Quantities transforming under (A11)-(A12) as
η → A^{(w_p+w_q)/2} e^{i(w_p−w_q)θ/2} η  (A13)

are called well-weighted of type (w_p, w_q) or (w_p, w_q)-weighted (zero-weighted in the case of type (0, 0)). They have boost-weight w_B(η) = (w_p + w_q)/2 and spin-weight w_S(η) = (w_p − w_q)/2. One can check that the GHP basic variables are well-weighted, their weights following from the definitions (A2)-(A10) and (A11)-(A13); see also equation (7.36) in [16]. E.g. w_B(ν) = −2, w_S(ν) = −1, implying ν is of type (−3, −1). The following derivative operators are defined such that a well-weighted quantity η is transformed into a well-weighted quantity:
D_â η = e_â(η) + w_B(η)Γ_{34â} η + w_S(η)Γ_{12â} η.  (A14)
When η is of type (w_p, w_q) one can check that w_B(D_â η) = w_B(η) + w̃_B(â) and w_S(D_â η) = w_S(η) + w̃_S(â), where

w̃_B(â) = 1 for â = 4,  −1 for â = 3,  0 for â = 1, 2;
w̃_S(â) = 1 for â = 1,  −1 for â = 2,  0 for â = 3, 4.
One uses the notation
ð ≡ D 1 , ð ′ ≡ D 2 , Þ ′ ≡ D 3 , Þ ≡ D 4 .(A15)
Notice that the differential of zero-weighted scalars f can be expressed as

d_a f = −Þ′f k_a − Þf l_a + ð′f m_a + ðf m̄_a  (A16)
= −l(f)k_a − k(f)l_a + m̄(f)m_a + m(f)m̄_a  (A17)
= −u(f)u_a + v(f)v_a + m̄(f)m_a + m(f)m̄_a,  (A18)
where u a and v a are related to k a and l a according to (4), resp. (98).
The basic (or 'structure') equations of the GHP formalism are (a) the commutator relations of the weighted derivatives, in the joint D_â notation given by

[D_â, D_b̂]η = 2Γ^{ĉ}{}_{[âb̂]}D_ĉη + w_B(η)(R_{34âb̂} + 2Γ_{3ĉ[â}Γ^{ĉ}{}_{|4|b̂]})η + w_S(η)(R_{12âb̂} + 2Γ_{1ĉ[â}Γ^{ĉ}{}_{|2|b̂]})η + w̃_B(b̂)Γ_{34â}D_b̂η + w̃_S(b̂)Γ_{12â}D_b̂η − w̃_B(â)Γ_{34b̂}D_âη − w̃_S(â)Γ_{12b̂}D_âη;
(b) 12 complex Ricci identities (or 'equations'), namely

e_ĉ(Γ_{âb̂d̂}) − e_d̂(Γ_{âb̂ĉ}) = R_{âb̂ĉd̂} − 2Γ_{âê[ĉ|}Γ^{ê}{}_{b̂|d̂]} − 2Γ_{âb̂ê}Γ^{ê}{}_{[ĉd̂]}

with [âb̂] = [14], [23] (the complex conjugates corresponding to [âb̂] = [24], [13]); (c) the Bianchi identities (or 'equations')

e_{[f̂}(R_{|âb̂|ĉd̂]}) = −2R_{âb̂ê[ĉ}Γ^{ê}{}_{d̂f̂]} + Γ^{ê}{}_{â[ĉ}R_{d̂f̂]êb̂} − Γ^{ê}{}_{b̂[ĉ}R_{d̂f̂]êâ}.
One can show that, after writing the directional derivatives e_â in terms of the weighted derivatives D_â, these basic equations (a)-(c) form a consistent, closed system of PDEs in the variables (A2)-(A10) and with formal derivative operators D_â. Compared to the NP formalism, the 6 complex Ricci identities which concern directional derivatives of the non-well-weighted NP spin coefficients α, β, γ and ε (corresponding to [âb̂] = [12], [34]) have been absorbed in the commutator relations (a). Explicitly, for a (w_p, w_q)-weighted scalar one gets

[Þ, Þ′](η) = (τ̄ + π)ð(η) + (τ + π̄)ð′(η) + (κν − πτ + Π − Φ₁₁ − Ψ₂)w_p η + (κ̄ν̄ − π̄τ̄ + Π − Φ₁₁ − Ψ̄₂)w_q η,  (A19)
[ð, ð′](η) = (µ̄ − µ)Þ(η) + (ρ̄ − ρ)Þ′(η) + (λσ − µρ − Π − Φ₁₁ + Ψ₂)w_p η − (λ̄σ̄ − µ̄ρ̄ − Π − Φ₁₁ + Ψ̄₂)w_q η,  (A20)
[Þ, ð](η) = π̄ Þ(η) − κÞ′(η) + ρ̄ ð(η) + σð′(η) + (κµ − σπ − Ψ₁)w_p η + (κ̄λ̄ − π̄ρ̄ − Φ₀₁)w_q η,  (A21)
together with the equations obtained by applying the complex conjugate and/or prime dual operation to (A21). This prime dual operation is generated by interchanging k^a ↔ l^a and m^a ↔ m̄^a, which comes down to

κ ↔ −ν,  τ ↔ −π,  σ ↔ −λ,  ρ ↔ −µ,  (A22)
Φ_{ij} ↔ Φ_{2−i 2−j},  Ψ_i ↔ Ψ_{4−i},  (A23)
Þ ↔ Þ′,  ð ↔ ð′.  (A24)

The interchange (A24) means that (Þη)′ = Þ′η′ etc., and is due to (A14) and w_B(η′) = −w_B(η), w_S(η′) = −w_S(η), i.e. w_p(η′) = −w_p(η), w_q(η′) = −w_q(η). Regarding complex conjugation one has that the conjugates of Þη and ðη are Þη̄ and ð′η̄, and w_B(η̄) = w_B(η), w_S(η̄) = −w_S(η), i.e. w_p(η̄) = w_q(η), w_q(η̄) = w_p(η).
Explicitly, the 12 complex Ricci identities read

Þτ − Þ′κ = (τ + π̄)ρ + (τ̄ + π)σ + Φ₀₁ + Ψ₁,  (A25)
ðρ − ð′σ = (ρ − ρ̄)τ + (µ − µ̄)κ + Φ₀₁ − Ψ₁,  (A26)
Þσ − ðκ = (ρ + ρ̄)σ + (π̄ − τ)κ + Ψ₀,  (A27)
Þρ − ð′κ = ρ² + σσ̄ − κ̄τ + κπ + Φ₀₀,  (A28)
Þ′σ − ðτ = −σµ − λ̄ρ − τ² + κν̄ − Φ₀₂,  (A29)
Þ′ρ − ð′τ = −µ̄ρ − λσ − ττ̄ + κν − 2Π − Ψ₂  (A30)

and their prime duals (A25)′-(A30)′. Finally, the Bianchi identities involve weighted derivatives of the Riemann tensor components. In full generality they are given in ref. [16], (7.32a-k), or [32], (4.12.36-41).
The formalism is especially suited for situations where two null directions are singled out by the geometry, such that k^a and l^a can be chosen along them. In particular, the Weyl tensor of a Petrov type D spacetime has precisely two PNDs; choosing k^a and l^a along them is equivalent to condition (11), and a complex null tetrad realizing this condition is called a Weyl principal null tetrad (WPNT). When (7) and (11) are both satisfied, the Bianchi identities reduce to

0 = σ(2Φ₁₁ + 3Ψ₂) − λ̄Φ₀₀,  (A31)
ÞΨ₂ + Þ′Φ₀₀ + 2ÞΠ = ρ(2Φ₁₁ + 3Ψ₂) − µΦ₀₀,  (A32)
ÞΦ₁₁ + Þ′Φ₀₀ + 3ÞΠ = 2(ρ + ρ̄)Φ₁₁ − (µ + µ̄)Φ₀₀,  (A33)
ðΨ₂ + 2ðΠ = −τ(2Φ₁₁ − 3Ψ₂) + ν̄Φ₀₀,  (A34)
ðΦ₁₁ − 3ðΠ = 2(τ − π̄)Φ₁₁ − ν̄Φ₀₀ + κΦ₂₂,  (A35)
ðΦ₀₀ = κ(2Φ₁₁ − 3Ψ₂) − π̄Φ₀₀  (A36)
and their prime duals (A31)'-(A36)'. In general, the GHP formalism may be used to find a class of solutions, defined by a particular set of properties. One first translates these properties in terms of GHP variables, yielding (algebraic or differential) constraints on the system of basic equations, then recloses the resulting extended system (integrability analysis) and finally describes the corresponding metrics in terms of coordinates (integration). These coordinates are four suitable, functionally independent zero-weighted scalars f ; they may be combinations of (derivatives of) basic variables, appearing in the reclosed system S itself, or 'external' coordinates associated to HO vector fields due to Frobenius' theorem. The geometric duals of the null tetrad vectors, and hence the metrics g ab = −2k (a l b) + 2m (a m b) , are obtained by inverting (A16) for the chosen f 's. Eventually the remaining equations of S are written in terms of these coordinates and the resulting PDE's are solved as far as possible. We refer to [51] for enlightening discussions, and to e.g. [52] or this work for illustrations. In particular for Petrov type D spacetimes, notice that zero-weighted combinations of WPNT spin coefficients and their weighted derivatives (e.g. µρ or ð ′ τ ) are scalar (Lorentz) invariants x, which are thus annihilated by any present KVF K a , K(x) = L K x = 0. This facilitates the detection of KVFs. More generally, zero-weighted tensor fields T ab... , algebraically constructed from the Riemann tensor, WPNT vectors and covariant derivatives thereof, are invariantly-defined by the geometry, and L K T ab... = 0.
Appendix B: (Rigid) shearfree normality and staticity of Petrov type D spacetimes
Consider (an open region of) a spacetime and a unit timelike vector field u^a defined on it. Choose a null vector field k^a. At each point, k^a and u^a span a timelike plane Σ, the first null direction of which is spanned by k^a. Construct the null vector field l^a by taking at each point the unique vector lying along the second null direction and satisfying k^a l_a = −1. Then u^a is decomposed as in (4), where q = A², A = −(√2 k^a u_a)^{−1}. The field v^a defined in (98) determines at each point the up to reflection unique unit spacelike vector lying in Σ and orthogonal to u^a. The electric and magnetic parts of the Weyl tensor wrt u^a can be decomposed as

E_ab ≡ C_acbd u^c u^d = (Ψ₂ + Ψ̄₂)[v_a v_b − m_{(a}m̄_{b)}] + ((Ψ₄ + q²Ψ̄₀)/(2q)) m_a m_b + 2((Ψ₃ − qΨ̄₁)/√(2q)) m_{(a}v_{b)} + c.c.,  (B1)
H_ab ≡ (1/2)η_{acmn}C^{mn}{}_{bd} u^c u^d = i(Ψ₂ − Ψ̄₂)[v_a v_b − m_{(a}m̄_{b)}] + i((Ψ₄ − q²Ψ̄₀)/(2q)) m_a m_b + 2((Ψ₃ + qΨ̄₁)/√(2q)) m_{(a}v_{b)} + c.c.  (B2)
If u^a exists such that H_ab = 0, the Weyl tensor is purely electric (PE) wrt u^a, the spacetime itself being also called PE. A criterion in terms of Weyl tensor concomitants, deciding whether this is the case, follows from the flow diagram 9.1 in [16] and theorem 1 in [53]. Suppose now that the spacetime admits a unit timelike vector field u^a satisfying (1), corresponding to an US, i.e. forming the tangent field of a shearfree and vorticity-free cloud of test particles. Within the GHP formalism based on k^a and l^a as introduced above, this is the case if and only if a (−2,−2)-weighted field q exists satisfying (5)-(6). By virtue of these relations, the [ð, ð′](q) commutator relation yields (12), adding 2q[(A26) − (A26)′] to the [Þ′ − qÞ, ð′](q) commutator relation gives Ψ₃ + qΨ̄₁ = 0, and the combination q²(A27) − (A27)′ + q[(A29)′ − (A29)] produces Ψ₄ − q²Ψ̄₀ = 0. Hence H_ab = 0 from (B2), and if we choose k^a to be a (multiple) PND, Ψ₀ = 0 (Ψ₀ = Ψ₁ = 0), then also l^a is a (multiple) PND, Ψ₄ = 0 (Ψ₄ = Ψ₃ = 0). Hence the spacetime must be either conformally flat (all Ψ_i zero), and then USs are always admitted (see e.g. (6.15) in [16]), or purely electric (PE) and of Petrov type D or I, the Weyl tensor being PE wrt u^a. For Petrov type I, there are 4 distinct PNDs, and u^a is the up to reflection unique timelike vector lying along the intersection of the planes spanned by two particular pairs of PNDs. For Petrov type D, k^a and l^a can be taken to be the multiple PNDs, and u^a lies in the plane Σ spanned by them.
Propositions 4 in [54] and 16 in [55] imply intrinsic, easily testable criteria for deciding when a Petrov type I spacetime admits an US, resp. is static. Here we present likewise criteria in the Petrov type D case. These criteria are invariant statements, in terms of GHP basic variables and weighted derivatives associated to an arbitrary WPNT. Given a Petrov type D spacetime in coordinates, the determination of the PNDs, and hence the WPNTs, is straightforward and can be performed covariantly. It then suffices to fix one WPNT and calculate the appearing spin-boost covariant expressions by using definitions (A2)-(A10) and (A14). For complex (2k, 2k)-weighted scalars (k ∈ ℤ) z = Re(z) + i Im(z) we mean with z > 0 (z < 0) in the sequel that z is real and strictly positive (negative).
It turns out that, given (11)-(12), the integrability conditions of (6) are identically satisfied. Thus we find:

Proposition B.1 A Petrov type D spacetime admits an US if and only if, wrt an arbitrary WPNT, Ψ₂ is real and one of the following sets of conditions holds:

1. σ ≠ 0, the scalar invariant λσ > 0, and q₀ ≡ λ̄/σ satisfies (5)-(6);
2. ρ ≠ ρ̄, the real scalar invariant (µ − µ̄)(ρ − ρ̄) > 0, and q₀ ≡ −(µ − µ̄)/(ρ − ρ̄) satisfies (5)-(6);
3. λ = σ = µ − µ̄ = ρ − ρ̄ = 0, the scalar invariant κν ≠ 0 and one of the following situations occurs, where q₀ defined in each subcase satisfies (6) and where b ≡ (π̄ + τ)/κ, c ≡ ν/κ̄:

3a. Im(b)Im(c) > 0 and q₀ ≡ Im(c)/Im(b) also satisfies q₀² − Re(b)q₀ + Re(c) = 0;
3b. b = Re(b), c < 0 and q₀ ≡ (b + √(b² − 4c))/2;
3c. b > 0, c > 0, b² ≥ 4c, and q₀ ≡ (b + √(b² − 4c))/2 or q₀ ≡ (b − √(b² − 4c))/2;
4. λ = σ = µ − µ = ρ − ρ = 0, and either 4.1 κ = 0 = ν, (π + τ )ν > 0, and q 0 = ν/(π + τ ) satisfies (6), or 4.2 κ = 0 = ν, (π + τ )κ > 0 and q 0 = (π + τ )/κ satisfies (6); 5. the WPNT directions are HO, i.e., (13)- (14) holds.
The subdivision of case 3 stems from a straightforward analysis of the second equation of (5). In cases 1, 2, 3a, 3b and 4 there is a unique US, whereas there may be one or two USs in case 3c. Due to the number and nature of the equations (6), there is a one-degree freedom of USs in case 5, where the condition Ψ₂ = Ψ̄₂ can be dropped since it is implied by the imaginary part of (A30) + (A30)′ and (13)-(14). Important examples of spacetimes satisfying criterion 5 are the Petrov type D purely electric Einstein spaces and their 'electrovac' generalizations (see [38, 56] and § 2.3) and all spacetimes with (pseudo-)spherical or planar symmetry (which constitute the LRS II Lorentzian spaces, see [35]). These examples all satisfy (7) on top of (13)-(14) and are further characterized by Φ₀₀ = Φ₂₂ = (Φ₁₁ =) 0, resp. π = τ = ðR = 0 (cf. [34]).
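As a quick numerical sanity check on the case-3 algebra (a Python sketch, not part of the paper; the sample values of Re(b) and Re(c) are arbitrary), one can confirm that case 3b yields exactly one positive root of q₀² − Re(b)q₀ + Re(c) = 0, while case 3c yields one or two:

```python
import math

def q0_roots(b_re, c_re):
    """Real roots of q^2 - Re(b) q + Re(c) = 0 (the quadratic of case 3a)."""
    disc = b_re * b_re - 4.0 * c_re
    if disc < 0:
        return []
    s = math.sqrt(disc)
    return [(b_re + s) / 2.0, (b_re - s) / 2.0]

# Case 3b: b real, c < 0  ->  disc > 0 and exactly one positive root,
# namely q0 = (b + sqrt(b^2 - 4c))/2.
roots = q0_roots(1.5, -2.0)
assert len([q for q in roots if q > 0]) == 1

# Case 3c: b > 0, c > 0, b^2 >= 4c  ->  both roots positive
# (consistent with "one or two USs" in this case).
roots = q0_roots(3.0, 2.0)
assert len(roots) == 2 and all(q > 0 for q in roots)
```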
The spacetime will admit a unit timelike vector field u_a satisfying

u_{a;b} = −u̇_a u_b,  (B3)
corresponding to a rigid US, or modeling a rigid nonrotating cloud of test particles, if and only if a (-2,-2)-weighted field q exists satisfying (5)-(6) and (34). Notice that, given (34), the third equation of (5) is identically satisfied. Hence we have:

Proposition B.2. A Petrov type D spacetime admits a rigid US if and only if, wrt an arbitrary WPNT, Ψ₂ is real and one of the following sets of conditions holds:
1'. condition 1 with the third equation of (5) replaced by (34);

2'. the scalar invariant µρ > 0 and q₀ ≡ µ/ρ satisfies (5)-(6);

3'-5'. conditions 3-5 with µ − µ̄ = ρ − ρ̄ = 0 replaced by µ = ρ = 0.
In case 5', the spacetime possesses geodesic, shearfree and non-diverging PNDs (κ = σ = ρ = 0, ν = λ = µ = 0), thus belonging to Kundt's class, and HO Weyl principal complex null directions (λ = σ = π + τ = 0), and admits a one-degree freedom of rigid USs.
The spacetime is static if and only if it admits a HO timelike KVF. An equivalent characterization was given by Ehlers and Kundt [5]: the spacetime is static if and only if a unit timelike vector field u_a exists for which shear, vorticity and expansion scalar vanish, i.e. (B3) holds, and for which the acceleration u̇_a is Fermi-propagated along the integral curves of u_a:

u̇_[a ü_b] = 0.  (B4)
The field u_a is then parallel to a (HO and timelike) KVF and identified with a congruence of static observers. By a long but straightforward calculation, simplifying expressions by means of (5)-(6), (34), (A25), (A25)′ and the [Þ, Þ′](q) commutator relation, one shows that the extra condition (B4) is equivalent to

(qκ + q⁻¹ν)(Þq + 2q) − 2Þν + 2qÞτ + Φ₁₂ − qΦ₀₁ = 0,  (B5)

ÞÞq = πτ̄ + π̄τ − q(κπ̄ + κ̄π) − q⁻¹(νπ̄ + ν̄π) + 2Φ₁₁ − R/12 + 2Ψ₂.  (B6)
In case 5' above, the Ricci equations (A25), (A28) and (A28)′ yield Þτ = Φ₀₁ and Φ₀₀ = Φ₂₂ = 0, and so (B5)-(B6) reduce to

Φ₁₂ + qΦ₀₁ = 0,  (B7)

ÞÞq = −2ττ̄ + 2Φ₁₁ − R/12 + 2Ψ₂.  (B8)
In the subcase Φ₀₁ = Φ₁₂ = 0 of (B7), the [Þ, Þ′], [Þ, ð] and [Þ, ð′] commutators applied to q yield

Þ′Þq = −qÞÞq + (Þq)²,  (B9)

ðÞq = τÞq,  ð′Þq = τ̄Þq.  (B10)
The compatibility requirement of (B8)-(B10) with the commutator relations for Þq gives the single condition

Þ′R + qÞR = 0.  (B11)
According to the Sachs star dual [27] of the LRS criterion in [34], the subcase Þ′R = ÞR = 0 of (B11) precisely corresponds to a boost-isotropic spacetime with π + τ = 0. From the above we conclude:
Proposition B.3. A Petrov type D spacetime is static if and only if, wrt an arbitrary WPNT, one of the following sets of conditions holds:

1"-4". Ψ₂ is real, conditions 1'-4' hold and q₀ additionally satisfies (B5)-(B6);

5"a. condition 5' holds, the scalar invariant Φ₀₁Φ₂₁ < 0 and q₀ ≡ −Φ₁₂/Φ₀₁ satisfies (6) and (B8);

5"b. condition 5' holds, Φ₀₁ = Φ₂₁ = 0, the scalar invariant (Þ′R)(ÞR) < 0 and q₀ ≡ −Þ′R/ÞR satisfies (6) and (B8);

6". the spacetime is (locally) boost-isotropic and π + τ = 0.
The HO timelike KVF directions are parametrized by two constants in case 6" [60], are 1 or 2 in number in case 3"c, and are unique in all other cases.
Acknowledgments

The authors would like to thank N. Van den Bergh and S. B. Edgar for reading the document and useful suggestions.
References

[1] A. Barnes, Gen. Rel. Grav. 4, 105 (1973)
[2] H. Weyl, Ann. Phys. (Germany) 54, 117 (1917)
[3] T. Levi-Civita, Rend. R. Acad. Lincei, Cl. Sci. Fis. Mat. Nat. 26, 519 (1917) and 27, 183, 343 (1918)
[4] I. Robinson and A. Trautman, Proc. Roy. Soc. Lond. A 265, 463 (1962)
[5] J. Ehlers and W. Kundt, in Gravitation: an introduction to current research, edited by L. Witten, page 49 (New York: Wiley, 1962)
[6] W. Kinnersley and M. Walker, Phys. Rev. D 2, 1359 (1970)
[7] J. Bicák and B. Schmidt, Phys. Rev. D 40, 1827 (1989)
[8] V. Pravda and A. Pravdová, Czech. J. Phys. 50, 333 (2000)
[9] W. B. Bonnor, Gen. Rel. Grav. 15, 535 (1983)
[10] K. Hong and E. Teo, Class. Quantum Grav. 22, 109 (2005)
[11] J. B. Griffiths and J. Podolský, Class. Quantum Grav. 22, 3467 (2005)
[12] H. Farhoosh and R. L. Zimmerman, Phys. Rev. D 21, 2064 (1980)
[13] J. Bicák and V. Pravda, Phys. Rev. D 60, 044004 (1999)
[14] R. Debever, N. Kamran and R. G. McLenaghan, J. Math. Phys. 25, 1955 (1984)
[15] D. A. García, J. Math. Phys. 25, 1951 (1984)
[16] H. Stephani, D. Kramer, M. A. H. MacCallum, C. Hoenselaers and E. Herlt, Exact Solutions of Einstein's Field Equations (Second Edition) (Cambridge: Cambridge University Press, 2003)
[17] J. F. Plebański and M. Demiański, Ann. Phys. (N.Y.) 98, 98 (1976)
[18] E. T. Newman, L. A. Tamburino and T. Unti, J. Math. Phys. 4, 915 (1963)
[19] J. B. Griffiths and J. Podolský, Exact space-times in Einstein's general relativity (Cambridge: Cambridge University Press, 2009)
[20] J. N. Goldberg and R. K. Sachs, Acta Phys. Polon., Suppl. 22, 13 (1962)
[21] J. J. Ferrando, J. A. Morales and M. Portilla, Phys. Rev. D 46, 578 (1992)
[22] J. J. Ferrando, J. A. Morales and M. Portilla, Phys. Rev. D 50, 2567 (1994)
[23] M. Trümper, J. Math. Phys. 6, 584 (1965)
[24] M. Trümper, Z. Phys. 168, 55 (1962)
[25] A. Barnes, J. Phys. A 5, 374 (1972)
[26] H. Stephani, Comm. Math. Phys. 4, 137 (1967) and 5, 337 (1967)
[27] R. Geroch, A. Held and R. Penrose, J. Math. Phys. 14, 874 (1973)
[28] J. Podolský and J. B. Griffiths, Phys. Rev. D 63, 024006 (2000)
[29] J. Podolský, Czech. J. Phys. 52, 1 (2002)
[30] Ó. J. C. Dias and J. P. S. Lemos, Phys. Rev. D 67, 064001 and 084018 (2003)
[31] R. Emparan, G. T. Horowitz and R. C. Myers, JHEP 0001, 007 and 021 (2000)
[32] R. Penrose and W. Rindler, Spinors and Spacetime, vol. 1 (Cambridge: Cambridge University Press, 1986)
[33] S. B. Edgar, A. García-Parrado and J. M. Martin-Garcia, Class. Quantum Grav. 26, 105022 (2009)
[34] S. W. Goode and J. Wainwright, Gen. Rel. Grav. 18, 315 (1986)
[35] J. M. Stewart and G. F. R. Ellis, J. Math. Phys. 9, 1072 (1968)
[36] P. Szekeres, Comm. Math. Phys. 41, 55 (1975)
[37] W. Kinnersley, J. Math. Phys. 10, 1195 (1969)
[38] S. R. Czapor and R. G. McLenaghan, J. Math. Phys. 23, 2159 (1982)
[39] E. Kasner, Amer. J. Math. 43, 217 (1921) and Trans. A.M.S. 27, 155 (1925)
[40] H. D. Wahlquist and F. B. Estabrook, J. Math. Phys. 7, 894 (1966)
[41] B. Bertotti, Phys. Rev. B 133, 1331 (1959)
[42] I. Robinson, Bull. Acad. Pol. Sci. Ser. Sci. Math. Astron. Phys. 7, 351 (1959)
[43] K. Schwarzschild, Preuss. Akad. Wiss. Berlin, Sitzber., 189 (1916)
[44] F. Kottler, Ann. Phys. (Germany) 56, 410 (1918)
[45] K. Hong and E. Teo, Class. Quantum Grav. 20, 3269 (2003)
[46] Y. Pinchover and J. Rubinstein, An introduction to partial differential equations (Cambridge: Cambridge University Press, 2005)
[47] J. B. Griffiths, P. Krtous and J. Podolský, Class. Quantum Grav. 23, 6745 (2006)
[48] P. Kustaanheimo, Comment. Phys. Math., Helsingf. 13, 8 (1947)
[49] H. Bondi, Mon. Not. Roy. Astron. Soc. 142, 333 (1969)
[50] B. Coll and J. J. Ferrando, J. Math. Phys. 30, 2918 (1989)
[51] S. B. Edgar, Gen. Rel. Grav. 12, 347 (1980) and 24, 1267 (1992)
[52] S. B. Edgar and G. Ludwig, Gen. Rel. Grav. 29, 1309 (1997)
[53] C. B. G. McIntosh, R. Arianrhod, S. T. Wade and C. Hoenselaers, Class. Quantum Grav. 11, 1555 (1994)
[54] J. J. Ferrando and J. A. Sáez, Class. Quantum Grav. 14, 129 (1997)
[55] J. J. Ferrando, J. A. Morales and J. A. Sáez, Class. Quantum Grav. 18, 4939 (2001)
[56] R. Debever and R. G. McLenaghan, J. Math. Phys. 22, 1711 (1981)
[57] More precisely, equations (48)-(53) of ref. [33]. There are some typos in these equations: the integers 3 and 9 should be omitted in (50) and (51)-(52), respectively, while there should be 9k instead of k in (53).
[58] If one of the p_i's is zero then so is a second one; this reproduces Minkowski.
[59] The two constants result from the integrations of (B8)-(B10), giving Þq, and consecutively of (6).
Binary crystals in two-dimensional two-component Yukawa mixtures

Lahcen Assoud, René Messina, Hartmut Löwen
Institut für Theoretische Physik II: Weiche Materie, Heinrich-Heine-Universität Düsseldorf, Universitätsstrasse 1, D-40225 Düsseldorf, Germany

arXiv:0801.1453 (9 Jan 2008); doi: 10.1063/1.2996515
(Dated: February 2, 2008)

The zero-temperature phase diagram of binary mixtures of particles interacting via a screened Coulomb pair potential is calculated as a function of composition and charge ratio. The potential energy obtained by a Lekner summation is minimized among a variety of candidate two-dimensional crystals. A wealth of different stable crystal structures is identified, including A, B, AB₂, A₂B and AB₄ structures [A (B) particles correspond to large (small) charge]. Their elementary cells consist of triangular, square or rhombic lattices of the A particles with a basis comprising various structures of A and B particles. For small charge asymmetry there are no intermediate crystals besides the pure A and B triangular crystals. The predicted structures are detectable in experiments on confined mixtures of charged colloids or dusty plasma sheets.

PACS numbers: 82.70.Dd, 61.50.Ah, 61.66.Dk
I. INTRODUCTION
Two-component mixtures in general exhibit much richer crystallization phenomena and polymorphism than their one-component counterparts [1], as witnessed by a huge variety of possible stable binary crystals, e.g. for binary hard sphere systems [2,3,4,5]. How the whole crystal phase behavior in mixtures depends on the interparticle interactions is far from being understood, even in equilibrium [6,7]. This is true also in two spatial dimensions, where the number of Bravais lattices is smaller than in three dimensions. Binary mixtures in two dimensions have been studied for hard disks [8], and a complex diagram of close packing was obtained as a function of their diameter ratio. More recently, a two-dimensional binary mixture with soft interactions was considered [9], namely that for parallel dipoles, where the pair potential scales with the inverse cube of the interparticle separation. A variant of this model has been considered in Ref. [10]. Such systems can be realized in granular matter [11] and in magnetic colloidal suspensions confined to an air-water interface [12]. Again, as a function of the ratio of dipole moments of the two species, a complex stability phase diagram of stable binary crystals was obtained that qualitatively differs from the hard disk case [8].

In particular, for low asymmetries the hard disk system shows a complete separation into pure A and B triangular crystals [8], while the soft dipolar system possesses two stable mixed crystals as well, with stoichiometric ratio A₂B and AB₂ [9]. These differences show that the topology of the phase diagrams depends on details of the interactions, and there is certainly a need to understand this dependence in more detail.
In this paper, we consider a two-dimensional binary system of Yukawa particles, i.e. the pair interaction potential V (r) between the particles is a screened Coulomb interaction ∝ exp(−κr)/r, where κ is the screening constant (or the inverse screening length). This potential interpolates between the case of hard disks (as obtained in the limit of high κ) and the unscreened Coulomb case (as obtained for κ = 0). The latter limit, V (r) ∝ 1/r, is even softer than the dipolar case, where V (r) ∝ 1/r³. The two components are defined by two different charges, i.e. different prefactors in front of the Yukawa interaction. In previous works, such a classical binary mixture with Yukawa interactions in three dimensions has been used as a model to study mixing rules [13], effective forces [14], fluid-fluid phase separation [15,16,17], dynamical correlations [18,19] and transport properties [20]. Likewise, the pure (one-component) Yukawa system was also studied in two spatial dimensions for fluid structure [21,22,23,24], dynamics [25,26,27,28] and transport properties [29]. Binary mixtures of Yukawa particles in two dimensions have also been studied for fluid structure [30], adsorption [31], interfaces [32] and transport [33]. However, the crystallization issue was only addressed in one-component Yukawa systems (for a recent work, see e.g. [34]) but never in binary mixtures.
The Yukawa potential is realized in charged colloidal suspensions [35] as well as in dusty plasmas [36], both for one-component systems and mixtures. In fact, highly charged colloidal suspensions can be confined between highly charged parallel glass plates [37,38,39], which restricts their motion practically to two dimensions. The interactions between these macroions are screened due to the presence of the microscopic microions and additional salt ions. As in three dimensions, the Debye-Hückel screened Coulomb interaction is a reasonable model for confined charged colloids [40,41]. Crystallization of binary charged colloids has been studied experimentally in the bulk. However, a monolayer of a confined binary mixture of charged colloids has not yet been realized, although this is in principle possible, as has been shown for sterically-stabilized [42] and magnetic colloids [43]. On the other hand, sheets of highly charged dust particles in plasmas (so-called complex plasmas) can also be confined to two dimensions, e.g. by levitating electric fields. The interaction between the dust particles is again screened, such that a Yukawa model is appropriate [36,44,45]. Highly charged microspheres suspended in a plasma settled in a horizontal monolayer were studied experimentally and compared to a two-dimensional Yukawa model [46,47,48]. There is no principal problem in studying binary mixtures of dust particles, but a concrete realization in an experiment still has to be performed as well.
Apart from these important realizations, our major motivation is to understand the interplay between the interparticle interaction and the stability of different two-dimensional crystal lattices. A control of colloidal composite lattices may lead to new photonic crystals [49], to molecular sieves [50] and to micro- and nano-filters with desired porosity [51]. The electric properties of a nanocrystal depend on its superlattice structure [52]. For these types of applications, it is crucial to understand the various stable lattice types in binary mixtures.
For the two-component two-dimensional Yukawa mixture, we obtain the full phase diagram at zero temperature as a function of the charge asymmetry using lattice sums. As a result, we find a variety of different stable composite lattices. They include A, B, AB₂, A₂B and AB₄ structures. Their elementary cells consist of (equilateral) triangular, square and rhombic lattices of the big particles. These are highly decorated by a basis involving either A particles alone or both B and A particles. The topology of the resulting phase diagram differs qualitatively from that of hard disk mixtures [8] and dipoles [9].
The paper is organized as follows: In Sec. II the model is described and possible candidate structures for crystal lattices in two dimensions are proposed. Results for the phase diagrams are presented in Sec. III. We conclude finally in Sec. IV.
II. MODEL
The model systems used in our study are binary mixtures of (repulsive) charged particles made up of two species, denoted A and B. Each component A and B is characterized by its charge valency, Z_A and Z_B, respectively. These constitutive particles are confined to a two-dimensional plane and interact via the Yukawa pair potential. Introducing the ratio Z = Z_B/Z_A, the pair interaction potentials between two A particles, an A and a B particle, and two B particles at distance r are
V_AA(r) = κV₀ ϕ(r),  V_AB(r) = κV₀ Z ϕ(r),  V_BB(r) = κV₀ Z² ϕ(r),  (1)

respectively. The dimensionless function ϕ(r) is given by

ϕ(r) = exp(−κr)/(κr),  (2)
where the energy amplitude κV₀ sets the energy scale.
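As an illustration (not part of the paper), the pair potentials of Eqs. (1)-(2) can be evaluated directly; the Python sketch below sets V₀ = 1.

```python
import math

def phi(r, kappa):
    """Dimensionless Yukawa form of Eq. (2): phi(r) = exp(-kappa r)/(kappa r)."""
    return math.exp(-kappa * r) / (kappa * r)

def pair_potentials(r, kappa, Z, V0=1.0):
    """V_AA, V_AB, V_BB of Eq. (1); Z = Z_B/Z_A is the charge ratio."""
    vaa = kappa * V0 * phi(r, kappa)
    return vaa, Z * vaa, Z * Z * vaa

# the three potentials differ only by factors 1 : Z : Z^2
vaa, vab, vbb = pair_potentials(r=1.0, kappa=1.0, Z=0.5)
assert abs(vab - 0.5 * vaa) < 1e-12 and abs(vbb - 0.25 * vaa) < 1e-12
```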
Our goal is to determine the stable crystalline structures adopted by the system at zero temperature. We consider a parallelogram as primitive cell, containing n_A A particles and n_B B particles. This cell can be described geometrically by the two lattice vectors a = a(1, 0) and b = aγ(cos θ, sin θ), where θ is the angle between a and b and γ = |b|/|a| is the aspect ratio. The positions of a particle i (of species A) and of a particle j (of species B) in the parallelogram are specified by the vectors r_i^A = (x_i^A, y_i^A) and r_j^B = (x_j^B, y_j^B), respectively. The total internal energy (per primitive cell) U has the form
U = (1/2) Σ_{J=A,B} Σ'_{i,j=1}^{n_J} Σ_R V_JJ(|r_i^J − r_j^J + R|) + Σ_{i=1}^{n_A} Σ_{j=1}^{n_B} Σ_R V_AB(|r_i^A − r_j^B + R|),  (3)

where R = k a + l b, with k and l being integers. The sums over R in Eq. (3) run over all lattice cells; the prime indicates that for R = 0 the terms with i = j are to be omitted.
In order to handle efficiently the long-range nature of the Yukawa interaction at moderate screening strength, we employed a Lekner-summation (see Appendix A).
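The paper evaluates Eq. (3) via a Lekner summation; as a hedged cross-check, the exponential decay of the Yukawa potential also allows a brute-force real-space truncation of Eq. (3) at moderate screening. The Python sketch below is an illustration, not the authors' code; the prefactor κV₀ is absorbed into the charges, so that V_JJ'(r) → Z_J Z_J' exp(−κr)/r.

```python
import math

def lattice_sum_energy(basis, charges, a, b, kappa, nmax=12):
    """Truncated evaluation of Eq. (3): energy per primitive cell of a periodic
    Yukawa crystal.  `basis` holds all particle positions in the cell (both
    species together), `charges` the corresponding valencies.  Looping over
    all ordered pairs with a global factor 1/2 counts each distinct pair once,
    matching the bookkeeping of Eq. (3)."""
    U = 0.0
    n = len(basis)
    for k in range(-nmax, nmax + 1):
        for l in range(-nmax, nmax + 1):
            Rx = k * a[0] + l * b[0]
            Ry = k * a[1] + l * b[1]
            for i in range(n):
                for j in range(n):
                    if k == 0 and l == 0 and i == j:
                        continue  # omit the self-interaction term (primed sum)
                    dx = basis[i][0] - basis[j][0] + Rx
                    dy = basis[i][1] - basis[j][1] + Ry
                    r = math.hypot(dx, dy)
                    U += 0.5 * charges[i] * charges[j] * math.exp(-kappa * r) / r
    return U

# example: one-component square lattice of unit charges, kappa = 2
U = lattice_sum_energy(basis=[(0.0, 0.0)], charges=[1.0],
                       a=(1.0, 0.0), b=(0.0, 1.0), kappa=2.0)
assert U > 0
```

Because exp(−κr) decays fast, the truncation radius nmax only needs to be a few screening lengths; for weak screening the Lekner resummation of Appendix A becomes essential.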
We choose to work at prescribed pressure p and zero temperature (T = 0). Hence, the corresponding thermodynamic potential is the Gibbs free energy G. Additionally, we consider interacting particles at composition X := n_B/(n_A + n_B), so that the (intensive) Gibbs free energy per particle reads g = g(p, Z, X) = G/(n_A + n_B). At vanishing temperature, g is related to the internal energy per particle u = U/(n_A + n_B) through

g = u + p/ρ,

where the pressure p is given by p = ρ²(∂u/∂ρ), and ρ = (n_A + n_B)/|a × b| is the total particle density. The Gibbs free energy per particle g has been minimized with respect to γ, θ and the positions of the particles of species A and B within the primitive cell.
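The relations g = u + p/ρ and p = ρ²(∂u/∂ρ) can be checked on a toy energy function; this Python sketch (not from the paper) uses an arbitrarily chosen u(ρ) = Aρ, for which p = Aρ² and hence g = 2Aρ.

```python
def pressure(u_of_rho, rho, h=1e-6):
    """p = rho^2 (du/drho), via a central finite difference."""
    return rho**2 * (u_of_rho(rho + h) - u_of_rho(rho - h)) / (2.0 * h)

def gibbs_per_particle(u_of_rho, rho):
    """g = u + p/rho at T = 0."""
    return u_of_rho(rho) + pressure(u_of_rho, rho) / rho

# toy model: u(rho) = A*rho  ->  p = A*rho^2  ->  g = 2*A*rho
A = 3.0
g = gibbs_per_particle(lambda rho: A * rho, 0.5)
assert abs(g - 2.0 * A * 0.5) < 1e-5
```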
In order to decrease the complexity of the energy landscape, we have limited the number of variables and considered the following candidates for our binary mixtures: A₄B, A₃B, A₂B, A₄B₂, A₃B₂, AB, A₂B₂, A₃B₃, A₂B₃, AB₂, A₂B₄, AB₃, AB₄ and AB₆. For the AB₆ and A₃B₃ cases we have only considered a triangular lattice formed by the A particles.
III. RESULTS
A. Phase diagram
The ultimate phase diagrams in the (Z, X) plane have been obtained by employing the Maxwell construction. We recall here that both dimensionless quantities, namely the charge ratio Z and the composition X, can vary between zero and unity. A low charge ratio (i.e., Z close to zero) indicates a strong charge asymmetry, whereas a high charge ratio (i.e., Z close to unity) represents a large charge symmetry, or equivalently a weak charge asymmetry. The phase behavior becomes increasingly complex with growing charge asymmetry; the stable structures are sketched in Fig. 1. The corresponding nomenclature of the phase labeling is explained in Table I. The phase diagrams in the (Z, X) plane for the three reduced pressures p* = 0.01, 1
and 100 are depicted in Fig. 2(a), Fig. 2(b) and Fig. 2(c), respectively. Note that upon increasing p* at prescribed Z and X one increases the density.
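The Maxwell (double-tangent) construction used above amounts to keeping only those candidate points (X, g) that lie on the lower convex hull of g(X); any candidate above the hull phase-separates into the two hull phases bracketing it. A minimal Python sketch (with made-up g values, not data from the paper):

```python
def lower_convex_hull(points):
    """Lower convex hull of (X, g) points.  Phases on the hull are stable;
    a point above the hull phase-separates into the two neighboring hull
    phases (common-tangent / Maxwell rule)."""
    pts = sorted(points)
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop hull[-1] if it lies on or above the segment hull[-2] -> p
            if (y2 - y1) * (p[0] - x1) >= (p[1] - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# demixing scenario: the mixed phase at X = 0.5 lies above the A-B tie line
candidates = [(0.0, 1.0), (0.5, 1.6), (1.0, 2.0)]
stable = lower_convex_hull(candidates)
assert stable == [(0.0, 1.0), (1.0, 2.0)]
```

If the candidate at X = 0.5 were instead below the tie line (g < 1.5 in this toy example), it would survive on the hull as a stable mixed crystal.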
Let us first focus our discussion on the apparently simple phase behavior reported at weak charge asymmetry (here roughly Z ≳ 0.5, see Fig. 2). There, the system phase-separates into a pure A triangular crystalline phase and a pure B one (see also Fig. 1). This triangular structure obviously corresponds to the single-component ground state. Having in mind that the same phase behavior is reported for hard disk binary mixtures at small size asymmetry [8], it is meaningful to equally expect a phase separation for moderate or sufficiently large reduced screening strength κ* ≡ κ/√ρ. For Z = 1, we have κ* ≈ 3.0, 1.2, 0.4 for p* = 0.01, 1, 100, respectively, so that the phase separation is certainly to be expected for moderate pressures (here p* = 0.01 and possibly p* = 1) when referring to the hard disk limit [8].
What is now less obvious, still in the regime of weak charge asymmetry, is the phase behavior at weak screening. Recently, we have shown for dipolar binary mixtures [9], whose pair potential is governed by 1/r³, that at weak dipolar asymmetry (the analogue of the charge ratio in our present study) the stable mixtures A₂B and AB₂ (which are globally triangular) set in. This phase behavior therefore contrasts strongly with that presently reported for Yukawa mixtures, see Fig. 2(c). Given that at weak screening the Yukawa pair potential is well approximated by a 1/r dependence, which is even softer than 1/r³, it is legitimate to expect stable mixtures in the regime of weak screening and charge asymmetry. In order to check this idea, we have performed additional calculations at p* = 10¹⁰ with Z = 0.99, leading to reduced screening strengths of the order of 10⁻². Those values of κ* turn out to be still too large to recover the phase behavior found for 1/r³ pair interactions [9]. The consideration of even much smaller screening strengths (say, of the order of 10⁻⁷) is numerically not tractable within reasonable CPU time. Unfortunately, the implementation of a direct Lekner and/or Ewald sum for 1/r pair interactions is delicate at prescribed pressure, since the lack of electroneutrality involves the presence of an artificial homogeneous neutralizing background which is thermodynamically consistent only at prescribed density [53]. Consequently, although we have a strong intuition about the stability of mixtures at weak charge asymmetry and screening, we cannot prove it here on a computational basis.
We now briefly address the more complicated phase behavior reported at strong charge asymmetry, see Fig. 2 with Z ≲ 0.5. As a clear general trend, it is found that the number of stable phases increases with growing pressure. This feature is in agreement with the idea that mixing is favored upon softening the pair potential.
A common and remarkable feature in this regime of strong charge asymmetry (see Fig. 2) is the imposing stability of the composition X = 1/2. This feature was also reported for dipolar mixtures [9]. More specifically, the cascade S(AB) → T(A)A₂B₃ → Rh(A)AB₂ is found upon increasing Z, see Fig. 2 and Fig. 1 for the corresponding structures. Thereby, the transition S(AB) → T(A)A₂B₃ is discontinuous, whereas T(A)A₂B₃ → Rh(A)AB₂ is continuous, see Fig. 2. Note that, for p* = 0.01 shown in Fig. 2(a), the stability of the square phase S(AB) occurs for values of Z smaller than 0.2, which are not shown here.
B. Thermodynamical properties
Constant pressure
In this part, we investigate some thermodynamic properties, such as the reduced density ρ* and the reduced Gibbs free energy g* ≡ g/(V₀κ), as obtained prior to the Maxwell construction. Although the pressure considered here is fixed at p* = 100, very similar results are obtained for the two other pressures.
The reduced density ρ* as a function of the charge ratio Z at different compositions X is sketched in Fig. 3. At given composition X, the density decreases monotonically with Z, see Fig. 3. This effect can be simply explained as follows: upon increasing Z, the repulsive A-B and B-B pair interactions increase accordingly, so that to keep the pressure fixed the system has to decrease its density. Moreover, at prescribed charge ratio, the density increases with the composition, see Fig. 3. This feature can also be explained with simple physics: upon enlarging the composition X, the proportion of weakly charged B particles increases accordingly, so that to keep the pressure constant the system has to increase its density.
The reduced Gibbs free energy g* as a function of the charge ratio Z at different compositions X is sketched in Fig. 4. Recalling that g* = u* + p*/ρ* [with u* ≡ u/(V₀κ)], the behavior of g* exhibited in Fig. 4 follows from that of ρ* described in Fig. 3. Since the reduced internal energy u* decreases with growing X, and therefore with growing ρ* at prescribed Z, it is clear that g* decreases with growing X at given Z, as seen in Fig. 4. Besides, at given composition X, Fig. 4 shows that g* increases with growing Z, as expected.
Constant composition
We now analyze ρ* and g* at X = 1/2 as a function of Z for different values of p*. As far as the behavior of ρ* is concerned, the new information provided by Fig. 5 is that ρ* increases with growing pressure at given charge ratio, as it should. The same qualitative feature is also observed for g* in Fig. 6. A closer inspection of Fig. 5 and Fig. 6 suggests that ρ* and g* increase rather slowly with p* (at given Z).
IV. CONCLUDING REMARKS
In conclusion, we have determined the ground-state (i.e. zero-temperature) phase diagram of a two-component Yukawa monolayer at various pressures, for arbitrary compositions and a broad range of charge asymmetries. Among a large number of candidate phases, a wealth of different composite lattices has been found to be stable. The larger the charge asymmetry, the more complex the phase diagram. At low asymmetry the system shows demixing into pure A and B crystals, similar to hard disks but different from the soft inverse-cube interaction valid for dipoles. The results are in principle detectable in binary mixtures of charged colloids confined between two charged plates or in levitated dusty plasma sheets.
It would be interesting to study the effect of finite temperature. We expect that the topology of the phase diagram does not change upon gently increasing the temperature, though this could change close to melting. When cooling a two-component fluid down, glass formation in binary systems at finite temperature may be a fascinating topic as well [54] to be studied in the future. In fact, it has been speculated that the underlying crystallization into the stable crystal lattices may control vitrification [55], and therefore our findings are relevant for the structure of glasses as well.
the xy plane gives a two-dimensional lattice, and can be described by two lattice vectors a = (a_x, 0) and b = (b_x, b_y). Within the parallelogram, the position of a charge of valency Z_i is denoted by r_i = (x_i, y_i).
The total interaction energy per cell is given by

\[
\frac{U}{V_0} = \frac{1}{2}\sum_{i=1}^{n}\sum_{j\neq i}^{n} Z_i Z_j\, \Phi(\mathbf{r}_{ij}) + \frac{1}{2}\sum_{i=1}^{n} Z_i^2\, \Phi_0 \tag{A1}
\]

with

\[
\Phi(\mathbf{r}) = \sum_{\mathbf{R}} \frac{\exp(-\kappa|\mathbf{r}+\mathbf{R}|)}{|\mathbf{r}+\mathbf{R}|}
\quad\text{and}\quad
\Phi_0 = \sum_{\mathbf{R}\neq 0} \frac{\exp(-\kappa|\mathbf{R}|)}{|\mathbf{R}|}, \tag{A2}
\]
where

\[
|\mathbf{r}+\mathbf{R}| = \sqrt{(x + a_x l + b_x m)^2 + (y + b_y m)^2}
\quad\text{and}\quad
|\mathbf{R}| = \sqrt{(a_x l + b_x m)^2 + (b_y m)^2}.
\]
Here \(\mathbf{R} = l\mathbf{a} + m\mathbf{b}\), with l and m being integers. The slowly convergent sums over lattice sites [Eq. (A2)] cannot be used efficiently in a numerical calculation, so we transform them into rapidly convergent forms using the Lekner method [56,57]. With the help of the integral representation

\[
\frac{\exp(-\kappa|\mathbf{r}+\mathbf{R}|)}{|\mathbf{r}+\mathbf{R}|}
= \frac{1}{\sqrt{\pi}} \int_0^{\infty} \frac{dt}{\sqrt{t}}
\exp\!\left(-\frac{\kappa^2}{4t} - |\mathbf{r}+\mathbf{R}|^2 t\right), \tag{A3}
\]
we obtain

\[
\Phi(\mathbf{r}) = \frac{1}{\sqrt{\pi}} \int_0^{\infty} \frac{dt}{\sqrt{t}}
\exp\!\left(-\frac{\kappa^2}{4t}\right)
\sum_{m=-\infty}^{\infty} \exp\!\left[-(y + m b_y)^2 t\right]
\sum_{l=-\infty}^{\infty} \exp\!\left[-\left(\frac{x}{a_x} + l + m\frac{b_x}{a_x}\right)^2 a_x^2 t\right]. \tag{A4}
\]
Now, to proceed, we apply the one-dimensional Poisson summation formula

\[
\sum_{l=-\infty}^{\infty} \exp\!\left[-(\alpha + \beta l)^2 t\right]
= \frac{\sqrt{\pi}}{\beta\sqrt{t}} \sum_{k=-\infty}^{\infty}
\exp\!\left(i 2\pi k \frac{\alpha}{\beta}\right)
\exp\!\left(-\frac{\pi^2 k^2}{\beta^2}\frac{1}{t}\right), \tag{A5}
\]

which provides

\[
\sum_{l=-\infty}^{\infty} \exp\!\left[-\left(\frac{x}{a_x} + l + m\frac{b_x}{a_x}\right)^2 a_x^2 t\right]
= \frac{1}{|a_x|}\sqrt{\frac{\pi}{t}}
\left[1 + 2\sum_{k=1}^{\infty} \cos\!\left(2\pi k\left(\frac{x}{a_x} + m\frac{b_x}{a_x}\right)\right)
\exp\!\left(-\frac{\pi^2 k^2}{a_x^2 t}\right)\right]. \tag{A6}
\]
Inserting Eq. (A6) into Eq. (A4) yields

\[
\Phi(\mathbf{r}) = \frac{1}{|a_x|} \sum_{m=-\infty}^{\infty} \int_0^{\infty} \frac{dt}{t}
\exp\!\left[-\frac{\kappa^2}{4t} - (y + m b_y)^2 t\right]
+ \frac{2}{|a_x|} \sum_{k=1}^{\infty} \sum_{m=-\infty}^{\infty}
\cos\!\left(2\pi k\left(\frac{x}{a_x} + m\frac{b_x}{a_x}\right)\right)
\int_0^{\infty} \frac{dt}{t}
\exp\!\left[-\left(\kappa^2 + \frac{4\pi^2 k^2}{a_x^2}\right)\frac{1}{4t} - (y + m b_y)^2 t\right]. \tag{A7}
\]
Now, taking into account the relation

\[
\int_0^{\infty} \frac{dt}{t} \exp\!\left(-\frac{B^2}{4t} - C^2 t\right) = 2 K_0(BC), \tag{A8}
\]
where K_0 is the zeroth-order modified Bessel function of the second kind, the final expression for Φ(r) reads
\[
\Phi(\mathbf{r}) = \frac{2}{|a_x|} \sum_{m=-\infty}^{+\infty} K_0\!\left(\kappa |y + m b_y|\right)
+ \frac{4}{|a_x|} \sum_{k=1}^{\infty} \sum_{m=-\infty}^{+\infty}
\cos\!\left(2\pi k\left(\frac{x}{a_x} + m\frac{b_x}{a_x}\right)\right)
K_0\!\left(|y + m b_y| \sqrt{\kappa^2 + \frac{4\pi^2 k^2}{a_x^2}}\right)
\quad\text{for } y \neq 0, \tag{A9}
\]

and the "self" contribution Φ_0 reads

\[
\Phi_0 = \frac{4}{|a_x|} \sum_{m=1}^{\infty} K_0(\kappa m b_y)
+ \frac{8}{|a_x|} \sum_{k=1}^{\infty} \sum_{m=1}^{\infty}
\cos\!\left(2\pi k m \frac{b_x}{a_x}\right)
K_0\!\left(m b_y \sqrt{\kappa^2 + \frac{4\pi^2 k^2}{a_x^2}}\right)
- \frac{2}{|a_x|} \ln\!\left[1 - \exp(-\kappa a_x)\right]. \tag{A10}
\]
In the limit of a rectangular cell, i.e., setting b_x = 0, one obtains formulas for the cross and self energies identical to those derived in [57] with z = 0.
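The rapidly convergent form (A9) can be checked numerically against the direct lattice sum (A2). The following is a minimal Python sketch (not part of the paper): K_0 is evaluated from its standard integral representation K_0(x) = ∫_0^∞ exp(−x cosh t) dt so that only the standard library is needed, and the truncation orders nmax, mmax, kmax are illustrative choices.

```python
import math

def K0(x, tmax=20.0, n=2000):
    # Zeroth-order modified Bessel function of the second kind via the
    # integral representation K0(x) = int_0^inf exp(-x cosh t) dt,
    # evaluated with composite Simpson's rule (n even).
    h = tmax / n
    s = math.exp(-x) + math.exp(-x * math.cosh(tmax))
    for i in range(1, n):
        w = 4.0 if i % 2 else 2.0
        s += w * math.exp(-x * math.cosh(i * h))
    return s * h / 3.0

def phi_direct(x, y, ax, bx, by, kappa, nmax=40):
    # Slowly convergent real-space sum of Eq. (A2), truncated at |l|, |m| <= nmax.
    s = 0.0
    for l in range(-nmax, nmax + 1):
        for m in range(-nmax, nmax + 1):
            r = math.hypot(x + ax * l + bx * m, y + by * m)
            s += math.exp(-kappa * r) / r
    return s

def phi_lekner(x, y, ax, bx, by, kappa, mmax=10, kmax=10):
    # Rapidly convergent Lekner form of Eq. (A9); requires y + m*by != 0.
    s = 0.0
    for m in range(-mmax, mmax + 1):
        ym = abs(y + m * by)
        s += (2.0 / abs(ax)) * K0(kappa * ym)
        for k in range(1, kmax + 1):
            arg = ym * math.sqrt(kappa ** 2 + 4.0 * math.pi ** 2 * k ** 2 / ax ** 2)
            s += (4.0 / abs(ax)) * math.cos(
                2.0 * math.pi * k * (x / ax + m * bx / ax)) * K0(arg)
    return s

# Square cell (bx = 0): both evaluations of Phi(r) should agree.
d = phi_direct(0.4, 0.3, 1.0, 0.0, 1.0, kappa=2.0)
l = phi_lekner(0.4, 0.3, 1.0, 0.0, 1.0, kappa=2.0)
```

The agreement holds for oblique cells (b_x ≠ 0) as well, since Eqs. (A4)–(A9) nowhere assume a rectangular cell; only far fewer terms are needed in the Lekner form than in the direct sum.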
FIG. 1: The stable binary crystal structures and their primitive cells: Rh(A)B4, S(A)B4, Rh(A)B2, Rh(A)AB4, S(AB), T(A)A2B3, Rh(A)AB2, and Rh(A)AB. The red (green) discs correspond to A (B) particles.
FIG. 2: The phase diagram in the (Z, X) plane of charge asymmetry and composition at T = 0 for effective pressures (a) p* = 0.01, (b) p* = 1, (c) p* = 100. The symbol # (*) denotes continuous (discontinuous) transitions.

separation reported in Fig. 2(c) for p* = 100, where κ* ≈ 0.4 for large charge symmetry.
FIG. 4: Same as Fig. 3 but for g*.

FIG. 5: Reduced density ρ* (prior to the Maxwell construction) as a function of the charge ratio Z for various reduced pressures p* at prescribed composition X = 1/2.
FIG. 6: Same as Fig. 5 but for g*.
TABLE I: The stable phases with their Bravais lattice and their basis.

Phase          Bravais lattice [basis]
T(A)           Triangular for A [one A particle]
T(B)           Triangular for B [one B particle]
S(AB)          Square for A and B together [one A and one B particle]
S(A)B_n        Square for A [one A and n B particles]
Rh(A)A_mB_n    Rhombic for A [(m+1) A and n B particles]
T(A)A_mB_n     Triangular for A [(m+1) A and n B particles]

complicated upon lowering Z, involving a huge basket of candidates, we only present results starting from Z = 0.2. Furthermore, in contrast to situations where the pair potential can be described as a power law of the separation distance (as was the case in our previous work on dipolar mixtures [9]), the phase diagram becomes pressure dependent for Yukawa systems. To capture this feature, we present results at three well separated pressures, namely p* ≡ p/(V_0 κ³) = 0.01, 1 and 100. An overview of the resulting stable crystalline phases can be found in Table I.
can be equally well rationalized when advocating the just

[Figure: reduced Gibbs free energy g* as a function of the charge ratio Z (0.2 ≤ Z ≤ 1) for compositions X = 0, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 6/7, and 1.]
Acknowledgments

We thank T. Palberg and H. Tanaka for helpful discussions. This work was supported by the DFG via the SFB TR6 (project section D1).

...DIMENSIONAL SYSTEMS

We consider a primitive cell in the shape of a parallelogram, which contains a set of n = n_A + n_B particles interacting via Yukawa potentials. The parallelogram repeated in
[1] G. Tammann, Ann. d. Physik 40, 237 (1913).
[2] S. Pronk and D. Frenkel, Phys. Rev. Lett. 90, 255501 (2003).
[3] H. Xu and M. Baus, J. Phys.: Condens. Matter 4, L663 (1992).
[4] M. D. Eldridge, P. A. Madden, and D. Frenkel, Nature 365, 35 (1993).
[5] P. Bartlett, R. H. Ottewill, and P. N. Pusey, Phys. Rev. Lett. 68, 3801 (1992).
[6] J. Hafner, From Hamiltonians to Phase Diagrams (Springer, Berlin, 1987).
[7] G. Gompper and M. Schick, Soft Matter, Vol. 2: Complex Colloidal Suspensions (WILEY-VCH, Weinheim, 2006).
[8] C. N. Likos and C. L. Henley, Philos. Mag. B 68, 85 (1993).
[9] L. Assoud, R. Messina, and H. Löwen, Europhys. Lett. 80, 48001 (2007).
[10] J. Fornleitner, F. Lo Verso, G. Kahl, and C. N. Likos, to be published.
[11] M. B. Hay, R. K. Workman, and S. Manne, Phys. Rev. E 67, 012401 (2003).
[12] K. Zahn, J. M. Mendez-Alcaraz, and G. Maret, Phys. Rev. Lett. 79, 175 (1997).
[13] Y. Rosenfeld, Phys. Rev. E 47, 2676 (1993).
[14] A. A. Louis, E. Allahyarov, H. Löwen, and R. Roth, Phys. Rev. E 65, 061407 (2002).
[15] E. Scholl-Paschinger and G. Kahl, J. Chem. Phys. 118, 7414 (2003).
[16] P. Hopkins, A. J. Archer, and R. Evans, J. Chem. Phys. 124, 054503 (2006).
[17] J. Kofinger, N. B. Wilding, and G. Kahl, J. Chem. Phys. 125, 234503 (2006).
[18] M. A. Chavez-Rojo and M. Medina-Noyola, Physica A 366, 55 (2006).
[19] N. Kikuchi and J. Horbach, Europhys. Lett. 77, 26001 (2007).
[20] G. Salin and D. Gilles, J. Phys. A: Math. Gen. 17, 4517 (2006).
[21] H. Löwen, J. Phys.: Condens. Matter 4, 10105 (1992).
[22] R. Messina and H. Löwen, Phys. Rev. Lett. 91, 146101 (2003).
[23] P. Hartmann, G. J. Kalman, and K. K. Z. Donko, Phys. Rev. E 72, 026409 (2005).
[24] P. Hartmann, G. Z. Kalman, and Z. Donko, J. Phys. A: Math. Gen. 39, 4485 (2006).
[25] K. Nelissen, B. Partoens, and F. M. Peeters, Europhys. Lett. 79, 66001 (2007).
[26] B. Liu and J. Goree, Phys. Rev. E 75, 016405 (2007).
[27] A. Libal, C. Reichhardt, and C. J. O. Reichhardt, Phys. Rev. E 75, 011403 (2007).
[28] G. J. Kalman, P. Hartmann, Z. Donko, and M. Rosenberg, Phys. Rev. Lett. 92, 065001 (2004).
[29] B. Liu and J. Goree, Phys. Rev. Lett. 94, 185002 (2005).
[30] J. M. Mendez-Alcaraz, M. Chavez-Paez, B. D'Aguanno, and R. Klein, Physica A 15, 173 (1995).
[31] J. J. Gray and R. T. Bonnecaze, Langmuir 17, 7935 (2001).
[32] A. Wysocki and H. Löwen, J. Phys.: Condens. Matter 16, 7209 (2004).
[33] J. Dzubiella, G. P. Hoffmann, and H. Löwen, Phys. Rev. E 65, 021402 (2002).
[34] C. Desgranges and J. Delhommelle, J. Chem. Phys. 126, 054501 (2007).
[35] E. Allahyarov, H. Löwen, and S. Trigger, Phys. Rev. Lett. 57, 5818 (1998).
[36] O. S. Vaulina and I. E. Dranzhevskii, Plasma Phys. Reports 33, 494 (2007).
[37] C. A. Murray and D. H. van Winkle, Phys. Rev. Lett. 58, 1200 (1987).
[38] M. Brunner et al., Europhys. Lett. 58, 926 (2002).
[39] A. B. Fontecha et al., J. Phys.: Condens. Matter 17, S2779 (2005).
[40] E. Chang and D. Hone, Europhys. Lett. 5, 635 (1988).
[41] E. Allahyarov, I. D'Amico, and H. Löwen, Europhys. Lett. 5, 635 (1988).
[42] C. R. Nugent, H. N. P. K. V. Edmond, and E. R. Weeks, Phys. Rev. Lett. 99, 025702 (2007).
[43] N. Hoffmann, F. Ebert, C. N. Likos, G. Maret, and H. Löwen, Phys. Rev. Lett. 97, 078301 (2006).
[44] M. H. Kong, B. Partoens, and F. M. Peeters, New J. Phys. 5, 494 (2003).
[45] G. P. Hoffmann and H. Löwen, J. Phys.: Condens. Matter 12, 7359 (2000).
[46] V. Nosenko, S. Nunomura, and J. Goree, Phys. Rev. Lett. 88, 215002 (2002).
[47] V. Nosenko et al., Phys. Rev. E 68, 056409 (2003).
[48] H. Totsuji, T. Kishimoto, C. Totsuji, and T. Sasabe, Phys. Rev. E 58, 7831 (1998).
[49] V. N. Manoharan, T. Elsesser, and D. J. Pine, Science 301, 483 (2003).
[50] J. Kecht et al., Langmuir 20, 5271 (2004).
[51] F. Yan and W. A. Goedel, Chem. Mater. 16, 1622 (2004).
[52] A. E. Saunders and B. A. Korgel, ChemPhysChem 6, 61 (2005).
[53] L. Bonsall and A. A. Maradudin, Phys. Rev. B 15, 1959 (1977).
[54] T. Hamanaka and A. Onuki, Phys. Rev. E 74, 011506 (2006).
[55] T. Kawasaki, T. Araki, and H. Tanaka, Phys. Rev. Lett. 99, 215701 (2007).
[56] J. Lekner, Physica A 157, 826 (1989).
[57] M. Mazars, Mol. Phys. 105, 1927 (2007).
arXiv:1909.08670v2 (https://arxiv.org/pdf/1909.08670v2.pdf); DOI: 10.1103/PhysRevB.101.075417
Heat current across a capacitively coupled double quantum dot for high magnetic field

A. A. Aligia (Centro Atómico Bariloche and Instituto Balseiro, Comisión Nacional de Energía Atómica, 8400 Bariloche, Argentina; Consejo Nacional de Investigaciones Científicas y Técnicas, 1025 CABA, Argentina), D. Pérez Daroca (Gerencia de Investigación y Aplicaciones, Comisión Nacional de Energía Atómica, 1650 San Martín, Buenos Aires, Argentina; Consejo Nacional de Investigaciones Científicas y Técnicas, 1025 CABA, Argentina), and P. Roura-Bas (Centro Atómico Bariloche, Comisión Nacional de Energía Atómica, 8400 Bariloche, Argentina; Consejo Nacional de Investigaciones Científicas y Técnicas, 1025 CABA, Argentina)

18 Sep 2019

PACS numbers: 72.20.Pa, 73.23.Hk, 73.63.Kv, 72.15.Qm

We study the heat current through two capacitively coupled quantum dots coupled in series with two conducting leads in the spinless case (valid for a high applied magnetic field). Our results are also valid for the heat current through a single quantum dot with strongly ferromagnetic leads pointing in opposite directions (so that the electrons with given spin at the dot can jump only to one lead) or through a quantum dot with two degenerate levels with destructive quantum interference and high magnetic field. Although the charge current is always zero, the heat current is finite when the interdot Coulomb repulsion is taken into account, due to many-body effects. We generalize previous results for high temperatures and particular parameters obtained by Yadalam and Harbola [Phys. Rev. B 99, 195449 (2019)]. In particular, we consider temperatures for which an orbital Kondo regime takes place. In contrast to previous results, we find that the heat current is finite even for U → ∞. In the Kondo regime, for temperatures much less than the Kondo energy scale, we obtain that the dependence of the thermal current on the temperature difference ∆T is ∼ (∆T)^4 when the cold lead is at T_C ≪ ∆T, and linear in ∆T if T_C ≫ ∆T. For large T_C the current saturates. As a function of the Coulomb strength U, for high ∆T and T_C = 0, the heat current has a maximum for U ∼ 3∆T and decreases with increasing U, reaching a finite value for U → ∞. We also consider the case of different energy levels of the dots, for which the device has rectifying properties.
I. INTRODUCTION
In the last few years, interest in the thermoelectric properties of nanodevices has increased and new relevant works have been published [1-8], which further contribute to our knowledge of these systems, motivated by possible applications as well as fundamental interest. Among some recent developments, it has been shown that molecules exhibiting destructive quantum interference effects yield a higher thermopower [3]. The differential Seebeck coefficient at finite applied bias voltage and temperature difference ∆T, linked to the proposal of quantum dots as possible nanoscale temperature sensors [25], is being studied [25-27]. Systems of double quantum dots are also being studied [5,35,37]. In particular, Yadalam and Harbola [37] studied the full statistics of heat fluctuations in a system of two quantum dots in series, each one coupled to a corresponding lead (left dot with the left lead, right dot with the right lead) for spinless electrons (corresponding to a high magnetic field) and a Coulomb repulsion U between both dots (see Fig. 1).
The electrons cannot hop between the dots, and therefore the charge current is zero. Interestingly, for non-zero U (for which the model is not trivial) and ∆T there is however a finite thermal current. A physical explanation of this is given in Section III. This implies that the Lorenz number (the ratio of thermal to charge conductance divided by temperature) is infinite pointing to a strong violation of the Wiedemann-Franz law. For future discussions we denote this model as the "2-dot model".
This model is equivalent to that of transport between two levels with destructive interference (DESINT) under high magnetic fields (the "DESINT model") [39]. For example, if a benzene molecule is doped with one electron or one hole, the many-body ground state has spin and orbital degeneracy. If the molecule is connected to one lead at an atom and the other lead is connected perpendicular to the first one, so that it is coupled equally to a first and a second nearest neighbor of the first atom, there is a linear combination of the many-body states that couples only to one lead and the other combination to the opposite lead. The mapping is explained in detail in Ref. 39 and applies also to other molecular quantum dots with destructive interference. The model is also equivalent to a spinfull model for one dot in which electrons with spin up can only hop to the left lead and electrons with spin down can only hop to the right lead (naturally the leads can be interchanged). This is the case for totally polarized ferromagnetic leads with opposite orientation (angle π between them) [11]. We denote this model as the "spinfull model" for future reference.
Yadalam and Harbola studied the model at high temperatures with two methods. For small coupling to the leads ∆ ν with ν = L (left lead) or ν = R (right lead) they used the Lindblad quantum master equation approach, and for large ∆ ν and small U they used the saddle point approximation for the Schwinger-Keldysh coherent-state path integral, using a Hubbard-Stratonovich decoupling for the Coulomb repulsion. Unfortunately, due to the high temperatures used, the Kondo effect, which takes place at temperatures below the characteristic Kondo temperature T K , is lost in their work.
Here we show that the Lindblad quantum master equation method leads to the same result for the current as the atomic limit ∆ ν → 0. In this limit the Kondo effect is not captured. On the other hand, the different possible Hubbard-Stratonovich decouplings also have problems in reproducing correctly the Kondo physics (see Section 4.1 of Ref. 40).
The Kondo effect is one of the most paradigmatic phenomena in strongly correlated condensed matter systems [41]. In its simplest version, for example for the spinfull model, the phenomenon is characterized by the emergence of a many-body singlet ground state formed by an impurity spin 1/2 and the spin 1/2 of the conduction electrons near the Fermi level, below the characteristic Kondo temperature T_K. As a consequence, the spectral density of the impurity displays a resonance at the Fermi energy. This explains the widely observed zero-bias anomaly in transport through quantum dots with an odd number of electrons [4,5,7,12,42-45]. The Kondo effect with spin S > 1/2 has also been observed [46-48]. The role of the impurity spin can be played by another quantum degree of freedom that distinguishes degenerate states, such as orbital momentum. Orbital degeneracy leads to the orbital Kondo effect or to more exotic Kondo effects, like the SU(4) one, when both orbital and spin degeneracy coexist. Some examples are present in nanoscopic systems [16,17,49-56]. Evidence of the orbital Kondo effect has also been observed in magnetic systems in which the spin degeneracy is broken [57-59]. In our case, for the 2-dot model with the same on-site energy at both dots, the occupancy of one dot or the other plays the role of the orbital degree of freedom. This role is taken by the occupancy of one or the other of the degenerate levels in the DESINT model.
In this work we calculate the heat current of the model using different diagrammatic techniques that describe the Kondo effect correctly. We extend previous results for high temperatures [37] to all temperatures, in particular those smaller than T_K. We also consider the case of different energy levels of both dots. In this case, the device rectifies the heat current, which is asymmetric for positive or negative heat bias [6]. We compare the results at high temperatures with those of the atomic limit. We analyze the dependence on U and show that there is a finite thermal current for U → ∞. Most of the results presented were obtained using non-equilibrium perturbation theory up to second order in U, which is valid for small or moderate values of U [60,61]. For infinite U we use renormalized perturbation theory [20,62-65] and the non-crossing approximation [66-68]. The paper is organized as follows. In Sec. II, we describe the model, the equations for the particle and heat currents, and the above mentioned theoretical methods. In Sec. III we discuss a physical picture for the heat transport. Sec. IV contains the results. Sec. V contains a summary and a discussion.
II. MODEL AND METHODS
A. Model
The Hamiltonian can be written as follows:

\[
H = \sum_{\nu} E_{\nu} d^{\dagger}_{\nu} d_{\nu}
+ U d^{\dagger}_{L} d_{L} d^{\dagger}_{R} d_{R}
+ \sum_{k\nu} \varepsilon_{k\nu} c^{\dagger}_{k\nu} c_{k\nu}
+ \sum_{k\nu} \left( V_{k\nu} c^{\dagger}_{k\nu} d_{\nu} + \mathrm{H.c.} \right), \tag{1}
\]
where ν = L, R refers to the left and right dot or leads. The first term describes the energy of an electron in each dot, the second term is the Coulomb repulsion between electrons in different dots, the third term corresponds to a continuum of extended states for each lead, and the last term is the hybridization between electrons of each dot and the corresponding lead. For the DESINT model, the labels L, R correspond to different degenerate levels of the same dot and the continua that hybridizes with each of them, and for the spinfull model, L, R describe the different spin projections up, down.
In general, both leads are at different chemical potentials µ ν and temperatures T ν . For most of the results presented here we take µ ν = 0.
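As a quick check on the interacting part of Eq. (1): for ∆_ν = 0 the isolated double dot has only the four occupation states (n_L, n_R), and its Hamiltonian is diagonal in that basis. The following minimal sketch (not from the paper; parameter values are arbitrary illustrations) enumerates those energies:

```python
from itertools import product

def dot_hamiltonian(EL, ER, U):
    # Many-body Hamiltonian of the isolated double dot (Delta_nu = 0)
    # in the occupation basis (nL, nR), with nL, nR in {0, 1}.
    basis = list(product((0, 1), repeat=2))   # [(0,0), (0,1), (1,0), (1,1)]
    H = [[0.0] * 4 for _ in range(4)]
    for i, (nL, nR) in enumerate(basis):
        # E_L nL + E_R nR + U nL nR: the first two terms of Eq. (1)
        H[i][i] = EL * nL + ER * nR + U * nL * nR
    return basis, H

# Symmetric point E_L = E_R = -U/2: the empty and doubly occupied states
# are degenerate at 0, and the two singly occupied states at -U/2.
basis, H = dot_hamiltonian(EL=-0.5, ER=-0.5, U=1.0)
energies = [H[i][i] for i in range(4)]   # -> [0.0, -0.5, -0.5, 0.0]
```

The degeneracy of the two singly occupied states at the symmetric point is what plays the role of the (orbital) spin 1/2 in the Kondo physics discussed below.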
The couplings to the leads, assumed independent of frequency, are expressed in terms of the half width at half maximum of the spectral density in the absence of the interaction,

\[
\Delta_{\nu} = \pi \sum_{k} |V_{k\nu}|^2\, \delta(\omega - \varepsilon_{k\nu}). \tag{2}
\]
B. Equations for the currents
The equations for the particle and heat currents can be obtained using the Keldysh formalism in an analogous way to previous studies of transport through a single quantum dot [20,85], with appropriate modifications that take into account how the conducting leads are connected to the interacting part (the double dot for the 2-dot model or the single dot for the DESINT and spinfull models).

For the 2-dot model, the particle current flowing between the left lead and the dot can be written as

\[
J^{L}_{N} = \frac{2i\Delta_L}{h} \int d\omega \left[ 2i f_L(\omega)\, \mathrm{Im}\, G^{r}_{L}(\omega) + G^{<}_{L}(\omega) \right], \tag{3}
\]

where G^r_L(ω) [G^<_L(ω)] is the retarded [lesser] Green function of the left dot and f_ν(ω) = {1 + exp[(ω − µ_ν)/T_ν]}^{−1} is the Fermi function. The corresponding charge current is J^L_C = e J^L_N, where e is the electronic charge. Similarly, the particle current flowing between the dot and the right lead is

\[
J^{R}_{N} = -\frac{2i\Delta_R}{h} \int d\omega \left[ 2i f_R(\omega)\, \mathrm{Im}\, G^{r}_{R}(\omega) + G^{<}_{R}(\omega) \right]. \tag{4}
\]
The heat currents J^L_Q flowing from the left lead to the dot and J^R_Q flowing from the dot to the right lead are

\[
J^{\nu}_{Q} = J^{\nu}_{E} - \mu_{\nu} J^{\nu}_{N}, \tag{5}
\]

where the J^ν_E are the energy currents given by

\[
J^{\nu}_{E} = \pm \frac{2i\Delta_{\nu}}{h} \int \omega\, d\omega \left[ 2i f_{\nu}(\omega)\, \mathrm{Im}\, G^{r}_{\nu}(\omega) + G^{<}_{\nu}(\omega) \right], \tag{6}
\]
where upper (lower) sign corresponds to ν = L (R). These results can be extended to the equivalent DESINT or spinfull models. In the former case, the labels L and R denote two different energy levels, and in the spinfull case L denotes spin up, and R spin down.
In the stationary state, the charge and energy currents are uniform and should be conserved: J^L_N = J^R_N and J^L_E = J^R_E.
The heat current is not conserved under an applied voltage (µ_L ≠ µ_R) due to Joule heating of the interacting part of the system [20]. For the models studied in this work, in addition, J^L_N = J^R_N = 0 because electrons cannot hop between the left and right parts of the system.
C. Perturbation theory in U
For the Anderson model at equilibrium, with ∆_L = ∆_R = ∆, perturbation theory in U/(π∆) has been a popular method for several years [69,70], also applied to nanoscopic systems [71-75] and recently to superconducting systems [76,77]. Comparison with quantum Monte Carlo results indicates that the method is quantitatively valid in the symmetric case E_L = E_R = −U/2 for U/(π∆) as large as 2.42 [78]. The method can be extended naturally to the nonequilibrium case using the Keldysh formalism [60,61]. One shortcoming of the approach is that the particle and energy currents are not conserved: in general the approximation gives J^L_N ≠ J^R_N and J^L_E ≠ J^R_E [see Eqs. (3), (4) and (6)], contrary to what one expects. In our case, however, J^L_N = J^R_N = 0 within numerical precision. In our calculations, presented in Section IV, we represent the heat current defined as J_Q = (J^L_Q + J^R_Q)/2, where J^ν_Q = J^ν_E because in our system J^ν_N = 0 [see Eq. (5)]. The relative deviation d = |J^L_Q/J_Q − 1| = |J^R_Q/J_Q − 1| is usually of the order of 2% or less, but reaches a value near 14% at high temperatures and the largest values of U used with this method (U = 7∆).
D. Renormalized perturbation theory
For U ≫ ∆ the approach mentioned above fails, but for energy scales below T_K one can use renormalized perturbation theory (RPT). The basic idea of RPT is to reorganize the perturbation expansion in terms of fully dressed quasiparticles, taking as a basis the equilibrium Fermi liquid picture [79]. The parameters of the original model are renormalized, and their values can be calculated exactly using the Bethe ansatz, or with high accuracy using the numerical renormalization group [63,80]. One of the main advantages is that the renormalized expansion parameter Ũ/(π∆̃) is small, usually below 1.1 even for U → ∞ [63,64]. Our RPT procedure consists in using renormalized parameters for E_L = E_R, U and ∆, obtained at µ_L = µ_R = T_L = T_R = 0 by a numerical-renormalization-group calculation [63,64], and incorporating perturbations up to second order in the renormalized interaction Ũ. It has been shown explicitly that this non-equilibrium approach satisfies important Ward identities [20,64]. At equilibrium, the method provides results that coincide with state-of-the-art techniques for the dependence of the conductance on magnetic field B (c_B) [63] and temperature (c_T) [64] to second order in B or T. An analytical expression for c_T in terms of the renormalized parameters was provided [64]. However, for energy scales of the order of T_K or larger, the method loses accuracy and a complementary approach is needed.
E. Non-crossing approximation
For infinite U we also calculate the different Green functions entering Eqs. (3), (4) and (6) using the non-equilibrium non-crossing approximation (NCA) [66-68]. The NCA technique is one of the standard tools for calculating these Green functions in the Kondo regime, where the total occupancy of the interacting subsystem (the double dot for the 2-dot model or the dot for the DESINT and spinfull models) is near 1 with small fluctuations (the charge is well localized in the dot or dots). The NCA has been successfully applied to the study of a variety of systems, such as C_60 molecules displaying a quantum phase transition [48,81], a nanoscale Si transistor [52], two-level quantum dots [82], and the interplay between vibronic effects and the Kondo effect [39,83]. In spite of this success, the NCA has some limitations at very low temperatures (below ∼ 0.1 T_K). For example, it does not satisfy the Friedel sum rule accurately at zero temperature [84]. In this sense it is complementary to RPT, which should be accurate for T_L, T_R ≪ T_K.
In contrast to the RPT, the NCA conserves the charge current, as shown explicitly for the DESINT model in Ref. 68. We find that the NCA also conserves the energy current.
III. PHYSICAL PICTURE FOR THE THERMAL CURRENT
Since electrons cannot hop between left and right parts of the system, it is clear that the particle current is zero in our system. It might seem surprising that the heat current is nonzero under a finite temperature difference ∆T = T L − T R in spite of the fact that exchange of particles is not possible.
The aim of this section is to provide a simple picture for the transport of heat in the presence of interactions. We assume small ∆_ν, so that states with a definite number of particles at each dot are relatively stable. Without loss of generality we can also assume ∆T > 0. Let us take E_L = E_R < µ_L = µ_R = 0 and E_ν + U > 0. For ∆_ν = 0, one of the possible ground states of the system has occupancies (n_L, n_R) = (0, 1). Let us take this state as the initial state for a cycle of transitions that transports heat. For non-zero ∆_L, if the left lead is hot enough, one can perform the first step of the thermal cycle (see Fig. 2): (i) take an electron from the left lead and occupy the left dot, changing the state of the double dot to (1,1) [(0,1) → (1,1)]. This costs energy U + E_L, which is taken from the left lead. Next, (ii) take an electron from the right dot and transfer it to the right lead [(1,1) → (1,0)]. This releases the energy U + E_R, which is then transferred to the right lead. Next, (iii) the electron in the left dot jumps to the corresponding lead [(1,0) → (0,0)]. This requires an energy |E_L| taken from the left lead. Finally, (iv) take an electron from the right lead and occupy the right dot, closing the cycle [(0,0) → (0,1)] and transferring the energy |E_R| to the right lead. As a result of the cycle, an amount of energy U is transferred from the left lead to the right one.
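The cycle above can be made quantitative in the sequential-tunneling (atomic) limit ∆_ν → 0, where golden-rule rate equations over the four occupation states determine the steady state. The sketch below is a minimal illustration under that assumption (Γ_ν stands for bare tunneling rates, µ_L = µ_R = 0, and the parameter values are arbitrary); it is not the Lindblad calculation of Ref. 37:

```python
import math

def fermi(e, T):
    # Fermi function at chemical potential mu = 0, with overflow clamping
    x = e / T
    if x > 700.0:
        return 0.0
    if x < -700.0:
        return 1.0
    return 1.0 / (1.0 + math.exp(x))

def solve4(A, b):
    # Gauss-Jordan elimination with partial pivoting for a 4x4 system
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(4):
        piv = max(range(c, 4), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(4):
            if r != c:
                f = M[r][c] / M[c][c]
                for k in range(c, 5):
                    M[r][k] -= f * M[c][k]
    return [M[r][4] / M[r][r] for r in range(4)]

def currents(EL, ER, U, GL, GR, TL, TR):
    # Sequential-tunneling rate equations in the atomic limit.
    # States: 0=(0,0), 1=(1,0), 2=(0,1), 3=(1,1).
    # Each entry: (state without the electron, state with it, lead, addition energy)
    adds = [(0, 1, 'L', EL), (2, 3, 'L', EL + U),
            (0, 2, 'R', ER), (1, 3, 'R', ER + U)]
    G = {'L': GL, 'R': GR}
    T = {'L': TL, 'R': TR}
    W = [[0.0] * 4 for _ in range(4)]        # W[j][i]: rate for i -> j
    for i, j, lead, dE in adds:
        f = fermi(dE, T[lead])
        W[j][i] += G[lead] * f               # electron enters the dot
        W[i][j] += G[lead] * (1.0 - f)       # electron leaves to the lead
    A = [[W[j][i] for i in range(4)] for j in range(4)]
    for i in range(4):
        A[i][i] -= sum(W[j][i] for j in range(4))
    A[3] = [1.0] * 4                          # normalization replaces one equation
    p = solve4(A, [0.0, 0.0, 0.0, 1.0])
    JN = JQ = 0.0                             # particle / heat current into right lead
    for i, j, lead, dE in adds:
        if lead == 'R':
            f = fermi(dE, TR)
            net = p[j] * GR * (1.0 - f) - p[i] * GR * f
            JN += net
            JQ += dE * net                    # heat = energy, since mu_R = 0
    return JN, JQ

# Hot left lead, cold right lead: JN ~ 0 (no interdot hopping) and JQ > 0
JN, JQ = currents(EL=-1.0, ER=-1.0, U=2.0, GL=1.0, GR=1.0, TL=2.0, TR=0.5)
```

Because electrons cannot hop between the dots, the steady-state particle current through the right lead vanishes identically, while the heat current is finite for U ≠ 0 and ∆T ≠ 0 and vanishes when U = 0, in line with the discussion above: each completed cycle carries an energy U from the hot to the cold lead.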
The resulting thermal current depends on the probability per unit time of each process and requires an explicit calculation. In addition, while this picture provides a qualitative understanding of the general case, it is not enough to describe the thermal transport in the Kondo regime, in which cotunneling events are important, and it does not explain what happens in the U → ∞ limit. However, the physical grounds of the RPT explained in Section II D allow us to apply the above picture to the Kondo regime even for U → ∞. Following these ideas, it is convenient to think of the low-energy excitations near the Fermi energy in terms of dressed quasiparticles rather than free electrons. These quasiparticles feel a renormalized repulsion Ũ ≪ U, which is finite even for infinite U, and therefore a scheme like the one explained above can be applied to the quasiparticles. Fig. 2 is also useful to represent the fluctuations involved in the Kondo effect. For the spinfull (DESINT) model they correspond to spin (orbital) fluctuations, and for the 2-dot model to fluctuations of the occupancy of one of the dots, keeping the total occupancy of the dots equal to 1. The sequence of the two steps (i) and (ii) and its time-reversed sequence correspond to fluctuations through the virtual state with double occupancy. Note that a temperature gradient favors the sequence (i)-(ii) with respect to the reciprocal one. Similarly, the process (iii)-(iv) and the reciprocal one correspond to fluctuations through the virtual empty double dot, and the former is favored by the temperature gradient. As a consequence of the inequivalence between the above-mentioned direct processes and the time-reversed ones, the occupancies of the left and right dots become different even if E_L = E_R, except for the symmetric case E_L = E_R = −U/2. This effect is similar to that caused by a magnetic field in the spinfull model.
IV. RESULTS
A. The atomic limit
In order to compare with some previous results 37 and with our own results at high temperatures, we discuss the limit ∆_L, ∆_R → 0.
One possible approach for this task would be to write the equations of motion for the different Keldysh Green functions, 86 truncate them with some approximation, and take the limit ∆_L, ∆_R → 0. However, the presence of both interactions and the hopping terms renders the process very cumbersome. Therefore we start from the equations of motion for strictly ∆_L = ∆_R = 0 and determine the remaining undetermined coefficients using conservation laws and general arguments.
For our Hamiltonian the particles cannot jump between the dots, which implies that in the stationary state J^ν_N = 0 and therefore J^ν_Q = J^ν_E. In the atomic limit ∆_ν = 0, the equations of motion give 61

G^r_\nu(\omega) = \frac{1 - n_{\bar\nu}}{\omega - E_\nu} + \frac{n_{\bar\nu}}{\omega - E_\nu - U}, \qquad G^<_\nu(\omega) = 2\pi i \left[ a_\nu \delta(\omega - E_\nu) + b_\nu \delta(\omega - E_\nu - U) \right], \qquad (7)
where n_\nu = \langle d^\dagger_\nu d_\nu \rangle is the expectation value of the occupancy of dot ν, and \bar\nu = R (L) if ν = L (R). The functions a_ν and b_ν are undetermined for ∆_ν = 0, and one would need to include finite ∆_ν in the equations of motion to determine them. This, however, introduces more involved Green functions, and approximations are necessary to solve the resulting equations of motion. As an alternative, we use conservation laws and a simple assumption to determine them.
Note that
n_\nu = -\frac{i}{2\pi} \int d\omega\, G^<_\nu(\omega) = a_\nu + b_\nu. \qquad (8)
Replacing Eqs. (7) and (8) in Eqs. (3) and (4) and imposing J^L_N = J^R_N = 0, one arrives at the following set of two equations,
n_\nu = (1 - n_{\bar\nu}) f_\nu(E_\nu) + n_{\bar\nu} f_\nu(E_\nu + U), \qquad (9)
from which n ν can be determined. The result is
n_\nu = \frac{f_\nu(E_\nu) - f_{\bar\nu}(E_{\bar\nu})\, D_\nu}{1 - D_L D_R}, \qquad D_\nu = f_\nu(E_\nu) - f_\nu(E_\nu + U). \qquad (10)
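As a consistency check, the closed-form occupancies of Eq. (10) can be verified numerically against the self-consistency conditions of Eq. (9). The following sketch is a direct transcription of the two formulas (parameter values are illustrative only; units with k_B = 1):

```python
import math

def fermi(w, mu, T):
    """Fermi function f(w) = 1 / (exp((w - mu)/T) + 1), with k_B = 1."""
    x = (w - mu) / T
    if x > 500.0:   # guard against overflow for T -> 0
        return 0.0
    if x < -500.0:
        return 1.0
    return 1.0 / (math.exp(x) + 1.0)

def occupancies(EL, ER, U, muL, muR, TL, TR):
    """Dot occupancies n_L, n_R in the atomic limit, Eq. (10)."""
    fL = lambda w: fermi(w, muL, TL)
    fR = lambda w: fermi(w, muR, TR)
    DL = fL(EL) - fL(EL + U)
    DR = fR(ER) - fR(ER + U)
    den = 1.0 - DL * DR
    nL = (fL(EL) - fR(ER) * DL) / den
    nR = (fR(ER) - fL(EL) * DR) / den
    return nL, nR

# Illustrative out-of-equilibrium parameters (arbitrary units)
EL, ER, U = -2.0, -1.0, 5.0
muL = muR = 0.0
TL, TR = 2.0, 0.5

nL, nR = occupancies(EL, ER, U, muL, muR, TL, TR)
# Self-consistency, Eq. (9): n_nu = (1 - n_nubar) f_nu(E_nu) + n_nubar f_nu(E_nu + U)
resL = (1 - nR) * fermi(EL, muL, TL) + nR * fermi(EL + U, muL, TL) - nL
resR = (1 - nL) * fermi(ER, muR, TR) + nL * fermi(ER + U, muR, TR) - nR
print(abs(resL) < 1e-12, abs(resR) < 1e-12)  # -> True True
```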
Using Eqs. (9) the energy currents can be written as
J^\nu_E = \pm \frac{4\pi \Delta_\nu U}{h} \left[ n_{\bar\nu} f_\nu(E_\nu + U) - b_\nu \right]. \qquad (11)
Conservation of the energy current in the stationary state, J^L_E = J^R_E, leads to an equation for ∆_L b_L + ∆_R b_R. At this point we make the assumption b_L = b_R. This is justified from the form of G^<_ν(ω) [Eqs. (7)] and Eq. (8): one realizes that b_L is the contribution to n_L at the energy E_L + U, which implies that the right dot is occupied (because of the presence of the Coulomb repulsion term). Therefore, one expects that b_L is the probability of double occupancy, and the same holds for b_R: b_\nu = \langle d^\dagger_L d_L d^\dagger_R d_R \rangle. Using J^L_E = J^R_E and b_L = b_R one obtains

(\Delta_L + \Delta_R)\, b_\nu = \Delta_L n_R f_L(E_L + U) + \Delta_R n_L f_R(E_R + U). \qquad (12)
Using Eqs. (10) and some algebra, one can verify that Eq. (12) leads to the correct result at equilibrium:
for \mu_L = \mu_R = 0 and T_L = T_R = 1/\beta one has

b_\nu = \langle d^\dagger_L d_L d^\dagger_R d_R \rangle = n_R f_L(E_L + U) = n_L f_R(E_R + U) = \frac{e^{-\beta(E_L + E_R + U)}}{1 + e^{-\beta E_L} + e^{-\beta E_R} + e^{-\beta(E_L + E_R + U)}}. \qquad (13)
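This equilibrium identity is easy to check numerically: with the occupancies of Eq. (10) and b_ν from Eq. (12) evaluated at equal temperatures and µ_L = µ_R = 0, one recovers the Gibbs double occupancy of Eq. (13). A minimal sketch (illustrative parameters, k_B = 1):

```python
import math

def fermi(w, T):
    """Fermi function at chemical potential mu = 0, with k_B = 1."""
    x = w / T
    if x > 500.0:   # overflow guard
        return 0.0
    if x < -500.0:
        return 1.0
    return 1.0 / (math.exp(x) + 1.0)

# Illustrative equilibrium parameters (arbitrary units)
EL, ER, U = -2.0, -1.0, 5.0
T = 0.7
beta = 1.0 / T

fL = lambda w: fermi(w, T)
fR = fL                     # equal temperatures: f_L = f_R

# Occupancies from Eq. (10)
DL = fL(EL) - fL(EL + U)
DR = fR(ER) - fR(ER + U)
den = 1.0 - DL * DR
nL = (fL(EL) - fR(ER) * DL) / den
nR = (fR(ER) - fL(EL) * DR) / den

# Double occupancy from Eq. (12) with Delta_L = Delta_R
b = 0.5 * (nR * fL(EL + U) + nL * fR(ER + U))

# Gibbs result, Eq. (13)
Z = 1 + math.exp(-beta * EL) + math.exp(-beta * ER) + math.exp(-beta * (EL + ER + U))
b_gibbs = math.exp(-beta * (EL + ER + U)) / Z

print(abs(b - b_gibbs) < 1e-12)  # -> True
```

The two middle members of Eq. (13), n_R f_L(E_L + U) and n_L f_R(E_R + U), agree to machine precision as well, so b is independent of the ratio ∆_L/∆_R at equilibrium.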
Replacing Eq. (12) in Eq. (11) one obtains the final expression for the heat current
J_Q = J^\nu_E = \frac{4\pi \Delta_L \Delta_R U}{h (\Delta_L + \Delta_R)} \left[ n_R f_L(E_L + U) - n_L f_R(E_R + U) \right]. \qquad (14)
It can be checked that this expression is invariant under the replacement E_\nu \to -E_\nu - U, as expected from an electron-hole transformation of the Hamiltonian: d^\dagger_\nu \to d_\nu, c^\dagger_{k\nu} \to -c_{k'\nu} with \varepsilon_{k'\nu} = -\varepsilon_{k\nu}. For the symmetric case E_\nu = -U/2 one has n_\nu = 1/2, and Eq. (14) reduces to
J_Q = \frac{2\pi \Delta_L \Delta_R U}{h (\Delta_L + \Delta_R)} \left[ f_L(E_L + U) - f_R(E_R + U) \right], \qquad (15)
which coincides with the expression obtained by Yadalam and Harbola (see the expression for C_1 in Appendix A of Ref. 37; note that in their notation Γ_ν = 2∆_ν).
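Both symmetry statements above, the electron-hole invariance of Eq. (14) under E_ν → −E_ν − U and its reduction to Eq. (15) in the symmetric case, can be checked numerically. A minimal sketch (units with k_B = h = 1; parameter values are illustrative):

```python
import math

def fermi(w, T):
    """Fermi function at mu = 0, k_B = 1, with an overflow guard for T -> 0."""
    x = w / T
    if x > 500.0:
        return 0.0
    if x < -500.0:
        return 1.0
    return 1.0 / (math.exp(x) + 1.0)

def heat_current(EL, ER, U, dL, dR, TL, TR):
    """Heat current J_Q in the atomic limit, Eq. (14); dL, dR are Delta_L, Delta_R (h = 1)."""
    fL = lambda w: fermi(w, TL)
    fR = lambda w: fermi(w, TR)
    DL = fL(EL) - fL(EL + U)
    DR = fR(ER) - fR(ER + U)
    den = 1.0 - DL * DR
    nL = (fL(EL) - fR(ER) * DL) / den   # Eq. (10)
    nR = (fR(ER) - fL(EL) * DR) / den
    pref = 4 * math.pi * dL * dR * U / (dL + dR)
    return pref * (nR * fL(EL + U) - nL * fR(ER + U))

TL, TR, U, D = 2.0, 0.5, 5.0, 1.0

# Electron-hole symmetry: J_Q invariant under E_nu -> -E_nu - U
J1 = heat_current(-2.0, -1.0, U, D, D, TL, TR)
J2 = heat_current(2.0 - U, 1.0 - U, U, D, D, TL, TR)
print(abs(J1 - J2) < 1e-12)  # -> True

# Symmetric case E_nu = -U/2: Eq. (14) reduces to Eq. (15)
J14 = heat_current(-U / 2, -U / 2, U, D, D, TL, TR)
J15 = (2 * math.pi * D * D * U / (D + D)) * (fermi(U / 2, TL) - fermi(U / 2, TR))
print(abs(J14 - J15) < 1e-12)  # -> True
```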
B. Dependence of thermal current on ∆ν
In the following we take E_L = E_R = E, ∆_L = ∆_R = ∆, µ_L = µ_R = 0 and ∆T = T_L − T_R > 0.
In this short subsection we take parameters corresponding to Fig. 6 of Ref. 37 and calculate J_Q as a function of ∆ using perturbation theory in U (see Section II C), in the symmetric case E = −U/2 and for ∆ ≥ U/10. For smaller values of ∆ the method is not reliable. The result is represented in Fig. 3. For small ∆ the heat current increases linearly with ∆, as expected from Eqs. (14), (15). For larger ∆ the slope decreases; J_Q reaches a maximum for ∆ ∼ 0.8 T_R and then decreases. For all values of ∆ the relative deviation of the method in the conservation of the energy current, d = |J^R_Q/J_Q − 1| (see Section II C), is below 1%. Below we discuss the thermal current at low temperatures.
C. Dependence of thermal current on ∆T
In this subsection we take T_R = 0 and analyze the dependence of J_Q on T_L = ∆T, using perturbation theory in U for the symmetric case E = E_L = E_R = −U/2, as above. We consider several values of U within the validity of the perturbative approach. The results are shown in Fig. 4. One signature of the limits of this perturbative approach is the relative error in the conservation of the energy current, d = |J^R_Q/J_Q − 1| (see Section II C). While it is negligible for very small temperatures and moderate values of U, it reaches 12.6% for U = 7∆ and T_L ∼ 2∆, decreasing slowly as T_L increases.
For U = 7∆, near the limit of validity of this approach, the system has the characteristics of the Kondo regime (−E, E + U ≫ ∆) at equilibrium. The spectral density has a well-defined peak at the Fermi energy (the Kondo peak), separated from the charge-transfer peaks near E and E + U. From the half-width at half maximum of the Kondo peak one obtains an estimate of the Kondo temperature, T_K ∼ 0.27∆. We find that for T_L well below T_K (we verified this for T_L < 0.04∆) the heat current behaves as J_Q ∼ (∆T)^4. This remains true as long as the smaller temperature (T_R in our case) is also much smaller than T_K. For large T_R, J_Q is linear in ∆T for small ∆T.
For all values of U, after the initial flat increase of the thermal current with ∆T, J_Q increases approximately linearly with ∆T for ∆T ∼ ∆ and finally saturates when ∆T reaches a few times U. For U, ∆T ≫ ∆ the thermal current is qualitatively described by Eq. (15), although the saturation value of this expression is larger, particularly for small values of U. In contrast, for small T_L the analytical expression has an exponential dependence and falls below the value given by perturbation theory.
D. The limit U → ∞
Here we choose parameters corresponding to the Kondo regime, E_L = E_R = −4∆ and U → ∞, and calculate the current using RPT and NCA (see Sections II D, II E) as a function of ∆T, keeping T_R = 0 (RPT) or a small fraction of the Kondo temperature (NCA), so that the results are indistinguishable from those for T_R = 0.
In order to compare the results of both approximations, it is convenient to represent them taking as the unit of energy the Kondo temperature T_K, which is the only relevant energy scale at small temperatures. Because of some details of the approximations (like the high-energy cutoff, for example), the T_K of the two approximations differ, although they are of the same order of magnitude. We have shown recently that extracting T_K from the temperature dependence of the conductance G(T/T_K) of an equivalent model H_eq at equilibrium is more reliable than fitting the spectral density or the nonequilibrium conductance. 87 This equivalent model consists of the usual spin-degenerate Anderson model for a dot connected to left and right leads with both spins. In particular, for ∆_L = ∆_R = ∆, H_eq has coupling ∆/2 for each spin and each lead. At equilibrium H_eq has the same spectral density as our 2-dot model. However, the transport properties are completely different, because the two models are connected in different ways to the leads and therefore differ out of equilibrium.
In any case, we can use the mapping at equilibrium to define T_K. This task has already been done in Ref. 87, fitting a popular phenomenological expression for G(T/T_K). The renormalized parameters for RPT were taken from previous calculations. 63,64 The result was T_K = 0.00441∆ for the RPT and T_K = 0.00796∆ for the NCA. The result for J_Q as a function of ∆T is shown in Fig. 5. For small temperatures T < 0.2 T_K the RPT result is more reliable and shows a (∆T)^4 dependence. For T > T_K the RPT breaks down and only the NCA is reliable. In the transition region both approaches agree semiquantitatively when the corresponding T_K is taken as the unit of energy, with the NCA result being larger.
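The phenomenological expression used in Ref. 87 is not reproduced here; a common choice for this kind of fit (stated as an assumption, not necessarily the form used by those authors) is the empirical curve G(T) = G_0 [1 + (2^{1/s} − 1)(T/T_K)^2]^{−s} with s ≈ 0.22, which defines T_K through G(T_K) = G_0/2:

```python
def conductance_fit(T, TK, G0=1.0, s=0.22):
    """Empirical conductance curve often used to extract T_K from G(T).
    Hypothetical stand-in for the fit of Ref. 87 (the paper does not
    reproduce the formula).  By construction G(T_K) = G0/2."""
    return G0 * (1.0 + (2.0 ** (1.0 / s) - 1.0) * (T / TK) ** 2) ** (-s)

TK = 0.00441  # RPT Kondo temperature in units of Delta, from the text
print(abs(conductance_fit(TK, TK) - 0.5) < 1e-12)  # -> True
```

Fitting data for G(T) to this curve with T_K as a free parameter then fixes the energy scale used to compare the RPT and NCA results.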
From the inset of Fig. 5 we observe that for much higher ∆T > 10∆ the thermal current saturates to a finite value, as already found for other values of U in the symmetric case (see Fig. 4). From the expression in the limit ∆ → 0, Eq. (14), one might expect that J_Q → 0 for U → ∞ and very large but finite ∆T. However, this expression is linear in ∆, while the result shown in Fig. 5 for large ∆T is quadratic in ∆. This suggests that Eq. (14) is valid to first order in ∆ and that a finite value of the thermal current can be obtained by expanding the current to higher order near the atomic limit ∆_ν → 0.

E. Dependence of thermal current on U

In Fig. 6 we represent the thermal current as a function of U calculated by perturbation theory in the symmetric case E_L = E_R = −U/2, for different ∆T = T_L, keeping T_R = 0. Since the thermal current depends strongly on ∆T for small ∆T, the values have been multiplied by the factors indicated in the figure. In spite of the different magnitudes, the curves show a similar dependence, with a U^2 behavior for small U. At intermediate T_L (0.5∆ and ∆), the curves show a maximum within the interval of U shown (determined by the validity of the perturbative approach).
According to the limit of small ∆ [Eqs. (14), (15)], one expects that for large ∆T there is a maximum in the thermal current at an intermediate value of U. Since at high temperatures the effect of correlations is expected to be less important, we have also calculated J_Q for ∆T = 10∆ as a function of U over a larger interval, which in principle is beyond the validity of the approach, and compared it with the result in the atomic limit ∆_ν → 0 [Eq. (15)]. The result is shown in Fig. 7. Taking into account the limitations of both approximations, the results are surprisingly similar. In particular, both approaches lead to a maximum in the thermal current for U ∼ 3∆T. For small U the perturbative approach gives a quadratic dependence on U, while the dependence in Eq. (15) is linear.
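In the atomic limit the position of this maximum follows directly from Eq. (15). A minimal numerical sketch (k_B = h = 1; T_R = 0 is approximated by a tiny temperature to avoid dividing by zero) locating the optimum for ∆T = 10∆:

```python
import math

def fermi(w, T):
    """Fermi function at mu = 0, k_B = 1, guarded against overflow."""
    x = w / T
    if x > 500.0:
        return 0.0
    if x < -500.0:
        return 1.0
    return 1.0 / (math.exp(x) + 1.0)

def JQ_eq15(U, D, TL, TR):
    """Heat current of Eq. (15), symmetric case E = -U/2, Delta_L = Delta_R = D, h = 1."""
    return (2 * math.pi * D * D * U / (D + D)) * (fermi(U / 2, TL) - fermi(U / 2, TR))

D = 1.0
TL, TR = 10.0 * D, 1e-6                        # Delta_T = T_L = 10 Delta, cold lead ~ 0
Us = [0.05 * k * D for k in range(1, 2001)]    # scan U up to 100 Delta
U_best = max(Us, key=lambda U: JQ_eq15(U, D, TL, TR))
print(2.0 < U_best / TL < 3.5)  # -> True: maximum at U of order 3 Delta_T
```

Maximizing U f(U/2, T_L) analytically gives U ≈ 2.56 T_L, consistent with the U ∼ 3∆T found above.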
F. Effects of different energy of the two dots
In most of the calculations presented above we have considered E_L = E_R, although the analytical results in the atomic limit ∆_ν → 0 [Eq. (14)] are valid for arbitrary E_ν. One effect of having different E_ν is the loss of the Kondo effect, in a similar way as the application of a magnetic field in the simplest impurity Anderson model. Another effect is that the current has different magnitudes for positive and negative ∆T of the same absolute value. This rectification effect might be important for applications. 6 In Fig. 8 we show an example of this rectification effect in the atomic limit. We have taken one level below but near the Fermi energy, and the other one below and well separated from the Fermi energy. We find that the magnitude of the heat current is larger when the former level is next to the lead with the lower temperature. The ratio between the two currents is larger than a factor of two for small or moderate values of the temperature difference ∆T.
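The rectification ratio quoted above can be reproduced from Eq. (14) alone. A minimal sketch with the parameters of Fig. 8 (U = 20∆, levels at −4∆ and −∆; T_R = 0 approximated by a small temperature, k_B = h = 1, and ∆T = 2∆ as a representative moderate value):

```python
import math

def fermi(w, T):
    """Fermi function at mu = 0, k_B = 1, guarded against overflow."""
    x = w / T
    if x > 500.0:
        return 0.0
    if x < -500.0:
        return 1.0
    return 1.0 / (math.exp(x) + 1.0)

def JQ_eq14(EL, ER, U, D, TL, TR):
    """Heat current in the atomic limit, Eq. (14), for Delta_L = Delta_R = D, h = 1."""
    fL = lambda w: fermi(w, TL)
    fR = lambda w: fermi(w, TR)
    DL = fL(EL) - fL(EL + U)
    DR = fR(ER) - fR(ER + U)
    den = 1.0 - DL * DR
    nL = (fL(EL) - fR(ER) * DL) / den   # Eq. (10)
    nR = (fR(ER) - fL(EL) * DR) / den
    return (4 * math.pi * D * D * U / (2 * D)) * (nR * fL(EL + U) - nL * fR(ER + U))

D, U = 1.0, 20.0
TL, TR = 2.0 * D, 1e-3  # moderate Delta_T = 2 Delta, cold right lead
# Level nearest to the Fermi energy next to the cold (right) lead:
J_shallow_cold = JQ_eq14(-4 * D, -1 * D, U, D, TL, TR)
# Same levels swapped, shallow level next to the hot (left) lead:
J_shallow_hot = JQ_eq14(-1 * D, -4 * D, U, D, TL, TR)
print(J_shallow_cold / J_shallow_hot > 2.0)  # -> True: rectification ratio above two
```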
In Fig. 9 we show similar results obtained with the NCA for infinite U. We define ∆ = (∆_L + ∆_R)/2. To keep our convention T_L > T_R, we have inverted the device through the middle point instead of inverting the temperature; the result for the magnitude of the current is the same. While in the atomic limit used in the previous figure the ratio of the currents is independent of the asymmetry between the ∆_ν (only a multiplicative factor is affected), this is not the case for the NCA, although the dependence on the asymmetry is very weak for large temperature difference ∆T. As in the previous case, the largest magnitude of the thermal current is obtained when the level nearest to the Fermi energy is next to the cold lead. Also, the rectification ratio is larger than two for small or moderate ∆T. A difference with the previous case is that for large ∆T a certain degree of rectification remains. We have also done some calculations for E_L ≠ E_R using perturbation theory. However, for small U the rectification properties are too small, while for large U the error in the conservation of the current increased rapidly, and we considered that the results were not reliable enough.
V. SUMMARY AND DISCUSSION
We have studied the thermal current through a system of two capacitively coupled quantum dots connected in series with two conducting leads in the spinless case (corresponding to a high applied magnetic field). The system is also equivalent to one spinfull dot between two conducting leads fully spin polarized in opposite directions, and to a molecular quantum dot with two relevant levels connected to the leads in such a way that there is perfect destructive interference.
An interesting feature of the system is that charge transport is not possible, but heat transport is, due to the effect of the Coulomb repulsion between the electrons in the dots, leading to a strong violation of the Wiedemann-Franz law. A simple picture of the effect of the Coulomb repulsion in the heat transport is provided in Section III.
The system has been studied previously in the regime of high temperatures of both leads (including also the full counting statistics). 37 We generalize the results in the limit of small coupling to the leads to arbitrary values of the other parameters, considering all temperatures and in particular the Kondo regime, in which one particle is strongly localized in the double dot but fluctuates between the two dots. For high temperatures of the leads our results agree in general with the previous ones, displaying a non-monotonic behavior as a function of the Coulomb repulsion and/or the coupling to the leads, with a maximum at intermediate values.
For temperatures well below the Kondo energy scale T_K, we obtain that the heat current is proportional to the fourth power of the difference ∆T between the temperatures of both leads.
For infinite Coulomb repulsion, in contrast to the previous work, 37 we find that the heat current is finite for all nonzero values of ∆T. Within the Kondo regime this result can be understood in the frame of renormalized perturbation theory: near the Fermi energy the main aspects of the physics can be understood in terms of dressed, weakly interacting quasiparticles. Even if the original Coulomb repulsion U → ∞, the renormalized one Ũ is small and comparable with the renormalized coupling to the leads.
When the energies of the two dots E_ν or the couplings to the leads ∆_ν are different, the system loses its inversion symmetry at the midpoint between the dots, and therefore one expects that the absolute value of the heat current J_Q is different for positive and negative temperature differences ∆T. This means that the device has some rectifying properties. In the case in which only the thermal gradient breaks inversion symmetry, one has J_Q(−∆T) = −J_Q(∆T). Our results suggest that an asymmetry in the couplings, ∆_L ≠ ∆_R, modifies the amplitude of the current but has little effect on the rectifying properties. Instead, when E_L ≠ E_R, a factor larger than two between the currents flowing in opposite senses can be obtained. It is possible that this effect might be increased by adding more dots in series.
FIG. 1. (Color online) Scheme of the system analyzed in this work, in which two capacitively coupled quantum dots are attached to two conducting leads in the spinless case, at different temperatures and chemical potentials. It also describes two additional models, as explained in the main text.
FIG. 2. (Color online) Schematic picture for the transport of heat in the presence of interactions.
FIG. 3. Thermal current as a function of the dot-lead coupling ∆, for T_L = 2T_R, U = T_R/10 and E_L = E_R = −U/2.

Although the method used is quite different from the Schwinger-Keldysh coherent-state path integral in the saddle-point approximation used to represent the heat current in Fig. 6 of Ref. 37, the result is very similar. A possible reason for this similarity is that the parameters chosen correspond to very high temperatures, not only in comparison with the Kondo temperature T_K but also in comparison with U. Then possible limitations of the Hubbard-Stratonovich approximation to describe the Kondo effect 40 are not detected.
FIG. 4. Thermal current as a function of T_L = ∆T for T_R = 0, E_L = E_R = −U/2 and several values of U. The dashed line corresponds to Eq. (15).
FIG. 5. Thermal current as a function of ∆T for U → ∞ and E = −4∆.
FIG. 6. (Color online) Thermal current as a function of U for different ∆T = T_L, T_R = 0, and E_L = E_R = −U/2.
FIG. 7. Same as Fig. 6 for ∆T = T_L = 10∆. The dashed line corresponds to Eq. (15).
FIG. 8. (Color online) Thermal current given by Eq. (14) as a function of ∆T = T_L for T_R = 0, ∆_L = ∆_R = ∆ and U = 20∆. Full line: E_L = −4∆, E_R = −∆ (level nearest to the Fermi energy next to the cold lead); dashed line: E_L = −∆, E_R = −4∆ (level nearest to the Fermi energy next to the hot lead).
FIG. 9. (Color online) Thermal current as a function of ∆T for T_R = T_K/10, U → ∞ and, from top to bottom: E_L = −4∆, E_R = −∆, ∆_L = ∆_R = ∆ (cold symmetric); E_L = −∆, E_R = −4∆, ∆_L = ∆_R = ∆ (hot symmetric); E_L = −4∆, E_R = −∆, ∆_L = 1.8∆, ∆_R = 0.2∆ (cold asymmetric); and E_L = −∆, E_R = −4∆, ∆_L = 0.2∆, ∆_R = 1.8∆ (hot asymmetric).
ACKNOWLEDGMENTS

A. A. A. is sponsored by PIP 112-201501-00506 of CONICET and PICT 2013-1045 of the ANPCyT.
G. Benenti, G. Casati, K. Saito, and R. S. Whitney, Fundamental aspects of steady-state conversion of heat to work at the nanoscale, Phys. Rep. 694, 1 (2017).
M. Josefsson, A. Svilans, A. M. Burke, E. A. Hoffmann, S. Fahlvik, C. Thelander, M. Leijnse, and H. Linke, A quantum-dot heat engine operating close to the thermodynamic efficiency limits, Nature Nanotechnology 13, 920 (2018).
R. Miao, H. Xu, M. Skripnik, L. Cui, K. Wang, K. G. L. Pedersen, M. Leijnse, F. Pauly, K. Wärnmark, E. Meyhofer, P. Reddy, and H. Linke, Influence of Quantum Interference on the Thermoelectric Properties of Molecular Junctions, Nano Lett. 18, 5666 (2018).
A. Svilans, M. Josefsson, A. M. Burke, S. Fahlvik, C. Thelander, H. Linke, and M. Leijnse, Thermoelectric Characterization of the Kondo Resonance in Nanowire Quantum Dots, Phys. Rev. Lett. 121, 206801 (2018).
M. A. Sierra, R. López, and J. S. Lim, A thermally driven out-of-equilibrium two-impurity Kondo system, Phys. Rev. Lett. 121, 096801 (2018).
G. T. Craven, D. He, and A. Nitzan, Electron-Transfer-Induced Thermal and Thermoelectric Rectification, Phys. Rev. Lett. 121, 247704 (2018).
B. Dutta, D. Majidi, A. Garcia Corral, P. A. Erdman, S. Florens, T. A. Costi, H. Courtois, and C. B. Winkelmann, Direct probe of the Seebeck coefficient in a Kondo-correlated single-quantum-dot transistor, Nano Lett. 19, 506 (2019).
L. Cui, S. Hur, Z. A. Akbar, J. C. Klöckner, W. Jeong, F. Pauly, S.-Y. Jang, P. Reddy, and E. Meyhofer, Thermal conductance of single-molecule junctions, Nature 772, 628 (2019).
M. Krawiec and K. I. Wysokiński, Thermoelectric phenomena in quantum dot asymmetrically coupled to external leads, Phys. Rev. B 75, 155330 (2007).
L. Arrachea, G. S. Lozano, and A. A. Aligia, Thermal transport in one-dimensional spin heterostructures, Phys. Rev. B 80, 014425 (2009).
R. Świrkowicz, M. Wierzbicki, and J. Barnaś, Thermoelectric effects in transport through quantum dots attached to ferromagnetic leads with noncollinear magnetic moments, Phys. Rev. B 80, 195409 (2009).
T. A. Costi and V. Zlatić, Thermoelectric transport through strongly correlated quantum dots, Phys. Rev. B 81, 235127 (2010).
M. Leijnse, M. R. Wegewijs, and K. Flensberg, Nonlinear thermoelectric properties of molecular junctions with vibrational coupling, Phys. Rev. B 82, 045412 (2010).
N. Nakpathomkun, H. Q. Xu, and H. Linke, Thermoelectric efficiency at maximum power in low-dimensional systems, Phys. Rev. B 82, 235428 (2010).
S. Andergassen, T. A. Costi, and V. Zlatić, Mechanism for large thermoelectric power in molecular quantum dots described by the negative-U Anderson model, Phys. Rev. B 84, 241007(R) (2011).
J. Azema, A.-M. Daré, S. Schäfer, and P. Lombardo, Kondo physics and orbital degeneracy interact to boost thermoelectrics on the nanoscale, Phys. Rev. B 86, 075303 (2012).
P. Roura-Bas, L. Tosi, A. A. Aligia, and P. S. Cornaglia, Thermopower of an SU(4) Kondo resonance under an SU(2) symmetry-breaking field, Phys. Rev. B 86, 165106 (2012).
S. Guo, G. Zhou, and N. Tao, Single Molecule Conductance, Thermopower, and Transition Voltage, Nano Lett. 13, 4326 (2013).
S. B. Tooski, A. Ramšak, B. R. Bulka, and R. Žitko, Effect of assisted hopping on thermopower in an interacting quantum dot, New J. Phys. 16, 055001 (2014).
A. A. Aligia, Nonequilibrium self-energies, Ng approach, and heat current of a nanodevice for small bias voltage and temperature, Phys. Rev. B 89, 125405 (2014); references therein.
J. Azema, P. Lombardo, and A.-M. Daré, Conditions for requiring nonlinear thermoelectric transport theory in nanodevices, Phys. Rev. B 90, 205437 (2014).
D. Kim, P. S. Yoo, and T. Kim, Length-dependent thermopower determination of amine-terminated oligophenyl single molecular junctions formed with Ag electrodes, J. Korean Phys. Soc. 66, 602 (2015).
E. Ashalley, H. Chen, X. Tong, H. Li, and Z. M. Wang, Bismuth telluride nanostructures: preparation, thermoelectric properties and topological insulating effect, Front. Mater. Sci. 9, 103 (2015).
L. Rincón-García, C. Evangeli, G. Rubio-Bollinger, and N. Agraït, Thermopower measurement in molecular junctions, Chem. Soc. Rev. 45, 4285 (2016); references therein.
A. Dorda, M. Ganahl, S. Andergassen, W. von der Linden, and E. Arrigoni, Thermoelectric response of a correlated impurity in the nonequilibrium Kondo regime, Phys. Rev. B 94, 245125 (2016).
D. Pérez Daroca, P. Roura-Bas, and A. A. Aligia, Enhancing of nonlinear thermoelectric response of a correlated quantum dot in the Kondo regime by asymmetrically coupling to the leads, Phys. Rev. B 97, 165433 (2018).
U. Eckern and K. I. Wysokiński, Multi-terminal far-from-equilibrium thermoelectric nano-devices in the Kondo regime, arXiv:1904.05064.
P. A. Erdman, F. Mazza, R. Bosisio, G. Benenti, R. Fazio, and F. Taddei, Thermoelectric properties of an interacting quantum dot based heat engine, Phys. Rev. B 95, 245432 (2017).
J. C. Klöckner, R. Siebler, J. C. Cuevas, and F. Pauly, Thermal conductance and thermoelectric figure of merit of C60-based single-molecule junctions: Electrons, phonons, and photons, Phys. Rev. B 95, 245404 (2017).
M. A. Sierra, R. López, and D. Sánchez, Fate of the spin-1/2 Kondo effect in the presence of temperature gradients, Phys. Rev. B 96, 085416 (2017).
J.-H. Jiang and Y. Imry, Enhancing Thermoelectric Performance Using Nonlinear Transport Effects, Phys. Rev. Applied 7, 064001 (2017).
L. Cui, R. Miao, C. Jiang, E. Meyhofer, and P. Reddy, Perspective: Thermal and thermoelectric transport in molecular junctions, J. Chem. Phys. 146, 092201 (2017).
V. Balachandran, G. Benenti, E. Pereira, G. Casati, and D. Poletti, Perfect Diode in Quantum Spin Chains, Phys. Rev. Lett. 120, 200603 (2018).
P. Roura-Bas, L. Arrachea, and E. Fradkin, Enhanced thermoelectric response in the fractional quantum Hall effect, Phys. Rev. B 97, 081104(R) (2018).
Z. Li, Y. Cheng, J. Wei, X. Zheng, and Y. Yan, Kondo-peak splitting and resonance enhancement caused by interdot tunneling in coupled double quantum dots, Phys. Rev. B 98, 115133 (2018).
P. A. Erdman, J. T. Peltonen, B. Bhandari, B. Dutta, H. Courtois, R. Fazio, F. Taddei, and J. P. Pekola, Nonlinear thermovoltage in a single-electron transistor, Phys. Rev. B 99, 165405 (2019).
H. K. Yadalam and U. Harbola, Statistics of heat transport across a capacitively coupled double quantum dot circuit, Phys. Rev. B 99, 195449 (2019).
D. B. Karki and M. N. Kiselev, Nonlinear Seebeck effect of SU(N) Kondo impurity, arXiv:1908.00415.
P. Roura-Bas, F. Güller, L. Tosi, and A. A. Aligia, Destructive quantum interference in transport through molecules with electron-electron and electron-vibration interactions, J. Phys.: Condens. Matter 31, 465602 (2019).
F. Zamani, P. Ribeiro, and S. Kirchner, The Functional Integral formulation of the Schrieffer-Wolff transformation, New J. Phys. 18, 063024 (2016).
A. C. Hewson, The Kondo Problem to Heavy Fermions (Cambridge University Press, Cambridge, England, 1997), ISBN 9780521599474.
D. Goldhaber-Gordon, H. Shtrikman, D. Mahalu, D. Abusch-Magder, U. Meirav, and M. A. Kastner, Kondo effect in a single-electron transistor, Nature 391, 156 (1998).
S. M. Cronenwett, T. H. Oosterkamp, and L. P. Kouwenhoven, A Tunable Kondo Effect in Quantum Dots, Science 281, 540 (1998).
W. G. van der Wiel, S. De Franceschi, T. Fujisawa, J. M. Elzerman, S. Tarucha, and L. P. Kouwenhoven, The Kondo Effect in the Unitary Limit, Science 289, 2105 (2000).
W. Liang, M. P. Shores, M. Bockrath, J. R. Long, and H. Park, Kondo resonance in a single-molecule transistor, Nature 417, 725 (2002).
N. Roch, S. Florens, V. Bouchiat, W. Wernsdorfer, and F. Balestro, Quantum phase transition in a single-molecule quantum dot, Nature 453, 633 (2008).
. J J Parks, A R Champagne, T A Costi, W W Shum, A N Pasupathy, E Neuscamman, S Flores-Torres, P , J. J. Parks, A. R. Champagne, T. A. Costi, W. W. Shum, A. N. Pasupathy, E. Neuscamman, S. Flores-Torres, P. S.
Mechanical Control of Spin States in Spin-1 Molecules and the Underscreened Kondo Effect. A A Cornaglia, C A Aligia, G K Balseiro, H D Chan, D C Ralph, Science. 3281370Cornaglia, A. A. Aligia, C. A. Balseiro, G. K.-L. Chan, H. D. Abruñ a, and D. C. Ralph, Mechanical Control of Spin States in Spin-1 Molecules and the Underscreened Kondo Effect, Science 328, 1370 (2010).
Universal transport signatures in two-electron molecular quantum dots: gate-tunable Hund's rule, underscreened Kondo effect and quantum phase transitions. S Florens, A Freyn, N Roch, W Wernsdorfer, F Balestro, P Roura-Bas, A A Aligia, J. Phys. Condens. Matter. 23243202references thereinS. Florens, A, Freyn, N. Roch, W. Wernsdorfer, F. Bale- stro, P. Roura-Bas and A. A. Aligia, Universal trans- port signatures in two-electron molecular quantum dots: gate-tunable Hund's rule, underscreened Kondo effect and quantum phase transitions, J. Phys. Condens. Matter 23, 243202 (2011); references therein.
Orbital Kondo effect in carbon nanotubes. P Jarillo-Herrero, J Kong, H S J Van Der Zant, C Dekker, L P Kouwenhoven, S De Franceschi, Nature. 434484P. Jarillo-Herrero, J. Kong, H. S. J. van der Zant, C. Dekker, L. P. Kouwenhoven, and S. De Franceschi, Or- bital Kondo effect in carbon nanotubes, Nature 434, 484 (2005).
Finkelstein, Zero-Bias Conductance in Carbon Nanotube Quantum Dots. F B Anders, D E Logan, M R Galpin, G , Phys. Rev. Lett. 10086809F. B. Anders, D. E. Logan, M. R. Galpin, and G. Finkel- stein, Zero-Bias Conductance in Carbon Nanotube Quan- tum Dots Phys. Rev. Lett. 100, 086809 (2008).
Transport in carbon nanotubes: Two-level SU(2) regime reveals subtle competition between Kondo and intermediate valence states. C A Büsser, E Vernek, P Orellana, G A Lara, E H Kim, A E Feiguin, E V Anda, G B Martins, Phys. Rev. B. 83125404C. A. Büsser, E. Vernek, P. Orellana, G. A. Lara, E. H. Kim, A. E. Feiguin, E. V. Anda, and G. B. Martins, Trans- port in carbon nanotubes: Two-level SU(2) regime reveals subtle competition between Kondo and intermediate va- lence states, Phys. Rev. B 83, 125404 (2011).
Magnetic-Field Probing of an SU(4) Kondo Resonance in a Single-Atom Transistor. G C Tettamanzi, J Verduijn, G P Lansbergen, M Blaauboer, M J Calderón, R Aguado, S Rogge, Phys. Rev. Lett. 10846803G. C. Tettamanzi, J. Verduijn, G. P. Lansbergen, M. Blaauboer, M. J. Calderón, R. Aguado, and S. Rogge, Magnetic-Field Probing of an SU(4) Kondo Resonance in a Single-Atom Transistor, Phys. Rev. Lett. 108, 046803 (2012).
Magnetic-Field Dependence of Tunnel Couplings in Carbon Nanotube Quantum Dots. K Grove-Rasmussen, S Grap, J Paaske, K Flensberg, S Andergassen, V Meden, H I Jorgensen, K Muraki, T Fujisawa, Phys. Rev. Lett. 108176802K. Grove-Rasmussen, S. Grap, J. Paaske, K. Flensberg, S. Andergassen, V. Meden, H. I. Jorgensen, K. Muraki, and T. Fujisawa, Magnetic-Field Dependence of Tunnel Couplings in Carbon Nanotube Quantum Dots Phys. Rev. Lett. 108, 176802 (2012).
Symmetry-Driven Novel Kondo Effect in a. E Minamitani, N Tsukahara, D Matsunaka, Y Kim, N Takagi, M Kawai, Molecule Phys. Rev. Lett. 10986602E. Minamitani, N. Tsukahara, D. Matsunaka, Y. Kim, N. Takagi, and M. Kawai, Symmetry-Driven Novel Kondo Ef- fect in a Molecule Phys. Rev. Lett. 109, 086602 (2012).
Spectral evolution of the SU(4) Kondo effect from the single impurity to the two-dimensional limit. A M Lobos, M Romero, A A Aligia, Phys. Rev. B. 89121406A. M. Lobos, M. Romero, and A. A. Aligia, Spectral evo- lution of the SU(4) Kondo effect from the single impurity to the two-dimensional limit, Phys. Rev. B 89, 121406(R) (2014)
Aligia, Two-stage three-channel Kondo physics for an FePc molecule on the Au(111) surface. J Fernández, P Roura-Bas, A Camjayi, A A , J. Phys.: Condens. Matter. 30374003J. Fernández, P. Roura-Bas, A. Camjayi, and A. A. Ali- gia, Two-stage three-channel Kondo physics for an FePc molecule on the Au(111) surface, J. Phys.: Condens. Mat- ter 30, 374003 (2018);
. Corrigendum J. Phys. Condens. Matter. 3129501Corrigendum J. Phys. Condens. Matter 31, 029501 (2018)
Real-space imaging of an orbital Kondo resonance on the Cr (001) surface. O Yu, R Kolesnychenko, M I De Kort, A I Katsnelson, H Lichtenstein, Van Kempen, Nature. 415507O. Yu. Kolesnychenko, R. de Kort, M. I. Katsnelson, A. I. Lichtenstein, and H. van Kempen, Real-space imaging of an orbital Kondo resonance on the Cr (001) surface, Nature (London) 415, 507 (2002).
Electronic structure near the quantum critical point in Vdoped CrA high-resolution photoemission study. G Adhikary, R Bindu, S K Pandey, K Maiti, EPL (Europhysics Letters). 9937009G. Adhikary, R. Bindu, S. K. Pandey, and K. Maiti, Elec- tronic structure near the quantum critical point in V- doped CrA high-resolution photoemission study, EPL (Eu- rophysics Letters) 99, 37009 (2012).
Orbital Kondo effect in V-doped 1T-CrSe2. M Núñez, D C Freitas, F Gay, J Marcus, P Strobel, A A Aligia, M Núñez-Regueiro, Phys. Rev. B. 88245129M. Núñez, D. C. Freitas, F. Gay, J. Marcus, P. Strobel, A. A. Aligia, and M. Núñez-Regueiro, Orbital Kondo effect in V-doped 1T-CrSe2, Phys. Rev. B 88, 245129 (2013).
Resonant tunneling through an Anderson impurity. I. Current in the symmetric model. S Hershfield, J H Davies, J W Wilkins, Phys. Rev. B. 467046S. Hershfield, J.H. Davies, and J.W. Wilkins, Resonant tunneling through an Anderson impurity. I. Current in the symmetric model, Phys. Rev. B 46, 7046 (1992).
Nonequilibrium magnetotransport through a quantum dot: An interpolative perturbative approach. A A Aligia, Phys. Rev. B. 74155125A. A. Aligia, Nonequilibrium magnetotransport through a quantum dot: An interpolative perturbative approach, Phys. Rev. B 74, 155125 (2006).
Non-equilibrium differential conductance through a quantum dot in a magnetic field. A C Hewson, J Bauer, A Oguri, J.Phys.: Condens. Matter. 175413references thereinA. C. Hewson, J. Bauer, and A. Oguri, Non-equilibrium differential conductance through a quantum dot in a mag- netic field, J.Phys.: Condens. Matter 17, 5413 (2005); ref- erences therein.
Scaling of conductance through quantum dots with magnetic field. I J Hamad, C Gazza, J A Andrade, A A Aligia, P S Cornaglia, P Roura-Bas, Phys. Rev. B. 92195113references thereinI. J. Hamad, C. Gazza, J. A. Andrade, A. A. Aligia, P. S. Cornaglia, and P. Roura-Bas, Scaling of conductance through quantum dots with magnetic field, Phys. Rev. B 92, 195113 (2015); references therein.
Leading temperature dependence of the conductance in Kondo-correlated quantum dots. A A Aligia, J. Phys. Condens. Matter. 30155304references thereinA. A. Aligia, Leading temperature dependence of the con- ductance in Kondo-correlated quantum dots, J. Phys. Con- dens. Matter 30, 155304 (2018); references therein.
Higher-order Fermi-liquid corrections for an Anderson impurity away from half-filling III: non-equilibrium transport. A Oguri, A C Hewson, Phys. Rev. B. 9735435references thereinA. Oguri and A. C. Hewson, Higher-order Fermi-liquid cor- rections for an Anderson impurity away from half-filling III: non-equilibrium transport. Phys. Rev. B 97, 035435 (2018); references therein.
Anderson model out of equilibrium: Noncrossing-approximation approach to transport through a quantum dot. N S Wingreen, Y Meir, Phys. Rev. B. 4911040N. S. Wingreen and Y. Meir, Anderson model out of equi- librium: Noncrossing-approximation approach to trans- port through a quantum dot, Phys. Rev. B 49, 11040 (1994).
Nonequilibrium transport through magnetic vibrating molecules. P Roura-Bas, L Tosi, A A Aligia, Phys. Rev. B. 87195136references thereinP. Roura-Bas, L. Tosi and A. A. Aligia, Nonequilibrium transport through magnetic vibrating molecules, Phys. Rev. B 87, 195136 (2013); references therein.
Non-equilibrium conductance through a benzene molecule in the Kondo regime. L Tosi, P Roura-Bas, A A Aligia, J. Phys. Condens. Matter. 24365301L. Tosi, P. Roura-Bas, and A. A. Aligia, Non-equilibrium conductance through a benzene molecule in the Kondo regime, J. Phys. Condens. Matter 24, 365301 (2012)
Perturbation Expansion for the Anderson Hamiltonian. K Yamada, II Prog. Theor. Phys. 53970references thereinK. Yamada, Perturbation Expansion for the Anderson Hamiltonian. II Prog. Theor. Phys. 53, 970 (1975); ref- erences therein
Finitetemperature spectral density for the Anderson model. B Horvatić, D Šokčević, V Zlatić, Phys. Rev. B. 36675B. Horvatić, D.Šokčević, and V. Zlatić, Finite- temperature spectral density for the Anderson model Phys. Rev. B 36, 675 (1987).
Electron correlation resonances in the transport through a single quantum level. A Levy-Yeyati, A Martín-Rodero, F Flores, Phys. Rev. Lett. 712991A. Levy-Yeyati, A. Martín-Rodero, and F. Flores, Electron correlation resonances in the transport through a single quantum level, Phys. Rev. Lett. 71, 2991 (1993).
Quasiparticle description for transport through a small interacting system. A Oguri, Phys. Rev. B. 63115305A. Oguri, Quasiparticle description for transport through a small interacting system Phys. Rev. B 63, 115305 (2001);
. Erratum Phys. Rev. B. 63249901Erratum Phys. Rev. B 63, 249901 (2001).
Many-body theory of the quantum mirage. A A Aligia, Phys. Rev. B. 64121102A.A. Aligia, Many-body theory of the quantum mirage, Phys. Rev. B 64, 121102(R) (2001);
One-and many-body effects on mirages in quantum corrals. A Lobos, A A Aligia, Phys. Rev. B. 6835411A.Lobos and A. A. Aligia, One-and many-body effects on mirages in quantum corrals Phys. Rev. B 68, 035411 (2003)
Kondo and anti-Kondo resonances in transport through nanoscale devices. A A Aligia, C R Proetto, Phys. Rev. B. 65165305A.A. Aligia and C.R. Proetto, Kondo and anti-Kondo reso- nances in transport through nanoscale devices, Phys. Rev. B 65, 165305 (2002).
Magnetotransport through a quantum wire side coupled to a quantum dot. A A Aligia, L A Salguero, Phys. Rev. B. 7075307A. A. Aligia and L. A. Salguero, Magnetotransport through a quantum wire side coupled to a quantum dot, Phys. Rev. B 70, 075307 (2004);
. Phys. Rev. B. 71E169903Phys. Rev. B 71, 169903(E) (2005).
Perturbation theory of a superconducting 0−π impurity quantum phase transition. M Žonda, V Pokorný, V Janiś, T Novotný, Scientific Reports. 58821M.Žonda, V. Pokorný, V. Janiś, and T. Novotný, Pertur- bation theory of a superconducting 0−π impurity quantum phase transition, Scientific Reports 5, 8821 (2015).
Perturbation theory for an Anderson quantum dot asymmetrically attached to two superconducting leads. M Žonda, V Pokorný, V Janiś, T Novotný, Phys. Rev. B. 9324523M.Žonda, V. Pokorný, V. Janiś, and T. Novotný, Pertur- bation theory for an Anderson quantum dot asymmetri- cally attached to two superconducting leads, Phys. Rev. B 93, 024523 (2016).
Spectral densities of the symmetric Anderson model. R N Silver, J E Gubernatis, D S Sivia, M Jarrell, Phys. Rev. Lett. 65496R. N. Silver, J. E. Gubernatis, D. S. Sivia, and M. Jarrell, Spectral densities of the symmetric Anderson model, Phys. Rev. Lett. 65, 496 (1990).
Renormalized perturbation expansions and Fermi liquid theory. A C Hewson, Phys. Rev. Lett. 704007A. C. Hewson, Renormalized perturbation expansions and Fermi liquid theory, Phys. Rev. Lett. 70, 4007 (1993).
Renormalized parameters for impurity models. A C Hewson, A Oguri, D Meyer, Euro. Phys. J. B. 40177A. C. Hewson, A. Oguri and D. Meyer, Renormalized pa- rameters for impurity models, Euro. Phys. J. B 40, 177 (2004).
Nonequilibrium transport through a singlet-triplet Anderson impurity. P Roura-Bas, A A Aligia, Phys. Rev. B. 8035308P. Roura-Bas, A. A. Aligia, Nonequilibrium transport through a singlet-triplet Anderson impurity, Phys. Rev. B 80, 035308 (2009).
Orbital Kondo spectroscopy in a double quantum dot system. L Tosi, P Roura-Bas, A A Aligia, Phys. Rev. B. 88235427L. Tosi, P. Roura-Bas and A. A. Aligia, Orbital Kondo spectroscopy in a double quantum dot system, Phys. Rev. B 88, 235427 (2013).
Replicas of the Kondo peak due to electron-vibration interaction in molecular transport properties. P Roura-Bas, L Tosi, A A Aligia, Phys. Rev. B. 93115139P. Roura-Bas, L. Tosi, and A. A. Aligia, Replicas of the Kondo peak due to electron-vibration interaction in molec- ular transport properties Phys. Rev. B 93, 115139 (2016).
Transition between SU(4) and SU(2) Kondo effect. L Tosi, P Roura-Bas, A A Aligia, Physica B. 4073259L. Tosi, P. Roura-Bas, and A. A. Aligia, Transition be- tween SU(4) and SU(2) Kondo effect, Physica B 407, 3259 (2012).
Landauer formula for the current through an interacting electron region. Y Meir, N S Wingreen, Phys. Rev. Y. Meir and N. S. Wingreen, Landauer formula for the current through an interacting electron region, Phys. Rev.
. Lett, 682512Lett. 68, 2512 (1992).
Spin fluctuation effects on the conductance through a single Pd atom contact. M A Romero, S C Gómez-Carrillo, P G Bolcatto, E C Goldberg, J. Phys.: Condens. Matter. 21215602M A Romero, S. C. Gómez-Carrillo, P. G. Bolcatto, and E. C. Goldberg, Spin fluctuation effects on the conductance through a single Pd atom contact, J. Phys.: Condens. Mat- ter 21, 215602 (2009)
Relation between width of zero-bias anomaly and Kondo temperature in transport measurements through correlated quantum dots: Effect of asymmetric coupling to the leads. D Daroca, P Roura-Bas, A A Aligia, Phys. Rev. B. 98245406D. Pérez Daroca, P. Roura-Bas, and A. A. Aligia, Relation between width of zero-bias anomaly and Kondo tempera- ture in transport measurements through correlated quan- tum dots: Effect of asymmetric coupling to the leads, Phys. Rev. B 98, 245406 (2018).
|
[] |
The Thermodynamic Uncertainty Relation in Biochemical Oscillations

Robert Marsland III (1), Wenping Cui (1,2), and Jordan M. Horowitz (3,4,5)

(1) Department of Physics, Boston University, 590 Commonwealth Avenue, Boston, MA 02215
(2) Department of Physics, Boston College, 140 Commonwealth Avenue, Chestnut Hill, MA 02467
(3) Physics of Living Systems Group, Department of Physics, Massachusetts Institute of Technology, 400 Technology Square, Cambridge, MA 02139
(4) Department of Biophysics, University of Michigan, Ann Arbor, MI 48109
(5) Center for the Study of Complex Systems, University of Michigan, Ann Arbor, MI 48104

(Dated: February 6, 2019)
arXiv:1901.00548, doi:10.1098/rsif.2019.0098

Living systems regulate many aspects of their behavior through periodic oscillations of molecular concentrations, which function as "biochemical clocks." These clocks are intrinsically subject to thermal fluctuations, so that the duration of a full oscillation cycle is random. Their success in carrying out their biological function is thought to depend on the degree to which these fluctuations in the cycle period can be suppressed. Biochemical oscillators also require a constant supply of free energy in order to break detailed balance and maintain their cyclic dynamics. For a given free energy budget, the recently discovered "thermodynamic uncertainty relation" yields the magnitude of period fluctuations in the most precise conceivable free-running clock. In this paper, we show that computational models of real biochemical clocks severely underperform this optimum, with fluctuations several orders of magnitude larger than the theoretical minimum. We argue that this suboptimal performance is due to the small number of internal states per molecule in these models, combined with the high level of thermodynamic force required to maintain the system in the oscillatory phase. We introduce a new model with a tunable number of internal states per molecule, and confirm that it approaches the optimal precision as this number increases.
Many living systems regulate their behavior using an internal "clock," synchronized to the daily cycles of light and darkness. In the past 15 years, the isolation of the key components of several bacterial circadian clocks has opened the door to systematic and quantitative study of this phenomenon. In particular, a set of three proteins purified from the bacterium Synechococcus elongatus are capable of executing sustained periodic oscillations in vitro when supplied with ATP [16]. One of the proteins, KaiC, executes a cycle in a space of four possible phosphorylation states, as illustrated in figure 1. This cycle is coupled to the periodic association and dissociation from the other two proteins, KaiA and KaiB.
Steady oscillations break detailed balance, and must be powered by a chemical potential gradient or other free energy source. In this system and in related experiments and simulations, it is commonly observed that the oscillator precision decreases as this thermodynamic driving force is reduced [4]. At the same time, recent theoretical work indicates that the precision of a generic biochemical clock is bounded from above by a number that also decreases with decreasing entropy production per cycle [1-3, 8, 9, 11, 22, 29]. This has led to speculation that this universal bound may provide valuable information about the design principles behind real biochemical clocks.
So far, most discussion of this connection has focused on models with cyclic dynamics hard-wired into the dynamical rules [3,14,29]. But real biochemical oscillators operate in a high-dimensional state space of concentration profiles, and the cyclic behavior is an emergent, collective phenomenon [3,4,17]. These oscillators typically exhibit a nonequilibrium phase transition at a finite value of entropy production per cycle. As this threshold is approached from above, the oscillations become more noisy due to critical fluctuations [10,13,17,26]. Below the threshold, the system relaxes to a single fixed point in concentration space, with no coherent oscillations at all. In some systems, the precision may still be well below the theoretical bound as the system approaches this threshold. In these cases, the precision will never come close to the bound, for any size of driving force.

* Email: [email protected]
As we show in Section II, computational models of real chemical oscillations typically fall into this regime, never coming within an order of magnitude of the bound. Macroscopic in vitro experiments on the KaiABC system perform even worse, remaining many orders of magnitude below the bound. Previous theoretical work suggests that the performance could be improved by increasing the number of reactions per cycle at fixed entropy production, and by making the reaction rates more uniform [1,23,29]. In Section I, we elaborate on these ideas, introducing an effective number of states per cycle and showing how the relationship of this quantity to the location of the phase transition threshold controls the minimum distance to the bound. In Section III, we introduce a new model based on these design principles, with nearly uniform transition rates in the steady state and with a tunable number of reactions per cycle. We show that this model approaches the optimal precision as the number of reaction steps per cycle grows.

As illustrated in figure 1(a), an isolated KaiC monomer has four phosphorylation states, which form a single cycle of reactions. The fluctuations in the time required to traverse this cycle have recently been studied in [3]. But the biological function of this clock demands more than precise oscillations of isolated molecules; rather, it has evolved to generate oscillations in the concentrations of various chemical species. These concentrations are global variables, which simultaneously affect processes throughout the entire cell volume. These global oscillations can still be described by a Markov process on a set of discrete states, but with a very different topology from the unicyclic network of an isolated monomer. For a well-mixed system, each state can be labeled by a list of copy numbers of all molecules in the reaction volume, with each distinct internal state counted as a different kind of molecule.
In the KaiC system, molecules in one of the phosphorylation states can suppress further phosphorylation of other molecules, by sequestering the enzyme (KaiA) required to catalyze the phosphorylation. This mechanism can stably synchronize the progress of all the molecules around the phosphorylation cycle, slowing down the ones that happen to run too far ahead of the rest. This leads to sustained oscillations in the concentration of free KaiA and of each of the four forms of KaiC. Figure 1(b) shows a sample trajectory of the concentration of one of the KaiC phosphorylation states in a detailed computational model of the system adapted from [19].
Unlike the cycles of an idealized mechanical clock, the period τ_1 of these oscillations is subject to random fluctuations, due to the stochastic nature of the underlying chemical reactions. The precision can be quantified by considering an ensemble of identical reaction volumes that are initially synchronized. The histogram of times τ_n for each clock to complete n cycles will widen as n increases and the clocks lose their initial synchronization, as illustrated in figure 1(c). When the width exceeds the mean period T ≡ ⟨τ_1⟩, the clocks are totally desynchronized. This leads to a natural measure of the precision of the clock in terms of the number of coherent cycles N that take place before the synchronization is destroyed.
To measure N in a systematic way, we first note that the variance var(τ_n) = Dn, for some constant of proportionality D. This is exactly true in a renewal process [25], such as the isolated KaiC monomer, where each period is an independent random variable (cf. [29]). It remains asymptotically valid for arbitrarily complex models in the large-n limit, as long as the correlation time is finite. The number of cycles required for the width √var(τ_n) of the distribution to reach the average period T is therefore given by
N ≡ T² / D.    (1)
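Equation (1) can be checked directly on synthetic data. The sketch below is our illustration, not part of the paper's analysis: it draws independent Gamma-distributed periods, for which var(τ_n) = n·var(τ_1) holds exactly and the number of coherent cycles equals the Gamma shape parameter; the values of k and θ are arbitrary.

```python
import numpy as np

# Renewal-process toy clock: each period tau_1 is an independent
# Gamma(k, theta) draw, so var(tau_n) = n * var(tau_1) and T^2/D = k.
rng = np.random.default_rng(0)
k, theta = 25.0, 1.0                      # hypothetical shape and scale
periods = rng.gamma(k, theta, size=200_000)

T = periods.mean()                        # mean period
D = periods.var()                         # slope of var(tau_n) = D * n
N_coherent = T**2 / D                     # Eq. (1)

print(N_coherent)                         # close to k = 25
```

For real trajectories like figure 1(b), T and D would instead be estimated from the peak-to-peak times of the concentration signal.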
Any chemical oscillator must be powered by a detailed-balance-breaking thermodynamic driving force that generates a positive average rate of entropy production Ṡ. The number of coherent cycles is subject to a universal upper bound as a function of Ṡ, holding for arbitrarily complex architectures [1-3, 6, 9, 11, 22]. The bound says that N is never greater than half the entropy production per cycle ∆S ≡ ṠT (setting Boltzmann's constant k_B = 1 from here on) [8]:
N ≤ ∆S / 2.    (2)
The validity of this bound depends on the proper definition of N, which in our formulation also depends on the definition of τ_n. Determining τ_n is a subtle matter for systems of interacting molecules. Our solution is presented in detail in the Appendix, but it always roughly corresponds to the peak-to-peak distance in figure 1(b).
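For intuition about how Eq. (2) constrains even the simplest driven cycle, consider the textbook biased random walk on a ring of n states with uniform forward rate p and backward rate q (our illustration, not a model from the paper). Standard long-time Gaussian estimates give N = n(p−q)/(p+q) coherent cycles and ∆S = n·ln(p/q) per cycle, so the bound reduces to the elementary inequality ln(p/q) ≥ 2(p−q)/(p+q):

```python
import math

def coherent_cycles(n_states, p, q):
    # Uniform biased ring: T = n/(p-q) and D = n*(p+q)/(p-q)^3,
    # giving N = T^2/D = n*(p-q)/(p+q) coherent cycles.
    return n_states * (p - q) / (p + q)

def entropy_per_cycle(n_states, p, q):
    # Each net forward step dissipates ln(p/q) (with k_B = 1).
    return n_states * math.log(p / q)

for ratio in (1.1, 2.0, 10.0, 1000.0):
    N_coh = coherent_cycles(10, ratio, 1.0)
    dS = entropy_per_cycle(10, ratio, 1.0)
    assert N_coh <= dS / 2          # Eq. (2); tight only as p/q -> 1
    print(f"p/q = {ratio:>6}: N = {N_coh:.3f}, dS/2 = {dS/2:.3f}")
```

Note that as p/q grows the coherent-cycle count saturates at the number of states while ∆S keeps growing, which is the unicyclic version of the finite ∆S → ∞ asymptote discussed next.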
As ∆S → ∞, Equation (2) says that N is also allowed to become arbitrarily large. But as the entropy released in the reactions coupled to the driving force increases, detailed balance implies that the reverse reaction rates tend towards zero. Once the reverse rates are negligible compared to the other time scales of the problem, these reactions can be ignored, and further changes in ∆S produce no effect. In any given biochemical model, therefore, N approaches some finite value as ∆S → ∞, which depends on the network topology and the rest of the reaction rates. We can see this in our detailed computational model in figure 2. For unicyclic networks in particular, where the topology is a single closed cycle like the isolated KaiC monomer, the maximum possible value for this asymptote is the number of states N [5,14]. By analogy, we will refer to the ∆S → ∞ limit of N for any model as the effective number of states per cycle, N_eff = lim_{∆S→∞} N. For an oscillator built from coupled cycles of internal states, such as the KaiC system, N_eff reaches its maximum value when the dynamics constrain the oscillations to a single path through concentration space, and when all reaction rates along this path are equal. In this case, the dynamics are equivalent to a single ring of NM states, where N is the number of internal states per molecule and M is the number of molecules. The effective number of states is therefore bounded by N_eff ≤ NM, which can be easily computed for any model or experiment from a basic knowledge of the component parts.
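The unicyclic asymptote can be made concrete. In the irreversible limit, the cycle time of a single ring with step rates r_i is a sum of independent exponentials, so T = Σ 1/r_i and var(τ_1) = Σ 1/r_i², and by the Cauchy-Schwarz inequality N_eff = T²/var(τ_1) ≤ N, with equality only for uniform rates. A short sketch (ours, with arbitrary rate values):

```python
import numpy as np

def n_eff_unicyclic(rates):
    # Irreversible ring: the cycle time is a sum of independent
    # exponentials, so N_eff = (sum 1/r_i)^2 / (sum 1/r_i^2),
    # which is at most the number of states (Cauchy-Schwarz).
    inv = 1.0 / np.asarray(rates, dtype=float)
    return inv.sum() ** 2 / (inv ** 2).sum()

uniform = n_eff_unicyclic([1.0] * 8)                    # 8.0 exactly
skewed = n_eff_unicyclic([1, 1, 1, 1, 1, 1, 1, 0.05])   # one slow step
print(uniform, skewed)
```

A single step 20 times slower than the rest collapses N_eff from 8 to below 2, illustrating why highly non-uniform reaction rates keep N_eff small in the models analyzed below.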
In all five models we will analyze below, N monotonically increases as a function of ∆S. The existence of the finite ∆S → ∞ limit thus implies that N can only approach the thermodynamic bound of Equation (2) when ∆S < ∆S_b ≡ 2N_eff. But the collective oscillations of these models also exhibit a nonequilibrium phase transition as a function of ∆S, whose critical behavior has recently been studied [4,18]. In the thermodynamic limit, the inverse precision 1/N diverges as ∆S approaches a critical value ∆S_c from above, in a way that depends on the architecture of the reaction network. Below ∆S_c, there are no collective oscillations, and the concentrations relax to a single fixed point. Since the oscillations cease to exist below ∆S_c, the bound is only relevant for ∆S > ∆S_c. Combining these two observations, we see that models with ∆S_b < ∆S_c can never approach the thermodynamic bound.
II. MODELS OF REAL CHEMICAL OSCILLATORS SEVERELY UNDERPERFORM THE BOUND
Cao et al. recently measured N as a function of ∆S in computational models of four representative chemical clock architectures: activator-inhibitor, repressilator, Brusselator, and the glycolysis network [4]. The data for all four models produced an acceptable fit to a four-parameter phenomenological equation, which is reproduced in the Appendix along with the parameter values obtained by Cao et al. for each model. In figure 3, we plot these phenomenological curves and the thermodynamic bound of Equation (2). We also obtained N and ∆S for a detailed model of the KaiC system based on [19] as described in the Appendix, with the parameters obtained in that paper by extensive comparison with experimental data, for twenty values of the ATP/ADP ratio.

FIG. 3. Models of collective oscillations compared with the thermodynamic bound. Same as figure 2, but including all four models studied in [4]. The black dotted line is the thermodynamic bound N = ∆S/2. Curves for the first four models are phenomenological fits obtained in [4].
The values of N_eff, ∆S_b and ∆S_c can be estimated directly from figure 3, by noting where each curve saturates and where it drops to zero. Both axes are scaled by the system size M, which equals the number of KaiC hexamers for the KaiC model, and the number of kinases in the activator-inhibitor model. The other three models lack a direct physical interpretation of M, since there are no conserved molecular species, but it still defines a generic molecular scale. For any physically reasonable model, N is expected to be an extensive parameter, proportional to M, as is ∆S. This has been confirmed numerically for a number of models, and appears to break down significantly only in the immediate vicinity of the critical point [4,13,18]. Most of the models plotted here have N_eff/M ≈ 2, which is reasonable for molecules that only have a few internal states and highly non-uniform reaction rates.
But N eff /M ≈ 2 implies that ∆S b /M ≈ 4, which means that the entropy production per molecule per cycle must be less than 4 for the thermodynamic bound to become relevant. This is a very small number even by biochemical standards, equal to the entropy change from forming four hydrogen bonds between protein residues in solution. The activator-inhibitor, Brusselator, and glycolysis models have phase transitions at ∆S c /M values of 360, 100.4 and 80.5, respectively, under the parameter choices of [4]. They all exceed ∆S b /M by at least an order of magnitude, guaranteeing that the precision can never come close to the thermodynamic bound.
The KaiC model appears to have ∆S_c/M ∼ 1,000 and N_eff/M = 1.1, so that ∆S_c exceeds ∆S_b by more than two orders of magnitude. The only model with ∆S_b > ∆S_c is the repressilator model, where ∆S_c/M = 1.75 and ∆S_b/M ≈ 4. But even here, the critical fluctuations begin to severely degrade the precision when ∆S is still much greater than ∆S_b.
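The comparison in the last two paragraphs reduces to one inequality per model. The sketch below (our own illustration; the per-molecule values are the ones quoted in the text, with N_eff/M ≈ 2 assumed for the three models where no specific value is given) checks the criterion ∆S_c < ∆S_b = 2 N_eff for each architecture:

```python
# Per-molecule values quoted in the text: (N_eff/M, Delta_S_c/M).
# N_eff/M = 2 is the text's generic estimate where no value is stated.
models = {
    "activator-inhibitor": (2.0, 360.0),
    "Brusselator":         (2.0, 100.4),
    "glycolysis":          (2.0, 80.5),
    "repressilator":       (2.0, 1.75),
    "KaiC":                (1.1, 1000.0),
}

def bound_reachable(n_eff, ds_c):
    """The bound N <= Delta_S/2 only matters below Delta_S_b = 2*N_eff,
    so it is reachable only if the oscillator still runs there."""
    return ds_c < 2 * n_eff

reachable = {name for name, (n, s) in models.items() if bound_reachable(n, s)}
print(reachable)  # only the repressilator has Delta_S_c below Delta_S_b
```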
Estimates of N can also be extracted directly from experiments, as shown by Cao et al. for an in vitro reconstitution of the KaiC system with purified components in a macroscopic reaction volume [4]. They analyzed time-series data from a set of experiments at different ATP/ADP ratios, and fit their phenomenological equation to describe N as a function of this ratio. As we show in the Appendix, this fit implies that ∆S_c/M ∼ 1,000, consistent with our model results. The ∆S → ∞ asymptote of the fit, however, gives N_eff/M ∼ 10⁻¹¹, which is astronomically small compared to the model prediction N_eff/M ≈ 1. This surprising result reflects the fact that the dominant sources of uncertainty in these macroscopic experiments are fluctuations in temperature and other environmental perturbations, rather than intrinsic thermal noise. Since these fluctuations are independent of the system size, their effect is inflated when we divide by the number of hexamers M ∼ 10¹⁴ in a 100 µL reaction volume at 1 µM concentration. The only way to observe the effect of thermal fluctuations in such a noisy environment is to decrease the reaction volume. Assuming that the minimum contribution of the external noise to N remains fixed at the value of 500 found in the experiments, and that the thermal contribution is of order N_eff ≈ M as given by the model, we find that M = 500 hexamers is the system size at which the thermal fluctuations become detectable. At 1 µM concentration of hexamers, the corresponding reaction volume is about 1 µm³, the size of a typical bacterial cell.
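The closing volume estimate is a one-line calculation; the sketch below reproduces it under the stated assumptions (1 µM hexamer concentration, crossover at M = 500 hexamers; Avogadro's number hard-coded):

```python
N_A = 6.022e23       # Avogadro's number, 1/mol
conc = 1e-6          # hexamer concentration, mol/L
M_crossover = 500    # external-noise floor N ~ 500 sets the crossover size

# Volume containing M_crossover hexamers at this concentration:
V_liters = M_crossover / (conc * N_A)
V_um3 = V_liters * 1e15   # 1 L = 1e15 cubic micrometers

print(round(V_um3, 2))    # ~0.8 um^3, i.e. about 1 um^3: a bacterial cell
```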
Because of this separation of scales, the apparent divergence of 1/N at a critical value of the ATP/ADP ratio in the experiments is probably due to the expected divergence of susceptibility at the critical point, which makes the oscillation period increasingly sensitive to environmental fluctuations as the ATP/ADP ratio is reduced. In any case, this analysis suggests that the primary design consideration for oscillators in living systems is robustness against external perturbations, as recently explored in [7,15,24].
III. TOY MODEL WITH VARIABLE NUMBER OF STATES CAN SATURATE THE BOUND
The preceding discussion shows why we should not expect biological oscillators to have optimal architecture for suppressing thermal fluctuations. Further suppression of these fluctuations below the level of externally induced noise carries rapidly diminishing marginal returns. But we can still ask whether it is possible to build a synthetic chemical oscillator with sufficiently large N eff /M and sufficiently small ∆S c /M to run up against the thermodynamic bound. In a simple unicyclic network of N states, N eff = N when all the reaction rates in the forward direction are equal, and so one can always approach arbitrarily close to the bound by increasing N . But a chemical oscillator cannot have uniform rates, since the transition rates of each molecule have to change based on the states of the others in order to achieve collective oscillations. Furthermore, it is not known how changing the number of internal states affects ∆S c /M , and so it is not obvious whether ∆S c < ∆S b is achievable at all.
To answer this question, we devised a toy model inspired by the KaiC model of figure 1, with M interacting molecules each containing N distinct internal states. Molecules in any one of these states suppress the transition rates for other molecules that are further ahead in the cycle, as illustrated in figure 4 and described in detail in the Appendix. All internal states have the same energy, and each reaction carries the same fraction of the total thermodynamic force.
In this highly symmetrized model, ∆S_c/M can easily be reduced to between 2 and 3 by choosing a sufficiently high coupling strength, as shown in the Appendix. At the same time N_eff/M scales linearly with N, and can be made arbitrarily large by increasing this parameter. In figure 4, we plot N/M versus ∆S/M for three different values of N, and show that the data does indeed approach the thermodynamic bound as N increases. This extends the validity of design principles obtained for unicyclic networks in various contexts to these collective dynamics: the rates should be made as uniform as possible, while the number of internal states is made as large as possible at fixed thermodynamic driving force [3,12,29].
IV. DISCUSSION
The thermodynamic uncertainty relation is a powerful result with impressive universality. It has been widely assumed that the relation should have some relevance for the evolution of biochemical oscillators. Based on data from experiments and extensive simulations with realistic parameters, we have argued that the precision of these oscillators is typically limited by environmental fluctuations rather than by the intrinsic thermodynamic uncertainty.
We have also derived a simple criterion for estimating how closely a given oscillator can approach the thermodynamic bound, in terms of an effective number of states N_eff and the entropy production per cycle ∆S_c at the onset of oscillatory behavior. For an oscillator based on coupled cycles through N internal states of M identical molecules, N_eff ≤ N M, with equality only when all the cycles are perfectly synchronized, and when all reactions that actually occur have identical rates. Assuming that the number of coherent periods N is monotonic in the entropy production per cycle ∆S, the thermodynamic bound can only be approached when N M ≥ N_eff ≥ ∆S_c/2. This criterion is difficult to satisfy in practice, since it is hard to make a molecule with more than 10 internal states, or to make a system oscillate with less than 100 k_B T of free energy dissipated per molecule per cycle. To show that it is theoretically possible, we devised a toy model that oscillates with less than 3 k_B T of free energy per molecule per cycle and can contain an arbitrary number of internal states per molecule.
ACKNOWLEDGMENTS
We thank J. Paijmans for his help with adjustments to the KaiC simulation, and Y. Cao for valuable discussions on the technical details of reference [4]. RM acknowledges Government support through NIH NIGMS grant 1R35GM119461. JMH is supported by the Gordon and Betty Moore Foundation as a Physics of Living Systems Fellow through Grant No. GBMF4513. The computational work reported on in this paper was performed on the Shared Computing Cluster which is administered by Boston University's Research Computing Services.
A. Measuring the stochastic period

The definition of the number of coherent cycles N depends on a prior notion of the n-cycle completion time τ_n. In a unicyclic transition network, this time can be straightforwardly defined in terms of the integrated current J through an arbitrarily chosen transition in the network. Each time the transition is executed in the forward direction, J increases by 1, and each time it is executed in the reverse direction, J decreases by 1. The n-cycle completion time τ_n is then naturally defined as the time when the system first reaches J = n, given that it was initialized in the state immediately adjacent to the measured transition in the positive direction [8,25,29].
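As a sanity check of this definition, a unicyclic network with uniform forward rates and negligible backward transitions should give N_eff = N: τ_n is then a sum of nN i.i.d. exponential waiting times. The sketch below (our own illustration, not the paper's code) estimates the slope D = var(τ_n)/n and the resulting number of coherent cycles N = T²/D, where T is the mean single-cycle time:

```python
import random
import statistics

def cycle_times(n_states, n_cycles, n_traj, rate=1.0, seed=0):
    """First-passage times tau_n for n_cycles full cycles of a unicyclic
    network with uniform forward rate and no backward transitions."""
    rng = random.Random(seed)
    times = []
    for _ in range(n_traj):
        t = 0.0
        for _ in range(n_states * n_cycles):  # each cycle = n_states hops
            t += rng.expovariate(rate)
        times.append(t)
    return times

N_states, n = 6, 10
taus = cycle_times(N_states, n, n_traj=4000)
T = statistics.mean(taus) / n        # mean period
D = statistics.variance(taus) / n    # slope of var(tau_n) vs n
N_coherent = T * T / D               # number of coherent cycles
# For uniform rates this recovers N_coherent ~ N_states (here, ~6).
```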
For a chemical oscillator, the definition is not so clear. One common approach is to fit the autocorrelation function of some observable to a sine wave with exponentially decaying amplitude. If the autocorrelation function exactly fits this functional form, then N can be obtained from the ratio of the decay time to the oscillation period via a numerical conversion factor [4]. One can also evaluate the ratio of imaginary to real parts of the leading eigenvalue of the rate matrix for the Master equation of the dynamics, which gives the same result when all the other eigenvalues are much smaller in amplitude [3,17,26]. While this ratio is conjectured to be bounded by the thermodynamic driving force powering the oscillations, it is not the approach we study here [3]. Instead, we note that the value of N generated by these preceding procedures only satisfies the hypotheses of the thermodynamic uncertainty relation under the specific conditions of a perfectly sinusoidal autocorrelation function (cf. [3]). For our analysis of the KaiC model and our new toy model, we instead utilize a definition of τ_n that treats the oscillations in the full concentration space as one large cycle.

Figure 5 illustrates our procedure. We started by projecting the state of the system from the high-dimensional concentration space to two dimensions. We projected onto the plane that captured the largest percentage of the total variation in system state over a cycle, using a Principal Component Analysis (PCA) of a trajectory containing at least one full cycle (using the Python package scikit-learn [20]). In this plane, the oscillating trajectories describe a noisy ring, as shown in Figure 5a. Because the ring has a finite width, we cannot select a single transition to count the integrated current J. Instead, we draw a half-line starting from the middle of the ring, representing a half-hyperplane in the full state space, and include all the transitions cut by this hyperplane in the current.
Each time the line is crossed in the clockwise direction, J increases by 1, and each time it is crossed in the counterclockwise direction, J decreases by 1. Sample traces of J(t) are plotted in figure 5 (b). The n-cycle completion time τ n can now be defined as before, measuring the first-passage time for reaching J = n. These definitions fulfill the hypotheses of the thermodynamic uncertainty relations for currents and for first passage times [1,8,9]. In the notation of [8,9], they correspond to setting d(y, z) = 1 for all transitions from y to z cut by the hyperplane, and d(y, z) = 0 for all other transitions. Note that the uncertainty relations are obtained in the J → ∞ limit, where initial conditions are irrelevant, but for our numerical analysis we chose special initial conditions that gave rapid convergence to the asymptotic form. Specifically, we employed a conditional steady-state distribution over states adjacent to the hyperplane. This was achieved by running the simulation longer than the relaxation time, and then starting the counter at J = 0 the next time the hyperplane was crossed.
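The signed-crossing count can be implemented as a single sweep over consecutive points of the projected trajectory. A minimal sketch (our own illustration, with the half-line taken along the positive vertical axis and crossings at the top of the ring counted as clockwise):

```python
import math

def integrated_current(points):
    """Net number of signed crossings of the half-line {x = 0, y > 0}
    by a 2D trajectory given as a list of (x, y) points.
    Clockwise crossings (x going from - to + at the top) count +1."""
    J = 0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 < 0 <= x1 and (y0 + y1) / 2 > 0:    # left-to-right across top
            J += 1
        elif x1 < 0 <= x0 and (y0 + y1) / 2 > 0:  # right-to-left across top
            J -= 1
    return J

# Two clockwise loops around the origin should give J = 2:
thetas = [math.pi / 2 - 0.05 - 0.01 * k
          for k in range(int(4 * math.pi / 0.01) + 2)]
loop = [(math.cos(t), math.sin(t)) for t in thetas]
print(integrated_current(loop))  # 2
```

Reversing the trajectory flips the sign of every crossing, so the same loop traversed counterclockwise gives J = −2.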
B. Phenomenological fits from reference [4]

The extensive numerical simulations performed by Cao et al. on four different models of chemical oscillators can be summarized by the parameters of a phenomenological fitting function
N/M = [A + B((∆S − ∆S_c)/M)^α]⁻¹    (3)
with four parameters A, B, ∆S c and α. The exponent α is always negative, so N /M goes to zero as ∆S → ∆S c .
(To convert from their notation to ours, use V → M, W₀ → B, C → A, W_c → ∆S_c/M, and D/T → N⁻¹.) The parameters for these fits are given in the figure captions of [4], and are reproduced in the following table:

Model   A   B   ∆S_c/M   α

Note that A controls the ∆S → ∞ asymptote, and is equal to N_eff⁻¹.
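For reference, Equation (3) is straightforward to implement as a function; the sketch below is our own illustration (the parameter values themselves are those listed in the figure captions of [4] and are not reproduced here):

```python
def coherent_cycles_per_molecule(ds_per_m, A, B, ds_c_per_m, alpha):
    """Phenomenological fit of Eq. (3): N/M as a function of Delta_S/M.
    alpha < 0, so N/M -> 0 at the transition ds_per_m -> ds_c_per_m,
    and N/M -> 1/A as Delta_S -> infinity."""
    if ds_per_m <= ds_c_per_m:
        return 0.0
    return 1.0 / (A + B * (ds_per_m - ds_c_per_m) ** alpha)

# Far above the transition the curve saturates at 1/A (illustrative values):
print(coherent_cycles_per_molecule(1e9, A=0.5, B=1.0, ds_c_per_m=4.0,
                                   alpha=-1.0))  # ~2.0 = 1/A
```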
C. Analysis of experimental data
Cao et al. also analyze experimental data on the KaiC system, extracting the ratio of decay time to oscillation period from fits to the autocorrelation function at different values of the ATP/ADP ratio. They fit Equation (3) above to the data, but using ln([ATP]/[ADP]) instead of ∆S/M, and without converting the autocorrelation ratio to N or dividing by volume.
To compare these results to the thermodynamic bound, we first had to convert from the logarithm of the ATP/ADP ratio to entropy production per cycle. In the text, they estimate that the critical value ln([ATP]/[ADP])_c = −1.4 obtained from the fit corresponds to an entropy production per ATP hydrolysis event of 10.6. Combining this with the measurement from [28] of 16 hydrolysis events per cycle per KaiC monomer, we find a critical entropy production per cycle per hexamer of ∆S_c/M ≈ 10.6 × 16 × 6 ≈ 1,020.
Next, we had to convert the observed ratio of decay time to period into N/M. Since the autocorrelation function was well fit by an exponentially decaying sinusoid, we applied the corresponding conversion factor N = 2π²τ/T, where T is the period and τ is the decay time [4, Eq. 2]. We then estimated the number of hexamers M ≈ 3 × 10¹³ using the KaiC monomer concentration of 3.4 µM reported with the original publication of the data [21,27], and the typical reaction volume in a 96-well plate of 100 µL. With these two conversions, we found that the ∆S → ∞ value of N/M was 2 × 10⁻¹¹.
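Both conversions in this subsection are simple arithmetic; the sketch below reproduces them (our own illustration, with Avogadro's number hard-coded):

```python
# Critical entropy per cycle per hexamer:
ds_per_atp = 10.6          # entropy per hydrolysis at the critical ATP/ADP ratio
hydrolyses_per_monomer = 16
monomers_per_hexamer = 6
ds_c = ds_per_atp * hydrolyses_per_monomer * monomers_per_hexamer
print(round(ds_c))         # ~1018, i.e. about 1,020

# Number of hexamers in the experimental reaction volume:
N_A = 6.022e23
monomer_conc = 3.4e-6      # mol/L
volume = 100e-6            # L
M = monomer_conc * volume * N_A / monomers_per_hexamer
print(f"{M:.1e}")          # ~3.4e13 hexamers, i.e. M ~ 3e13
```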
D. Thermodynamically consistent KaiC model
Paijmans et al. have recently developed a mechanistically explicit computational model of the KaiC oscillator [19]. This model is particularly interesting from the theoretical point of view because it captures the extremely large dimensionality characteristic of real biochemical systems. Each KaiC hexamer contains six KaiC proteins, which each contain two nucleotide binding sites and two possible phosphorylation sites ("S" and "T" from figure 1). Each of these sites can be in one of two possible states (ATP-bound/ADP-bound, or phosphorylated/unphosphorylated). Furthermore, the whole hexamer can be in an "active" or an "inactive" conformation. Thus each hexamer has (2 · 2 · 2 · 2)⁶ · 2 = 2²⁵ possible internal states. As noted in the main text, the state space for a well-mixed chemical system is the vector of concentrations of all molecular types. For the Paijmans et al. KaiC model, this vector therefore lives in a space of dimension 2²⁵ ≈ 3.4 × 10⁷.
The original implementation of this model in [19] lacked the reverse hydrolysis reaction ADP + P → ATP, which never spontaneously happens in practice under physiological conditions. To obtain full thermodynamic consistency, we added this reaction to the model with the assistance of one of the original authors. This required introducing a new parameter ∆G₀, the free energy change of the hydrolysis reaction at standard concentrations. For all the simulations analyzed here, we chose ∆G₀ and the concentration of inorganic phosphate [Pi] such that
([Pi]₀/[Pi]) e^{−∆G₀} = 10⁸.    (4)
In other words, the entropy generated during a single hydrolysis reaction when nucleotide concentrations are equal ([ATP] = [ADP]) is ∆S_hyd = ln 10⁸ ≈ 18.4. Since the steady-state supply of free energy in this model comes entirely from the fixed nonequilibrium concentrations of ATP and ADP, we can measure the average rate of entropy production Ṡ by simply counting how many ATP molecules are hydrolyzed over the course of a long simulation, multiplying by ∆S_hyd, and dividing by the total time elapsed in the simulation.
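With entropy production tied entirely to hydrolysis events, the measurement reduces to bookkeeping. A minimal sketch of the estimator (function name ours):

```python
import math

DS_HYD = math.log(1e8)   # entropy per hydrolysis at [ATP] = [ADP], ~18.4

def entropy_production_rate(n_hydrolyses, t_total):
    """Average entropy production rate from a count of net ATP hydrolysis
    events over a long simulation of total duration t_total."""
    return n_hydrolyses * DS_HYD / t_total

print(round(DS_HYD, 1))  # 18.4
```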
All parameters other than ∆G₀ are described and tabulated in the original publication [19], and the only parameter altered during our simulations was the ATP/ADP ratio.
The revised C code and scripts for generating data can be found on GitHub at https://github.com/robertvsiii/kaic-analysis.
E. Symmetric toy model
We also developed our own abstract toy model to isolate the operating principles of the KaiC oscillator, and to check whether the thermodynamic bound could be saturated by a collective oscillator with the right design.
Consider a molecule with N states, as sketched in figure 6 (a). Transitions are allowed from each state to two other states, such that the network of transitions has the topology of a ring. The rates for "clockwise" and "counterclockwise" transitions around this ring are k⁺ = N k and k⁻ = N k e^{−A/N}, respectively, where A is the cycle affinity. Under these definitions, the total entropy produced when one ring executes a full cycle is always equal to A. Now consider M copies of this molecule in the same well-mixed solution with M_i copies at position i, where i increases in the "clockwise" direction from 1 to N. We can couple their dynamics together by allowing the bare rate k to vary around the ring. Specifically, we make the bare rate k_i for transitions between states i and i + 1 depend on the occupancy fractions f_j = M_j/M of all N states:
k_i = exp(−C Σ_j f_j sin[2π(i − j)/N])    (5)
The constant C controls the strength of the coupling, and the rate for a uniform distribution over states is 1.
This dependence of the rates on the f i mimics the effect of KaiA sequestration in the KaiC system. Recall that high occupancy of the inactive conformation of KaiC causes KaiA to be sequestered, slowing down nucleotide exchange in other hexamers, as illustrated in figure 1. In this toy model, high occupancy of any one state slows down transition rates ahead of that state in the cycle, by up to a factor of e −C for transitions a quarter-cycle ahead. Due to the symmetry of our model, high occupancy of a given state also speeds up transition rates behind that state.
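Equation (5) and its qualitative behavior can be checked directly. In the sketch below (our own illustration), a uniform occupancy gives k_i = 1 for every i, while concentrating all molecules in one state suppresses the rate a quarter-cycle ahead by exactly e^{−C}:

```python
import math

def bare_rate(i, f, C):
    """Eq. (5): bare rate k_i for the i -> i+1 transition, given occupancy
    fractions f[0..N-1] and coupling strength C."""
    N = len(f)
    s = sum(f[j] * math.sin(2 * math.pi * (i - j) / N) for j in range(N))
    return math.exp(-C * s)

N, C = 8, 5.0
uniform = [1.0 / N] * N   # the sine terms sum to zero -> k_i = 1 for all i
assert all(abs(bare_rate(i, uniform, C) - 1.0) < 1e-9 for i in range(N))

concentrated = [0.0] * N
concentrated[0] = 1.0                  # all molecules in state 0
quarter_ahead = bare_rate(N // 4, concentrated, C)
print(round(quarter_ahead, 4))         # e^{-C} ~ 0.0067: maximal suppression
```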
The data for figure 4 was obtained with C = 5, for 18 values of A from 1 to 30. Note that ∆S/M ≈ A, since all M molecules execute approximately one cycle during a given oscillation period. We simulated this model using a Gillespie algorithm with the reaction rates specified above. The Python code can be found in the GitHub repository https://github.com/robertvsiii/kaic-analysis.
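For concreteness, one Gillespie step of the coupled dynamics can be sketched as follows. This is our own minimal, unoptimized illustration, not the repository code; it tracks occupancy counts and draws one reaction per call with the rates defined above:

```python
import math
import random

def gillespie_step(counts, C, A, rng):
    """One Gillespie step for M coupled molecules on an N-state ring.
    counts[i] = number of molecules in state i; modified in place.
    Per-molecule rates: k+_i = N*k_i and k-_i = N*k_i*exp(-A/N), with
    k_i from Eq. (5) evaluated at the current occupancy fractions."""
    N, M = len(counts), sum(counts)
    f = [c / M for c in counts]
    def k(i):
        s = sum(f[j] * math.sin(2 * math.pi * (i - j) / N) for j in range(N))
        return math.exp(-C * s)
    # Propensities: forward out of state i, backward out of state i+1.
    fwd = [counts[i] * N * k(i) for i in range(N)]
    bwd = [counts[(i + 1) % N] * N * k(i) * math.exp(-A / N) for i in range(N)]
    total = sum(fwd) + sum(bwd)
    dt = rng.expovariate(total)
    r = rng.uniform(0, total)
    for i in range(N):
        if r < fwd[i]:
            counts[i] -= 1; counts[(i + 1) % N] += 1; return dt
        r -= fwd[i]
    for i in range(N):
        if r < bwd[i]:
            counts[(i + 1) % N] -= 1; counts[i] += 1; return dt
        r -= bwd[i]
    return dt  # unreachable except for floating-point rounding

rng = random.Random(1)
counts = [10] * 6   # M = 60 molecules, N = 6 states
t = sum(gillespie_step(counts, C=5.0, A=10.0, rng=rng) for _ in range(1000))
assert sum(counts) == 60   # molecule number is conserved
```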
In the limit of infinite system size, the dynamics become deterministic, and are described by the following set of N ODEs:

df_i/dt = f_{i−1} k⁺_{i−1} + f_{i+1} k⁻_i − f_i (k⁺_i + k⁻_{i−1}),    (6)
with k⁺_i = N k_i and k⁻_i = N k_i e^{−A/N}. These equations always have a fixed point at the uniform state where f_i = 1/N for all i. The linearized dynamics around the uniform state can be written as:
d(δf_i)/dt = (1/N) Σ_j [∂k⁺_{i−1}/∂f_j + ∂k⁻_i/∂f_j − (∂k⁺_i/∂f_j + ∂k⁻_{i−1}/∂f_j)] δf_j + (δf_{i−1} − δf_i) N + (δf_{i+1} − δf_i) N e^{−A/N}    (7)

= Σ_j K_{ij} δf_j    (8)
where the last line defines the matrix K_{ij}. Oscillating solutions are possible when K_{ij} acquires an eigenvalue with a positive real part, making this fixed point unstable. In figure 6 (b), we plot the critical affinity A_c where these positive real parts first appear, as a function of the number of internal states N. We confirmed that the dominant pair of eigenvalues contains nonzero imaginary parts at A = A_c for all points plotted, so that the transition is a true Hopf bifurcation to an oscillatory phase. Note that in the limit A → ∞, N → ∞, this model becomes identical to the irreversible limit of a fully connected driven XY model.
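Because the model is translation invariant around the ring, K_{ij} is a circulant matrix, so its spectrum is the discrete Fourier transform of a single row and the Hopf threshold can be located without a general eigensolver. The sketch below is our own analysis code (not the paper's); it builds row i = 0 of K from Eq. (7), using ∂k_i/∂f_j = −C sin[2π(i − j)/N] k_i with k_i = 1 at the uniform point, and bisects for A_c:

```python
import cmath
import math

def eigenvalues(N, C, A):
    """Spectrum of the linearization K_ij of Eqs. (6)-(8) about f_i = 1/N,
    via the circulant structure implied by translation symmetry."""
    x = math.exp(-A / N)
    s = lambda d: math.sin(2 * math.pi * d / N)
    row = []
    for j in range(N):  # row i = 0 of K
        val = (1 - x) * (-C) * (s(-1 - j) - s(-j))  # coupling via dk/df
        if j == (N - 1) % N: val += N               # from delta f_{i-1}
        if j == 1 % N:       val += N * x           # from delta f_{i+1}
        if j == 0:           val -= N * (1 + x)     # from delta f_i
        row.append(val)
    w = cmath.exp(2j * math.pi / N)
    return [sum(row[j] * w ** (m * j) for j in range(N)) for m in range(N)]

def max_growth(N, C, A):
    # Mode m = 0 is the exact zero mode from sum_i f_i = 1; exclude it.
    return max(ev.real for m, ev in enumerate(eigenvalues(N, C, A)) if m != 0)

# Bisect for the critical affinity A_c at N = 6, C = 5:
lo, hi = 0.1, 30.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if max_growth(6, 5.0, mid) < 0 else (lo, mid)
print(round(hi, 2))  # A_c ~ 2.8: only a few k_B per molecule per cycle
```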
V. SIMULATIONS AND ANALYSIS
To measure N and ∆S in the KaiC model and our new toy model, we generated an ensemble of trajectories for each set of parameters. Each ensemble of the KaiC model contained 1,200 trajectories, while each toy model ensemble contained 1,120 trajectories. Before collecting data, we initialized each trajectory by running the dynamics for longer than the empirically determined relaxation time of the system, in order to obtain a steady-state ensemble.
After projecting each trajectory onto the first two principal components and computing the n-cycle first passage times as described above, we obtained the variance in τ_n as a function of n for each ensemble. We computed bootstrapped 64% confidence intervals for the estimate of the variance using the Python module "bootstrapped," available at https://github.com/facebookincubator/bootstrapped. This data is plotted for all the KaiC ensembles in figure 7. The data is well fit by a straight line even for low n in each of the plots. We obtained the slope D of these lines using a weighted least-squares fit, also shown in the figure, with the weights provided by the inverse of the bootstrapped confidence intervals.
We used these confidence intervals to obtain the bootstrap estimate for the uncertainty in D. The size of the confidence interval was proportional to n, as expected from a simple multiplicative noise model where the slope D is a random variable. The constant of proportionality yields an estimate for the standard deviation of the distribution from which D is sampled. We obtained this value for each data point with another least-squares linear fit, and used it to set the size of the error bars in figures 2 and 4.
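The slope extraction is an ordinary weighted least-squares fit of var(τ_n) against n. A minimal through-the-origin version is sketched below (our own illustration; the actual fit may also include an intercept), with weights given by the inverse of the confidence-interval widths as in the text:

```python
def weighted_slope(n_values, variances, sigmas):
    """Weighted least-squares slope D for var(tau_n) = D * n, with weights
    1/sigma from the bootstrapped confidence-interval widths."""
    w = [1.0 / s for s in sigmas]
    num = sum(wi * n * v for wi, n, v in zip(w, n_values, variances))
    den = sum(wi * n * n for wi, n in zip(w, n_values))
    return num / den

# Exact synthetic data var(tau_n) = 2.05 * n is recovered exactly:
ns = list(range(1, 11))
vs = [2.05 * n for n in ns]
print(round(weighted_slope(ns, vs, sigmas=[0.1 * n for n in ns]), 6))  # 2.05
```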
FIG. 2. Number of coherent oscillations saturates as ∆S → ∞. The number of coherent cycles N is plotted as a function of the entropy production per cycle ∆S for the KaiC model discussed in figure 1 above. Both axes are scaled by the system size M, so that all quantities are molecular-scale values. Error bars are ±1 standard deviation, estimated with the bootstrap procedure described in the Appendix. The black dotted line is the estimated ∆S → ∞ limit N = N_eff. See Appendix and [19] for model parameters.
FIG. 3. Models of collective oscillations compared with thermodynamic bound. Same as figure 2, but including all four models studied in [4]. The black dotted line is the thermodynamic bound N = ∆S/2. Curves for the first four models are phenomenological fits obtained in [4].
FIG. 4. Symmetric toy model compared with thermodynamic bound. (a) Schematic of toy model inspired by the KaiC system. Each protein has N distinct conformations whose transitions are arranged in a ring topology with a net clockwise drift. Each protein suppresses the transition rate for proteins further along in the circulation around the ring. (b) N/M versus ∆S/M in this model for three different values of the number of internal states N. Error bars are ±1 standard deviation, estimated with the same bootstrap procedure used in figure 2. See Appendix for model details and parameters.
FIG. 5. Measuring the n-cycle completion time τ_n. (a) Trajectories of the toy model of figure 4 with N = 6 for two different values of the thermodynamic driving force, projected onto their first two principal components. A cut from the origin along the positive vertical axis provides the criterion for cycle completion. (b) Integrated current (net number of completed cycles) J as a function of time for the same two trajectories. The first-passage time for a net increase n in the integrated current defines the n-cycle completion time τ_n.
FIG. 6. Toy model with variable number of internal states. (a) Transition rates for a single molecule with N internal states. Multiple copies of the molecule are coupled together kinetically, by making k_i depend on the fraction of molecules f_i in each state. (b) Dependence of critical affinity A_c on N for different values of the coupling C.
FIG. 7. Estimating N from KaiC simulations. Each panel shows the estimated variance and bootstrapped 64% confidence intervals in the n-cycle first passage time τ_n. Straight black lines are linear fits, whose slopes provide the values of D used in the computation of N for the main text figures.
FIG. 1. Coherent cycles in a biochemical oscillator. (a) Schematic of the KaiC biochemical oscillator: This simplified diagram shows four different internal states of the KaiC molecule, labeled U, T, S and TS (unphosphorylated, phosphorylated on threonine, phosphorylated on serine, and phosphorylated on both residues). The molecules execute cycles around these four states in the indicated direction. They interact with each other via additional molecular components, in such a way that molecules in state S slow down the forward reaction rate for other molecules in state U. (b) Time-evolution of a detailed kinetic model of the KaiC system (adapted from [19]) with 360 interacting KaiC hexamers. (c) Histograms of the time τ_n required for 1,200 independent sets of 360 interacting KaiC hexamers to complete n collective cycles, for n = 1 through 10. (d) Variances var(τ_n) of the histograms as a function of n. Error bars are bootstrapped 95% confidence intervals. Black line is a linear fit, with slope D = 2.05 ± 0.05 hours².
I. EFFECTIVE NUMBER OF STATES AND CRITICAL ENTROPY PRODUCTION CONTROL DISTANCE TO THERMODYNAMIC BOUND
[1] A. C. Barato and U. Seifert. Thermodynamic uncertainty relation for biomolecular processes. Physical Review Letters, 114(15):158101, 2015.
[2] A. C. Barato and U. Seifert. Cost and precision of Brownian clocks. Physical Review X, 6(4):041053, 2016.
[3] A. C. Barato and U. Seifert. Coherence of biochemical oscillations is bounded by driving force and network topology. Physical Review E, 95:062409, 2017.
[4] Y. Cao, H. Wang, Q. Ouyang, and Y. Tu. The free-energy cost of accurate biochemical oscillations. Nature Physics, 11:772-778, 2015.
[5] A. David and S. Larry. The least variable phase type distribution is Erlang. Stochastic Models, 3:467, 1987.
[6] A. Dechant and S.-i. Sasa. Current fluctuations and transport efficiency for general Langevin systems. Journal of Statistical Mechanics: Theory and Experiment, 2018:063209, 2018.
[7] C. del Junco and S. Vaikuntanathan. High chemical affinity increases the robustness of biochemical oscillations. arXiv:1808.04914, 2018.
[8] T. R. Gingrich and J. M. Horowitz. Fundamental bounds on first passage time fluctuations for currents. Physical Review Letters, 119:170601, 2017.
[9] T. R. Gingrich, J. M. Horowitz, N. Perunov, and J. L. England. Dissipation bounds all steady-state current fluctuations. Physical Review Letters, 116:120601, 2016.
[10] T. Herpich, J. Thingna, and M. Esposito. Collective power: minimal model for thermodynamics of nonequilibrium phase transitions. Physical Review X, 8:031056, 2018.
[11] J. M. Horowitz and T. R. Gingrich. Proof of the finite-time thermodynamic uncertainty relation for steady-state currents. Physical Review E, 96:020103, 2017.
[12] A. H. Lang, C. K. Fisher, T. Mora, and P. Mehta. Thermodynamics of statistical inference by cells. Physical Review Letters, 113:148103, 2014.
[13] S. Lee, C. Hyeon, and J. Junghyo. Thermodynamic uncertainty relation of interacting oscillators in synchrony. Physical Review E, 98:032119, 2018.
[14] R. Marsland and J. England. Limits of predictions in thermodynamic systems: a review. Reports on Progress in Physics, 81:016601, 2018.
[15] M. Monti, D. K. Lubensky, and P. R. ten Wolde. Robustness of clocks to input noise. Physical Review Letters, 121:078101, 2018.
[16] M. Nakajima, K. Imai, H. Ito, and T. Nishiwaki. Reconstitution of circadian oscillation of cyanobacterial KaiC phosphorylation in vitro. Science, 308:414, 2005.
[17] B. Nguyen, U. Seifert, and A. C. Barato. Phase transition in thermodynamically consistent biochemical oscillators. Journal of Chemical Physics, 149:1, 2018.
[18] M. Nguyen and S. Vaikuntanathan. Dissipation induced transitions in elastic strings. arXiv:1803.04368, 2018.
[19] J. Paijmans, D. K. Lubensky, and P. R. ten Wolde. A thermodynamically consistent model of the post-translational Kai circadian clock. PLoS Computational Biology, 13:e1005415, 2017.
[20] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.
[21] C. Phong, J. S. Markson, C. M. Wilhoite, and M. J. Rust. Robust and tunable circadian rhythms from differentially sensitive catalytic domains. Proceedings of the National Academy of Sciences, 110:1124, 2013.
[22] P. Pietzonka, A. C. Barato, and U. Seifert. Universal bounds on current fluctuations. Physical Review E, 93:052145, 2016.
[23] P. Pietzonka, K. Kleinbeck, and U. Seifert. Extreme fluctuations of active Brownian motion. New Journal of Physics, 18:052001, 2016.
[24] W. Pittayakanchit, Z. Lu, J. Chew, M. J. Rust, and A. Murugan. Continuous attractor-based clocks are unreliable phase estimators. arXiv:1709.09579, 2017.
[25] K. Ptaszyński. First-passage times in renewal and nonrenewal systems. Physical Review E, 97:012127, 2018.
[26] H. Qian and M. Qian. Pumped biochemical reactions, nonequilibrium circulation, and stochastic resonance. Physical Review Letters, 84:2271, 2000.
[27] M. J. Rust, S. S. Golden, and E. K. O'Shea. Light-driven changes in energy metabolism directly entrain the cyanobacterial circadian oscillator. Science, 331:220, 2011.
[28] K. Terauchi, Y. Kitayama, T. Nishiwaki, K. Miwa, Y. Murayama, T. Oyama, and T. Kondo. ATPase activity of KaiC determines the basic timing for circadian clock of cyanobacteria. Proceedings of the National Academy of Sciences, 104:16377, 2007.
[29] H. Wierenga, P. R. ten Wolde, and N. B. Becker. Quantifying fluctuations in reversible enzymatic cycles and clocks. Physical Review E, 97:042404, 2018.
Measuring the stochastic period. A , A. Measuring the stochastic period
|
[
"https://github.com/facebookincubator/"
] |
Holographic non-Gaussianities in general single-field inflation

Hiroshi Isono (Department of Physics, Faculty of Science, Chulalongkorn University, 10330 Bangkok, Thailand)
Toshifumi Noumi (Department of Physics and Jockey Club Institute for Advanced Study, Hong Kong University of Science and Technology, Hong Kong; Department of Physics, Kobe University, 657-8501 Kobe, Japan)
Gary Shiu (Department of Physics and Jockey Club Institute for Advanced Study, Hong Kong University of Science and Technology, Hong Kong; Department of Physics, University of Wisconsin-Madison, 53706 Madison, WI, USA)
Sam S. C. Wong (Department of Physics and Jockey Club Institute for Advanced Study, Hong Kong University of Science and Technology, Hong Kong)
Siyi Zhou (Department of Physics and Jockey Club Institute for Advanced Study, Hong Kong University of Science and Technology, Hong Kong)
Abstract: We use holographic techniques to compute inflationary non-Gaussianities for general single-field inflation, including models with a non-trivial sound speed. In this holographic approach, the inflationary dynamics is captured by a relevant deformation of the dual conformal field theory (CFT) in the UV, while the inflationary correlators are computed by conformal perturbation theory. In this paper, we discuss the effects of higher derivative operators, such as $(\partial_\mu \phi\, \partial^\mu \phi)^m$, which are known to induce a non-trivial sound speed and source potentially large non-Gaussianities. We compute the full inflationary bispectra from the deformed CFT correlators. We also discuss the squeezed limit of the bispectra from the viewpoint of operator product expansions. As is generic in the holographic description of inflation, our power spectrum is blue tilted in the UV region. We extend our bispectrum computation to the IR region by resumming the conformal perturbations to all orders. We provide a self-consistent setup which reproduces a red tilted power spectrum, as well as all possible bispectrum shapes in the slow-roll regime.
DOI: 10.1007/jhep12(2016)028
arXiv: 1610.01258
https://arxiv.org/pdf/1610.01258v2.pdf
13 Dec 2016
1 Introduction
By now, there is overwhelming evidence for the celebrated AdS/CFT correspondence [1]. Given its vast successes, it is natural to wonder if there is a de Sitter counterpart. Despite the lack of an explicit string theory construction of de Sitter space, an analysis of the asymptotic symmetries along the lines of [1] suggests a similar duality between de Sitter space and conformal field theory [2,3]. The dS/CFT correspondence states that physics in $(d+1)$-dimensional de Sitter space $dS_{d+1}$ is dual to some conformal field theory $CFT_d$ in $d$-dimensional Euclidean space, whose geometry is a constant time slice of the de Sitter spacetime. The isometry group of $dS_{d+1}$ matches the conformal group of Euclidean $CFT_d$, which is $SO(d+1, 1)$. This is the starting point of the conjectured relationship between these two descriptions.
While the dS/CFT correspondence was originally introduced to discuss the quantum nature of gravity on de Sitter space, it has also shed some light on observational cosmology, particularly in the context of cosmic inflation. An inflationary universe may be regarded as a quasi-de Sitter space with an approximately (but not exactly) constant Hubble parameter H. Since the time-translation invariance of de Sitter space corresponds to the scale invariance of the dual CFT, the dS/CFT correspondence has to be extended to non-conformal theories in order to capture inflationary dynamics in a holographic manner. Just as in the standard AdS/CFT correspondence, this was achieved, e.g., by performing relevant deformations in the dS/CFT correspondence. This extended relationship is sometimes dubbed the inflation/deformed CFT correspondence.
In this holographic picture the expansion history of our universe may be identified with the renormalization group (RG) flow of the field theory dual [4]. In particular the universe is asymptotically de Sitter space at the future and past infinities, corresponding to the UV and IR fixed points of the flow, with inflation in the interim (see Fig. 1). The slow-roll property of inflation is translated to the smallness of the beta function of the field theory dual. Based on this picture, inflationary correlation functions have been computed for some inflationary models via the relevant deformation of the field theory dual [8,15,18,20,23,24]. It was also used to derive inflationary consistency relations based on the broken conformal symmetry [15,26].
In light of these developments, we would like to use holographic techniques to compute inflationary non-Gaussianities for general single-field inflation [27]. Primordial non-Gaussianities, which give a direct measurement of interactions during inflation, are one of the most important probes of high energy physics in the early universe. While the level of non-Gaussianities generated by single-field inflationary models with a canonical kinetic term is suppressed by slow-roll parameters [6,28], higher derivative interactions in the generalized Lagrangians are known to source potentially large non-Gaussianities as well as a non-trivial sound speed $c_s < 1$ [29,30,27]. The inflationary bispectrum for general single-field inflation has been computed directly on the inflation side long ago [27] (see [31] for a less complete result that applies only to cases with $c_s \approx 1$). It was shown that the bispectrum is completely determined by five parameters: the usual slow-roll parameters $\epsilon$, $\eta$, the sound speed $c_s$ and its rate of change $s := \dot{c}_s/(c_s H)$, and $\Lambda$, which characterizes the derivative interactions [27].¹ In this paper, as a first step, we compute the bispectrum of general single-field inflation along the lines of conformal perturbation theory of the CFT dual.
In order to carry out the conformal perturbation theory, we need to specify the reference CFT.
However, while a concrete realization of the dS/CFT correspondence was proposed for Vasiliev's higher spin gravity [11,19], an explicit realization for Einstein gravity is yet unknown. The holographic studies of inflation in the literature are therefore mainly classified into the following three categories:
1. Specify a concrete model of the reference CFT and the RG flow, and investigate the dual inflationary dynamics [13,14].
2. Clarify model-independent properties of holographic inflation such as an expression of the power spectrum in terms of the beta function of the dual QFT [15,16,18,23], and holographic derivation of consistency relations [15,21,25].
3. Specify the hypothetical reference CFT based on a concrete bulk model and the dS/CFT correspondence (sometimes called the holographic CFT in the context of AdS/CFT correspondence), and use conformal perturbation theory to compute inflationary spectra [9,20,22].
As we will see, the shape of the inflationary bispectrum depends on details of the CFT dual.
We therefore consider the last approach adequate for discussing how the bispectrum shape of general single-field inflation is realized through conformal perturbation theory. It will also be a useful step toward further understanding of primordial non-Gaussianities from the holographic viewpoint. Based on these motivations, we employ the third approach above to compute the bispectrum of general single-field inflation. In particular, we reproduce the shape of the bispectrum dictated by the sound speed $c_s$, albeit in a perturbative manner, order by order in $1 - c_s$.
The organization of the paper and main results of each section are as follows. In Sec. 2 we review basics of holographic inflation. In particular we introduce the holographic dictionary between inflationary observables and correlators of the deformed CFT.
In Sec. 3 we compute the inflationary bispectrum in a holographic way, using the conformal perturbation theory around the UV fixed point. We first summarize general features of holographic primordial spectra which do not depend on details of inflationary models. In particular we derive consistency relations of bispectra based on the conformal Ward identity, by extending the previous derivation at the lowest order of the conformal perturbation [15]. We then compute the bispectrum of general single-field inflation at the UV scale. Based on the third approach mentioned above, we specify the UV CFT correlators by solving the Dirichlet boundary problem of the AdS model associated with the dS dynamics of our interests. We also discuss why the bispectrum associated with higher derivative interactions vanishes in the squeezed limit, from the viewpoint of operator product expansion, which is a less model-sensitive argument based on conformal symmetry.
The inflationary spectra around the UV fixed point are known to be blue tilted. In Sec. 4, in order to reproduce a red tilted spectrum consistent with observation [32], we extend our computation of the bispectrum to the IR region by taking into account the conformal perturbation at all orders. Under the approximation that the deformation operator is nearly marginal and the derivative interactions are reasonably small, we compute the leading contribution to the bispectrum at the IR scale and explicitly perform the resummation over the deformation parameter. We reproduce all possible bispectrum shapes of general single-field inflation in the slow-roll regime, even at the scale with the red spectrum beyond the UV scale. This is our main result in this paper.
We relegate some useful details of CFT correlation functions to the appendices.
2 Holographic approach to single field inflation
In this section we summarize the basics of holographic inflation relevant to this paper [6,15,18,20].
In Sec. 2.1 we first sketch how the relevant deformation of the dual CFT can be related to the inflationary dynamics. We then in Sec. 2.2 introduce the relation between primordial spectra and correlation functions of the field theory dual. We will apply the general results introduced here to concrete inflationary models in the following sections.
2.1 Inflation from relevant perturbations
In the holographic approach, the inflationary dynamics is captured by a relevant deformation of the dual CFT. We perturb this CFT by a relevant operator $O_0$ dual to the inflaton field as
$$ S[\chi] = S_{\rm CFT}[\chi] + \int d^3x\, \bar\phi\, O_0(x)\,, \tag{2.1} $$
where $S_{\rm CFT}[\chi]$ is the action of the CFT, which we call the UV CFT, and χ stands for the integration variables (matter fields) of the path integral defining the UV CFT. As depicted in Fig. 1, the UV CFT is associated with the future infinity of an asymptotically de Sitter space. The deformation parameter $\bar\phi$ is related to the inflaton background value at late times, corresponding to the Wilsonian cutoff scale Λ of the field theory dual, as $\phi = \bar\phi\, \Lambda^{-\lambda}$. We take the limit $\Lambda \to \infty$ with $\bar\phi$ held fixed.
Just as the evolution of the inflaton background deforms the de Sitter space and breaks some isometries, the relevant deformation $O_0$ breaks the conformal invariance of the dual CFT. It is instructive to see this concretely from the scale dependence of correlation functions, in analogy with the inflationary perturbations. Correlation functions in the deformed QFT can be expressed in terms of the UV CFT correlators as
$$ \langle \ldots \rangle = \Big\langle \ldots\, \exp\Big( -\int d^3x\, \bar\phi\, O_0(x) \Big) \Big\rangle_{\rm CFT}\,, \tag{2.2} $$
where $\langle \ldots \rangle_{\rm CFT}$ is the correlation function in the UV CFT. In particular, the two-point function in momentum space is given by²
$$ \langle O_0(k) O_0(-k) \rangle = \langle O_0(k) O_0(-k) \rangle_{\rm CFT} - \bar\phi\, \langle O_0(k) O_0(-k) O_0(0) \rangle_{\rm CFT} + O(\bar\phi^2)\,. \tag{2.3} $$
We have also used $\int d^3x\, O_0(x) = O_0(k = 0)$. The scale dependence of each CFT correlator is determined by the dilatation symmetry as
$$ \langle O_0(k) O_0(-k) \rangle_{\rm CFT} = A_0\, k^{3-2\lambda}\,, \qquad \langle O_0(k) O_0(-k) O_0(0) \rangle_{\rm CFT} = A_1\, k^{3-3\lambda}\,, \tag{2.4} $$
where λ is a positive constant defined by $\lambda = 3 - \Delta$, with Δ the scaling dimension of $O_0$ in the UV CFT. The coefficients $A_0$ and $A_1$ are determined only from the UV CFT data. We then have
$$ \langle O_0(k) O_0(-k) \rangle = A_0\, k^{3-2\lambda} \left[ 1 - \frac{A_1}{A_0}\, \bar\phi\, k^{-\lambda} + O(\bar\phi^2) \right]. \tag{2.5} $$
Since the deformation parameter $\bar\phi$ always appears with a factor $k^{-\lambda}$, the correlation functions reduce to the UV CFT ones in the limit $k \to \infty$. On the other hand, the deformation becomes important in the IR as $k \to 0$. The deformation-induced momentum dependence probes the QFT at different scales, just as each mode of the observed inflationary perturbations probes the geometry at its own horizon-crossing time. Since holography converts the scale dependence of the QFT into the time evolution of the bulk geometry, we can compute inflationary correlation functions from the QFT correlators at an appropriate scale associated with the geometry of our interest (see Fig. 1). Therefore one can extract inflationary observables from QFT correlators. For instance, the spectral index $n_s - 1$ and its running are in principle encoded in (2.5).
2.2 Holographic dictionary
We next introduce the holographic dictionary translating between the inflationary correlation functions and the QFT correlators. For this purpose, it is convenient to start from an action with sources,
$$ S[\chi; g_{ij}, \phi] = S_{\rm CFT}[\chi; g_{ij}] + \int d^3x\, \sqrt{g}\, \phi\, O_0\,, \tag{2.6} $$
where $g_{ij}(x)$ and $\phi(x) = \bar\phi + \varphi(x)$ source the energy-momentum tensor and the operator $O_0(x)$, respectively. The original action (2.1) is reproduced by setting $g_{ij} = \delta_{ij}$ and $\phi = \bar\phi$. The key idea of holography is that the bulk fields source operators in the dual QFT. For example, the scalar curvature perturbation ζ of inflation can be identified with the metric $g_{ij}$ as
$$ g_{ij} = e^{2\zeta}\, \delta_{ij}\,, \tag{2.7} $$
and sources the trace of the energy-momentum tensor defined by
$$ T := \frac{2\, g^{ij}}{\sqrt{g}} \frac{\delta S}{\delta g^{ij}} = -e^{-3\zeta}\, \frac{\delta S}{\delta \zeta} = -\left( \frac{\partial L}{\partial \zeta} + 3L \right), \tag{2.8} $$
where we introduced the Lagrangian density $L$ such that
$$ S = \int d^3x\, \sqrt{g}\, L = \int d^3x\, e^{3\zeta} L\,. $$
Here and in what follows we drop the tensor modes and focus on the scalar sector only. For later use, it is convenient to expand the energy-momentum tensor in ζ and ϕ as
$$ T = T_0 + T_1\, \zeta - 3\varphi\, O_0 + \ldots\,, \tag{2.9} $$
where the dots stand for the second and higher orders in ζ and ϕ. In particular, we may write $T_0$ and $T_1$ in terms of the Lagrangian density as
$$ T_0 = -\left( \frac{\partial L}{\partial \zeta} + 3L \right)\bigg|_{\zeta=0,\, \varphi=0}\,, \qquad T_1 = -\left( \frac{\partial^2 L}{\partial \zeta^2} + 3\, \frac{\partial L}{\partial \zeta} \right)\bigg|_{\zeta=0,\, \varphi=0}\,. \tag{2.10} $$
The action can then be expanded in ζ up to the second order as
$$ \begin{aligned} S[\chi; g_{ij}, \phi] &= S[\chi; \delta_{ij}, \bar\phi] + \int d^3x \left[ \left( \frac{\partial L}{\partial \zeta} + 3L \right)\bigg|_{\zeta=0,\, \varphi=0} \zeta + \frac{1}{2} \left( \frac{\partial^2 L}{\partial \zeta^2} + 6\, \frac{\partial L}{\partial \zeta} + 9L \right)\bigg|_{\zeta=0,\, \varphi=0} \zeta^2 \right] + \ldots \\ &= S[\chi; \delta_{ij}, \bar\phi] - \int d^3x \left[ T_0\, \zeta + \frac{1}{2} (3T_0 + T_1)\, \zeta^2 \right] + \ldots\,, \end{aligned} \tag{2.11} $$
where note that the dots contain terms with ϕ as well as higher-order terms in ζ. In the following we mostly use $U = 3T_0 + T_1$ instead of $T_1$ for notational simplicity. Here $\Psi[\zeta]$ is the wavefunction of ζ at the future infinity in the gauge $\phi(x, t) = \bar\phi(t)$.³ On the other hand, we define the partition function in the dual QFT by
$$ Z[\zeta] := \int [d\chi]\, e^{-S[\chi;\, g_{ij},\, \bar\phi]} \qquad \text{with} \qquad g_{ij} = e^{2\zeta} \delta_{ij}\,. \tag{2.13} $$
We normalize it so that $Z[\zeta = 0] = 1$. It can then be expanded in ζ as
$$ \begin{aligned} Z[\zeta] = \exp\bigg[ &\frac{1}{2} \int d^3x_1\, d^3x_2\, \langle T_0(x_1) T_0(x_2) \rangle\, \zeta(x_1)\zeta(x_2) \\ &+ \frac{1}{6} \int d^3x_1\, d^3x_2\, d^3x_3\, \Big( \langle T_0(x_1) T_0(x_2) T_0(x_3) \rangle + 3\, \langle T_0(x_1) U(x_2) \rangle\, \delta^3(x_2 - x_3) \Big)\, \zeta(x_1)\zeta(x_2)\zeta(x_3) + O(\zeta^4) \bigg]\,. \end{aligned} \tag{2.14} $$
Here and in what follows we drop ultra-local terms, which contain two or more delta functions and are not important for our purpose. We also assume that one-point functions vanish.⁴ The correlation functions $\langle \ldots \rangle$ on the right-hand side are defined with the action $S[\chi; \delta_{ij}, \bar\phi]$. By going into momentum space and identifying $Z[\zeta]$ with the wave function $\Psi[\zeta]$, we can write primordial spectra in terms of correlation functions in the dual QFT as
$$ \langle \zeta(k) \zeta(-k) \rangle = \frac{1}{-2\, {\rm Re}\, \langle T_0(k) T_0(-k) \rangle}\,, \tag{2.15} $$
$$ \langle \zeta(k_1) \zeta(k_2) \zeta(k_3) \rangle = \frac{ 2\, {\rm Re}\, \langle T_0(k_1) T_0(k_2) T_0(k_3) \rangle + \big( 2\, {\rm Re}\, \langle T_0(k_1) U(-k_1) \rangle + 2\ {\rm perms} \big) }{ \prod_{i=1}^{3} \big[ -2\, {\rm Re}\, \langle T_0(k_i) T_0(-k_i) \rangle \big] }\,, \tag{2.16} $$
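As a sanity check of the Gaussian part of this dictionary, consider a single mode with wavefunction $\Psi[\zeta] \propto \exp(\tfrac{1}{2} A \zeta^2)$, where $A$ plays the role of $\langle T_0(k) T_0(-k) \rangle$: the probability density $|\Psi|^2$ then yields variance $1/(-2\,{\rm Re}\,A)$, as in (2.15). A minimal sympy sketch (the complex value of $A$ below is an arbitrary illustration, not taken from the text):

```python
import sympy as sp

zeta = sp.symbols("zeta", real=True)
A = sp.Rational(-3, 2) + 2*sp.I   # illustrative <T0 T0> value with Re A < 0

# For Psi = exp(A*zeta**2/2), the probability density is |Psi|^2 = exp(Re(A)*zeta^2)
prob = sp.exp(sp.re(A) * zeta**2)
norm = sp.integrate(prob, (zeta, -sp.oo, sp.oo))
variance = sp.integrate(zeta**2 * prob, (zeta, -sp.oo, sp.oo)) / norm

# Eq. (2.15): <zeta zeta> = 1 / (-2 Re <T0 T0>)
assert sp.simplify(variance - 1/(-2*sp.re(A))) == 0
```

Only the real part of the quadratic coefficient enters the variance, which is why only ${\rm Re}$ of the QFT correlators appears in (2.15)-(2.16).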
³ See, e.g., [17,18,22,26] for other gauge choices.
⁴ If one takes into account the quadratic-order term $\tfrac{1}{2} T_2 \zeta^2$ in $T$ in eq. (2.9), it contributes to a contact term in the coefficient of $\zeta(x)^3$, where all operators overlap at the same point $x$. In principle, these contact terms are canceled by local counterterms.
Finally, we reformulate the dictionary (2.15)-(2.16) in terms of correlation functions of $O_0$ by using the conformal Ward-Takahashi identity [33,34]
$$ \langle T(x) \rangle_s = -\lambda\, \phi(x)\, \langle O_0(x) \rangle_s = -\lambda \big( \bar\phi + \varphi(x) \big)\, \langle O_0(x) \rangle_s\,, \tag{2.17} $$
where $\langle \ldots \rangle_s$ is the correlation function computed in the presence of the sources ζ and ϕ. Expanding this master equation in ζ and ϕ, we can convert the $T_0$ correlators into $O_0$ correlators. For example, the order-$\zeta^1$ and order-$\varphi^1$ terms of Eq. (2.17) are given by
$$ \langle T_0(k) T_0(-k) \rangle + \langle T_1(0) \rangle = -\lambda \bar\phi\, \langle T_0(k) O_0(-k) \rangle\,, \tag{2.18} $$
$$ \langle T_0(k) O_0(-k) \rangle + 3\, \langle O_0(0) \rangle = -\lambda \bar\phi\, \langle O_0(k) O_0(-k) \rangle + \lambda\, \langle O_0(0) \rangle\,, \tag{2.19} $$
where we used the expression (2.9). We then arrive at the relation,
$$ \begin{aligned} \langle T_0(k) T_0(-k) \rangle &= \lambda^2 \bar\phi^2\, \langle O_0(k) O_0(-k) \rangle + (3 - \lambda)\lambda \bar\phi\, \langle O_0(0) \rangle - \langle T_1(0) \rangle \\ &= \lambda^2 \bar\phi^2\, \langle O_0(k) O_0(-k) \rangle\,, \end{aligned} \tag{2.20} $$
where at the second equality we assumed that the one-point functions of $O_0$ and $T_1$ vanish. Similarly we can derive the relation,
$$ \begin{aligned} &\langle T_0(k_1) T_0(k_2) T_0(k_3) \rangle + \langle U(k_1) T_0(-k_1) \rangle + \langle U(k_2) T_0(-k_2) \rangle + \langle U(k_3) T_0(-k_3) \rangle \\ &\quad = -\lambda^3 \bar\phi^3\, \langle O_0(k_1) O_0(k_2) O_0(k_3) \rangle + \lambda^3 \bar\phi^2 \Big[ \langle O_0(k_1) O_0(-k_1) \rangle + \langle O_0(k_2) O_0(-k_2) \rangle + \langle O_0(k_3) O_0(-k_3) \rangle \Big]\,. \end{aligned} \tag{2.21} $$
We therefore have
$$ \langle \zeta(k) \zeta(-k) \rangle = \frac{1}{\lambda^2 \bar\phi^2 \big[ -2\, {\rm Re}\, \langle O_0(k) O_0(-k) \rangle \big]}\,, \tag{2.22} $$
$$ \langle \zeta(k_1) \zeta(k_2) \zeta(k_3) \rangle = \frac{ -2\bar\phi\, {\rm Re}\, \langle O_0(k_1) O_0(k_2) O_0(k_3) \rangle + \big( 2\, {\rm Re}\, \langle O_0(k_1) O_0(-k_1) \rangle + 2\ {\rm perms} \big) }{ \lambda^3 \bar\phi^4 \prod_{i=1}^{3} \big[ -2\, {\rm Re}\, \langle O_0(k_i) O_0(-k_i) \rangle \big] }\,. \tag{2.23} $$
Interestingly, no correlation functions involving $U$ appear in these relations. As a result, we can compute primordial spectra purely in terms of correlation functions of $O_0$, together with the parameters λ and $\bar\phi$.
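The elimination of the $T_0$ correlators at two-point level can be checked by direct substitution: with vanishing one-point functions, (2.19) turns (2.18) into (2.20). A small sympy sketch of that substitution, with the correlators treated as abstract symbols:

```python
import sympy as sp

lam, phib = sp.symbols("lambda phibar", positive=True)
OO = sp.Symbol("OO")   # <O0(k) O0(-k)>
TO = sp.Symbol("TO")   # <T0(k) O0(-k)>
TT = sp.Symbol("TT")   # <T0(k) T0(-k)>

# Eqs. (2.18)-(2.19) with the one-point functions <O0(0)> = <T1(0)> = 0
eq218 = sp.Eq(TT, -lam*phib*TO)
eq219 = sp.Eq(TO, -lam*phib*OO)

sol = sp.solve([eq218, eq219], [TT, TO], dict=True)[0]
# Eq. (2.20): <T0 T0> = lambda^2 phibar^2 <O0 O0>
assert sp.simplify(sol[TT] - lam**2*phib**2*OO) == 0
```

Inserting the solved $\langle T_0 T_0 \rangle$ into (2.15) then gives (2.22) immediately.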
3 The UV story
In this section we compute the inflationary bispectrum in a class of $P(X, \phi)$ inflation models by using the perturbed CFT presented in the last section. As we discussed there, the perturbation away from the UV CFT corresponds to the perturbative expansion in $\bar\phi k^{-\lambda}$. In this section we focus on the UV region, where higher-order terms in $\bar\phi k^{-\lambda}$ are negligible. In the next section we extend our result to the IR region by performing a resummation.
3.1 Generality and consistency relation of bispectrum
Let us first discuss general properties of holographic inflationary correlation functions. As we mentioned earlier, correlation functions in the dual QFT are expanded in the deformation parameter $\bar\phi$. Each order of a two-point function in $\bar\phi$ is determined by the dilatation symmetry up to a constant as
$$ \langle O_0(k) O_0(-k) \rangle = k^{3-2\lambda} \sum_{n=0}^{\infty} A_n \big( -\bar\phi\, k^{-\lambda} \big)^n\,. \tag{3.1} $$
where $A_0$ and $A_1$ are given by (2.4) and the other $A_n$'s are defined in a similar way. They are uniquely determined by the UV CFT data, i.e., the spectrum and the OPE coefficients of the UV CFT. Note also that the $A_n$'s are real constants because $O_0$ is a real scalar. Through the holographic dictionary (2.22), the inflationary power spectrum is given by
$$ P_\zeta(k) = \frac{k^3}{2\pi^2}\, \langle \zeta(k) \zeta(-k) \rangle = \frac{k^{2\lambda}}{2\pi^2} \left[ (-2\lambda^2 \bar\phi^2) \sum_{n=0}^{\infty} A_n \big( -\bar\phi\, k^{-\lambda} \big)^n \right]^{-1}. \tag{3.2} $$
We may compute the corresponding spectral index as
$$ n_s - 1 = \frac{d \ln P_\zeta}{d \ln k} = 2\lambda + \lambda\, \frac{\sum_n n A_n \big( -\bar\phi k^{-\lambda} \big)^n}{\sum_n A_n \big( -\bar\phi k^{-\lambda} \big)^n} = \lambda \left[ 2 + \frac{A_1}{A_0} \big( -\bar\phi k^{-\lambda} \big) + \left( \frac{2A_2}{A_0} - \frac{A_1^2}{A_0^2} \right) \big( -\bar\phi k^{-\lambda} \big)^2 + \ldots \right]. \tag{3.3} $$
Notice that the spectrum is blue tilted, i.e., $n_s - 1 > 0$, in the UV region, because the UV CFT is deformed by a relevant operator ($\lambda > 0$). Therefore we need to go to the IR region to reproduce a red spectrum.
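The expansion in (3.3) can be verified with a short sympy computation, truncating the sum in (3.1) at $n = 2$ for illustration and writing $u = -\bar\phi k^{-\lambda}$, so that $du/d\ln k = -\lambda u$:

```python
import sympy as sp

lam, u, A0, A1, A2 = sp.symbols("lambda u A0 A1 A2", positive=True)
S = A0 + A1*u + A2*u**2   # truncated sum of Eq. (3.1), with u = -phibar*k^(-lambda)

# d ln P_zeta / d ln k with P_zeta ~ k^(2*lambda)/S and du/d(ln k) = -lambda*u
ns_minus_1 = 2*lam + lam*u*sp.diff(sp.log(S), u)

# expansion quoted in Eq. (3.3)
target = lam*(2 + (A1/A0)*u + (2*A2/A0 - A1**2/A0**2)*u**2)
assert sp.simplify(sp.series(ns_minus_1 - target, u, 0, 3).removeO()) == 0
```

The leading term $2\lambda$ is $u$-independent, which makes the blue tilt in the deep UV ($u \to 0$) manifest.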
In the inflationary context it is common to normalize the bispectrum by a factor of the power spectrum squared. By using the dictionary (2.22) and (2.23), we may express the normalized bispectrum in terms of the $O_0$ correlators as
$$ \frac{\langle \zeta(k_1) \zeta(k_2) \zeta(k_3) \rangle}{\langle \zeta(k_1) \zeta(-k_1) \rangle \langle \zeta(k_2) \zeta(-k_2) \rangle} = -\lambda \left[ 1 + \frac{{\rm Re}\, \langle O_0(k_2) O_0(-k_2) \rangle}{{\rm Re}\, \langle O_0(k_3) O_0(-k_3) \rangle} + \frac{{\rm Re}\, \langle O_0(k_1) O_0(-k_1) \rangle}{{\rm Re}\, \langle O_0(k_3) O_0(-k_3) \rangle} \right] + \lambda \bar\phi\, \frac{{\rm Re}\, \langle O_0(k_1) O_0(k_2) O_0(k_3) \rangle}{{\rm Re}\, \langle O_0(k_3) O_0(-k_3) \rangle}\,. \tag{3.4} $$
By a similar argument as for the power spectrum, we may rewrite it in terms of the UV CFT correlators. Later we will do that explicitly for a concrete UV CFT, but in the rest of this subsection we focus on the squeezed limit without specifying the details of the UV CFT. In the squeezed limit $k_1 \ll k_2 = k_3$, the normalized bispectrum (3.4) reduces to the form,
$$ \frac{\langle \zeta(k_1) \zeta(k_2) \zeta(k_3) \rangle}{\langle \zeta(k_1) \zeta(-k_1) \rangle \langle \zeta(k_2) \zeta(-k_2) \rangle} \to -2\lambda + \lambda \bar\phi\, \frac{{\rm Re}\, \langle O_0(0) O_0(k_2) O_0(-k_2) \rangle}{{\rm Re}\, \langle O_0(k_2) O_0(-k_2) \rangle}\,. \tag{3.5} $$
In the inflationary context, the consistency relation, i.e., the Ward-Takahashi identity for the broken de Sitter symmetry, tells us that the shape function in the squeezed limit is related to the spectral index $n_s$ as [6,35]
$$ \frac{\langle \zeta(k_1) \zeta(k_2) \zeta(k_3) \rangle}{\langle \zeta(k_1) \zeta(-k_1) \rangle \langle \zeta(k_2) \zeta(-k_2) \rangle} \to -(n_s - 1)\,, \tag{3.6} $$
where the spectral index is computed at the horizon-crossing time of the modes $k_2 = -k_3$.
Let us reproduce this relation from the QFT side using the expression (3.5). Since the operator $O_0(0)$ with zero momentum is sourced by the deformation parameter $\bar\phi$, as is seen from the deformed action (2.1), we may rewrite the second term of (3.5) as
$$ \lambda \bar\phi\, \frac{{\rm Re}\, \langle O_0(0) O_0(k_2) O_0(-k_2) \rangle}{{\rm Re}\, \langle O_0(k_2) O_0(-k_2) \rangle} = -\lambda \bar\phi\, \frac{\partial_{\bar\phi}\, {\rm Re}\, \langle O_0(k_2) O_0(-k_2) \rangle}{{\rm Re}\, \langle O_0(k_2) O_0(-k_2) \rangle} = -\lambda\, \frac{\sum_n n A_n \big( -\bar\phi k_2^{-\lambda} \big)^n}{\sum_n A_n \big( -\bar\phi k_2^{-\lambda} \big)^n}\,. \tag{3.7} $$
By comparing this with the spectral index in Eq. (3.3) (after the replacement $k \to k_2$), we readily find that the consistency relation (3.6) holds for any inflationary three-point function computed via holography, for any single-field inflation.⁵ It should be noticed that the dilatation symmetry of the UV CFT plays a crucial role in our derivation. As mentioned in the last section, the deformation parameter $\bar\phi$ appears in the two-point function always in the combination $\bar\phi k^{-\lambda}$, as a consequence of the dilatation symmetry. As a result, we may convert a derivative in $\bar\phi$ into one in the momentum $k$,
$$ -\lambda\, \frac{\partial}{\partial \ln \bar\phi} \ln \sum_{n=0}^{\infty} A_n \big( -\bar\phi k^{-\lambda} \big)^n = \frac{\partial}{\partial \ln k} \ln \sum_{n=0}^{\infty} A_n \big( -\bar\phi k^{-\lambda} \big)^n\,, \tag{3.8} $$
to reproduce the consistency relation.
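The identity (3.8) follows purely from the fact that $\bar\phi$ and $k$ enter only through the combination $\bar\phi k^{-\lambda}$; a quick sympy check with a few illustrative coefficients $A_n$ (the truncation at four terms is only for the sketch):

```python
import sympy as sp

k, lam, phib = sp.symbols("k lambda phibar", positive=True)
A = sp.symbols("A0:4", positive=True)   # illustrative coefficients A_0..A_3
S = sum(A[n] * (-phib * k**(-lam))**n for n in range(4))

lhs = -lam * phib * sp.diff(sp.log(S), phib)   # -lambda * d/d(ln phibar)
rhs = k * sp.diff(sp.log(S), k)                #           d/d(ln k)
assert sp.simplify(lhs - rhs) == 0
```

Since each term scales as $(\bar\phi k^{-\lambda})^n$, the logarithmic $\bar\phi$- and $k$-derivatives act identically up to the factor $-\lambda$, term by term.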
3.2 Inflationary setup
In the last subsection we showed that the three-point function in the squeezed limit is governed by the (broken) dilatation symmetry and its behavior does not depend on the details of the model. In contrast, the shape of three-point functions for general momentum configurations highly depends on the inflationary model. In the rest of this paper we specify a concrete UV CFT model and compute the shape function through the dictionary (2.23) and the conformal perturbation theory.
⁵ A similar derivation is given in [15] at the leading order in $\bar\phi$. Our derivation can be thought of as its all-order extension. Another type of holographic derivation may be found in [38,25], where an infinite set of consistency relations [37] was derived (at all orders in $\bar\phi$) based on diffeomorphism invariance of the field theory dual, or equivalently spatial diffeomorphism invariance on the constant time slice.
As we mentioned in the Introduction, we would like to take into account higher derivative operators of general single-field inflation in the holographic framework. To clarify our setup, let us begin by briefly reviewing the slow-roll inflationary models discussed in the holographic context:
$$ S_{\rm s.r.} = \int d^4x\, \sqrt{-g} \left[ \frac{M_{\rm Pl}^2}{2} R - \frac{1}{2} (\partial_\mu \phi)^2 - V_{\rm s.r.}(\phi) \right] \quad \text{with} \quad V_{\rm s.r.}(\phi) = 3 M_{\rm Pl}^2 H_0^2 + \frac{m^2}{2} \phi^2 - \frac{g_3}{3} \phi^3 + O(\phi^4)\,. \tag{3.9} $$
Here the inflaton mass $m$ and the scaling dimension $\Delta = 3 - \lambda$ of the operator $O_0$ dual to the inflaton are related by $m^2 = \lambda(3 - \lambda) H_0^2$. We assume that $\lambda \ll 1$ for the slow-roll property. We also assume that $g_3 \sim 1$ to make the conformal perturbation theory work well throughout the RG flow generated by the deformation of the CFT.⁶ As depicted in Fig. 2, this potential has two extrema,
$$ \phi_{\rm UV} = 0\,, \qquad \phi_{\rm IR} = \frac{3\lambda H_0^2}{g_3} + O(\lambda^2)\,, \tag{3.10} $$
which correspond to the dual UV and IR CFTs, respectively. We may find a time-dependent solution connecting the two extrema,
$$ \phi(t) = \frac{\phi_*}{\, e^{\lambda H_0 (t - t_*)} + \big( 1 - e^{\lambda H_0 (t - t_*)} \big)\, \dfrac{g_3 \phi_*}{3 \lambda H_0^2} \,}\,, \tag{3.11} $$
which satisfies the Hamilton-Jacobi equation for a spatially homogeneous configuration,
$$ \dot\phi \simeq -\lambda H_0\, \phi + \frac{g_3}{3 H_0}\, \phi^2\,, \tag{3.12} $$
in the regime $\lambda \ll 1$. Here we introduced $\phi(t_*) = \phi_*$, with $t_*$ some reference time. The inflaton background evolution (3.11) can then be identified with the RG flow connecting the two CFTs. Also, let us recall that the scalar spectral index is directly related to the scaling dimension of the deformation operator [see, e.g., Eq. (3.3)]. Since the deformation operator is relevant (irrelevant) near the UV (IR) fixed point, the scalar power spectrum is blue (red) tilted in the UV (IR) region [16,20,23].
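One can check directly that the profile (3.11) is an exact solution of the first-order flow equation (3.12) (the flow equation itself holds only to leading order in λ). A sympy verification:

```python
import sympy as sp

t, t0, phi0, H0, lam, g3 = sp.symbols("t t0 phi0 H0 lambda g3", positive=True)

u = sp.exp(lam*H0*(t - t0))                           # e^(lambda H0 (t - t_*))
phi = phi0 / (u + (1 - u) * g3*phi0/(3*lam*H0**2))    # background (3.11)

# flow equation (3.12): phidot = -lambda*H0*phi + g3/(3*H0)*phi^2
residual = sp.diff(phi, t) - (-lam*H0*phi + g3/(3*H0)*phi**2)
assert sp.simplify(residual) == 0
```

At $t \to -\infty$ the solution approaches $\phi_{\rm IR} = 3\lambda H_0^2/g_3$, and at $t \to +\infty$ it approaches $\phi_{\rm UV} = 0$, matching the two fixed points in (3.10).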
Let us now add higher derivative operators to the above slow-roll model. General single-field inflation, the so-called $P(X, \phi)$ model, has an action of the form,
$$ S = \int d^4x\, \sqrt{-g} \left[ \frac{M_{\rm Pl}^2}{2} R + P(X, \phi) \right], \tag{3.13} $$
where $P(X, \phi)$ is an arbitrary function of the inflaton φ and its kinetic operator
$$ X = -\frac{1}{2} (\partial_\mu \phi)^2\,. $$
To discuss the bispectrum associated with higher derivative operators in general single-field inflation, it is reasonable to focus on the following class of $P(X, \phi)$ models:⁷
$$ P(X, \phi) = X + \sum_{m \geq 2} \alpha_m X^m - V_{\rm s.r.}(\phi)\,, \tag{3.14} $$
where $V_{\rm s.r.}(\phi)$ is the slow-roll potential in Eq. (3.9). Although our model (3.14) accommodates the stationary classical solutions $\phi = 0$ and $\phi = \phi_{\rm IR}$, this does not necessarily mean that there exists a classical solution connecting the two extrema (corresponding to the RG flow connecting the two CFTs). Indeed, there is no such flow when the $\alpha_m$'s are big, essentially because the slow-roll potential then makes a subdominant contribution to the inflaton dynamics. In order to discuss a self-contained model with both a red tilted power spectrum and the non-Gaussianities of general single-field inflation, we focus on the following parameter region in this paper: for each $m \geq 2$,
$$ \lambda \ll \alpha_m\, \lambda^{4(m-1)} \ll 1\,. \tag{3.15} $$
As we discuss in Sec. 4, there exists an RG flow connecting the two CFTs in this parameter region.
Also, the sound speed $c_s$ of the scalar perturbation satisfies $\lambda \ll c_s^{-2} - 1 \ll 1$ in the IR region. As a result, the bispectrum associated with the non-trivial sound speed dominates over the slow-roll type one. This is the inflationary setup we consider in this paper. It should be noted that if we are not interested in reproducing a red tilted power spectrum, we do not have to introduce the slow-roll potential. We may also relax the condition (3.15) to realize a small sound speed $c_s \ll 1$ and a large non-Gaussianity $f_{\rm NL} \gg 1$ even in the UV region. However, we focus on our parameter set to reproduce the red tilted spectrum in this paper.
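For orientation, recall the standard $P(X,\phi)$ result $c_s^2 = P_X/(P_X + 2X P_{XX})$ (a textbook relation, not derived in the text here). For the single-term example $P = X + \alpha_2 X^2$, one finds $c_s^{-2} - 1 = 4\alpha_2 X/(1 + 2\alpha_2 X) \approx 2X P_{XX}$ at small $X$, which is the quantity controlled by the window (3.15). A sympy sketch:

```python
import sympy as sp

X, a2 = sp.symbols("X alpha2", positive=True)
P = X + a2*X**2                     # single higher-derivative term of Eq. (3.14)

# standard P(X,phi) sound speed: c_s^2 = P_X / (P_X + 2 X P_XX)
PX, PXX = sp.diff(P, X), sp.diff(P, X, 2)
cs2 = PX / (PX + 2*X*PXX)

expr = sp.simplify(1/cs2 - 1)       # c_s^-2 - 1 = 4*alpha2*X / (1 + 2*alpha2*X)
assert sp.simplify(expr - 4*a2*X/(1 + 2*a2*X)) == 0

# leading small-X behaviour: c_s^-2 - 1 ~ 2 X P_XX as X -> 0
assert sp.limit(expr / (2*X*PXX), X, 0) == 1
```

With the background value $X \sim \lambda^4$ implied by (3.10)-(3.11) (in $H_0 = 1$ units), $c_s^{-2} - 1 \sim \alpha_2 \lambda^4$, so the window (3.15) is exactly the statement $\lambda \ll c_s^{-2} - 1 \ll 1$.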
3.3 UV CFT
We now would like to specify the UV CFT corresponding to the local minimum $\phi = 0$, in order to apply the conformal perturbation theory. As we mentioned in the Introduction, we use the dS/CFT correspondence to specify the hypothetical UV CFT in this paper.
As explained in [6], the dS/CFT correspondence can be related to the AdS/CFT correspondence by an analytic continuation. If we write the Poincaré metrics of dS and Euclidean AdS as
$$ ds^2_{\rm dS} = R_{\rm dS}^2\, \eta^{-2} \big( -d\eta^2 + dx^2 \big)\,, \qquad ds^2_{\rm AdS} = R_{\rm AdS}^2\, z^{-2} \big( dz^2 + dx^2 \big)\,, \tag{3.16} $$
they are related to each other by the analytic continuation⁸
$$ z = -i\eta\,, \qquad R_{\rm AdS} = -i R_{\rm dS}\,, \tag{3.17} $$
where $R_{\rm dS}$ and $R_{\rm AdS}$ are the dS radius and the AdS radius, respectively. In particular, the dS radius is the inverse of the Hubble parameter $H_0$.⁹ Furthermore, we can show that the analytic continuation (3.17) relates the perturbation theory around $\phi = 0$ (corresponding to an exact dS background) to the AdS perturbation theory of the following action:
$$ S_{\rm AdS} = \int d^4x\, \sqrt{G} \left[ \frac{1}{2} (\partial \Phi)^2 + V(\Phi) + \sum_{m \geq 2} \frac{a_m}{2^m} (\partial \Phi)^{2m} \right] \tag{3.18} $$
with the potential $V(\Phi)$ and the couplings $a_m$ of the form,
$$ V(\Phi) = \frac{m^2}{2} \Phi^2 - \frac{g_3}{3} \Phi^3 + O(\Phi^4)\,, \qquad a_m = (-1)^{m+1} \alpha_m\,. \tag{3.19} $$
Here, for simplicity, we neglected gravitational fluctuations, which are not relevant to our computation. We also introduced $G$ and $\Phi$ to denote the AdS metric and the scalar field, respectively, to avoid notational confusion. Just as in the original argument of [6], our UV CFT correlation functions are then computed via the Witten diagrams of this AdS model, followed by the replacement
$$ R_{\rm AdS} = -i R_{\rm dS}\,, \qquad a_m = (-1)^{m+1} \alpha_m\,. \tag{3.20} $$
In the following we will compute the hypothetical UV CFT correlation functions based on this approach. Since the calculation is a bit complicated essentially because of the singularity in the limit λ → 0, we put the details in appendices A and B, and summarize the results and the essence of our computation in the main text. Also for simplicity we mostly set R dS = 1 in the following.
Bispectrum at the UV
We then compute the three-point function of $O_0$ using the conformal perturbation theory for the model (3.14). As we mentioned earlier, in this section we focus on the UV region defined by
$$\bar\phi k^{-\lambda} \ll \lambda\,,\qquad(3.21)$$

Figure 3: The Witten diagram for $\langle O_0(k_1)O_0(k_2)O_0(k_3)O_0(0)^{2m-3}\rangle_{\rm CFT}$. The black line denotes the bulk-boundary propagator $K_{k_i}(z)$. The red line represents the zero-momentum bulk-boundary propagator $K_0(z)$.
where $k$ is a typical momentum scale of correlation functions of our interest. In this UV region, there are two types of leading contributions to the three-point function (3.4). The first contribution comes from the first term of (3.4) and is of zeroth order in $\bar\phi$:
$$-\lambda\left[1 + \left(\frac{k_2}{k_3}\right)^{3-2\lambda} + \left(\frac{k_1}{k_3}\right)^{3-2\lambda}\right]\,,\qquad(3.22)$$
which corresponds to the slow-roll type shape associated with the η parameter [6].
The other contribution comes from the $\alpha_m^1$-terms of the second term of (3.4), corresponding to the Witten diagram in Fig. 3. In this paper, we are interested in the leading order in $\lambda$ according to the parameter regime (3.15). Noting that the integral for this Witten diagram is not singular as $\lambda \to 0$,^10 we can easily evaluate the integral at the leading order in $\lambda$. The result is
$$\lambda\bar\phi\,\frac{{\rm Re}\,\langle O_0(k_1)O_0(k_2)O_0(k_3)\rangle}{{\rm Re}\,\langle O_0(k_3)O_0(-k_3)\rangle} \simeq \frac{4k_1k_2k_3}{k_3^3}\,F\!\left(\frac{k_1}{k_3},\frac{k_2}{k_3}\right)\,,\qquad(3.23)$$
where the shape function F is defined as
$$F\!\left(\frac{k_1}{k_3},\frac{k_2}{k_3}\right) = \left[-\frac{4}{3}X^2P_{XXX}\,\frac{3k_1k_2k_3}{2(k_1+k_2+k_3)^3} + 2XP_{XX}\,G(k_1,k_2,k_3)\right]_{X=(-\lambda\bar\phi k^{-\lambda})^2/2}\,.\qquad(3.24)$$
Here we introduced P XX = ∂ 2 P/∂X 2 and P XXX = ∂ 3 P/∂X 3 . The function G is defined as
$$G(k_1,k_2,k_3) = -\frac{k_1^2k_2^2 + k_1^2k_3^2 + k_2^2k_3^2}{k_1k_2k_3(k_1+k_2+k_3)} + \frac{k_1^2k_2^3 + k_1^2k_3^3 + k_2^2k_3^3 + k_2^2k_1^3 + k_3^2k_1^3 + k_3^2k_2^3}{2k_1k_2k_3(k_1+k_2+k_3)^2} + \frac{k_1^3 + k_2^3 + k_3^3}{8k_1k_2k_3}\,.\qquad(3.25)$$
This result reproduces the shapes associated with the X m coupling computed in [27]. We may observe that the first contribution (3.22) is responsible for the consistency relation, whereas the second contribution (3.23) vanishes in the squeezed limit. Indeed, the inflationary parameters in the UV region are given by 11
$$\epsilon = \frac{1}{2}\left(\lambda\bar\phi k^{-\lambda}\right)^2\,,\qquad(3.26)$$
$$\eta = -2\lambda\,,\qquad(3.27)$$
$$\frac{1}{c_s^2}-1 = \sum_{m\geq2}\frac{m(m-1)}{2^{m-2}}\,\alpha_m\left(\lambda\bar\phi k^{-\lambda}\right)^{2m-2}\,,\qquad(3.28)$$
$$s := \frac{\dot c_s}{Hc_s} = \sum_{m\geq2}\frac{m(m-1)^2}{2^{m-2}}\,\lambda\,\alpha_m\left(\lambda\bar\phi k^{-\lambda}\right)^{2m-2}\,.\qquad(3.29)$$
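The relation between (3.28) and (3.29) can be spot-checked numerically: at leading order the combination $x = \lambda\bar\phi k^{-\lambda}$ evolves as $dx/dN \simeq -\lambda x$, and $s = d\ln c_s/dN$ then reproduces (3.29). Below is a small illustration (the coupling values are arbitrary samples, not fits):

```python
import math

# Illustration: with x = lambda*phibar*k^(-lambda) flowing as dx/dN = -lambda*x,
# the sound-speed series (3.28) implies the s-parameter (3.29) at leading order.
lam = 0.01                    # lambda << 1 (sample)
alphas = {2: 0.3, 3: 0.1}     # sample derivative couplings alpha_m

def cs_inv2(x):
    """1/c_s^2 from (3.28): 1 + sum_m m(m-1)/2^(m-2) * alpha_m * x^(2m-2)."""
    return 1.0 + sum(m * (m - 1) / 2**(m - 2) * a * x**(2 * m - 2)
                     for m, a in alphas.items())

def s_closed_form(x):
    """s from (3.29): sum_m m(m-1)^2/2^(m-2) * lambda * alpha_m * x^(2m-2)."""
    return sum(m * (m - 1)**2 / 2**(m - 2) * lam * a * x**(2 * m - 2)
               for m, a in alphas.items())

# s = d ln c_s / dN = -(1/2) d ln(1/c_s^2) / dN, with one Euler step of dx/dN = -lam*x:
x, dN = 0.05, 1e-6
x_next = x * (1.0 - lam * dN)
s_numeric = -0.5 * (math.log(cs_inv2(x_next)) - math.log(cs_inv2(x))) / dN
assert abs(s_numeric / s_closed_form(x) - 1.0) < 1e-2   # agrees up to O(c_s^{-2}-1)
```

The residual disagreement is of order $c_s^{-2}-1$ itself, consistent with working at leading order in $\alpha_m$.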
We may easily check that our three-point functions are consistent with the inflationary results in [27].
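The squeezed-limit suppression of (3.23) can also be illustrated numerically (a sketch assuming the shape (3.24)-(3.25); $X$, $P_{XX}$, $P_{XXX}$ are arbitrary sample values):

```python
# Illustration: the alpha_m shape 4 k1 k2 k3 / k3^3 * F(k1/k3, k2/k3) vanishes
# in the squeezed limit k1 -> 0 (k2 = k3), while it is O(1) at the equilateral
# configuration. Sample values for X, P_XX, P_XXX below are arbitrary.
def G(k1, k2, k3):
    K = k1 + k2 + k3
    t1 = -(k1**2*k2**2 + k1**2*k3**2 + k2**2*k3**2) / (k1*k2*k3*K)
    t2 = (k1**2*k2**3 + k1**2*k3**3 + k2**2*k3**3
          + k2**2*k1**3 + k3**2*k1**3 + k3**2*k2**3) / (2*k1*k2*k3*K**2)
    t3 = (k1**3 + k2**3 + k3**3) / (8*k1*k2*k3)
    return t1 + t2 + t3

def shape(k1, k2, k3, X=0.01, P_XX=1.0, P_XXX=1.0):
    F = (-4.0/3.0 * X**2 * P_XXX * 3*k1*k2*k3 / (2*(k1 + k2 + k3)**3)
         + 2*X*P_XX * G(k1, k2, k3))
    return 4*k1*k2*k3 / k3**3 * F

equilateral = shape(1.0, 1.0, 1.0)
squeezed = shape(1e-4, 1.0, 1.0)
# the 1/k1 poles of the individual terms in G cancel, so the full shape vanishes:
assert abs(squeezed) < 1e-3 * abs(equilateral)
```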
In the rest of this section, we revisit the shape of bispectrum associated with the higher derivative operators, i.e., the O(α m ) contribution (3.23) from the OPE perspective. In particular we discuss why it vanishes in the squeezed limit. Let us consider the CFT correlation functions of the form,
$$\langle O_0(k_1)O_0(k_2)O_0(k_3)O_0(0)\cdots O_0(0)\rangle_{\rm CFT}\,.\qquad(3.30)$$
As we have discussed, such correlation functions constitute three-point functions of the deformed CFT. In particular, they are relevant to the O(α m ) contribution to the bispectrum. In the momentum space, the conformal partial wave expansion simply reads
$$\langle O_0(k_1)O_0(k_2)O_0(k_3)O_0(0)\cdots O_0(0)\rangle_{\rm CFT} = \sum_I \frac{\langle O_0(k_1)O_0(k_2)O_I(k_3)\rangle_{\rm CFT}\,\langle O_I(k_3)O_0(-k_3)O_0(0)\cdots O_0(0)\rangle_{\rm CFT}}{\langle O_I(k_3)O_I(-k_3)\rangle_{\rm CFT}}\,,\qquad(3.31)$$
where O I 's denote all the primary operators in the CFT. The shape of each term in the summation is determined by the three-point functions,
$\langle O_0(k_1)O_0(k_2)O_I(k_3)\rangle_{\rm CFT}$:
$$\langle O_0(k_1)O_0(k_2)O_I(k_3)\rangle_{\rm CFT} = C_{O_0O_0O_I}\,k_1^{\frac32-\lambda}k_2^{\frac32-\lambda}k_3^{\Delta_I-\frac32}\int_0^\infty dx\,x^{\frac12}\,K_{\frac32-\lambda}(k_1x)\,K_{\frac32-\lambda}(k_2x)\,K_{\Delta_I-\frac32}(k_3x)\,,\qquad(3.32)$$
where $\Delta_I$ is the conformal dimension of $O_I$. In Appendix C, we compute CFT three-point functions with two nearly marginal primary scalars and one primary scalar with conformal dimension close to a non-negative integer $n$. There we find that such three-point functions vanish in the squeezed limit $k_1 \ll k_2 = k_3$ except for the cases $n = 0, 3$. In particular, the bispectrum associated with the derivative couplings is independent of the inflaton cubic coupling $g_3$, so that it will be dominated by the partial waves in which $O_I$ is composite (multi-trace), e.g., operators schematically of the form $O_0\partial^pO_0$ ($p = 0, 1, 2, \dots$).
Since O 0 is nearly marginal, such composite operators have nearly integer dimensions with n ≥ 6, and therefore their partial wave contributions vanish in the squeezed limit. The bispectrum associated with the higher derivative operators X m then vanishes in the squeezed limit as long as this feature survives after summing up all the partial waves. Further progress in obtaining the precise momentum dependence would require a clarification of the OPE coefficients, which we postpone for future work.
Beyond the UV
As described in Introduction, the bulk geometry of our setup is asymptotically de Sitter space both at the early and late time. In the dual QFT point of view, we have an RG flow between an IR CFT and a UV CFT. In [16, 20] it was found that the potential (3.9) has two local extremal points corresponding to those two CFTs, each of which reproduces the spectral tilt. In this section, to discuss inflationary models with a red tilted spectrum (consistent with observations [32]), we extend the holographic computation of the bispectrum to the IR region characterized by $\bar\phi k^{-\lambda} \gg \lambda$, in contrast with (3.21).
Inflationary bispectrum at the IR
For the holographic computation of the bispectrum (3.4), we need the two-point function
$$\langle O_0(k_1)O_0(k_2)\rangle = \sum_{n=0}^\infty\frac{(-\bar\phi)^n}{n!}\,\langle O_0(k_1)O_0(k_2)O_0(0)^n\rangle_{\rm CFT}\,,\qquad(4.2)$$
and the three-point function
$$\langle O_0(k_1)O_0(k_2)O_0(k_3)\rangle = \sum_{n=0}^\infty\frac{(-\bar\phi)^n}{n!}\,\langle O_0(k_1)O_0(k_2)O_0(k_3)O_0(0)^n\rangle_{\rm CFT}\,,\qquad(4.3)$$
in the deformed CFT. Just as we did in the previous section, we will compute these UV CFT correlation functions via the Witten diagrams in AdS$_4$ followed by the replacement (3.20). In the IR region, higher-point CFT correlators become non-negligible, so it seems impossible to obtain the full correlators of the deformed CFT. Nevertheless, it turns out that the leading contributions in our parameter regime are calculable by performing a resummation over $\bar\phi k^{-\lambda}$. The enumeration of the contributing diagrams seems intractable at first sight, but it actually simplifies considerably at the leading order in $\lambda$. In the rest of this subsection we explain these results and their consequences more concretely: we first evaluate the contributions to the three-point function without derivative couplings, which we will call the "non-derivative part" of the three-point function, and then evaluate the contributions at the first order in the derivative couplings, which we will call the "$\alpha_m$-part".
Non-derivative part of the three-point function
Let us first focus on the non-derivative part of the three-point function (4.3). As stated above, this part is contributed by Witten diagrams made up of three bulk-boundary propagators with $k_i$ and an arbitrary number of zero-momentum bulk-boundary propagators, connected by the bulk-bulk propagators and the cubic vertex $g_3$. We then introduce what we call the effective bulk-boundary propagator, schematically shown in Fig. 4. It is essentially a bulk-boundary propagator dressed by zero-momentum propagators through the cubic vertex $g_3$ (see Appendix A for a more precise definition).
Notice that this dressing originates from the operator O 0 (0) in the perturbative expansion (4.3).
All the relevant diagrams are now nicely reformulated into the form of Fig. 5, where three effective bulk-boundary propagators are attached to the cubic coupling g 3 . More explicitly,
$$\langle O_0(k_1)O_0(k_2)O_0(k_3)\rangle \simeq -2g_3\int_0^\infty\frac{dz}{z^4}\,\mathcal K_{k_1}(z)\mathcal K_{k_2}(z)\mathcal K_{k_3}(z)\,,\qquad(4.4)$$
where $\mathcal K_q(z)$ denotes the effective bulk-boundary propagator of momentum $q$. This is exactly the integral expression of $\langle O_0(k_1)O_0(k_2)O_0(k_3)\rangle_{\rm CFT}$ with the three bulk-boundary propagators replaced by the effective ones $\mathcal K_{k_i}(z)$.
Generally speaking, it is quite difficult to evaluate the integral (4.4). However, it turns out to be somewhat tractable under the assumptions $\lambda \ll 1$ and
$$\left(\frac{k_1}{k}\right)^{\lambda} \sim \left(\frac{k_2}{k}\right)^{\lambda} \sim \left(\frac{k_3}{k}\right)^{\lambda} \sim 1\,,\qquad(4.5)$$
where $k$ is a reference momentum scale. Under these assumptions, the non-derivative part turns out to be proportional to
$$f(\bar\phi k^{-\lambda})\left(k_1^{3-3\lambda} + k_2^{3-3\lambda} + k_3^{3-3\lambda}\right)\,,\qquad(4.6)$$
where $f(\bar\phi k^{-\lambda})$ is some function irrelevant to the shape. Actually, because of some combinatorial complications, it seems not straightforward to determine the function $f(\bar\phi k^{-\lambda})$ directly. Instead, in the following, we use the consistency relation (3.6), which we already proved on the CFT side for general setups, to find the explicit form of $f(\bar\phi k^{-\lambda})$.
Let us start from the right hand side of the consistency relation (3.6). The O(α 0 m ) part of the power spectrum and the spectral tilt were already computed in [8,16,20,23], which gives
$$P_\zeta = \frac{1}{4\pi^2\,\beta(\bar g(q))^2} + O(\alpha_m)\,,\qquad n_s - 1 = -2\beta'(\bar g(q)) + O(\alpha_m)\,.\qquad(4.7)$$
Here $\bar g(q)$ and $\beta(\bar g(q))$ are defined by
$$\bar g(q) := \bar\phi q^{-\lambda}\left(1 + \frac{g_3\bar\phi q^{-\lambda}}{3\lambda}\right)^{-1}\,,\qquad(4.8)$$
$$\beta(\bar g(q)) := -\lambda\bar g(q) + \frac{g_3}{3}\bar g(q)^2 = -\lambda\bar\phi q^{-\lambda}\left(1 + \frac{g_3\bar\phi q^{-\lambda}}{3\lambda}\right)^{-2}\,.\qquad(4.9)$$
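The second equality in (4.9) is a short algebraic identity in $x = \bar\phi q^{-\lambda}$; a quick numerical spot check (the values of $\lambda$, $g_3$ are arbitrary samples):

```python
# Check of the second equality in (4.9):
#   -lam*gbar + (g3/3)*gbar^2 == -lam*x*(1 + g3*x/(3*lam))^(-2),  x = phibar*q^(-lam)
lam, g3 = 0.02, 0.5           # sample values
for x in [0.001, 0.01, 0.05, 0.1]:
    u = g3 * x / (3 * lam)
    gbar = x / (1 + u)                        # (4.8)
    beta_def = -lam * gbar + g3 / 3 * gbar**2
    beta_closed = -lam * x / (1 + u)**2       # (4.9), closed form
    assert abs(beta_def - beta_closed) < 1e-15 + 1e-12 * abs(beta_closed)
```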
As discussed in [20], $\bar g(q)$ and $\beta(\bar g)$ may be identified with the coupling of the renormalized operator associated with $O_0$ and the corresponding beta function. Indeed, $\bar g(q)$ flows from the UV to the IR as
$$0 \leq \bar g(q) \leq \frac{3\lambda}{g_3}\,,\qquad(4.10)$$
as expected from our inflationary setup. We may also see that β is at most of the order λ 2 along the flow. On the other hand, the left hand side of Eq. (3.6) in the squeezed limit reads
$$-2\lambda + \lambda\bar\phi\,\frac{{\rm Re}\,\langle O_0(0)O_0(k_2)O_0(-k_2)\rangle}{{\rm Re}\,\langle O_0(k_2)O_0(-k_2)\rangle} = -2\lambda - \frac{2(\lambda\bar\phi k^{-\lambda})^3\,f(\bar\phi k^{-\lambda})}{\beta(\bar g(k))^2}\,.\qquad(4.11)$$

Figure 6: The diagrammatic representation of the integral (4.14). The red double line represents the effective zero-momentum bulk-boundary propagator $\mathcal E(z)$.
Here we used the approximation (4.5) and the two-point function,
$$\langle O_0(q)O_0(-q)\rangle = -\frac{\beta(\bar g(q))^2}{(\lambda\bar\phi)^2}\,q^3 + O(\alpha_m)\,,\qquad(4.12)$$
which follows from the result in [8,16,20,23]. The consistency relation (3.6) then implies
$$f(\bar\phi k^{-\lambda}) = -\left(\lambda + \beta'(\bar g(k))\right)\beta(\bar g(k))^2\left(\lambda\bar\phi k^{-\lambda}\right)^{-3} = -\frac{2g_3}{3\lambda}\left(1 + \frac{g_3\bar\phi k^{-\lambda}}{3\lambda}\right)^{-5}\,.\qquad(4.13)$$

We next evaluate the $\alpha_m$-part of the three-point function, i.e., the contributions at the first order in the derivative couplings. The relevant diagrams are depicted in Fig. 6, and give
$$\langle O_0(k_1)O_0(k_2)O_0(k_3)\rangle \supset \sum_{m\geq2}\frac{2m(2m-2)}{2^m}\,\alpha_m\int_0^\infty dz\,z^{2m-4}\Big[(2m-1)\,\mathcal K'_{k_1}(z)\mathcal K'_{k_2}(z)\mathcal K'_{k_3}(z)\,\mathcal E'(z)^{2m-3} - \mathbf k_1\!\cdot\!\mathbf k_2\,\mathcal K_{k_1}(z)\mathcal K_{k_2}(z)\mathcal K'_{k_3}(z)\,\mathcal E'(z)^{2m-3} + (231) + (312)\Big]\,,\qquad(4.14)$$
where $\mathcal E(z)$ denotes the zero-momentum effective bulk-boundary propagator and primes denote $z$-derivatives. For the derivation, see Appendix A.4. This is exactly the integral expression of $\langle O_0(k_1)O_0(k_2)O_0(k_3)O_0(0)^{2m-3}\rangle_{\rm CFT}$ for each $m$, with the three bulk-boundary propagators replaced by the effective ones $\mathcal K_{k_i}(z)$ and the $(2m-3)$ zero-momentum propagators replaced by the effective one $\mathcal E(z)$.
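The two expressions for $f$ in (4.13) can be confirmed to agree numerically (arbitrary sample parameter values):

```python
# Check that the consistency-relation expression for f in (4.13) coincides with
# the closed form -(2 g3 / (3 lam)) * (1 + g3*x/(3*lam))^(-5).
lam, g3 = 0.02, 0.5           # sample values
for x in [0.001, 0.01, 0.1]:  # x = phibar * k^(-lam)
    u = g3 * x / (3 * lam)
    gbar = x / (1 + u)                         # (4.8)
    beta = -lam * x / (1 + u)**2               # (4.9)
    beta_prime = -lam + 2 * g3 / 3 * gbar      # beta'(gbar)
    f_lhs = -(lam + beta_prime) * beta**2 * (lam * x)**(-3)
    f_rhs = -2 * g3 / (3 * lam) * (1 + u)**(-5)
    assert abs(f_lhs / f_rhs - 1.0) < 1e-10
```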
Surprisingly, it turns out that the assumption $\lambda \ll 1$ drastically simplifies our computation. In Appendix B we show that the following approximations may be used in the integral (4.14):
$$\mathcal K_{k_i}(z) \simeq \left(1 + \frac{g_3\bar\phi k^{-\lambda}}{3\lambda}\right)^{-2}K_{k_i}(z)\,,\qquad(4.15)$$
$$\mathcal K'_{k_i}(z) \simeq \left(1 + \frac{g_3\bar\phi k^{-\lambda}}{3\lambda}\right)^{-2}K'_{k_i}(z)\,,\qquad(4.16)$$
$$\mathcal E'(z) \simeq \left(1 + \frac{g_3\bar\phi k^{-\lambda}}{3\lambda}\right)^{-2}(-\bar\phi)\,K'_0(z)\,,\qquad(4.17)$$
where $K_q(z)$ is the standard (undressed) bulk-boundary propagator,
$$K_q(z) = \frac{q^{\frac32-\lambda}z^{\frac32}}{2^{\frac12-\lambda}\Gamma(\tfrac32-\lambda)}\,K_{\frac32-\lambda}(qz)\,.\qquad(4.18)$$
Interestingly, all the ingredients of the integral (4.14) carry the same $z$-independent prefactor,
$$\left(1 + \frac{g_3\bar\phi k^{-\lambda}}{3\lambda}\right)^{-2} = \frac{\beta(\bar g(k))}{-\lambda\bar\phi k^{-\lambda}}\,,\qquad(4.19)$$
which goes to unity in the UV limit $k \to \infty$. As a result, the shape function takes a form quite similar to the UV results (3.23)-(3.25). By performing the integral (4.14), we have
$$\langle O_0(k_1)O_0(k_2)O_0(k_3)\rangle_{\alpha_m^1} = -\frac{4k_1k_2k_3\,\beta(\bar g(k))^2}{(\lambda\bar\phi)^3}\,F\!\left(\frac{k_1}{k_3},\frac{k_2}{k_3}\right)\,,\qquad(4.20)$$
with the shape function $F(k_1/k_3,k_2/k_3)$ defined by
$$F\!\left(\frac{k_1}{k_3},\frac{k_2}{k_3}\right) = \left[-\frac{4}{3}X^2P_{XXX}\,\frac{3k_1k_2k_3}{2(k_1+k_2+k_3)^3} + 2XP_{XX}\,G(k_1,k_2,k_3)\right]_{X=\beta(\bar g(k))^2/2}\,.\qquad(4.21)$$
Here we used the approximations $\lambda \ll 1$ and (4.5). Also notice that all the scale-dependence appears through $X = \beta(\bar g(k))^2/2$.
Before stating the final result for the inflationary bispectrum, we would like to make some technical comments. After the computation of the integral (4.14), one may wonder why the computation here was simpler than the one for the non-derivative part. In particular, one may wonder why we cannot use the approximations (4.15)-(4.17) for the non-derivative contribution. Actually, the integrals (4.4) and (4.14) have a different behavior around z = 0. First, the leading contribution of the latter integral is regular around z = 0 essentially because α m 's are derivative interactions. We can therefore take the λ → 0 limit of the integrand before performing the integral.
On the other hand, the former is singular as $\lambda \to 0$, so that we need to perform an analytic continuation in $\lambda$. Because of this analytic continuation, we cannot take the $\lambda \to 0$ limit before performing the integral. Essentially, this difference makes the computation in this subsection simpler than that for the non-derivative part. More details will be discussed in Appendix B.
Final result
We now compute the inflationary bispectrum using the obtained deformed QFT correlators. So far, we have computed the three-point function O 0 (k 1 )O 0 (k 2 )O 0 (k 3 ) up to the first order in α m . Also, by evaluating the second integral of the right hand side of (B.27), we may show that the O(α m ) correction to the two-point function O 0 (q)O 0 (−q) scales as ∼ α m λ 4(m−1) , which is subdominant under the condition (3.15). All in all, the leading contribution to the normalized bispectrum is given by
$$\frac{\langle\zeta(k_1)\zeta(k_2)\zeta(k_3)\rangle}{\langle\zeta(k_1)\zeta(-k_1)\rangle\langle\zeta(k_2)\zeta(-k_2)\rangle} = \beta'(\bar g(k))\left[1 + \left(\frac{k_1}{k_3}\right)^3 + \left(\frac{k_2}{k_3}\right)^3\right] + \frac{4k_1k_2k_3}{k_3^3}\,F\!\left(\frac{k_1}{k_3},\frac{k_2}{k_3}\right)\,.\qquad(4.22)$$
Comparison with inflationary results
In this subsection, we compare the result (4.22) from the dual QFT with the result on the inflation side. For this we first relate the coupling $\bar g$ of (4.8) to the inflaton. The coupling $\bar g$ satisfies
$$\frac{d\bar g(k)}{d\ln k} = \beta(\bar g(k)) = -\lambda\bar g(k) + \frac{g_3}{3}\bar g(k)^2\,.\qquad(4.23)$$
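One can verify by finite differences that $\bar g(k)$ of (4.8) indeed solves the flow equation (4.23); a quick numerical illustration (sample values of $\lambda$, $g_3$, $\bar\phi$):

```python
import math

# Finite-difference check that gbar(k) of (4.8) solves (4.23):
#   d gbar / d ln k = -lam*gbar + (g3/3)*gbar^2
lam, g3, phibar = 0.02, 0.5, 0.03   # sample values

def gbar(k):
    x = phibar * k**(-lam)
    return x / (1 + g3 * x / (3 * lam))

for k in [0.1, 1.0, 10.0]:
    h = 1e-6                                   # step in ln k
    lhs = (gbar(k * math.exp(h)) - gbar(k * math.exp(-h))) / (2 * h)
    g = gbar(k)
    rhs = -lam * g + g3 / 3 * g**2
    assert abs(lhs - rhs) < 1e-8
```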
On the inflation side, the time derivative of the inflaton trajectory φ c (t) can be written in terms of the inflaton trajectory φ c itself as 12
$$\dot\phi_c = -\lambda\phi_c + \frac{g_3}{3}\phi_c^2 + O(\alpha_m)\,.\qquad(4.24)$$
This equation holds at any time, especially at the horizon crossing t * (k) satisfying k = aH(t * (k)).
Comparing (4.23) and (4.24), we can identify the running couplingḡ(k) at the leading order in λ with the inflaton at the horizon exit φ c (t * (k)) up to α m -corrections,
$$\phi_c(t_*(k)) = \bar g(k) + O(\alpha_m)\,.\qquad(4.25)$$
Let us apply this identification to the bispectrum directly computed on the inflation side [27]. In our parameter regime, the leading contribution is
$$\frac{\langle\zeta(k_1)\zeta(k_2)\zeta(k_3)\rangle}{\langle\zeta(k_1)\zeta(-k_1)\rangle\langle\zeta(k_2)\zeta(-k_2)\rangle} = \frac{\eta}{2}\left[1 + \left(\frac{k_1}{k_3}\right)^3 + \left(\frac{k_2}{k_3}\right)^3\right] + \frac{4k_1k_2k_3}{k_3^3}\left[\left(\frac{1}{c_s^2}-1-\frac{2\Lambda}{\Sigma}\right)\frac{3k_1k_2k_3}{2(k_1+k_2+k_3)^3} + \left(\frac{1}{c_s^2}-1\right)G(k_1,k_2,k_3)\right]\,,\qquad(4.26)$$
where the parameters $\Lambda$, $\Sigma$ are defined as [27]
$$\Lambda := XP_{XX} + \frac{2}{3}X^2P_{XXX}\,,\qquad \Sigma := P_X + 2XP_{XX}\,.\qquad(4.27)$$
Here these parameters and the sound speed are defined with $X = \frac{1}{2}\dot\phi_c^2$. First, the slow-roll parameter $\eta$ becomes $\eta = 2\beta'$. Next, the remaining parameters become, at the leading order in the derivative couplings,
$$X = \frac{1}{2}\dot\phi_c^2 \simeq \frac{1}{2}\beta(\bar g(k))^2\,,\qquad \frac{1}{c_s^2}-1 = \frac{2XP_{XX}}{P_X} \simeq 2XP_{XX}\,,\qquad \frac{1}{c_s^2}-1-\frac{2\Lambda}{\Sigma} = \frac{2XP_{XX}}{P_X} - \frac{2\left(XP_{XX}+\frac{2}{3}X^2P_{XXX}\right)}{P_X+2XP_{XX}} \simeq -\frac{4}{3}X^2P_{XXX}\,,\qquad(4.28)$$
where $\simeq$ stands for equality up to the leading order in $\alpha_m$. Applying these results to (4.26), we find that it exactly coincides with (4.22), as desired.
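The leading-order relations in (4.28) can be spot-checked numerically, assuming the kinetic function $P(X) = X + \sum_m \alpha_m X^m$ that corresponds (up to potential terms) to the derivative couplings $\frac{\alpha_m}{2^m}(\partial\phi)^{2m}$; the coupling values below are arbitrary small samples:

```python
# Check of (4.28) at first order in the derivative couplings, with the assumed
# form P(X) = X + a2*X^2 + a3*X^3 (only X-derivatives of P matter here).
a2, a3 = 1e-3, 1e-2      # sample small couplings
X = 1e-2                 # sample value of X = phidot^2/2
P_X = 1 + 2*a2*X + 3*a3*X**2
P_XX = 2*a2 + 6*a3*X
P_XXX = 6*a3
Lam = X*P_XX + 2.0/3.0 * X**2 * P_XXX        # (4.27)
Sig = P_X + 2*X*P_XX                          # (4.27)
lhs = 2*X*P_XX/P_X - 2*Lam/Sig                # (1/c_s^2 - 1) - 2*Lam/Sig
rhs = -4.0/3.0 * X**2 * P_XXX                 # leading order in alpha_m
assert abs(lhs - rhs) < 1e-2 * abs(rhs)
# and 1/c_s^2 - 1 = 2*X*P_XX/P_X is 2*X*P_XX at this order:
assert abs(2*X*P_XX/P_X - 2*X*P_XX) < 1e-2 * abs(2*X*P_XX)
```

The mismatch is of second order in the couplings (it is dominated by a $4X^2P_{XX}^2$ term), consistent with working at first order in $\alpha_m$.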
Summary and Outlook
In this paper, we applied conformal perturbation techniques to holographically compute the bispectrum of a generalized single-field inflation model containing derivative couplings. Correlation functions of the reference UV CFT were computed via the dS/CFT correspondence. Using them, we first computed correlation functions of a relevantly deformed CFT perturbatively around the UV fixed point, and obtained a blue tilted power spectrum as well as a bispectrum of the correct shape. Then, by taking into account all-order conformal perturbations to reach the IR region of the RG flow, we reproduced the bispectrum up to first order in the derivative couplings at scales where the power spectrum is red tilted.
We would like to end this paper with several promising future directions. First, our analysis in the present paper was perturbative in the derivative couplings. While our result is exact to all orders in the cubic coupling $g_3$, it is valid only around the parameter region with $c_s^2 \sim 1$. It would be interesting to generalize our findings to the small sound speed case $c_s^2 \ll 1$. Another interesting direction will be to extend our results to quasi single-field inflation [41-43] (which contains massive fields in addition to the inflaton), especially in connection to the "cosmological collider physics" program [44]. The CFT viewpoint will be useful in understanding various soft limit properties of the inflationary correlators. Furthermore, recent developments in higher dimensional CFT, such as the conformal bootstrap approach,^13 may give us additional handles to derive inflationary non-Gaussianities. We expect such technical advances, together with the conformal perturbation theory and holography that we exploited in this paper, will shed new light on the structure of primordial spectra. We hope to report progress in these directions elsewhere in the near future.
A Correlation functions from AdS
This and the next appendices will give a detailed account of computing CFT correlation functions from an AdS action. In this appendix, we give a classical solution to the equation of motion of the AdS action perturbatively, compute their derivatives and use them to write down the correlation functions of our interest. See the appendix B of [20] for a nice review for the canonical kinetic term case. This section is its extension to the case with derivative couplings.
A.1 Review of holographic computations in AdS
The bulk AdS action of our interest is
The bulk AdS action of our interest is
$$S_{\rm AdS} = \int d^4x\,\sqrt{G}\left[\frac{1}{2}(\partial\Phi)^2 + \frac{m^2}{2}\Phi^2 - \frac{g_3}{3}\Phi^3 + \sum_{m\geq2}\frac{a_m}{2^m}(\partial\Phi)^{2m}\right]\,,\qquad({\rm A.1})$$
where $(\partial\Phi)^2 := G^{MN}\partial_M\Phi\partial_N\Phi$ and $G_{MN}$ is the Euclidean four-dimensional AdS metric $ds^2 = R^2_{\rm AdS}z^{-2}(dz^2 + d\mathbf{x}^2)$. The equation of motion of $\Phi$ is
$$-G^{-1/2}\partial_M\!\left(G^{1/2}\partial^M\Phi\right) + m^2\Phi - g_3\Phi^2 - \sum_{m\geq2}\frac{2m\,a_m}{2^m}\,G^{-1/2}\partial_M\!\left(G^{1/2}\partial^M\Phi\,(\partial\Phi)^{2(m-1)}\right) = 0\,.\qquad({\rm A.2})$$
Let us introduce a parameter $\nu$ and formally expand $\Phi$ in $\nu$ as $\Phi = \sum_{n=1}^\infty \nu^n\Phi_n$. Then the equations of motion for $\Phi_n$ read
$$0 = D\Phi_1\,,\qquad({\rm A.3})$$
$$0 = D\Phi_2 - g_3\Phi_1^2\,,\qquad({\rm A.4})$$
$$0 = D\Phi_n - g_3\!\!\sum_{\substack{a+b=n\\ a,b\geq1}}\!\!\Phi_a\Phi_b - \sum_{m\geq2}\frac{2m\,a_m}{2^m}\!\!\sum_{\substack{a+\sum_i(b_i+c_i)=n\\ a,b_i,c_i\geq1}}\!\! G^{-1/2}\partial_M\!\left(G^{1/2}\partial^M\Phi_a\prod_{i=1}^{m-1}\partial_N\Phi_{b_i}\partial^N\Phi_{c_i}\right)\,,\qquad({\rm A.5})$$
where the third equation is for n ≥ 3, and D is the differential operator
$$D\Phi := -G^{-1/2}\partial_M\!\left(G^{1/2}\partial^M\Phi\right) + m^2\Phi\,.\qquad({\rm A.6})$$
We solve the first equation of motion (A.3) by
$$\Phi_1(X) = \int_x K_X(x)\,\varphi_{[0]}(x)\,,\qquad({\rm A.7})$$
where $\varphi_{[0]}(x)$ is the boundary field, $X = (z, \mathbf y)$ denotes a bulk point, and we introduced the bulk-boundary propagator $K_X(x)$, which satisfies $D_XK_X(x) = 0$. We will use its three-dimensional momentum expression
$$K_q(z) := \frac{q^{\frac32-\lambda}z^{\frac32}}{2^{\frac12-\lambda}\Gamma(\tfrac32-\lambda)}\,K_{\frac32-\lambda}(qz)\,,\qquad R^2_{\rm AdS}m^2 = \lambda(\lambda-3)\,,\qquad({\rm A.8})$$
which is defined by
$$K_{(z,\mathbf y)}(\mathbf x) = \int\frac{d^3q}{(2\pi)^3}\,e^{i\mathbf q\cdot(\mathbf x-\mathbf y)}\,K_q(z)\,.$$
Here the normalization factor in K q is fixed such that K q (z) → z λ as z → 0. To solve the remaining equations of motion (A.4) and (A.5), we introduce the Green's function G(z, x; z , x ), called the bulk-bulk propagator, as the solution to the free equation of motion with a delta-function source
$$R^2_{\rm AdS}\,D_z\,G(z,\mathbf x;z',\mathbf x') = z^4\,\delta(z-z')\,\delta^3(\mathbf x-\mathbf x')\,.\qquad({\rm A.9})$$
The overall R 2 AdS on the left hand side is introduced so that the R 2 AdS -dependence only comes from the mass in the form R 2 AdS m 2 . Its three-dimensional momentum space G q (z, z ) reads
$$G_q(z,z') = \begin{cases}(zz')^{3/2}\,K_{\frac32-\lambda}(qz)\,I_{\frac32-\lambda}(qz')\,, & z > z'\,,\\[2pt] (zz')^{3/2}\,I_{\frac32-\lambda}(qz)\,K_{\frac32-\lambda}(qz')\,, & z' > z\,,\end{cases}\qquad({\rm A.10})$$
which is defined by
$$G(z,\mathbf x;z',\mathbf x') = \int\frac{d^3q}{(2\pi)^3}\,e^{i\mathbf q\cdot(\mathbf x-\mathbf x')}\,G_q(z,z')\,.$$
The solutions to the equations of motion (A.4) and (A.5) then read
$$\Phi_{2,X} = g_3R^{-2}_{\rm AdS}\int_{X'}G_{XX'}\,\Phi_{1,X'}\Phi_{1,X'}\,,\qquad({\rm A.11})$$
$$\Phi_{n,X} = g_3R^{-2}_{\rm AdS}\int_{X'}G_{XX'}\!\!\sum_{\substack{a+b=n\\ a,b\geq1}}\!\!\Phi_{a,X'}\Phi_{b,X'} + \sum_{m\geq2}\frac{2m}{2^m}\,a_m\,R^{2(m-5)}_{\rm AdS}\int_{X'}G_{XX'}\!\!\sum_{\substack{a+\sum_i(b_i+c_i)=n\\ a,b_i,c_i\geq1}}\!\! z'^4\,\partial_M\!\left(z'^{2(m-2)}\,\partial^M\Phi_{a,X'}\prod_{i=1}^{m-1}\partial_N\Phi_{b_i,X'}\partial^N\Phi_{c_i,X'}\right)\,,\qquad({\rm A.12})$$
where each of the indices M, N is summed over z, x. 14 Just for notational simplicity, we introduced the abbreviations: for Φ a,X = Φ a (z, x),
$G_{XX'} = G(z,\mathbf x;z',\mathbf x')$, $G^{MN}_z = G^{MN}(z)$, $\partial'_M = \partial/\partial z'^M$ and $\int_{X'} := \int\frac{dz'}{z'^4}\,d^3x'$.
AdS/CFT dictionary To describe the AdS/CFT correspondence precisely, we use the asymptotic behavior of Φ as z → 0, which is given as the sum of two power series
$$\Phi(z) = z^{\lambda}\left(\varphi_{[0]} + z^2\varphi_{[2]} + \cdots\right) + z^{3-\lambda}\left(\varphi_{[3-2\lambda]} + z^2\varphi_{[5-2\lambda]} + \cdots\right)\,.\qquad({\rm A.13})$$
The holographic dictionary is to identify ϕ [3−2λ] with the 1-point function in the presence of the boundary field ϕ [0] up to contact terms [47],
$$\langle O_0\rangle_s = -R^2_{\rm AdS}\,(3-2\lambda)\,\varphi_{[3-2\lambda]}\,.$$
(A.14)
One can extract $\varphi_{[3-2\lambda]}$ from (A.12) by expanding $G_{XX'}$ at $z \to 0$ with $z < z'$,
$$G_q(z,z') = \frac{1}{3-2\lambda}\,K_q(z')\,z^{3-\lambda} + O(z^{5-\lambda})\,,\qquad({\rm A.15})$$
such that the contribution of Φ n,X to O 0 s are obtained by replacing G q (z, z ) by K q (z ) in (A.12).
Correlation functions are obtained by taking derivatives with respect to −ϕ [0]
$$\langle O_0(k_1)\cdots O_0(k_n)\rangle = (-)^{n-1}\left.\frac{\delta^{n-1}\langle O_0(k_1)\rangle_s}{\delta\varphi_{[0]}(-k_2)\cdots\delta\varphi_{[0]}(-k_n)}\right|_{\varphi_{[0]}=0}\,.\qquad({\rm A.16})$$
Note that the correlation functions thus obtained are those of CFT 3 dual to AdS 4 . We can see from (A.7), (A.11) and (A.12) that Φ n contains n source fields ϕ [0] . They can be drawn as
Witten diagrams with the bulk-bulk propagator G and the bulk-boundary propagator K and all the combinatorial factors are contained in Φ n . Correlation functions in our perturbed CFT are expressed as an infinite sum of the CFT correlators, for instance,
$$\langle O_0(k_1)O_0(k_2)\rangle = \sum_{n=0}^\infty\frac{(-\bar\phi)^n}{n!}\,\langle O_0(k_1)O_0(k_2)O_0(0)^n\rangle_{\rm CFT}\,.\qquad({\rm A.17})$$
(^14: Note that the integral measure also involves $R^4_{\rm AdS}$ from $\sqrt G$, which, however, is irrelevant to the flip (3.20).)
Since all UV CFT correlation functions contribute at the same order at the IR, we need n-point functions for general n in which two or three operators have nonzero momenta and others have zero momentum. To evaluate them, we first take the n-th, (n − 1)-th, and (n − 2)-th derivatives of Φ n with respect to the boundary fields with zero momentum. Moreover, we will evaluate the correlation functions to all orders in g 3 and to first order in a m . We will finally find that the two-point and three-point functions can be compactly written in terms of the effective bulk-boundary propagators, which can be depicted as Witten diagrams in Fig. 5 plus Fig. 6. Diagrammatic representation of the effective bulk-boundary propagators will be given in Fig. 4 and 8.
For later convenience, we first decompose Φ n as
$$\Phi_n = \Phi_n^{(g_3)} + \sum_{m\geq2}\Phi_n^{(a_m)}\,.\qquad({\rm A.18})$$
Here $\Phi_n^{(g_3)}$ has no derivative couplings $a_m$, while $\Phi_n^{(a_m)}$ contains only one $a_m$ for each $m$.
Recovering the AdS radius From now on, we will set R AdS = 1 for simplicity. We can recover the R 2 AdS dependence by applying the following replacement
$$g_3 \longrightarrow R^{-2}_{\rm AdS}\,g_3\,,\qquad a_m \longrightarrow R^{2(m-5)}_{\rm AdS}\,a_m\,,\qquad \langle OO\cdots\rangle_{\rm AdS} \longrightarrow R^{2}_{\rm AdS}\,\langle OO\cdots\rangle_{\rm AdS}\,.\qquad({\rm A.19})$$

A.2 Properties of $\Phi^{(g_3)}_n$
Keeping the correlation functions of our interest in mind, we introduce
$$C^{(g_3)}_{n,k_1,k_2}(z) = \frac{2}{(n-2)!\,(2g_3)^{n-1}}\,\frac{\delta^n\Phi_n^{(g_3)}(z,k_3)}{\delta\varphi_{[0]}(-k_1)\,\delta\varphi_{[0]}(-k_2)\,\delta\varphi_{[0]}(0)^{n-2}}\,,\qquad({\rm A.20})$$
$$K^{(g_3)}_{n,k_1}(z) = \frac{1}{(n-1)!\,(2g_3)^{n-1}}\,\frac{\delta^n\Phi_n^{(g_3)}(z,k_2)}{\delta\varphi_{[0]}(-k_1)\,\delta\varphi_{[0]}(0)^{n-1}}\,,\qquad({\rm A.21})$$
$$E^{(g_3)}_n(z) = \frac{1}{n!\,(2g_3)^{n-1}}\,\frac{\delta^n\Phi_n^{(g_3)}(z,0)}{\delta\varphi_{[0]}(0)^{n}}\,,\qquad({\rm A.22})$$
where we have omitted the momentum-conserving delta function $(2\pi)^3\delta(\sum_i k_i)$.^15 Here we make a comment on the meaning of the normalizations above. We will shortly find that the recursion relations for $C^{(g_3)}$, $K^{(g_3)}$, $E^{(g_3)}$ involve no coupling constant $g_3$. This implies that $C^{(g_3)}$, $K^{(g_3)}$, $E^{(g_3)}$ are all independent of $g_3$, so that the normalizations above tell us the exact $g_3$-dependence of the above three derivatives of $\Phi_n^{(g_3)}$. Taking derivatives of (A.7), (A.11) and (A.12), we find the recursion relations for $E^{(g_3)}_n$, $K^{(g_3)}_{n,q}$ and $C^{(g_3)}_{n,k_1,k_2}$; in particular, $C^{(g_3)}_{n,k_1,k_2}(z)$ can be expressed as a sum over partitions $a+b+c=n$ ($a\geq0$, $b,c\geq1$) of these building blocks.

The $a_2$-part of the three-point function is derived from $C^{(a_2)}_{n,k_2,k_3}$, and the two-point function is derived from $K^{(a_2)}_{n,k_1}$,
$$K^{(a_2)}_{n,k_1}(z) := \frac{1}{(n-1)!\,a_2\,(2g_3)^{n-3}}\,\frac{\delta^n\Phi^{(a_2)}_n(z,k_2)}{\delta\varphi_{[0]}(-k_1)\,\delta\varphi_{[0]}(0)^{n-1}}\\
= -\int_{z'}\sum_{\substack{a+b+c+d=n\\ a\geq0,\ b,c,d\geq1}}\partial_MG_{a,k_1}(z,z')\Big[\partial_MK^{(g_3)}_{b,k_1}(z')\,\partial_NE^{(g_3)}_{c}(z')\,\partial_NE^{(g_3)}_{d}(z') + 2\,\partial_ME^{(g_3)}_{b}(z')\,\partial_NE^{(g_3)}_{c}(z')\,\partial_NK^{(g_3)}_{d,k_1}(z')\Big]\\
+ \int_{z'}\sum_{\substack{a+b+c=n\\ a\geq0,\ b,c\geq1}}G_{a,k_1}(z,z')\,E^{(a_2)}_{b}(z')\,K^{(g_3)}_{c,k_1}(z')\,,\qquad({\rm A.33})$$
K (a 2 ) n,k 1 (z) := 1 (n − 1)!a 2 (2g 3 ) n−3 δ n Φ (a 2 ) n (z, k 2 ) δϕ [0] (−k 1 )δϕ [0] (0) n−1 = − z a+b+c+d=n a≥0, b,c,d≥1 ∂ M G a,k 1 (z, z ) ∂ M K (g 3 ) b,k 1 (z )∂ N E (g 3 ) c (z )∂ N E (g 3 ) d (z ) + 2∂ M E (g 3 ) b (z )∂ N E (g 3 ) c (z )∂ N K (g 3 ) d,k 1 (z ) + z a+b+c=n a≥0, b,c≥1 G a,k (z, z )E (a 2 ) b (z )K (g 3 ) c,k 1 (z ), (A.33)
where we introduced $\partial_ME_n = (\partial_zE_n, 0)$ and $\partial_MK_{n,q} = (\partial_z, i\mathbf q)K_{n,q}(z)$, etc., appropriate to the momentum-space representation we are working in. Indices are contracted as usual. We omit the expressions for $E^{(a_2)}_n$ since our results will not involve it at the leading order in $\lambda$.
Based on the above results, we can now define the effective propagators,
$$\mathcal E(z) := \sum_{n=1}^\infty(-2g_3)^{n-1}(-\bar\phi)^n\,E^{(g_3)}_n(z)\,,\qquad({\rm A.34})$$
$$\mathcal K_q(z) := \sum_{n=1}^\infty(2g_3\bar\phi)^{n-1}K^{(g_3)}_{n,q}(z) = \sum_{p=0}^\infty(-2g_3)^p\int_{z_1,\dots,z_p}G_q(z,z_1)\mathcal E(z_1)G_q(z_1,z_2)\cdots\mathcal E(z_p)K_q(z_p)\,,\qquad({\rm A.35})$$
$$\widetilde{\mathcal E}(z) := \sum_{n=1}^\infty(-2g_3)^{n-3}(-\bar\phi)^n\,a_2\,E^{(a_2)}_n(z) = -a_2\int_{z'}\partial_MG_0(z,z')\,\partial_M\mathcal E(z')\,\partial_N\mathcal E(z')\,\partial_N\mathcal E(z')\,,\qquad({\rm A.36})$$
$$\mathcal G_q(z,z') := \sum_{n=0}^\infty(2g_3\bar\phi)^nG_{n,q}(z,z') = \sum_{p=0}^\infty(-2g_3)^p\int_{z_1,\dots,z_p}G_q(z,z_1)\mathcal E(z_1)G_q(z_1,z_2)\cdots\mathcal E(z_p)G_q(z_p,z')\,.\qquad({\rm A.37})$$
We will only use the bulk-boundary effective propagators $\mathcal E$, $\mathcal K_q$ explicitly later. Their diagrammatic representations are given in Fig. 8.

Figure 8: The diagram on the left is the effective zero-momentum bulk-boundary propagator $\mathcal E(z)$. The one on the right is the effective bulk-boundary propagator $\mathcal K_k(z)$ in terms of $\mathcal E$. See also Fig. 4.
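The sums over $p$ in (A.35)-(A.37) are geometric (Born-type) series. As a toy illustration, replacing the $z'$-integrals by a small matrix (all values below are arbitrary stand-ins, not the actual propagators), the alternating series indeed resums to $(1 + 2g_3\,GE)^{-1}K$:

```python
# Toy discretization of the geometric structure in (A.35):
#   sum_p (-2 g3 G E)^p K  ==  (1 + 2 g3 G E)^(-1) K
g3 = 0.3                                      # sample coupling
G = [[0.2, 0.1], [0.1, 0.3]]                  # stand-in "bulk-bulk propagator"
E = [0.4, 0.25]                               # stand-in diagonal "E(z)" insertion
K = [1.0, 0.5]                                # stand-in "bulk-boundary propagator"

def apply_GE(v):                              # v -> G @ diag(E) @ v
    w = [E[j] * v[j] for j in range(2)]
    return [sum(G[i][j] * w[j] for j in range(2)) for i in range(2)]

series, term = list(K), list(K)               # Born series
for _ in range(200):
    term = [-2 * g3 * t for t in apply_GE(term)]
    series = [series[i] + term[i] for i in range(2)]

# Direct solve of (1 + 2 g3 G E) Keff = K (2x2 Cramer's rule)
A = [[(1.0 if i == j else 0.0) + 2 * g3 * G[i][j] * E[j] for j in range(2)]
     for i in range(2)]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Keff = [(A[1][1] * K[0] - A[0][1] * K[1]) / det,
        (A[0][0] * K[1] - A[1][0] * K[0]) / det]
assert all(abs(series[i] - Keff[i]) < 1e-12 for i in range(2))
```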
A.4 Correlation functions at all orders
A.4.1 Two-point function
The full two-point function can be written as
$$\langle O_0(q)O_0(-q)\rangle = \sum_{n=0}^\infty\frac{(-\bar\phi)^n}{n!}\,\langle O_0(q)O_0(-q)O_0(0)^n\rangle_{\rm CFT} = \left[\sum_{n=0}^\infty(2g_3\bar\phi)^n\,K^{(g_3)}_{n+1,q}(z) + \sum_{n=2}^\infty(-\bar\phi)^n(-2g_3)^{n-2}\,a_2\,K^{(a_2)}_{n+1,q}(z)\right]_{[3-2\lambda]}\,.\qquad({\rm A.38})$$
Here the operation $A|_{[3-2\lambda]}$ means picking up the coefficient of the $z^{3-\lambda}$ term in the asymptotic form of $A$ as $z \to 0$ and multiplying it by $(3-2\lambda)R^2_{\rm AdS}$ (practically, replacing the first $G_q(z,z')$ in each integral with $K_q(z')$). Diagrammatically, it means pulling the bulk point with coordinate $z$ to the boundary. The first term of (A.38) is just the effective bulk-boundary propagator with the bulk point pulled to the boundary,
$$\left.\sum_{n=0}^\infty(2g_3\bar\phi)^nK^{(g_3)}_{n+1,q}(z)\right|_{[3-2\lambda]} = \mathcal K_q(z)\big|_{[3-2\lambda]} = \sum_{p\geq0}(-2g_3)^p\int_{z_1,\dots,z_p}K_q(z_1)\mathcal E(z_1)G_q(z_1,z_2)\mathcal E(z_2)\cdots G_q(z_{p-1},z_p)\mathcal E(z_p)K_q(z_p)\,,\qquad({\rm A.39})$$
where the $p=0$ term in the sum is just the CFT two-point function $\langle O_0(k)O_0(-k)\rangle_{\rm CFT}$. We then find
$$\langle O_0(q)O_0(-q)\rangle = \mathcal K_q(z)\big|_{[3-2\lambda]} - a_2\int_{z}\partial_M\mathcal K_q(z)\Big[\partial_M\mathcal K_q(z)\,\partial_N\mathcal E(z)\,\partial_N\mathcal E(z) + 2\,\partial_M\mathcal E(z)\,\partial_N\mathcal E(z)\,\partial_N\mathcal K_q(z)\Big]\\
+ 2g_3a_2\int_{z,z'}\mathcal K_q(z)^2\,\partial_MG_0(z,z')\,\partial_M\mathcal E(z')\,\partial_N\mathcal E(z')\,\partial_N\mathcal E(z')\\
= \mathcal K_q(z)\big|_{[3-2\lambda]} - a_2\int dz\left[3\,\partial_z\mathcal K_q(z)\,\partial_z\mathcal K_q(z)\,\partial_z\mathcal E(z)\,\partial_z\mathcal E(z) - q^2\,\mathcal K_q(z)^2\,\partial_z\mathcal E(z)\,\partial_z\mathcal E(z)\right] + \cdots\,.\qquad({\rm A.40})$$
Here we introduced the notation $\partial_M\mathcal E = (\partial_z\mathcal E, 0)$ and $\partial_M\mathcal K_q = (\partial_z, i\mathbf q)\mathcal K_q(z)$. Indices are contracted as usual. We omitted the $O(g_3a_2)$ term, which turns out to be subleading in $\lambda$.
A.4.2 Three-point function
We begin with the identity
$$\langle O_0(k_1)O_0(k_2)O_0(k_3)\rangle = \sum_{n=0}^\infty\frac{(-\bar\phi)^n}{n!}\,\langle O_0(k_1)O_0(k_2)O_0(k_3)O_0(0)^n\rangle_{\rm CFT} = \left[\sum_{n=0}^\infty\frac{(-\bar\phi)^n(-2g_3)^{n+1}}{2}\,C^{(g_3)}_{n+2,k_2,k_3}(z) + \sum_{n=1}^\infty(-\bar\phi)^n(-2g_3)^{n-1}\,a_2\,C^{(a_2)}_{n+2,k_2,k_3}(z)\right]_{[3-2\lambda]}\,.\qquad({\rm A.41})$$

Using the recursive relations we found, we can rewrite the three-point function (A.41) as
$$\langle O_0(k_1)O_0(k_2)O_0(k_3)\rangle = -2g_3\int_z\mathcal K_{k_1}(z)\mathcal K_{k_2}(z)\mathcal K_{k_3}(z)\\
- 2a_2\int_z\partial_M\mathcal E(z)\Big[\partial_M\mathcal K_{k_1}(z)\,\partial_N\mathcal K_{k_2}(z)\,\partial_N\mathcal K_{k_3}(z) + (231) + (312)\Big]\\
+ (-2g_3)^2\int_{z,z'}\Big[\mathcal E(z)\mathcal K_{k_1}(z)\,G_{k_1}(z,z')\,\mathcal K_{k_2}(z')\mathcal K_{k_3}(z') + (231) + (312)\Big]\\
+ 2g_3a_2\int_{z,z'}\Big[\mathcal K_{k_1}(z)\mathcal K_{k_2}(z)\,\partial_MG_{k_3}(z,z')\Big(\partial_M\mathcal K_{k_3}(z')\,\partial_N\mathcal E(z')\,\partial_N\mathcal E(z') + 2\,\partial_M\mathcal E(z')\,\partial_N\mathcal E(z')\,\partial_N\mathcal K_{k_3}(z')\Big) + (231) + (312)\Big]\qquad({\rm A.42})$$
$$= -2g_3\int\frac{dz}{z^4}\,\mathcal K_{k_1}(z)\mathcal K_{k_2}(z)\mathcal K_{k_3}(z)\qquad({\rm A.43})$$
$$\quad - 2a_2\int dz\Big[3\,\partial_z\mathcal K_{k_1}(z)\,\partial_z\mathcal K_{k_2}(z)\,\partial_z\mathcal K_{k_3}(z)\,\partial_z\mathcal E(z) - \mathbf k_1\!\cdot\!\mathbf k_2\,\mathcal K_{k_1}(z)\mathcal K_{k_2}(z)\,\partial_z\mathcal K_{k_3}(z)\,\partial_z\mathcal E(z) + (231) + (312)\Big]\qquad({\rm A.44})$$
$$\quad + \cdots\,.$$
Here we dropped the other integrals because they turn out to be subleading in $\lambda$, so that only the first two integrals, (A.43) and (A.44), contribute at the leading order.
B Explicit form of the correlation functions at all orders
In this appendix, we evaluate the effective bulk-boundary propagators E and K q explicitly at the leading order in λ. The non-derivative part of the three-point function will also be evaluated.
B.1 Zero-momentum bulk-boundary effective propagator
Let us compute $E^{(g_3)}_n(z)$ at the leading order in $\lambda$. We will encounter the following integral:
$$\int_0^\infty\frac{dz'}{z'^4}\,z'^{a\lambda}\,z'^{b\lambda}\,G_0(z,z')\,,\qquad({\rm B.1})$$
where the zero-momentum bulk-bulk propagator reads
$$G_0(z,z') = \frac{1}{3-2\lambda}\left[z'^{\lambda}z^{3-\lambda}\,\theta(z'-z) + z^{\lambda}z'^{3-\lambda}\,\theta(z-z')\right]\,.\qquad({\rm B.2})$$
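One can check numerically that (B.2) has the defining properties of the zero-momentum Green's function: $z^\lambda$ and $z^{3-\lambda}$ solve the homogeneous equation $Df = 0$ with $R^2_{\rm AdS}m^2 = \lambda(\lambda-3)$, and the jump of $\partial_zG_0$ across $z = z'$ reproduces the $z'^4$ delta-function source in (A.9). A finite-difference sketch (sample $\lambda$):

```python
# z^lam and z^(3-lam) solve D f = -z^4 d/dz(z^-2 df/dz) + lam*(lam-3)*f = 0,
# and the jump of dG0/dz at z = z' equals -z'^2 (delta-function source of (A.9)).
lam = 0.3   # sample value

def D(f, z, h=1e-4):
    """Apply D via nested central differences."""
    def g(zz):                      # z^-2 f'(z)
        return (f(zz + h) - f(zz - h)) / (2 * h) / zz**2
    return -z**4 * (g(z + h) - g(z - h)) / (2 * h) + lam * (lam - 3) * f(z)

for f in (lambda z: z**lam, lambda z: z**(3 - lam)):
    for z0 in (0.5, 1.0, 2.0):
        assert abs(D(f, z0)) < 1e-4          # homogeneous solutions

zp = 1.3
slope_above = lam * zp**(lam - 1) * zp**(3 - lam) / (3 - 2 * lam)   # z > z' branch
slope_below = (3 - lam) * zp**(2 - lam) * zp**lam / (3 - 2 * lam)   # z < z' branch
assert abs((slope_above - slope_below) + zp**2) < 1e-12             # jump = -z'^2
```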
By the step functions in $G_0$, the integral splits into two parts, each of which can be evaluated straightforwardly.^16 The point here is that one $z$-differentiation of the effective zero-momentum propagator $E^{(g_3)}_{n+1}(z)$ yields one power of $\lambda$. By this property, we can see at each order in $\bar\phi$, or equivalently in $g_3$, that the first and second terms on the right hand side of (A.42) always dominate over the third and fourth terms at the leading order in $\lambda$. (^16: The following order estimate can be justified recursively.)
B.2 Effective bulk-boundary propagators of general momentum
Let us compute $K_{n,q}(z)$. We start from $K_{1,q}(z) = K_q(z)$. Applying the recursion relation (A.26) to this, we find
$$K_{2,q}(z) = \int_0^\infty\frac{dz'}{z'^4}\,G_q(z,z')\,E^{(g_3)}_1(z')\,K_q(z')\,.\qquad({\rm B.6})$$
The bulk-to-boundary propagator may be expanded as
$$K_q(z) = q^{-\lambda}\left[(qz)^{\lambda}\left(1 + O\big((qz)^2\big)\right) + \frac{1}{3}(qz)^{3-\lambda}\left(1 + O\big((qz)^2\big)\right)\right]\,.\qquad({\rm B.7})$$
Note that the series expansion is in the dimensionless combination $qz$. Here and in what follows, we will repeatedly use such dimensionless combinations; they make the computations simpler and are also useful for reproducing the correct scale-dependence of the correlation functions.
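The coefficient $1/3$ of the $(qz)^{3-\lambda}$ branch in (B.7) can be traced to the small-argument expansion $K_\nu(x) \simeq \frac12\Gamma(\nu)(x/2)^{-\nu} + \frac12\Gamma(-\nu)(x/2)^{\nu}$ with $\nu = \frac32-\lambda$, combined with the normalization (A.8); the resulting coefficient is $\Gamma(-\nu)/(2^{3-2\lambda}\Gamma(\nu))$, which approaches $1/3$ as $\lambda \to 0$. A quick check:

```python
import math

# Coefficient of the (qz)^(3-lam) branch in (B.7):
#   c(lam) = Gamma(-nu) / (2^(3 - 2*lam) * Gamma(nu)),  nu = 3/2 - lam
def c(lam):
    nu = 1.5 - lam
    return math.gamma(-nu) / (2**(3 - 2 * lam) * math.gamma(nu))

assert abs(c(0.0) - 1.0 / 3.0) < 1e-12
assert abs(c(1e-4) - 1.0 / 3.0) < 1e-3     # smoothly -> 1/3 as lam -> 0
```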
The $z > z'$ part of the bulk-bulk propagator may be expanded as
$$G_q(z,z') = K_q(z)\,I_q(z') = q^{-3+\lambda}\,K_q(z)\,\frac{\Gamma(\tfrac32-\lambda)}{2\Gamma(\tfrac52-\lambda)}\,(qz')^{3-\lambda}\left(1 + O\big((qz')^2\big)\right)\,,\qquad({\rm B.8})$$
where we introduced
$$I_q(z) = 2^{\frac12-\lambda}\,\Gamma(\tfrac32-\lambda)\,(qz)^{\frac32}\,q^{-3+\lambda}\,I_{\frac32-\lambda}(qz)\,.\qquad({\rm B.9})$$
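Note that the prefactor in (B.8) is consistent with (A.15): since $\Gamma(\frac52-\lambda) = (\frac32-\lambda)\Gamma(\frac32-\lambda)$, we have $\Gamma(\frac32-\lambda)/(2\Gamma(\frac52-\lambda)) = 1/(3-2\lambda)$. A one-line numerical confirmation:

```python
import math

# Gamma(3/2 - lam) / (2 * Gamma(5/2 - lam)) == 1/(3 - 2*lam), by the Gamma recurrence
for lam in (0.0, 0.01, 0.3):
    ratio = math.gamma(1.5 - lam) / (2 * math.gamma(2.5 - lam))
    assert abs(ratio - 1.0 / (3 - 2 * lam)) < 1e-14
```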
For $\lambda \ll 1$, we may simplify it as
$$G_q(z,z') \simeq \frac{1}{3}\,q^{-3+\lambda}\,K_q(z)\,(qz')^{3-\lambda}\left(1 + O\big((qz')^2\big)\right)\,.\qquad({\rm B.10})$$
Note that we ignored $\lambda$ in the Gamma functions but kept $(qz')^{3-\lambda}$. This is because the approximation $(qz)^{\lambda} \simeq 1$ is not valid when $\lambda|\ln(qz)| \gtrsim 1$, as explained in the zero-momentum case. On the other hand, the $z' > z$ part may be expanded as
$$G_q(z,z') = q^{-\lambda}\,I_q(z)\left[(qz')^{\lambda}\left(1 + O\big((qz')^2\big)\right) + \frac{1}{3}(qz')^{3-\lambda}\left(1 + O\big((qz')^2\big)\right)\right]\,.\qquad({\rm B.11})$$
The recursion relation for $n = 2$, Eq. (B.6), splits according to the step functions in $G_q$ as
$$K_{2,q}(z) = \left(\int_0^z + \int_z^\infty\right)\frac{dz'}{z'^4}\,G_q(z,z')\,E^{(g_3)}_1(z')\,K_q(z')\,.\qquad({\rm B.12})$$
The first part is
$$\int_0^z\text{-part} \simeq q^{-\lambda}\,K_q(z)\int_0^{qz}\frac{dy}{y^4}\,\frac{y^{3-\lambda}}{3}\left(1 + O(y^2)\right)y^{\lambda}\left[y^{\lambda}\left(1 + O(y^2)\right) + \frac{1}{3}y^{3-\lambda}\left(1 + O(y^2)\right)\right]\,,\qquad({\rm B.13})$$
whose dominant contribution is from $y \sim 0$. We can find the leading order terms in $\lambda$ by collecting the $y^{-1+O(\lambda)}$ terms in the integrand. The remaining part of (B.12) is
$$\int_z^\infty\text{-part} \simeq q^{3-3\lambda}\,I_q(z)\left(\int_0^\infty - \int_0^{qz}\right)\frac{dy}{y^4}\,y^{\lambda}\left[y^{\lambda}\left(1 + O(y^2)\right) + \frac{1}{3}y^{3-\lambda}\left(1 + O(y^2)\right)\right]^2\,.\qquad({\rm B.14})$$
Its dominant contribution is again from $y \sim 0$, since the original integrand is proportional to $y^{\lambda-4}K_q(z')^2$ with $y = qz'$, which damps exponentially as $y \to \infty$. Therefore the leading order terms in $\lambda$ can be found by collecting the $y^{-1+O(\lambda)}$ terms and applying (C.7) to them. Then it turns out that the contributions from $\int_0^\infty$ and $\int_0^{qz}$ differ by an overall factor of $(qz)^{\lambda}$, which is not negligible when $\lambda|\ln(qz)| \gtrsim 1$. Whether it is negligible is determined by whether the $(qz)$-integral to which the effective propagator is applied is regular around $qz = 0$ or not. We will discuss this point for the integrals (A.43) and (A.44), respectively.
B.2.1 Effective propagators in integral (A.43)
Let us consider the case in which the effective propagators are used in the integral (A.43). We first estimate the order in λ. For this, following the technique given in Appendix C, we collect terms of the form z −1+O(λ) , which gives rise to λ −1 . Indeed the integral (A.43) contains z −1+O(λ) . So the λ-expansion of this integral starts from order λ −1 . Since this integrand z −1+O(λ) is singular around z = 0, it is dangerous to set (qz) λ = 1 and thus (B.14) cannot be zero.
To extract the leading order in λ from the integral (B.12), namely (B.13)+(B.14), we collect terms of the form y^{−1+O(λ)} following Appendix C. Then we find

K_{2,q}(z) = q^{−2λ} λ^{−1} [ (2/9) (qz)^{3−λ} − (1/9) (qz)³ + (1/3) (qz)^{2λ} ] + higher order in λ. (B.15)
Using the recursion relation (A.27), we can find the structure of K_{n+1,q}(z) inductively:

K_{n+1,q}(z) = q^{−(n+1)λ} λ^{−n} [ A_n (qz)^{(n+1)λ} + Σ_{l=0}^{n} B_{nl} (qz)^{3+(l−1)λ} ], (B.16)

where A_n and B_{nl} are independent of λ.
Let us evaluate the integral (A.43). This involves three momenta k_1, k_2, k_3. We introduce k = k_1 + k_2 + k_3 as a reference momentum and rescale the integration variable as y = kz. (See also Appendix C.) We can then use the approximation (k_i/k)^λ ≃ 1 by (4.5). At the leading order in λ, one can then show that the integral is proportional to

k_1^{3−3λ} + k_2^{3−3λ} + k_3^{3−3λ}

at all orders in φ̄. The proportionality factor is a function of φ̄ k^{−λ}, which is fixed in Sec. 4.2.
B.2.2 Effective propagators in integral (A.44)
Let us next consider the case in which the effective propagators are used in the integral (A.44).
We first estimate the order in λ. In this case it turns out that the leading contribution in the λ-expansion comes from the terms in the integrand which are regular around z = 0, so we may set (qz)^λ = 1 and thus (B.14) becomes zero. It is then enough to evaluate the [0, z]-part (B.13).
Then, K 2,q (z) becomes
K_{2,q}(z) = ( z^λ / (3λ) ) K_q(z). (B.17)
For K_{n,q}(z) with general n, it is also enough to evaluate the z'-integral over [0, z] in the recursion relation (A.27). We then find

K_{n+1,q}(z) = (n + 1) ( z^λ / (6λ) )^n K_q(z). (B.18)
Let us find explicit forms of 𝒦_q(z) and ℰ(z). Here q is one of the three momenta k_1, k_2, k_3 in the three-point function of our interest. Before taking the infinite sum in the definitions (A.34) and (A.35) of ℰ and 𝒦_q, let us clarify the power structure of 𝒦_q(z) and ℰ(z). As in the non-derivative case, we rescale the integration variable with the reference momentum k as y = kz in (A.43) and (A.44). Then we can see that −φ̄^{−1}ℰ and 𝒦_q are power series of the following form: for each n,

(−φ̄)^n E^{(g_3)}_n ,  (−φ̄)^{n−1} K^{(g_3)}_{n,q} ∝ ( −φ̄ k^{−λ} / λ )^n (1 + O(λ)),

where the approximation (4.5) was applied. Therefore, in the IR scale characterized by φ̄ k^{−λ}/λ = O(1), it is enough to pick up the λ-leading contributions to K_{n,q} and −φ̄^{−1}ℰ:
𝒦_q(z) = ( 1 − g_3 φ̄ z^λ / (3λ) )^{−2} K_q(z), (B.19)
ℰ(z) = (−φ̄) ( 1 − g_3 φ̄ z^λ / (3λ) )^{−1} z^λ. (B.20)
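The step from the tower (B.18) to the closed form (B.19) is a geometric-series resummation: writing x for the combination g_3 φ̄ z^λ/(3λ) (with the remaining numerical factors absorbed into the definitions (A.34)–(A.35)), one uses Σ_{n≥0} (n+1) x^n = (1 − x)^{−2}. A minimal numerical check of that identity (our own illustration; x is a generic parameter with |x| < 1):

```python
# Verify the resummation identity sum_{n>=0} (n+1) x^n = (1 - x)^(-2),
# which turns a tower of terms proportional to (n+1) x^n into the
# squared-inverse structure appearing in (B.19).
def partial_sum(x, nmax):
    return sum((n + 1) * x**n for n in range(nmax + 1))

x = 0.3  # stand-in for g3 * phibar * z^lambda / (3*lambda); needs |x| < 1
closed_form = (1 - x) ** (-2)
assert abs(partial_sum(x, 200) - closed_form) < 1e-12
```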
Since we can use the approximation (kz)^λ = 1 inside the integral (A.44), these forms can be simplified further:

𝒦_q(z) = ( 1 − g_3 φ̄ k^{−λ} / (3λ) )^{−2} K_q(z), (B.21)
ℰ(z) = (−φ̄) ( 1 − g_3 φ̄ k^{−λ} / (3λ) )^{−1} z^λ. (B.22)
Similarly, we may simplify the derivatives 𝒦'_{k_i}(z) and ℰ'(z) as

𝒦'_q(z) = ( 1 − g_3 φ̄ k^{−λ} / (3λ) )^{−2} K'_q(z), (B.23)
ℰ'(z) = (−λφ̄) ( 1 − g_3 φ̄ k^{−λ} / (3λ) )^{−2} z^{λ−1}. (B.24)
B.3 From AdS to dS
We have obtained from the AdS action (A.1) the integral expressions for the full two-point and three-point functions and the explicit forms of the effective propagators. Now, following Sec. 3.3, we convert them into the forms associated with the UV CFT dual to our inflation model (in dS).
To achieve this, we apply the replacement (3.20) after recovering the AdS radius dependence using the rule (A.19). The result is the following:
(i) effective bulk-boundary propagators for (A.44)
𝒦^{dS}_q(z) = ( 1 + g_3 R_{dS}^{−2} φ̄ k^{−λ} / (3λ) )^{−2} K_q(z), (B.25)
ℰ^{dS}(z) = −φ̄ ( 1 + g_3 R_{dS}^{−2} φ̄ k^{−λ} / (3λ) )^{−1} z^λ. (B.26)
Their derivatives corresponding to (B.23) and (B.24) may also be obtained in a similar way.
(ii) integral form of the two-point function (here λ ≪ 1 characterizes the deviation from marginality and integer dimension, and we focus on the leading order in λ of the three-point functions)

⟨O_0(q) O_0(−q)⟩' = −R²_{dS} 𝒦'_q(z)|_{[3−2λ]} + Σ_{m≥2} 2m 2^m α_m (R_{dS})^{2(m−2)} ∫_0^∞ dz z^{2(m−2)} ℰ'(z)^{2(m−1)} [ (2m − 1) 𝒦'_q(z)² − q² 𝒦_q(z)² ]. (B.27)
As we mentioned earlier, a CFT three-point function of primary scalars of dimensions ∆_i (i = 1, 2, 3) can be uniquely fixed by the conformal symmetry as [49]

⟨O_1(k_1) O_2(k_2) O_3(k_3)⟩' = C_{123} k^{∆_1+∆_2+∆_3−6} I_{d/2−1, {∆_1−3/2, ∆_2−3/2, ∆_3−3/2}}(κ_1, κ_2, κ_3) (C.3)
up to the OPE coefficient C_{123}. Here we introduced k = k_1 + k_2 + k_3 and κ_i = k_i/k. We refer to the function I_{α{β_1,β_2,β_3}} as the triple-K integral [48, 49, 50], defined by^{17}
I_{α{β_1,β_2,β_3}}(κ_1, κ_2, κ_3) = κ_1^{β_1} κ_2^{β_2} κ_3^{β_3} ∫_0^∞ dx x^α K_{β_1}(κ_1 x) K_{β_2}(κ_2 x) K_{β_3}(κ_3 x), (C.4)
where K_ν(z) is the modified Bessel function of the second kind. Refs. [49, 50] studied properties of the triple-K integral in detail. Here we give a brief summary of the results we will use. The integral (C.4) converges if the parameters satisfy α + 1 > |β_1| + |β_2| + |β_3| for fixed κ_1, κ_2, κ_3 > 0. Outside this parameter region, we define the integral by its maximal analytic continuation, such that it coincides with the integral evaluated in the convergence region. This integral, however, still has singularities for special parameter sets, defined by α + 1 ± β_1 ± β_2 ± β_3 = −2n for n = 0, 1, ..., where the choice of the three ± signs is arbitrary. In particular, for our parameter set (C.2), the triple-K integral is singular in the limit λ → 0. In the following we compute the triple-K integral for (C.2) at the leading order in λ by applying the discussion in [50].
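Inside the convergence region α + 1 > |β_1| + |β_2| + |β_3|, the definition (C.4) can be checked by direct quadrature. The sketch below is our own illustration, not the paper's parameter set (C.2), which lies outside the region and requires the analytic continuation: for β_i = 1/2 the Bessel functions reduce to K_{1/2}(z) = √(π/(2z)) e^{−z}, so for α = 2 the integral collapses to the closed form π²/(2^{5/2} k_t^{3/2}) with k_t = κ_1 + κ_2 + κ_3.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv  # modified Bessel function of the second kind

def triple_k(alpha, betas, kappas):
    """Direct quadrature of (C.4); valid only for alpha + 1 > sum(|beta_i|)."""
    prefactor = np.prod([k**b for k, b in zip(kappas, betas)])
    integrand = lambda x: x**alpha * np.prod([kv(b, k * x) for k, b in zip(kappas, betas)])
    value, _ = quad(integrand, 0, np.inf)
    return prefactor * value

kappas = (1.0, 0.7, 0.5)
kt = sum(kappas)
numeric = triple_k(2.0, (0.5, 0.5, 0.5), kappas)
exact = np.pi**2 / (2**2.5 * kt**1.5)  # closed form for this special parameter set
assert abs(numeric - exact) < 1e-6
```

The same quadrature diverges for the set (C.2) at small λ, which is precisely why the recipe below extracts the λ → 0 singularity analytically instead.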
C.1 General results
By using the series expansion of the Bessel K function,

K_ν(z) = Σ_{j=0}^∞ [ a^−_j(ν) z^{−ν+2j} + a^+_j(ν) z^{ν+2j} ]  with  a^±_j(ν) := (−1)^j Γ(∓ν − j) / ( 2^{±ν+2j+1} j! ), (C.5)

we notice that the integrand of the triple-K integral of our interest contains terms schematically of the form
∼ Σ_{j=0}^∞ x^{−1±n+2j+O(λ)},  Σ_{j=0}^∞ x^{−1±(n−3)+2j+O(λ)},  Σ_{j=0}^∞ x^{−1−(n−6)+2j+O(λ)}. (C.6)
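The expansion (C.5) is easy to verify against a library implementation of K_ν, which is a useful check that the coefficients a^±_j(ν) are normalized as stated. The helper below is ours; ν must be non-integer, since Γ(∓ν − j) has poles at non-positive integers:

```python
import math
from scipy.special import kv

def kv_series(nu, z, jmax=6):
    """Truncated small-z series (C.5): sum_j a^-_j(nu) z^(-nu+2j) + a^+_j(nu) z^(nu+2j)."""
    def a(j, sign):  # sign = -1 gives a^-_j, sign = +1 gives a^+_j
        return (-1)**j * math.gamma(-sign * nu - j) / (2**(sign * nu + 2 * j + 1) * math.factorial(j))
    return sum(a(j, -1) * z**(-nu + 2 * j) + a(j, +1) * z**(nu + 2 * j) for j in range(jmax + 1))

nu, z = 1.5 - 0.05, 0.1  # beta = 3/2 - lambda with lambda = 0.05, as in (C.2)
assert abs(kv_series(nu, z) - kv(nu, z)) / kv(nu, z) < 1e-10
```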
Here for notational simplicity we omitted the coefficients of each term, which are of O(λ⁰). An important point is that x^{−1+O(λ)}-terms always appear when n ≥ 0, i.e., when O_3 has a non-negative dimension close to an integer. Since ∫ dx x^{−1+αλ} = x^{αλ}/(αλ), one may naively expect that the singularity of the triple-K integral in the λ → 0 limit is associated with those x^{−1+O(λ)}-terms. Indeed, Ref. [50] showed that, by virtue of the maximal analytic continuation, the singularity in this limit arises only from the integration of the x^{−1+O(λ)}-terms over a region around x = 0. Based on the argument in [50], we apply the following recipe to compute the triple-K integral at the leading order in λ:
1. Expand the integrand of the triple-K integral (C.4) in x using (C.5).
2. Then, collect the x^{−1+O(λ)}-terms and integrate them over a region around x = 0 using

∫_0^µ dx x^{−1+αλ} = µ^{αλ}/(αλ) = ( 1/(αλ) ) [ 1 + O(αλ ln µ) ], (C.7)

where µ is the cutoff satisfying the condition |αλ ln µ| ≪ 1.
This procedure gives us the leading contribution to the triple-K integrals in the λ expansion.
Now it is straightforward to compute the triple-K integral. Below, we first explain the computation for the n = 0 case in detail, and then present the results for general n.
Detailed explanation for n = 0: We start from a detailed explanation for the n = 0 case. In this case, the integrand has three types of x^{−1+O(λ)}-terms:

x^{−1−(v_1+v_2+v_3)λ} a^−_0(3/2 + v_1λ) a^−_0(3/2 + v_2λ) a^−_0(−3/2 + v_3λ)
+ (κ_1)^{3+2v_1λ} (κ_3)^{−3+2v_3λ} x^{−1+(v_1−v_2+v_3)λ} a^+_0(3/2 + v_1λ) a^−_0(3/2 + v_2λ) a^+_0(−3/2 + v_3λ)
+ (κ_2)^{3+2v_2λ} (κ_3)^{−3+2v_3λ} x^{−1+(−v_1+v_2+v_3)λ} a^−_0(3/2 + v_1λ) a^+_0(3/2 + v_2λ) a^+_0(−3/2 + v_3λ). (C.8)

By applying Eq. (C.7), we find

I_{1/2, {3/2+v_1λ, 3/2+v_2λ, −3/2+v_3λ}}(κ_1, κ_2, κ_3)
= a^−_0(3/2) a^−_0(3/2) a^−_0(−3/2) / ( −(v_1+v_2+v_3)λ ) + (κ_1/κ_3)³ a^+_0(3/2) a^−_0(3/2) a^+_0(−3/2) / ( (v_1−v_2+v_3)λ ) + (κ_2/κ_3)³ a^−_0(3/2) a^+_0(3/2) a^+_0(−3/2) / ( (−v_1+v_2+v_3)λ ) + O(λ⁰)
= ( π^{3/2} / (6√2 λ) ) [ 1/( −(v_1+v_2+v_3) ) + (κ_1/κ_3)³/( v_1−v_2+v_3 ) + (κ_2/κ_3)³/( −v_1+v_2+v_3 ) ] + O(λ⁰). (C.9)
Here we have used (κ_i)^λ = 1 + λ ln κ_i + ... = 1 + O(λ). This approximation is applicable as long as |λ ln(k_i/k_t)| ≪ 1, which is indeed the case by (4.5) in our inflationary discussions.
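The overall constant π^{3/2}/(6√2) in (C.9) follows from the j = 0 coefficients of (C.5): with a^±_0(ν) = Γ(∓ν)/2^{±ν+1}, the different arrangements appearing in the three terms reduce to the same number. A short sympy verification (the helper name is ours):

```python
import sympy as sp

def a0(nu, sign):
    """j = 0 coefficient of (C.5): a^{sign}_0(nu) = Gamma(-sign*nu) / 2^(sign*nu + 1)."""
    return sp.gamma(-sign * nu) / 2**(sign * nu + 1)

half3 = sp.Rational(3, 2)
target = sp.pi**sp.Rational(3, 2) / (6 * sp.sqrt(2))

# a^-_0(3/2)^2 a^-_0(-3/2): coefficient of the first term in (C.9)
t1 = a0(half3, -1)**2 * a0(-half3, -1)
# a^+_0(3/2) a^-_0(3/2) a^+_0(-3/2): coefficient of the second term
t2 = a0(half3, 1) * a0(half3, -1) * a0(-half3, 1)

assert sp.simplify(t1 - target) == 0
assert sp.simplify(t2 - target) == 0
```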
Three-point functions for n = 1, 2, 3: In a similar manner we may compute the integral for n = 1, 2, 3. In particular, for n = 3,

I_{1/2, {3/2+v_1λ, 3/2+v_2λ, 3/2+v_3λ}}(κ_1, κ_2, κ_3) = ( π^{3/2} / (6√2 λ) ) [ (κ_3)³/( −v_1−v_2+v_3 ) + (κ_2)³/( −v_1+v_2−v_3 ) + (κ_1)³/( v_1−v_2−v_3 ) ] + O(λ⁰). (C.12)
In particular, when v_1 = v_2 = v_3 = −1, i.e., ∆_1 = ∆_2 = ∆_3 = 3 − λ, it reduces to the form

I_{1/2, {3/2−λ, 3/2−λ, 3/2−λ}}(κ_1, κ_2, κ_3) = ( π^{3/2} / (6√2 λ) ) [ (κ_1)³ + (κ_2)³ + (κ_3)³ ] + O(λ⁰). (C.13)
Figure 1: A sketch of the bulk and the dual CFT description of the cosmic evolution. From the bulk perspective (right figure), the inflaton starts from the grey ball on the top of the potential and rolls all the way down to the bottom of the potential. From the CFT perspective (left figure), such an evolution is identified with an RG flow connecting two conformal fixed points.
where the correlators of ζ on the left hand side are defined by ⟨...⟩ = ∫ [dζ] (...) |Ψ[ζ]|².
Figure 2: Illustration of the slow-roll potential V_{s.r.}(φ).
(n_s − 1)|_{UV} = 2λ,  (n_s − 1)|_{IR} = −2λ + ... .
Our goal is to evaluate all the CFT correlation functions in the expansions (4.2) and (4.3) at the leading order in λ, at all orders in g_3 and up to the first order in the derivative couplings, following the strategy given in Sec. 3.3. Namely, starting from the AdS action (3.18), we 1) evaluate all Witten diagrams for the CFT correlation functions in (4.2) and (4.3) in our parameter regime, 2) substitute the results of Step 1) back into the expansions (4.2) and (4.3), and carry out the summations keeping the condition φ̄ k^{−λ} ≳ λ in mind, 3) finally apply the replacement (3.20) to obtain correlation functions of the perturbed CFT.
in our parameter region stated in Sec. 3.2. First, we only need to take into account the cubic coupling g_3 out of the slow-roll potential because it dominates over the higher-point couplings. Second, diagrams with two or more derivative couplings are subleading thanks to the conditions (3.15) on the derivative couplings. These conditions on the model parameters enable us to complete the computation of the relevant diagrams. The detail of Step 1) is given in Appendix A. Then, in Step 2), we obtain integral representations of the full two-point and three-point functions after the summations. Remarkably, these integral expressions have very compact structures in terms of what we call "effective bulk-boundary propagators". As explained in more detail shortly, they are bulk-boundary propagators "dressed" with zero-momentum propagators associated with the operator O_0(0). The result of Step 3) is given in Appendix B.3.
Figure 4: The effective bulk-boundary propagator essentially resums all contributions of zero-momentum legs (indicated in red). When the bulk point is pulled to the boundary according to (A.15), it is exactly the two-point function of the perturbed CFT.
Figure 5: The diagrammatic representation of the integral (4.4). The black double line denotes the effective bulk-boundary propagator 𝒦_{k_i}(z).
introduced k := k 1 + k 2 + k 3 . From an inflationary point of view, the first assumption corresponds to the slow-roll condition as we mentioned earlier. On the other hand, the second one is essentially equivalent to the assumption that the slow-roll parameters are approximately the same at the time of horizon crossing of each mode k i . In Appendix B.2.1, we find that the integral (4.4) takes the form,
α_m-part of the three-point function: We next move on to the α_m-part of the three-point function (4.3). This part receives contributions from all diagrams made up of three bulk-boundary propagators with momenta k_i and an arbitrary number of zero-momentum bulk-boundary propagators, connected by the bulk-bulk propagators, the cubic vertex g_3 and only one derivative coupling. Let us look at the diagrams with one α_m. Similarly to the non-derivative case, all the relevant diagrams are nicely reformulated in terms of effective propagators as in Fig. 6. Here, the new ingredient is the effective zero-momentum bulk-boundary propagator, i.e., the red double line in the figure. It is again dressed by zero-momentum propagators with the cubic vertex g_3 (see Appendix A for more details). More explicitly, the integral representation of the α_m-contribution to the three-point function (4.3) is given by
Acknowledgement
We thank Junyu Liu for initial collaboration during his internship at the Hong Kong University of Science and Technology. We thank Ignatios Antoniadis, Auttakit Chatrabhuti, Xingang Chen, Oleg Evnin, Hongliang Jiang, Juan Maldacena, Razieh Emami Meibody, Yutaka Ookouchi, Yuki Sato, Shigeki Sugimoto, Henry Tye and Yi Wang for useful discussions. This work is supported in part by the Research Grants Council (RGC) of Hong Kong through grants HKUST4/CRF/13G and 16304414. HI is supported by the "CUniverse" research promotion project by Chulalongkorn University (grant reference CUAASC). GS is supported in part by the DOE grant DE-FG-02-95ER40896 and the Kellett Award of the University of Wisconsin. SZ is supported by the Hong Kong PhD Fellowship Scheme (HKPFS) issued by the RGC of Hong Kong. We also thank the Yukawa Institute for Theoretical Physics at Kyoto University for hospitality during the workshop YITP-W-16-05 "Strings and Fields 2016", where HI presented the results in this paper. He benefited from the fruitful discussion there.
15 More concretely, for (A.20), it holds that k_1 + k_2 + k_3 = 0, and for (A.20), it holds that k_1 + k_2 = 0.
Figure 8: The diagram on the left is the Witten diagram of the effective zero-momentum bulk-boundary propagator.

We derive AdS_4 integral representations of the two-point and three-point functions using the derivatives of the classical solutions found in the last appendix. In what follows, CFT correlation functions in Sec. A.4.1, A.4.2, B.1 and B.2 are all for the CFT dual to AdS, while those defined with the CFT dual to dS only appear in Sec. B.3.
The first and second integrals (A.43), (A.44) contribute to the three-point function. The first integral (A.43) comes from (A.30), while the second comes from the first integral of (A.32).
has an integrand z'^{(a+b+1)λ−4}, so that the integral is convergent around z' ∼ ∞. Also it is regular at λ = 0. On the other hand, the [0, z] part has integrand z'^{−1+(a+b−1)λ}, and the leading order of the integral is of O(λ^{−1}). Thus the integral becomes

∫_0^∞ (dz'/z'^4) z'^{aλ} z'^{bλ} G_0(z, z') = z^{(a+b)λ} / ( 3(a+b−1)λ ) + ..., (B.3)

where the dots stand for higher order terms in λ. Here one would wonder if we may approximate z^{(a+b)λ} ≃ 1 at the leading order in λ. However, this approximation breaks down when λ|ln z| ≳ 1 (indeed, this happens for very small z in the integral region [0, ∞)). We therefore keep z^{(a+b)λ} without reducing it to 1. This remark applies to the computations in what follows. With the explicit forms for n = 1 and n ...

Order estimate of the integral (A.42) for the three-point function: Using this explicit form (B.5), we can estimate the λ-order of the integral (A.42) for the three-point function to show that (A.43) and (A.44) dominate.
(g_3/3)^n for each n in order to extract the λ-leading contributions to (A.34) and (A.35). Substituting (B.5) and (B.18) into the definitions (A.34) and (A.35), we find
(iii) integral form of the three-point function

⟨O_0(k_1) O_0(k_2) O_0(k_3)⟩' = ... ℰ(z)^{2m−3} [ (2m − 1) K_{k_1}(z) K_{k_2}(z) K_{k_3}(z) − (k_1 · k_2) K_{k_1}(z) K_{k_2}(z) K_{k_3}(z) ] + (231) + (312). (B.28)

C Three-point functions with two nearly marginal scalars

In this appendix we compute CFT three-point functions

⟨O_1(k_1) O_2(k_2) O_3(k_3)⟩'_CFT (C.1)

of two nearly marginal scalars, O_1 and O_2, and one scalar O_3 with a conformal dimension close to a non-negative integer n. More explicitly, we parametrize the dimension ∆_i of O_i as

∆_1 = 3 + v_1λ ,  ∆_2 = 3 + v_2λ ,  ∆_3 = n + v_3λ , (C.2)
I_{1/2, {3/2+v_1λ, 3/2+v_2λ, −1/2+v_3λ}}(κ_1, κ_2, κ_3) = ( π^{3/2} / ( 4√2 (v_1+v_2−v_3)λ ) ) [ (κ_1)² + (κ_2)² − (κ_3)² ] / κ_3 + O(λ⁰). (C.10)
which is uniquely fixed by the conformal symmetry up to an overall OPE coefficient. For example, when O I is a primary scalar, it takes the form,
Our parameter Λ was denoted by λ in[27]. We use the capital letter Λ because λ is reserved for the anomalous conformal dimension of the deformation operator O 0 , which is one of the key parameters appearing recurrently in this paper. See the next section for its definition.
For a correlation function in momentum space we will put a prime on the bracket ⟨...⟩, which means dropping the factor (2π)³ δ³(Σ k_i), while for a correlation function in position space we will not put a prime on the bracket.
More precisely, we assume that the mass term and the cubic interaction dominate over the other terms throughout the flow.
In other words, we assume P Xφ = 0 (for any φ and X) for simplicity. Here and in what follows we use the notation such as P Xφ = ∂ 2 P/∂X∂φ.
The dS/CFT correspondence and AdS/CFT correspondence can also be related by an analytic continuation of the Planck mass[10]. The relation of these two approaches was discussed, e.g., in[24].9 The Hubble parameter of the exact dS (dual to the UV CFT) is given by H 0 introduced in Eq. (3.9), whereas the Hubble parameter H(t) during inflation is time-dependent and deviates from H 0 because of the inflaton background.
8k_1 k_2 k_3. (3.25)

10 See also the comment at the end of Sec. 4.1.3.
Our convention for the slow-roll parameters is ε = −Ḣ/H² and η = ε̇/(εH), where H(t) is the Hubble parameter.
This is derived by writing φ̇_c in terms of φ_c. See [20] for the detail for a general slow-roll potential.
See, e.g.,[45,46] for application of the conformal bootstrap approach to non-unitary CFTs, which will be relevant in the context of dS/CFT.
Note that the triple-K integrals take the same form, up to an overall numerical constant, as the three-point correlation function computed with the Witten diagrams in (d + 1)-dimensional AdS spacetime, in which the coefficient C 123 is determined from the bulk action in AdS d+1 , and the x-integral in (C.4) corresponds to the integral in the radial direction of AdS d+1 .
General expression for n ≥ 4: It is not difficult to derive the general expression for n ≥ 4. For an even n ≥ 4, we have ... The index n is the number of the zero-momentum bulk-boundary propagators. The recursion relations for K: in terms of E^{(g_3)}, it reads

K^{(g_3)}_{n,k}(z) = Σ_{p≥0} Σ_{n_1+...+n_p = n−1, 1 ≤ n_1,...,n_p ≤ n−1} ...

The summation symbol Σ_{p≥0} Σ_{n_1+...+n_p = n−1, 1 ≤ n_1,...,n_p ≤ n−1} means the sum over all possible decompositions of n − 1 into a sum of integers no less than 1. In the n = 1 case, this decomposition does not exist, namely p = 0, consistent with (A.25). Diagrammatically, it simply means attaching (n − 1) zero-momentum legs to the bulk-boundary propagator using the g_3 vertex. The recursion relations for C: ... where we defined for n ≥ 0

G_{n,k}(z, z') := Σ_{p≥0} Σ_{n_1+...+n_p = n, 1 ≤ n_1,...,n_p ≤ n} ...

where G_{0,k} is equal to G_k. The following figure shows this definition diagrammatically: ... In a similar manner, going from Φ to general m is straightforward, though the computations will be a bit more complicated. For instance, the three-point function is derived from

C^{(a_2)}_{n,k_1,k_2,d,k_1,k_2}(z') + ∫ dz' Σ_{a+b+c=n, a≥0, b,c≥1} G_{a,k_1+k_2}(z, z') K ...
J. M. Maldacena, "The Large N limit of superconformal field theories and supergravity," Int. J. Theor. Phys. 38, 1113 (1999) [Adv. Theor. Math. Phys. 2, 231 (1998)] [hep-th/9711200].
A. Strominger, "The dS/CFT correspondence," JHEP 0110, 034 (2001) [hep-th/0106113].
Quantum gravity in de Sitter space. E Witten, hep-th/0106109E. Witten, "Quantum gravity in de Sitter space," hep-th/0106109.
Inflation and the dS/CFT correspondence. A Strominger, hep- th/0110087JHEP. 011149A. Strominger, "Inflation and the dS/CFT correspondence," JHEP 0111, 049 (2001) [hep- th/0110087].
De Sitter holography and the cosmic microwave background. F Larsen, J P Van Der Schaar, R G Leigh, hep-th/0202127JHEP. 020447F. Larsen, J. P. van der Schaar and R. G. Leigh, "De Sitter holography and the cosmic microwave background," JHEP 0204, 047 (2002) [hep-th/0202127].
Non-Gaussian features of primordial fluctuations in single field inflationary models. J M Maldacena, astro-ph/0210603JHEP. 030513J. M. Maldacena, "Non-Gaussian features of primordial fluctuations in single field inflationary mod- els," JHEP 0305, 013 (2003) [astro-ph/0210603].
Inflation and de Sitter holography. F Larsen, R Mcnees, hep- th/0307026JHEP. 030751F. Larsen and R. McNees, "Inflation and de Sitter holography," JHEP 0307, 051 (2003) [hep- th/0307026].
Inflationary perturbations from deformed CFT. J P Van Der Schaar, hep-th/0307271JHEP. 040170J. P. van der Schaar, "Inflationary perturbations from deformed CFT," JHEP 0401, 070 (2004) [hep-th/0307271].
Non-Gaussian Inflationary Perturbations from the dS/CFT Correspondence. D Seery, J E Lidsey, astro-ph/0604209JCAP. 0606D. Seery and J. E. Lidsey, "Non-Gaussian Inflationary Perturbations from the dS/CFT Correspon- dence," JCAP 0606, 001 (2006) [astro-ph/0604209].
Holography for Cosmology. P Mcfadden, K Skenderis, arXiv:0907.5542Phys. Rev. D. 8121301hep-thP. McFadden and K. Skenderis, "Holography for Cosmology," Phys. Rev. D 81 (2010) 021301 [arXiv:0907.5542 [hep-th]].
The Holographic Universe. P Mcfadden, K Skenderis, arXiv:1001.2007J. Phys. Conf. Ser. 22212007hep-thP. McFadden and K. Skenderis, "The Holographic Universe," J. Phys. Conf. Ser. 222, 012007 (2010) [arXiv:1001.2007 [hep-th]].
Higher Spin Realization of the dS/CFT Correspondence. D Anninos, T Hartman, A Strominger, arXiv:1108.5735hep-thD. Anninos, T. Hartman and A. Strominger, "Higher Spin Realization of the dS/CFT Correspon- dence," arXiv:1108.5735 [hep-th].
Holographic Non-Gaussianity. P Mcfadden, K Skenderis, arXiv:1011.0452JCAP 1105. 13hep-thP. McFadden and K. Skenderis, "Holographic Non-Gaussianity," JCAP 1105, 013 (2011) [arXiv:1011.0452 [hep-th]].
Holographic predictions for cosmological 3-point functions. A Bzowski, P Mcfadden, K Skenderis, arXiv:1112.1967JHEP. 120391hep-thA. Bzowski, P. McFadden and K. Skenderis, "Holographic predictions for cosmological 3-point func- tions," JHEP 1203 (2012) 091 [arXiv:1112.1967 [hep-th]].
Dual description of a 4d cosmology. M Smolkin, N Turok, arXiv:1211.1322hep-thM. Smolkin and N. Turok, "Dual description of a 4d cosmology," arXiv:1211.1322 [hep-th].
Consistency condition for inflation from (broken) conformal symmetry. K Schalm, G Shiu, T Van Der Aalst, arXiv:1211.2157JCAP. 13035hep-thK. Schalm, G. Shiu and T. van der Aalst, "Consistency condition for inflation from (broken) conformal symmetry," JCAP 1303, 005 (2013) [arXiv:1211.2157 [hep-th]].
Holography for inflation using conformal perturbation theory. A Bzowski, P Mcfadden, K Skenderis, arXiv:1211.4550JHEP. 130447hep-thA. Bzowski, P. McFadden and K. Skenderis, "Holography for inflation using conformal perturbation theory," JHEP 1304 (2013) 047 [arXiv:1211.4550 [hep-th]].
CMB from CFT. I Mata, S Raju, S Trivedi, arXiv:1211.5482JHEP. 130715hepthI. Mata, S. Raju and S. Trivedi, "CMB from CFT," JHEP 1307 (2013) 015 [arXiv:1211.5482 [hep- th]].
Inflation and deformation of conformal field theory. J Garriga, Y Urakawa, arXiv:1303.5997JCAP. 130733hep-thJ. Garriga and Y. Urakawa, "Inflation and deformation of conformal field theory," JCAP 1307 (2013) 033 [arXiv:1303.5997 [hep-th]].
Higher Spin de Sitter Holography from Functional Determinants. D Anninos, F Denef, G Konstantinidis, E Shaghoulian, arXiv:1305.6321JHEP. 14027hep-thD. Anninos, F. Denef, G. Konstantinidis and E. Shaghoulian, "Higher Spin de Sitter Holography from Functional Determinants," JHEP 1402 (2014) 007 [arXiv:1305.6321 [hep-th]].
On the power spectrum of inflationary cosmologies dual to a deformed CFT. P Mcfadden, arXiv:1308.0331JHEP. 131071hep-thP. McFadden, "On the power spectrum of inflationary cosmologies dual to a deformed CFT," JHEP 1310, 071 (2013) [arXiv:1308.0331 [hep-th]].
Inflationary Consistency Conditions from a Wavefunctional Perspective. G L Pimentel, arXiv:1309.1793JHEP. 1402124hep-thG. L. Pimentel, "Inflationary Consistency Conditions from a Wavefunctional Perspective," JHEP 1402 (2014) 124 [arXiv:1309.1793 [hep-th]].
Conformal Invariance and the Four Point Scalar Correlator in Slow-Roll Inflation. A Ghosh, N Kundu, S Raju, S P Trivedi, arXiv:1401.1426JHEP. 140711hep-thA. Ghosh, N. Kundu, S. Raju and S. P. Trivedi, "Conformal Invariance and the Four Point Scalar Correlator in Slow-Roll Inflation," JHEP 1407 (2014) 011 [arXiv:1401.1426 [hep-th]].
Holographic inflation and the conservation of ζ. J Garriga, Y Urakawa, arXiv:1403.5497JHEP. 140686hep-thJ. Garriga and Y. Urakawa, "Holographic inflation and the conservation of ζ," JHEP 1406 (2014) 086 [arXiv:1403.5497 [hep-th]].
Multi-field inflation from holography. J Garriga, K Skenderis, Y Urakawa, arXiv:1410.3290JCAP. 15010128hep-thJ. Garriga, K. Skenderis and Y. Urakawa, "Multi-field inflation from holography," JCAP 1501, no. 01, 028 (2015) [arXiv:1410.3290 [hep-th]].
Soft limits in holographic cosmology. P Mcfadden, arXiv:1412.1874JHEP. 150253hepthP. McFadden, "Soft limits in holographic cosmology," JHEP 1502 (2015) 053 [arXiv:1412.1874 [hep- th]].
Ward Identities for Scale and Special Conformal Transformations in Inflation. N Kundu, A Shukla, S P Trivedi, arXiv:1507.06017JHEP. 160146hep-thN. Kundu, A. Shukla and S. P. Trivedi, "Ward Identities for Scale and Special Conformal Transfor- mations in Inflation," JHEP 1601, 046 (2016) [arXiv:1507.06017 [hep-th]].
Observational signatures and non-Gaussianities of general single field inflation. X Chen, M X Huang, S Kachru, G Shiu, hep-th/0605045JCAP. 07012X. Chen, M. x. Huang, S. Kachru and G. Shiu, "Observational signatures and non-Gaussianities of general single field inflation," JCAP 0701, 002 (2007) [hep-th/0605045].
Second order cosmological perturbations from inflation. V Acquaviva, N Bartolo, S Matarrese, A Riotto, astro-ph/0209156Nucl. Phys. B. 667119V. Acquaviva, N. Bartolo, S. Matarrese and A. Riotto, "Second order cosmological perturbations from inflation," Nucl. Phys. B 667 (2003) 119 [astro-ph/0209156].
k -inflation. C Armendariz-Picon, T Damour, V F Mukhanov, hep-th/9904075Phys. Lett. B. 458209C. Armendariz-Picon, T. Damour and V. F. Mukhanov, "k -inflation," Phys. Lett. B 458, 209 (1999) [hep-th/9904075].
DBI in the sky. M Alishahiha, E Silverstein, D Tong, hep-th/0404084Phys. Rev. D. 70123505M. Alishahiha, E. Silverstein and D. Tong, "DBI in the sky," Phys. Rev. D 70, 123505 (2004) [hep-th/0404084].
Primordial non-Gaussianities in single field inflation. D Seery, J E Lidsey, astro-ph/0503692JCAP. 05063D. Seery and J. E. Lidsey, "Primordial non-Gaussianities in single field inflation," JCAP 0506, 003 (2005) [astro-ph/0503692].
P. A. R. Ade et al. [Planck Collaboration], "Planck 2015 results. XIII. Cosmological parameters," Astron. Astrophys. 594 (2016) A13 [arXiv:1502.01589 [astro-ph.CO]].
Derivation of a Four-dimensional c Theorem. H Osborn, Phys. Lett. B. 22297H. Osborn, "Derivation of a Four-dimensional c Theorem," Phys. Lett. B 222, 97 (1989).
The local Callan-Symanzik equation: structure and applications. F Baume, B Keren-Zur, R Rattazzi, L Vitale, arXiv:1401.5983JHEP. 1408152hep-thF. Baume, B. Keren-Zur, R. Rattazzi and L. Vitale, "The local Callan-Symanzik equation: structure and applications," JHEP 1408, 152 (2014) [arXiv:1401.5983 [hep-th]].
Single field consistency relation for the 3-point function. P Creminelli, M Zaldarriaga, astro-ph/0407059JCAP. 04106P. Creminelli and M. Zaldarriaga, "Single field consistency relation for the 3-point function," JCAP 0410, 006 (2004) [astro-ph/0407059].
P. Creminelli, J. Noreña and M. Simonović, JCAP 1207, 052 (2012) [arXiv:1203.4595 [hep-th]].
An Infinite Set of Ward Identities for Adiabatic Modes in Cosmology. K Hinterbichler, L Hui, J Khoury, arXiv:1304.5527JCAP. 140139hep-thK. Hinterbichler, L. Hui and J. Khoury, "An Infinite Set of Ward Identities for Adiabatic Modes in Cosmology," JCAP 1401, 039 (2014) [arXiv:1304.5527 [hep-th]].
Slavnov-Taylor Identities for Primordial Perturbations. L Berezhiani, J Khoury, arXiv:1309.4461JCAP. 14023hep-thL. Berezhiani and J. Khoury, "Slavnov-Taylor Identities for Primordial Perturbations," JCAP 1402, 003 (2014) [arXiv:1309.4461 [hep-th]].
Gauge theory correlators from noncritical string theory. S S Gubser, I R Klebanov, A M Polyakov, hep-th/9802109Phys. Lett. B. 428105S. S. Gubser, I. R. Klebanov and A. M. Polyakov, "Gauge theory correlators from noncritical string theory," Phys. Lett. B 428 (1998) 105 [hep-th/9802109].
Anti-de Sitter space and holography. E Witten, hep- th/9802150Adv. Theor. Math. Phys. 2253E. Witten, "Anti-de Sitter space and holography," Adv. Theor. Math. Phys. 2 (1998) 253 [hep- th/9802150].
Quasi-Single Field Inflation and Non-Gaussianities. X Chen, Y Wang, arXiv:0911.3380JCAP. 100427hep-thX. Chen and Y. Wang, "Quasi-Single Field Inflation and Non-Gaussianities," JCAP 1004, 027 (2010) [arXiv:0911.3380 [hep-th]].
Signatures of Supersymmetry from the Early Universe. D Baumann, D Green, arXiv:1109.0292Phys. Rev. D. 85103520hep-thD. Baumann and D. Green, "Signatures of Supersymmetry from the Early Universe," Phys. Rev. D 85, 103520 (2012) [arXiv:1109.0292 [hep-th]].
Effective field theory approach to quasi-single field inflation and effects of heavy fields. T Noumi, M Yamaguchi, D Yokoyama, arXiv:1211.1624JHEP. 130651hep-thT. Noumi, M. Yamaguchi and D. Yokoyama, "Effective field theory approach to quasi-single field inflation and effects of heavy fields," JHEP 1306, 051 (2013) [arXiv:1211.1624 [hep-th]].
N. Arkani-Hamed and J. Maldacena, "Cosmological Collider Physics," arXiv:1503.08043 [hep-th].
More constraining conformal bootstrap. F Gliozzi, arXiv:1307.3111Phys. Rev. Lett. 111161602hep-thF. Gliozzi, "More constraining conformal bootstrap," Phys. Rev. Lett. 111 (2013) 161602 [arXiv:1307.3111 [hep-th]].
Critical exponents of the 3d Ising and related models from Conformal Bootstrap. F Gliozzi, A Rago, arXiv:1403.6003JHEP. 141042hep-thF. Gliozzi and A. Rago, "Critical exponents of the 3d Ising and related models from Conformal Bootstrap," JHEP 1410 (2014) 042 [arXiv:1403.6003 [hep-th]].
Holographic reconstruction of space-time and renormalization in the AdS / CFT correspondence. S Haro, S N Solodukhin, K Skenderis, hep- th/0002230Commun. Math. Phys. 217S. de Haro, S. N. Solodukhin and K. Skenderis, "Holographic reconstruction of space-time and renormalization in the AdS / CFT correspondence," Commun. Math. Phys. 217, 595 (2001) [hep- th/0002230].
Implications of conformal invariance in momentum space. A Bzowski, P Mcfadden, K Skenderis, arXiv:1304.7760JHEP. 1403111hep-thA. Bzowski, P. McFadden and K. Skenderis, "Implications of conformal invariance in momentum space," JHEP 1403 (2014) 111 [arXiv:1304.7760 [hep-th]].
Scalar 3-point functions in CFT: renormalisation, beta functions and anomalies. A Bzowski, P Mcfadden, K Skenderis, arXiv:1510.08442JHEP. 160366hep-thA. Bzowski, P. McFadden and K. Skenderis, "Scalar 3-point functions in CFT: renormalisation, beta functions and anomalies," JHEP 1603 (2016) 066 [arXiv:1510.08442 [hep-th]].
Evaluation of conformal integrals. A Bzowski, P Mcfadden, K Skenderis, arXiv:1511.02357JHEP. 160268hep-thA. Bzowski, P. McFadden and K. Skenderis, "Evaluation of conformal integrals," JHEP 1602 (2016) 068 [arXiv:1511.02357 [hep-th]].
Introduction
Let x = Σ_{i=1}^{4} x_i e_i, y = Σ_{i=1}^{4} y_i e_i, z = Σ_{i=1}^{4} z_i e_i be three vectors in R^4, equipped with the standard inner product given by

⟨x, y⟩ = x_1 y_1 + x_2 y_2 + x_3 y_3 + x_4 y_4,

where {e_1, e_2, e_3, e_4} is the standard basis of R^4. The norm of a vector x ∈ R^4 is given by ‖x‖ = √⟨x, x⟩. The vector product (or the ternary product or cross product) of the vectors x, y, z ∈ R^4 is defined by the formal determinant

x ⊗ y ⊗ z = | e_1  e_2  e_3  e_4 |
            | x_1  x_2  x_3  x_4 |
            | y_1  y_2  y_3  y_4 |    (1)
            | z_1  z_2  z_3  z_4 |.
Some properties of the vector product are given as follows (for the vector product in R^4, see [1, 2, 5]):

i. e_1 ⊗ e_2 ⊗ e_3 = −e_4, e_2 ⊗ e_3 ⊗ e_4 = e_1, e_3 ⊗ e_4 ⊗ e_1 = −e_2, e_4 ⊗ e_1 ⊗ e_2 = e_3, e_3 ⊗ e_2 ⊗ e_1 = e_4,

ii. ‖x ⊗ y ⊗ z‖² = | ⟨x, x⟩  ⟨x, y⟩  ⟨x, z⟩ |
                   | ⟨y, x⟩  ⟨y, y⟩  ⟨y, z⟩ |    (2)
                   | ⟨z, x⟩  ⟨z, y⟩  ⟨z, z⟩ |,

iii. ⟨x ⊗ y ⊗ z, t⟩ = det(x, y, z, t).
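These identities are easy to check numerically. The following small script (ours, not part of the paper; it only assumes numpy) implements the cross product via cofactor expansion of the formal determinant (1) along its first row, and verifies the Gram-determinant identity (2) together with two basis relations from (i). Note that with the row of basis vectors placed first in (1), pairing with a fourth vector t gives the 4×4 determinant with t in that first row.

```python
import numpy as np

def cross4(x, y, z):
    """Ternary cross product x (x) y (x) z in R^4: cofactor expansion of the
    formal determinant (1) along its first row of basis vectors."""
    M = np.array([x, y, z], dtype=float)  # the 3x4 block below the e-row
    return np.array([(-1) ** j * np.linalg.det(np.delete(M, j, axis=1))
                     for j in range(4)])

rng = np.random.default_rng(0)
x, y, z, t = rng.standard_normal((4, 4))
w = cross4(x, y, z)

# property (ii): ||x (x) y (x) z||^2 equals the Gram determinant of x, y, z
G = np.array([[u @ v for v in (x, y, z)] for u in (x, y, z)])
assert np.isclose(w @ w, np.linalg.det(G))

# pairing with a fourth vector gives a 4x4 determinant (property (iii),
# with t occupying the row where the basis vectors stood in (1))
assert np.isclose(w @ t, np.linalg.det(np.array([t, x, y, z])))

# the product is orthogonal to each of its factors
assert abs(w @ x) < 1e-9 and abs(w @ y) < 1e-9 and abs(w @ z) < 1e-9

# basis relations from property (i)
e = np.eye(4)
assert np.allclose(cross4(e[0], e[1], e[2]), -e[3])
assert np.allclose(cross4(e[1], e[2], e[3]), e[0])
```

The orthogonality asserts follow from the determinant having a repeated row, which is exactly how property (5) for the normal field below arises.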
Let M^3 be an oriented 3-dimensional hypersurface in 4-dimensional Euclidean space E^4. Let us examine the implicit and parametric equations of M^3. Firstly, the implicit equation of M^3 can be given by

M^3 = { X ∈ E^4 | f : U ⊂ E^4 → R differentiable, f(X) = const., ∇f|_P ≠ 0, P ∈ M^3 },    (3)

where ∇f|_P is the gradient vector of f at P ∈ M^3. The unit normal vector field of M^3 is defined by N = ∇f / ‖∇f‖.
The Weingarten map (or the shape operator) of M^3 is defined by

S : χ(M^3) → χ(M^3),  S(X) = D_X N,

where D is the connection of E^4 and χ(M^3) is the space of vector fields of M^3. Then the Gaussian curvature K and the mean curvature H of M^3 are given by K = det S and H = (1/3) tr S, respectively. Also, the q-th fundamental forms of M^3 are given by [3]

I_q(X, Y) = ⟨S^{q−1}(X), Y⟩,  ∀ X, Y ∈ χ(M^3).
Secondly, to examine the parametric form of the hypersurface M^3 given by the implicit equation (3), let us consider

φ : U ⊂ R^3 → E^4,  (u, v, w) → φ(u, v, w) = (ϕ_1(u, v, w), ϕ_2(u, v, w), ϕ_3(u, v, w), ϕ_4(u, v, w)),

where (u, v, w) ∈ R ⊂ R^3 and ϕ_i, 1 ≤ i ≤ 4, are real functions defined on R. M^3 = φ(R) ⊂ E^4 is a hypersurface if and only if the frame field {φ_u, φ_v, φ_w} of M^3 is a linearly independent system. This can also be seen from the Jacobian matrix [φ]_* = [φ_u φ_v φ_w] of the differential map of φ: if rank [φ]_* = 3, then the vector system {φ_u, φ_v, φ_w} is linearly independent. Furthermore, φ_u, φ_v, φ_w are the tangent vectors of the parameter curves α(u) = φ(u, v_0, w_0), β(v) = φ(u_0, v, w_0) and γ(w) = φ(u_0, v_0, w), respectively. Then the unit normal vector field of M^3 is defined by

N = (φ_u ⊗ φ_v ⊗ φ_w) / ‖φ_u ⊗ φ_v ⊗ φ_w‖    (4)

and it has the following properties:

⟨N, φ_u⟩ = ⟨N, φ_v⟩ = ⟨N, φ_w⟩ = 0.    (5)

By using the Weingarten operator, the equalities below can be written:

S(φ_u) = D_{φ_u} N = ∂N/∂u,  S(φ_v) = D_{φ_v} N = ∂N/∂v,  S(φ_w) = D_{φ_w} N = ∂N/∂w.    (6)
2 The matrix of the Weingarten map of hypersurface M 3 in E 4
In this section, a practical method for the matrix of the Weingarten map of a hypersurface M^3 in E^4 is introduced. Let M^3 be an oriented hypersurface with the parametric equation φ(u, v, w). Then {φ_u, φ_v, φ_w} is linearly independent and we can write

S(φ_u) = a_11 φ_u + a_21 φ_v + a_31 φ_w,
S(φ_v) = a_12 φ_u + a_22 φ_v + a_32 φ_w,    (7)
S(φ_w) = a_13 φ_u + a_23 φ_v + a_33 φ_w,

where a_ij ∈ R, 1 ≤ i, j ≤ 3. Using the equation (7), we have the following systems of linear equations:

⟨S(φ_u), φ_u⟩ = a_11 φ_11 + a_21 φ_12 + a_31 φ_13,
⟨S(φ_u), φ_v⟩ = a_11 φ_12 + a_21 φ_22 + a_31 φ_23,
⟨S(φ_u), φ_w⟩ = a_11 φ_13 + a_21 φ_23 + a_31 φ_33,

⟨S(φ_v), φ_u⟩ = a_12 φ_11 + a_22 φ_12 + a_32 φ_13,
⟨S(φ_v), φ_v⟩ = a_12 φ_12 + a_22 φ_22 + a_32 φ_23,    (8)
⟨S(φ_v), φ_w⟩ = a_12 φ_13 + a_22 φ_23 + a_32 φ_33,

⟨S(φ_w), φ_u⟩ = a_13 φ_11 + a_23 φ_12 + a_33 φ_13,
⟨S(φ_w), φ_v⟩ = a_13 φ_12 + a_23 φ_22 + a_33 φ_23,
⟨S(φ_w), φ_w⟩ = a_13 φ_13 + a_23 φ_23 + a_33 φ_33,

where

⟨φ_u, φ_u⟩ = φ_11, ⟨φ_u, φ_v⟩ = φ_12, ⟨φ_u, φ_w⟩ = φ_13, ⟨φ_v, φ_v⟩ = φ_22, ⟨φ_v, φ_w⟩ = φ_23, ⟨φ_w, φ_w⟩ = φ_33.    (9)

Since the system {φ_u, φ_v, φ_w} is linearly independent, using the equations (2) and (9) we have

‖φ_u ⊗ φ_v ⊗ φ_w‖² = | φ_11  φ_12  φ_13 |
                      | φ_12  φ_22  φ_23 |  ≠ 0.
                      | φ_13  φ_23  φ_33 |

Also, the three linear systems given by the equation (8) have the common determinant

∆ = | φ_11  φ_12  φ_13 |
    | φ_12  φ_22  φ_23 |.
    | φ_13  φ_23  φ_33 |

Since ‖φ_u ⊗ φ_v ⊗ φ_w‖² = ∆ ≠ 0, these three linear systems can be solved by Cramer's method. Then, using the equations (6), (8) and (9), the matrix S of the Weingarten map of M^3 can be found. Although S is a symmetric linear operator, the matrix presentation (a_ij) of S with respect to {φ_u, φ_v, φ_w} need not be symmetric, because the system {φ_u, φ_v, φ_w} is not orthonormal.
Special Case
If we take an orthogonal frame field {φ_u, φ_v, φ_w} of the hypersurface M^3, then we have φ_12 = φ_13 = φ_23 = 0 from the equation (9). Then, the system

U = φ_u/‖φ_u‖,  V = φ_v/‖φ_v‖,  W = φ_w/‖φ_w‖

is an orthonormal frame field. Furthermore, we can write the following equations:

S(U) = c_1 U + c_2 V + c_3 W,
S(V) = c_2 U + c_4 V + c_5 W,    (10)
S(W) = c_3 U + c_5 V + c_6 W;

then the matrix of the Weingarten map can be calculated as follows:

S = | c_1  c_2  c_3 |
    | c_2  c_4  c_5 |.
    | c_3  c_5  c_6 |

By using the equations (4), (6) and (10), the coefficients c_i ∈ R, 1 ≤ i ≤ 6, can be calculated as follows:

c_1 = ⟨S(U), U⟩ = (1/‖φ_u‖²) ⟨∂N/∂u, φ_u⟩,
c_2 = ⟨S(U), V⟩ = (1/(‖φ_u‖ ‖φ_v‖)) ⟨∂N/∂u, φ_v⟩,
c_3 = ⟨S(U), W⟩ = (1/(‖φ_u‖ ‖φ_w‖)) ⟨∂N/∂u, φ_w⟩,
c_4 = ⟨S(V), V⟩ = (1/‖φ_v‖²) ⟨∂N/∂v, φ_v⟩,    (11)
c_5 = ⟨S(V), W⟩ = (1/(‖φ_v‖ ‖φ_w‖)) ⟨∂N/∂v, φ_w⟩,
c_6 = ⟨S(W), W⟩ = (1/‖φ_w‖²) ⟨∂N/∂w, φ_w⟩.

By using the equation (5), we can also write the six equations below:

⟨∂N/∂u, φ_u⟩ + ⟨N, φ_uu⟩ = 0,  ⟨∂N/∂u, φ_v⟩ + ⟨N, φ_uv⟩ = 0,  ⟨∂N/∂u, φ_w⟩ + ⟨N, φ_uw⟩ = 0,
⟨∂N/∂v, φ_v⟩ + ⟨N, φ_vv⟩ = 0,  ⟨∂N/∂v, φ_w⟩ + ⟨N, φ_vw⟩ = 0,  ⟨∂N/∂w, φ_w⟩ + ⟨N, φ_ww⟩ = 0.    (12)

Also, by using the equations (2) and (9), we find

‖φ_u ⊗ φ_v ⊗ φ_w‖² = | φ_11  0     0    |
                      | 0     φ_22  0    |  = ‖φ_u‖² ‖φ_v‖² ‖φ_w‖².    (13)
                      | 0     0     φ_33 |

Hence we find the coefficients c_1, c_2, c_3, c_4, c_5, c_6 of the Weingarten matrix in the equation (10) as follows:

c_1 = −(1/(‖φ_u‖³ ‖φ_v‖ ‖φ_w‖)) det(φ_uu, φ_u, φ_v, φ_w),
c_2 = −(1/(‖φ_u‖² ‖φ_v‖² ‖φ_w‖)) det(φ_uv, φ_u, φ_v, φ_w),
c_3 = −(1/(‖φ_u‖² ‖φ_v‖ ‖φ_w‖²)) det(φ_uw, φ_u, φ_v, φ_w),
c_4 = −(1/(‖φ_u‖ ‖φ_v‖³ ‖φ_w‖)) det(φ_vv, φ_u, φ_v, φ_w),    (14)
c_5 = −(1/(‖φ_u‖ ‖φ_v‖² ‖φ_w‖²)) det(φ_vw, φ_u, φ_v, φ_w),
c_6 = −(1/(‖φ_u‖ ‖φ_v‖ ‖φ_w‖³)) det(φ_ww, φ_u, φ_v, φ_w).
So, by taking into account the equations (4), (13) and (14), we have the symmetric Weingarten matrix

S = | ϕ_11/φ_11           ϕ_12/√(φ_11 φ_22)   ϕ_13/√(φ_11 φ_33) |
    | ϕ_12/√(φ_11 φ_22)   ϕ_22/φ_22           ϕ_23/√(φ_22 φ_33) |    (15)
    | ϕ_13/√(φ_11 φ_33)   ϕ_23/√(φ_22 φ_33)   ϕ_33/φ_33         |,

where ϕ_11 = −⟨φ_uu, N⟩, ϕ_12 = −⟨φ_uv, N⟩, ϕ_13 = −⟨φ_uw, N⟩, ϕ_22 = −⟨φ_vv, N⟩, ϕ_23 = −⟨φ_vw, N⟩, ϕ_33 = −⟨φ_ww, N⟩.

Finally, the following theorem can be given for a hypersurface M^3 in E^4:

Theorem 1. Let M^3 be an oriented hypersurface in E^4. Then the Gaussian curvature and the mean curvature of M^3 can be given by

K = (ϕ_11 ϕ_22 ϕ_33 + 2 ϕ_12 ϕ_13 ϕ_23 − ϕ_12² ϕ_33 − ϕ_13² ϕ_22 − ϕ_23² ϕ_11) / (φ_11 φ_22 φ_33),

H = (1/3) (ϕ_11/φ_11 + ϕ_22/φ_22 + ϕ_33/φ_33),

respectively.
Proof. By using the equation (15) and the definitions of the Gaussian curvature K and the mean curvature H, the theorem can be easily proved.

Example 2. Let M^3 be the oriented hypersurface with the implicit equation xy = 1 in E^4. A parametric equation of M^3 can be given by φ(u, v, w) = (u, 1/u, v, w). Then we obtain φ_u ⊗ φ_v ⊗ φ_w = (−1/u², −1, 0, 0) and the unit normal field N = (1/√(1 + u⁴)) (−1, −u², 0, 0). By using the orthonormal basis {φ_u/‖φ_u‖, φ_v/‖φ_v‖, φ_w/‖φ_w‖}, we find the Weingarten matrix S as

S = | 2u³/(1 + u⁴)^{3/2}  0  0 |
    | 0                   0  0 |.
    | 0                   0  0 |

Example 3. Let S^3 be the hypersphere with the implicit equation x² + y² + z² + t² = 1 in E^4. A parametric equation of S^3 can be given by

φ(u, v, w) = (sin u cos v sin w, sin u sin v sin w, cos u sin w, cos w).

Then {φ_u, φ_v, φ_w} is an orthogonal system, and we have the orthonormal basis {U, V, W} of S^3 with

U = φ_u/‖φ_u‖ = (cos u cos v, cos u sin v, −sin u, 0),
V = φ_v/‖φ_v‖ = (−sin v, cos v, 0, 0),
W = φ_w/‖φ_w‖ = (sin u cos v cos w, sin u sin v cos w, cos u cos w, −sin w).

Furthermore, the unit normal vector field N can be found:

N = (−sin u cos v sin w, −sin u sin v sin w, −cos u sin w, −cos w).

Then, using the equation (15), we obtain S = I_3.
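The hypersphere computation can be reproduced numerically. The script below (our own sketch, not from the paper; numpy only, with finite-difference partial derivatives) builds N from equation (4) and assembles S from the formulas (11). Depending on which orientation of N the cross product in (4) produces, the result is ±I_3, so we compare absolute values.

```python
import numpy as np

def cross4(x, y, z):
    # ternary cross product in R^4 (cofactor expansion of the determinant (1))
    M = np.array([x, y, z], dtype=float)
    return np.array([(-1) ** j * np.linalg.det(np.delete(M, j, axis=1))
                     for j in range(4)])

def phi(u, v, w):
    # parametrization of the unit hypersphere S^3 from Example 3
    return np.array([np.sin(u) * np.cos(v) * np.sin(w),
                     np.sin(u) * np.sin(v) * np.sin(w),
                     np.cos(u) * np.sin(w),
                     np.cos(w)])

def partials(F, u, v, w, h=1e-5):
    # central-difference partial derivatives of a map R^3 -> R^4
    return ((F(u + h, v, w) - F(u - h, v, w)) / (2 * h),
            (F(u, v + h, w) - F(u, v - h, w)) / (2 * h),
            (F(u, v, w + h) - F(u, v, w - h)) / (2 * h))

def normal(u, v, w):
    # unit normal field, equation (4)
    pu, pv, pw = partials(phi, u, v, w)
    n = cross4(pu, pv, pw)
    return n / np.linalg.norm(n)

u, v, w = 0.7, 0.9, 1.1                       # an arbitrary generic point
pu, pv, pw = partials(phi, u, v, w)
dNu, dNv, dNw = partials(normal, u, v, w)
nu, nv, nw = (np.linalg.norm(p) for p in (pu, pv, pw))

# the coefficients c_1..c_6 of formula (11)
S = np.array([[dNu @ pu / nu**2,     dNu @ pv / (nu * nv), dNu @ pw / (nu * nw)],
              [dNu @ pv / (nu * nv), dNv @ pv / nv**2,     dNv @ pw / (nv * nw)],
              [dNu @ pw / (nu * nw), dNv @ pw / (nv * nw), dNw @ pw / nw**2]])

# the principal curvatures of the unit sphere all have absolute value 1
assert np.allclose(np.abs(S), np.eye(3), atol=1e-4)
```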
Theorem 4. Let M^3 be an oriented hypersurface in E^4 and let {X_P, Y_P, Z_P} be a linearly independent vector system of the tangent space T_{M^3}(P). Then we have

i. S(X_P) ⊗ S(Y_P) ⊗ S(Z_P) = K(P) (X_P ⊗ Y_P ⊗ Z_P),

ii. (S(X_P) ⊗ Y_P ⊗ Z_P) + (X_P ⊗ S(Y_P) ⊗ Z_P) + (X_P ⊗ Y_P ⊗ S(Z_P)) = 3H(P) (X_P ⊗ Y_P ⊗ Z_P),

where K and H are the Gaussian curvature and the mean curvature of M^3, respectively.
Proof. By using (i), (ii) parts of the equation (2) and considering the definitions of the Gaussian curvature K and the mean curvature H the theorem can be easily proved.
In [4], it is proved that these equations are also provided for closed hypersurfaces.
Theorem 5 Let M 3 be an oriented hypersurface in E 4 and let I q , K, H be the q-th fundamental forms, the Gaussian curvature and the mean curvature, respectively. Then we have
I_4 − 3H I_3 + (3K/h) I_2 − K I = 0,    (16)
Proof. Let k_1, k_2, k_3 be the characteristic values of the Weingarten map S (i.e. the principal curvatures of M^3). Then the characteristic polynomial P_S(λ) of the Weingarten map S of M^3 is

P_S(λ) = det(λ I_3 − S) = λ³ − (k_1 + k_2 + k_3) λ² + (k_1 k_2 + k_1 k_3 + k_2 k_3) λ − k_1 k_2 k_3.

By the Cayley–Hamilton theorem, we obtain

S³ − (k_1 + k_2 + k_3) S² + (k_1 k_2 + k_1 k_3 + k_2 k_3) S − (k_1 k_2 k_3) I_3 = 0.

By using the definitions of the q-th fundamental forms, the Gaussian curvature, the mean curvature and the harmonic mean

h = 3 / (1/k_1 + 1/k_2 + 1/k_3)

of the principal curvatures k_1, k_2, k_3, we obtain the equation (16).
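Since I_q(X, Y) = ⟨S^{q−1}(X), Y⟩, equation (16) is exactly the Cayley–Hamilton identity for S, with 3K/h = k_1 k_2 + k_1 k_3 + k_2 k_3. A quick numerical sanity check of the matrix form (our own script, assuming numpy; the random symmetric matrix stands in for a shape operator):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
S = (A + A.T) / 2                 # a symmetric 3x3 "shape operator"

K = np.linalg.det(S)              # Gaussian curvature  k1 k2 k3
H = np.trace(S) / 3               # mean curvature, 3H = k1 + k2 + k3
e2 = (np.trace(S)**2 - np.trace(S @ S)) / 2   # k1 k2 + k1 k3 + k2 k3 = 3K/h

# matrix form of (16): S^3 - 3H S^2 + (3K/h) S - K I = 0
R = S @ S @ S - 3 * H * (S @ S) + e2 * S - K * np.eye(3)
assert np.allclose(R, 0, atol=1e-10)
```

Writing 3K/h as the second elementary symmetric function e2 also avoids dividing by h when some principal curvature vanishes.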
Dupin indicatrix of the hypersurface in E 4
Let X, Y, Z be three principal vectors corresponding to the principal curvatures k_1, k_2, k_3 of M^3. If we consider the orthonormal basis {X, Y, Z} of M^3, then any tangent vector W_P ∈ T_{M^3}(P) can be written as W_P = x X_P + y Y_P + z Z_P, where x, y, z ∈ R, and

S(W_P) = x S(X_P) + y S(Y_P) + z S(Z_P) = x k_1 X_P + y k_2 Y_P + z k_3 Z_P.

Here, the Dupin indicatrix D of M^3 can be defined by

D = { W_P = (x, y, z) ∈ T_{M^3}(P) | ⟨S(W_P), W_P⟩ = k_1 x² + k_2 y² + k_3 z² = ±1 }.
In other words, the Dupin indicatrix corresponds to a hypercylinder which has the equation

k_1 x² + k_2 y² + k_3 z² = ±1.
Now, we will examine the Dupin indicatrix according to the Gaussian curvature K.

1) Let K(P) > 0.
• If k_1, k_2, k_3 > 0, then for the equation of the Dupin indicatrix we can write k_1 x² + k_2 y² + k_3 z² = ±1. Hence the Dupin indicatrix is of the ellipsoidal class, and this equation is called an ellipsoidal cylinder in E^4. In this condition, P ∈ M^3 is called an ellipsoidal point.

• If k_1 > 0, k_2, k_3 < 0, or k_2 > 0, k_1, k_3 < 0, or k_3 > 0, k_1, k_2 < 0, then for the equation of the Dupin indicatrix we can write k_1 x² − k_2 y² − k_3 z² = ±1. Hence the Dupin indicatrix is of the hyperboloidical class, and this equation is called a hyperboloidical cylinder of one or two sheets in E^4. In this condition, P ∈ M^3 is called a hyperboloidical point.
2) Let K (P ) < 0.
• If only one of the k_i's, i = 1, 2, 3, is negative, then the equation of the Dupin indicatrix is one of

k_1 x² + k_2 y² − k_3 z² = ±1,  k_1 x² − k_2 y² + k_3 z² = ±1,  −k_1 x² + k_2 y² + k_3 z² = ±1.
The above equations are called one or two sheeted hyperboloidical cylinder in E 4 . Then P ∈ M 3 is called a hyperboloidical point.
• If k_1, k_2, k_3 < 0, then the Dupin indicatrix is of the ellipsoidal class and its equation is called an ellipsoidal cylinder in E^4. So P ∈ M^3 is called an ellipsoidal point.
3) Let K (P ) = 0.
• If k_1 = 0 or k_2 = 0 or k_3 = 0, then for the equation of the Dupin indicatrix in each case we get:

  i. If k_1 = 0 and k_2, k_3 have the same or different signs, then k_2 y² + k_3 z² = ±1.
  ii. If k_2 = 0 and k_1, k_3 have the same or different signs, then k_1 x² + k_3 z² = ±1.
  iii. If k_3 = 0 and k_1, k_2 have the same or different signs, then k_1 x² + k_2 y² = ±1.
These equations are called elliptic cylinder or hyperbolic cylinder in E 4 . In this condition, P ∈ M 3 is called an elliptic cylinder or hyperbolic cylinder point.
• If k 1 = k 2 = k 3 = 0 then the point P ∈ M 3 is a flat point.
• If any two of the k_i's, i = 1, 2, 3, are zero and the remaining one is positive or negative, then k_3 z² = ±1 or k_2 y² = ±1 or k_1 x² = ±1.
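The case analysis above amounts to classifying a point by the signs of its principal curvatures. A compact version (our own helper, not from the paper; the names follow the text, and the unnamed two-zero case is labeled "degenerate" here):

```python
def dupin_point_type(k1, k2, k3, eps=1e-12):
    """Classify a point of M^3 from its principal curvatures, following the
    case analysis of the Dupin indicatrix above."""
    ks = (k1, k2, k3)
    zeros = sum(abs(k) <= eps for k in ks)
    pos = sum(k > eps for k in ks)
    neg = sum(k < -eps for k in ks)
    if zeros == 3:
        return "flat point"
    if zeros == 2:
        return "degenerate point"       # k_i x_i^2 = +/-1: parallel hyperplanes
    if zeros == 1:
        # two nonzero curvatures: elliptic vs hyperbolic cylinder
        return "elliptic cylinder point" if pos == 0 or neg == 0 \
            else "hyperbolic cylinder point"
    # all three curvatures nonzero
    return "ellipsoidal point" if pos == 0 or neg == 0 \
        else "hyperboloidical point"

assert dupin_point_type(1, 1, 1) == "ellipsoidal point"      # e.g. S^3, S = I_3
assert dupin_point_type(1, -2, 3) == "hyperboloidical point"
assert dupin_point_type(2, 0, 0) == "degenerate point"       # Example 2 above
```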
References

[1] Alèssio, O., Differential geometry of intersection curves in R^4 of three implicit surfaces, Comput. Aided Geom. Design 26 (2009), 455-471.
[2] Hollasch, S. R., Four-space visualization of 4D objects, MSc thesis, Arizona State University, Phoenix, AZ, USA, 1991.
[3] Lee, J. M., Riemannian Manifolds, Springer, New York, USA, 1997, 224 p.
[4] Uyar Düldül, B., Curvatures of implicit hypersurfaces in Euclidean 4-space, Igdir Univ. J. Inst. Sci. and Tech. 8(1) (2018), 229-236.
[5] Williams, M. Z., Stein, F. M., A triple product of vectors in four-space, Math. Mag. 37 (1964), 230-235.
Prime polynomials in short intervals and in arithmetic progressions

Efrat Bank, Lior Bary-Soroker, Lior Rosenzweig

March 12, 2013. arXiv:1302.0625, doi:10.1215/00127094-2856728

Abstract. In this paper we establish function field versions of two classical conjectures on prime numbers. The first says that the number of primes in intervals (x, x + x^ǫ] is about x^ǫ/log x, and the second says that the number of primes p < x with p ≡ a (mod d), for d^{1+δ} < x, is about π(x)/φ(d). More precisely, we prove: Let 1 ≤ m < k be integers, let q be a prime power, and let f be a monic polynomial of degree k with coefficients in F_q. Then there is a constant c(k) such that the number N of prime polynomials g = f + h with deg h ≤ m satisfies |N − q^{m+1}/k| ≤ c(k) q^{m+1/2}. Here we assume m ≥ 2 if gcd(q, k(k − 1)) > 1 and m ≥ 3 if q is even and deg f′ ≤ 1. We show that this estimation fails in the neglected cases.

Let π_q(k) be the number of monic prime polynomials of degree k with coefficients in F_q. For relatively prime f, D ∈ F_q[t] we prove that the number N′ of monic prime polynomials g ≡ f (mod D) of degree k satisfies |N′ − π_q(k)/φ(D)| ≤ c(k) π_q(k) q^{−1/2}/φ(D). We also generalize these results to other factorization types.
Introduction
We study two function field analogues of two classical problems in number theory concerning the number of primes in short intervals and in arithmetic progressions. We first introduce the classical problems and then formulate the results in function fields.
Primes in short intervals
Let π(x) = #{0 < p ≤ x | p is a prime} be the prime counting function. By the Prime Number Theorem (PNT),

π(x) ∼ x/log x,  x → ∞.
Therefore one may expect that an interval I = (x, x + Φ(x)] of size Φ(x) starting at a large x contains about Φ(x)/ log x primes, i.e.
π(I) := π(x + Φ(x)) − π(x) ∼ Φ(x) log x .(1)
From PNT, (1) holds for Φ(x) ∼ cx, for any fixed 0 < c < 1. By the Riemann Hypothesis, (1) holds for Φ(x) ∼ √x log x, or even for Φ(x) ∼ ǫ √x log x assuming a strong form of Montgomery's pair correlation conjecture [7]. Concerning smaller powers of x, Granville conjectures [4, p. 7] that:

Conjecture 1.1. For any fixed ǫ > 0, (1) holds with Φ(x) = x^ǫ.

Heath-Brown [6], improving Huxley [8], proves Conjecture 1.1 unconditionally for x^{7/12 − ǫ(x)} ≤ Φ(x) ≤ x/log⁴ x, where ǫ(x) → 0. We note that for extremely short intervals (e.g. for Φ(x) = log x log log x log log log log x / log log log x) (1) fails [12] uniformly, but may hold for almost all x; see [13] and the survey [5, Section 4].
Primes in arithmetic progressions
Let π(x; d, a) denote the number of primes p ≤ x such that p ≡ a (mod d). Then the Prime Number Theorem for arithmetic progressions says that if a and d are relatively prime and fixed, then
π(x; d, a) ∼ π(x) φ(d) , x → ∞,(2)
where π(x) is the prime counting function and φ(d) is the Euler totient function, giving the number of positive integers i up to d with gcd(i, d) = 1. In many applications it is crucial to allow the modulus d to grow with x. The interesting range is d < x, since if d ≥ x there can be at most one prime in the arithmetic progression p ≡ a (mod d). A classical conjecture is the following (in a slightly different form see [11, Conjecture 13.9]):

Conjecture 1.2. For any fixed δ > 0, if d^{1+δ} < x and gcd(a, d) = 1, then (2) holds.

In the words of Granville [4]: "the best proven results have x bigger than the exponential of a power of q (Granville's q is our d), far larger than what we expect. If we are prepared to assume the unproven Generalized Riemann Hypothesis we do much better, being able to prove that the primes up to q^{2+δ} are equally distributed amongst the arithmetic progressions mod q, for q sufficiently large, though notice that this is still somewhat larger than what we expect to be true."
In this work we establish function field analogues of Conjectures 1.1 and 1.2 for certain intervals of the parameters ǫ, δ, which may be arbitrarily small, in particular breaking the barriers ǫ = 1/2 in the former and δ = 1 in the latter. This indicates that Conjectures 1.1 and 1.2 should hold. A crucial ingredient is a type of Hilbert's irreducibility theorem over finite fields [2].
Results in function fields
Let P_{≤k} be the space of polynomials of degree at most k over F_q and M(k, q) ⊆ P_{≤k} the subset of monic polynomials of degree k. If deg f = k, we let ‖f‖ = q^k.
Short intervals
Let π_q(k) = #{g ∈ M(k, q) | g is a prime polynomial} be the prime polynomial counting function. The Prime Polynomial Theorem (PPT) asserts that

π_q(k) = q^k/k + O(q^{k/2}/k).
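The PPT main term can be checked against an exact count: the number of monic irreducibles of degree k over F_q is (1/k) Σ_{d|k} μ(d) q^{k/d}. The brute-force script below (ours, not from the paper; pure Python, polynomials as coefficient tuples over a prime field, parameters q = 3 and k = 4 chosen for illustration) sieves out reducible monic polynomials and compares with this formula:

```python
from itertools import product

def polymul(a, b, p):
    """Multiply polynomials over F_p; coefficients are listed from the
    constant term up, so (0, 2, 1) stands for t^2 + 2t."""
    c = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] = (c[i + j] + x * y) % p
    return tuple(c)

def monic(deg, p):
    """All monic polynomials of degree deg over F_p."""
    for lower in product(range(p), repeat=deg):
        yield lower + (1,)

def prime_polys(k, p):
    """Monic irreducibles of degree k over F_p: sieve out every product of
    two monic polynomials of positive degree."""
    reducible = set()
    for d in range(1, k // 2 + 1):
        for g in monic(d, p):
            for h in monic(k - d, p):
                reducible.add(polymul(g, h, p))
    return [f for f in monic(k, p) if f not in reducible]

def pi_q(k, p):
    """Exact count (1/k) * sum_{d | k} mu(d) * p^(k/d)."""
    def mu(n):
        count, m = 0, n
        for q in range(2, n + 1):
            if m % q == 0:
                if (m // q) % q == 0:
                    return 0            # square factor
                count, m = count + 1, m // q
        return (-1) ** count
    return sum(mu(d) * p ** (k // d) for d in range(1, k + 1) if k % d == 0) // k

# q = 3, k = 4: 18 irreducibles, against the PPT main term q^k/k = 20.25
assert len(prime_polys(4, 3)) == pi_q(4, 3) == 18
```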
An interval I around f ∈ M(k, q) is defined as

I = I(f, m) = {g ∈ F_q[t] | ‖f − g‖ ≤ q^m} = f + P_{≤⌊m⌋}.
If m ≥ k, then I(f, m) = P_{≤m}, and so the PPT gives the number of primes there. The interesting intervals are the short intervals, i.e. when m < k. In particular, M(k, q) = I(t^k, k − 1). We note that all the polynomials in a short interval around a monic polynomial are monic. For a short interval I, let π_q(I) = #{g ∈ I | g is a prime polynomial}. The expected analogue of (1) is
π_q(I(f, m)) ∼ |I(f, m)|/k = q^{⌊m⌋+1}/k,    (3)
for f ∈ M(k, q) and 0 < m < k. Keating and Rudnick [9] study the variance of primes in short intervals in the limit q → ∞. From their result it follows in a standard way that (3) holds almost everywhere for m ≤ k − 3, see Appendix A for details.
We show that (3) holds everywhere:
Theorem 2.1. Let k be a positive integer. Then there exists a constant c(k) > 0 depending only on k such that for any
• prime power q = p ν ,
• integer 1 ≤ m < k, and
• a short interval I = I(f, m) around f ∈ M(k, q)
we have

|π_q(I) − q^{m+1}/k| ≤ c(k) q^{m+1/2},

provided m ≥ 2 if p | k(k − 1), and provided m ≥ 3 if p = 2 and deg f′ ≤ 1.
To compare with Conjecture 1.1, we note that x corresponds to q^k, hence an interval of length x^ǫ corresponds to I(f, ǫk), f ∈ M(k, q). Thus for any fixed k, for every 3/k ≤ ǫ ≤ 1, and for every sequence of intervals I_q = I_q(f_q, ǫk),

π_q(I_q) ∼ |I_q|/k,  q → ∞.

(In fact it is possible to consider ǫ ≥ 1/k for those intervals I_q, q = p^ν, for which p ∤ k(k − 1), and ǫ ≥ 2/k if p ≠ 2, or if p = 2 and deg f′_q ≥ 2.) The conclusion is that a precise analogue of Conjecture 1.1 for 3/k ≤ ǫ ≤ 1 holds. In particular we go below the barrier ǫ = 1/2, and by enlarging k, ǫ can be made arbitrarily small.
In Section 6 we discuss the cases which are not included in Theorem 2.1 by studying the intervals I(t k , m). In particular we show that (3) fails if m = 0, or if m = 1 and p | k(k − 1). We do not know whether (3) holds true in the remaining case p = m = 2 and deg f ′ ≤ 1.
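The short intervals I(f, m), as f runs over the monic degree-k polynomials whose coefficients of 1, t, …, t^m are zero, partition M(k, q); so whatever the distribution inside each interval, the counts π_q(I) must sum to π_q(k). The brute-force check below (our own script; q = 3, k = 4, m = 1 chosen for illustration) computes all nine interval counts and verifies this bookkeeping; each count should then be near the main term q^{m+1}/k = 9/4 of Theorem 2.1.

```python
from itertools import product

def polymul(a, b, p):
    # multiply polynomials over F_p; coefficients from the constant term up
    c = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] = (c[i + j] + x * y) % p
    return tuple(c)

def monic(deg, p):
    for lower in product(range(p), repeat=deg):
        yield lower + (1,)

def irreducibles(k, p):
    # sieve: a monic reducible of degree k is a product of two monic factors
    reducible = {polymul(g, h, p)
                 for d in range(1, k // 2 + 1)
                 for g in monic(d, p) for h in monic(k - d, p)}
    return set(monic(k, p)) - reducible

p, k, m = 3, 4, 1
primes = irreducibles(k, p)                       # the 18 monic irreducibles

counts = []
for high in product(range(p), repeat=k - 1 - m):  # coefficients of t^(m+1)..t^(k-1)
    # the interval I(f, m): vary the coefficients of t^0..t^m
    counts.append(sum((low + high + (1,)) in primes
                      for low in product(range(p), repeat=m + 1)))

assert len(counts) == p ** (k - 1 - m)    # 9 intervals partition M(4, 3)
assert sum(counts) == len(primes) == 18   # every prime lies in exactly one interval
```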
Primes in arithmetic progressions
For relatively prime f, D ∈ F q [t] let π q (k; D, f ) = #{h = f + Dg ∈ M(k, q) | h is a prime polynomial}.
The Prime Polynomial Theorem for arithmetic progressions says that
π_q(k; D, f) = π_q(k)/φ(D) + O((q^{k/2}/k) deg D).    (4)

Here φ(g) is the function field Euler totient function, giving the number of units in F_q[t]/g F_q[t].
In analogy to the classical case we want to allow deg D to grow with k. The interesting range of parameters is deg D < k, because if deg D ≥ k, there is at most one monic prime of degree k in the arithmetic progression h ≡ f (mod D).
We note that φ(D) ∼ q^{deg D} as q → ∞. Therefore, if 2 deg D < k − δ, then (4) gives that

π_q(k; D, f) ∼ π_q(k)/φ(D),  q → ∞.
On the other hand (4) gives nothing when 2 deg D ≥ k.
In analogy with (2) one may expect that

π_q(k; D, f) ∼ π_q(k)/φ(D)    (5)
as long as (1 + δ) deg D ≤ k.
Keating and Rudnick [9] calculate the variance of the number of primes in arithmetic progressions in function fields. From their work (5) holds true almost everywhere, in a standard way.
We show (5) everywhere:
Theorem 2.2. Let k be a positive integer. Then there exists a constant c(k) > 0 depending only on k such that for any

• prime power q = p^ν,
• integer 2 ≤ m < k,
• monic modulus D ∈ F_q[t] with deg D = k − m − 1, and
• f ∈ F_q[t],

we have

|π_q(k; D, f) − π_q(k)/φ(D)| ≤ c(k) q^{m+1/2},

provided (f/D)′ is not constant if p = m = 2. (Note that π_q(k)/φ(D) ∼ q^{k−deg D}/k = q^{m+1}/k as q → ∞.)
To compare with Conjecture 1.2, we note that x corresponds to q^k, d corresponds to q^{deg D}, and the condition d^{1+δ} < x translates to (1 + δ) deg D < k. Thus for any fixed k, for any δ ≥ 4/(k − 4), and for any sequence (D_q, f_q)_q with D_q, f_q ∈ F_q[t] such that D_q is monic and (1 + δ) deg D_q < k, we have

π_q(k; D_q, f_q) ∼ π_q(k)/φ(D_q),  q → ∞.

(In fact we may take δ ≥ 3/(k − 3) if q is odd or if (f_q/D_q)′ is not constant.)
The conclusion is that a perfect analogue of Conjecture 1.2 for 4 k−4 ≤ δ holds. In particular we go below the barrier δ = 1 and by enlarging k, δ can be made arbitrary small. This indicates that Conjecture 1.2 should hold for any δ > 0.
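A cheap consistency check on these counts (our own script, not from the paper): since deg D < k, every monic prime of degree k is coprime to D, so the counts π_q(k; D, f) over the φ(D) invertible residues f sum to π_q(k). For the illustrative choice D = t over F_3 with k = 4 (so m = k − deg D − 1 = 2), the residue of g mod t is just its constant term:

```python
from itertools import product

def polymul(a, b, p):
    # multiply polynomials over F_p; coefficients from the constant term up
    c = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] = (c[i + j] + x * y) % p
    return tuple(c)

def monic(deg, p):
    for lower in product(range(p), repeat=deg):
        yield lower + (1,)

def irreducibles(k, p):
    reducible = {polymul(g, h, p)
                 for d in range(1, k // 2 + 1)
                 for g in monic(d, p) for h in monic(k - d, p)}
    return set(monic(k, p)) - reducible

p, k = 3, 4
primes = irreducibles(k, p)            # pi_q(k) = 18 monic irreducibles

# D = t: g mod t is the constant term g[0]; phi(D) = p - 1 = 2
counts = {a: sum(g[0] == a for g in primes) for a in range(p)}

assert counts[0] == 0                  # t | g would contradict irreducibility
assert counts[1] + counts[2] == 18     # the invertible residues carry all primes
# each class is near pi_q(k)/phi(D) = 9, as predicted by Theorem 2.2
```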
Other factorization types
Our method allows us to count polynomials with any given factorization type. Let us start by setting up the notation.
The degrees of the primes in the factorization of a polynomial f ∈ F_q[t] into a product of prime polynomials give a partition of deg f, denoted by λ_f. Similarly, the lengths of the cycles in the factorization of a permutation σ ∈ S_k into a product of disjoint cycles induce a partition λ_σ of k. For a partition λ of k we denote the probability for σ ∈ S_k to have λ_σ = λ by
P(λ) = #{σ ∈ S_k | λ_σ = λ} / k!.    (6)
We note that if λ is the partition with a single part, then λ_f = λ if and only if f is prime, and P(λ) = (k − 1)!/k! = 1/k.
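P(λ) can be computed by direct enumeration. The script below (ours, standard library only; k = 5 chosen for illustration) tabulates cycle types in S_5 and confirms that the one-part partition, which corresponds to prime polynomials, has probability 1/k:

```python
import math
from collections import Counter
from fractions import Fraction
from itertools import permutations

def cycle_type(perm):
    """Partition of k given by the cycle lengths of perm (acting on 0..k-1,
    with i mapped to perm[i])."""
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start not in seen:
            n, j = 0, start
            while j not in seen:
                seen.add(j)
                j, n = perm[j], n + 1
            lengths.append(n)
    return tuple(sorted(lengths, reverse=True))

k = 5
types = Counter(cycle_type(s) for s in permutations(range(k)))

def P(lam):
    # definition (6): proportion of S_k with cycle type lam
    return Fraction(types[tuple(sorted(lam, reverse=True))], math.factorial(k))

assert P((k,)) == Fraction(1, k)                      # the "prime" partition
assert P((1,) * k) == Fraction(1, math.factorial(k))  # identity permutation only
assert sum(types.values()) == math.factorial(k)       # the P(lambda) sum to 1
```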
Let k be a positive integer and λ a partition of k. For a short interval I = I(f, m) with f ∈ M(k, q) we define the counting function π q (I; λ) = #{g ∈ I | λ g = λ}.
We generalize Theorem 2.1:

Theorem 2.3. Let k be a positive integer. Then there exists a constant c(k) > 0 depending only on k such that for any

• partition λ of k,
• prime power q = p^ν,
• integer 1 ≤ m < k, and
• short interval I = I(f, m) around f ∈ M(k, q),

we have

|π_q(I; λ) − P(λ) q^{m+1}| ≤ c(k) q^{m+1/2},

provided m ≥ 2 if p | k(k − 1), and provided m ≥ 3 if p = 2 and deg f′ ≤ 1.
For relatively prime f, D ∈ F q [t] with D monic we define the counting function
π q (k; D, f ; λ) = #{g ≡ f (mod D) | deg g = k and λ g = λ}.
We generalize Theorem 2.2:
Theorem 2.4. Let k be a positive integer. Then there exists a constant c(k) > 0 depending only on k such that for any
• partition λ of k,
• prime power q = p ν ,
• integer 2 ≤ m < k,
• monic modulus D ∈ F_q[t] with deg D = k − m − 1, and
• f ∈ F_q[t],

we have

|π_q(k; D, f; λ) − π_q(k; λ)/φ(D)| ≤ c(k) q^{m+1/2},

provided (f/D)′ is not constant if p = m = 2. (Note that π_q(k; λ)/φ(D) ∼ P(λ) q^{k−deg D} = P(λ) q^{m+1} as q → ∞.)
Auxiliary results
Specializations
We briefly recall some definitions and basic facts on specializations; see [2, Section 2.1] for more details and proofs. Let K be a field with algebraic closure K̄, Gal(K) = Aut(K̄/K) the absolute Galois group of K, W = Spec S and V = Spec R absolutely irreducible smooth affine K-varieties, ρ : W → V a finite separable morphism which is generically Galois, F/E the function field Galois extension that corresponds to ρ, p ∈ V(K) a K-rational point that is étale in W, and P ∈ ρ^{−1}(p).

Then p induces a homomorphism φ_p : R → K that extends to a homomorphism φ_P : S → K̄ (via the inclusion R → S induced by ρ). Since p is étale in W, we have a homomorphism P* : Gal(K) → Gal(F/E) such that

φ_P(P*(σ)(x)) = σ(φ_P(x)),  ∀x ∈ S, ∀σ ∈ Gal(K).    (7)
For every other Q ∈ ρ^{−1}(p) there is τ ∈ Gal(F/E) such that φ_Q = φ_P ∘ τ. Thus, by (7), Q* = τ^{−1} P* τ, and vice versa every τ^{−1} P* τ comes from a point Q ∈ ρ^{−1}(p). Hence

p* = {Q* | Q ∈ ρ^{−1}(p)}

is the orbit of P* under the conjugation action of Gal(F/E).
The key ingredients in the proof of the following proposition are the Lang–Weil estimates [10, Theorem 1] and the field crossing argument (as utilized in [2, Proposition 2.2]).

Proposition 3.1. Let k, m, and B be positive integers, let λ be a partition of k, let F̄ be an algebraic closure of F_q, and let F ∈ F_q[A_0, ..., A_m, t] be a polynomial with deg F ≤ B and deg_t F = k. Assume that F is separable in t and

Gal(F, F̄(A_0, ..., A_m)) = S_k.

Then there is a constant c(m, B) that depends only on m and B such that if we denote by N = N(F, q) the number of (a_0, ..., a_m) ∈ F_q^{m+1} such that f = F(a_0, ..., a_m, t) has factorization type λ_f = λ, then

|N − P(λ) q^{m+1}| ≤ c(m, B) q^{m+1/2},

where P(λ) is defined in (6).
Proof. Let A = (A_0, ..., A_m) and let F be the splitting field of F over F_q(A). Since

S_k = Gal(F, F̄(A)) = Gal(F · F̄ / F̄(A)) ≤ Gal(F/F_q(A)) ≤ S_k,

all inequalities are in fact equalities, and F ∩ F̄ = F_q. In particular, for the restriction map

α : Gal(F/F_q(A)) → Gal(F ∩ F̄/F_q) = 1,

we get ker α = S_k.    (8)

Since Gal(F_q) = ⟨ϕ⟩ ≅ Ẑ with ϕ being the Frobenius map x → x^q, the homomorphisms θ : Gal(F_q) → S_k can be parametrized by permutations σ ∈ S_k. Explicitly, each σ ∈ S_k gives rise to θ_σ : Gal(F_q) → S_k defined by θ_σ(ϕ) = σ. Let C be the conjugacy class of all permutations σ with λ_σ = λ and let Θ = {θ_σ | σ ∈ C}. Fix θ ∈ Θ. Clearly #Θ = #C, so by (8) we have

#ker α / #Θ = #S_k / #C = 1/P(λ).    (9)

Let Z be the closed subset of A^{m+1} = Spec F_q[A] defined by D = disc_t(F) = 0, and let V = A^{m+1} ∖ Z = Spec F_q[A, D^{−1}]. By assumption F is separable in t, so D is a nonzero polynomial of degree depending only on B. By [10, Lemma 1], there exists a constant c_1 = c_1(m, B) such that

#Z(F_q) ≤ c_1 q^m.    (10)
Let u_1, ..., u_k be the roots of F in some algebraic closure of F(A_0, ..., A_m), and let W = Spec F_q[u_1, ..., u_k, D^{−1}] ⊆ A^{k+1}. Then W is an irreducible smooth affine F_q-variety of degree bounded in terms of B = deg F, and the embedding F_q[A, D^{−1}] → F_q[u_1, ..., u_k, D^{−1}] induces a finite separable étale morphism ρ : W → V.

We apply [2, Proposition 2.2] to get an absolutely irreducible smooth F_q-variety W̃ together with a finite separable étale morphism π : W̃ → V with the following properties:
i. Let U ⊆ V(F_q) be the set of p ∈ V(F_q) that are étale in W and such that p* = Θ. Then π(W̃(F_q)) = U.

ii. For every p ∈ U,

#(π^{−1}(p) ∩ W̃(F_q)) = #ker α / #Θ = 1/P(λ).

(See (9) for the last equality.)
By the construction of W̃ in loc. cit. it holds that W̃_L = W_L for some finite extension L/F_q (where the subscript L indicates the extension of scalars to L). Hence W and W̃ have the same degree, which is bounded in terms of B. Thus, by [10, Theorem 1], there is a constant c_2 = c_2(m, B) such that

|#W̃(F_q) − q^{m+1}| ≤ c_2 q^{m+1/2}.    (11)

Applying (ii) gives P(λ) · #π(W̃(F_q)) = #W̃(F_q). So multiplying (11) by P(λ) implies

|#π(W̃(F_q)) − P(λ) q^{m+1}| ≤ P(λ) c_2 q^{m+1/2} ≤ c_2 q^{m+1/2}.    (12)

Let X be the set of (a_0, ..., a_m) ∈ F_q^{m+1} with λ_f = λ, where f = F(a_0, ..., a_m, t); then N = #X. Property (i) gives X ∩ V(F_q) = π(W̃(F_q)). Since V = A^{m+1} ∖ Z, it follows from (10) and (12) that

|N − P(λ) q^{m+1}| = |#X − P(λ) q^{m+1}|
  = |#(X ∩ V(F_q)) + #(X ∩ Z(F_q)) − P(λ) q^{m+1}|
  ≤ |#(X ∩ V(F_q)) − P(λ) q^{m+1}| + #(X ∩ Z(F_q))
  ≤ |#π(W̃(F_q)) − P(λ) q^{m+1}| + #Z(F_q)
  ≤ c_2 q^{m+1/2} + c_1 q^m ≤ c(m, B) q^{m+1/2},

where c(m, B) = c_1 + c_2.
Calculating a Galois Group

Lemma 3.2. Let f, g ∈ F[t] be relatively prime polynomials and let

F = f + g (Σ_{i=0}^{m} A_i t^i).

Then F is separable in t and irreducible in the ring F(A)[t].

Proof. Since F is linear in A_0 and since f, g are relatively prime, it follows that F is irreducible in F[A, t], hence by Gauss' lemma also in F(A)[t]. Take α ∈ F with g(α) = 0. Then F′(α) = f′(α) + g′(α) A_0 + g(α) A_1 ≠ 0, hence F′ ≠ 0, so F is separable.

Lemma 3.3. Let f, g, and F = f + g(Σ_{i=0}^{m} A_i t^i) be as in Lemma 3.2, and let G be the Galois group of F over F(A). Then G is doubly transitive (with respect to the action on the roots of F).
Proof. By replacing t by t + α, where α ∈ F is a root of f, we may assume that f(0) = 0, hence f_0(t) = f(t)/t is a polynomial. By Lemma 3.2 the group G is transitive. The image of F under the substitution A_0 = 0 is

F̃ = f + g Σ_{i=1}^{m} A_i t^i = t (f_0 + g Σ_{i=1}^{m} A_i t^{i−1}).

Lemma 3.2 then gives that f_0 + g Σ_{i=1}^{m} A_i t^{i−1} is separable and irreducible. Hence the stabilizer of the root t = 0 in the Galois group of F̃ acts transitively on the other roots. But since F̃ is separable, its Galois group embeds into G, so the stabilizer of a root of F in G is transitive. Thus G is doubly transitive.
For a rational function ψ(t) ∈ F(t), the first and second Hasse–Schmidt derivatives of ψ are denoted by ψ′ and ψ^[2], respectively, and defined by

ψ(t + u) ≡ ψ(t) + ψ′(t) u + ψ^[2](t) u²  (mod u³).

A trivial observation is that ψ′ is the usual derivative of ψ and, if the characteristic of F is not 2, then ψ^[2] = ψ″/2.

Lemma 3.4. Let ψ(t) ∈ F(t) be a rational function with ψ^[2] nonzero, and let A_1 be a variable. Then ψ′(t) + A_1 and ψ^[2](t) have no common zeros.
Proof. This is obvious since the roots of ψ ′ + A 1 are transcendental over F , while those of ψ [2] are algebraic.
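The definition is easy to exercise symbolically. The sketch below (my own check, not from the paper) reads off ψ′ and ψ^[2] from the expansion of ψ(t + u), confirms the characteristic-0 identity ψ^[2] = ψ′′/2 for a sample polynomial, and exhibits a characteristic-2 flavored example where ψ′ vanishes mod 2 while ψ^[2] does not.

```python
# Hasse-Schmidt derivatives via psi(t + u) = psi(t) + psi'(t) u + psi^[2](t) u^2 mod u^3.
import sympy as sp

t, u = sp.symbols('t u')

def hs(psi):
    # expand psi(t + u) and read off the u and u^2 coefficients
    expansion = sp.expand(psi.subs(t, t + u))
    poly_u = sp.Poly(expansion, u)
    return poly_u.coeff_monomial(u), poly_u.coeff_monomial(u**2)

# Characteristic 0: psi^[2] = psi''/2.
psi = t**5 + 3*t**2
c1, c2 = hs(psi)
assert sp.expand(c1 - sp.diff(psi, t)) == 0
assert sp.expand(c2 - sp.diff(psi, t, 2)/2) == 0

# For psi = t^2 the coefficients are 2t and 1, so reducing mod 2 gives
# psi' = 0 but psi^[2] = 1 != 0 -- the case where the 1/2 formula fails.
d1, d2 = hs(t**2)
assert d1 == 2*t
assert d2 == 1
```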
Lemma 3.5. Let F be an algebraically closed field of characteristic p ≥ 0, let m ≥ 2, let A = (A_1, . . . , A_m), and let f, g ∈ F[t] be relatively prime polynomials. Put ψ = f/g and

Ψ = ψ + Σ_{i=1}^m A_i t^i.

Assume deg f > deg g + m, and further assume that ψ′ is not a constant if p = m = 2. Then the system of equations

Ψ′(ρ_1) = 0,   Ψ′(ρ_2) = 0,   Ψ(ρ_1) = Ψ(ρ_2)    (13)

has no solution with distinct ρ_1, ρ_2 in an algebraic closure Ω of F(A).
Proof. For short we write ρ = (ρ_1, ρ_2). Let

−ϕ(t) = ( ψ + Σ_{i=3}^m A_i t^i )′ = ψ′ + Σ_{i=3}^m iA_i t^{i−1} = (f′g − fg′)/g² + Σ_{i=3}^m iA_i t^{i−1}.

Then Ψ′(t) = 2A_2 t + A_1 − ϕ(t). If p = m = 2, then ϕ = −ψ′, which is not constant by assumption. Let

c(ρ) = ψ(ρ_1) − ψ(ρ_2) + Σ_{i=3}^m (ρ_1^i − ρ_2^i)A_i = Ψ(ρ_1) − Ψ(ρ_2) − ( (ρ_1² − ρ_2²)A_2 + (ρ_1 − ρ_2)A_1 ).

The system of equations (13) defines an algebraic set T ⊆ A² × A^m in the variables ρ_1, ρ_2, A_1, . . . , A_m. Let α : T → A² and β : T → A^m be the projection maps. The system (13) takes the matrix form

M(ρ) · (A_2, A_1)^T = B(ρ) = (ϕ(ρ_1), ϕ(ρ_2), c(ρ))^T,    (14)

where

M(ρ) = [ 2ρ_1, 1 ; 2ρ_2, 1 ; ρ_2² − ρ_1², ρ_2 − ρ_1 ].

For every ρ ∈ U = {ρ | ρ_1 ≠ ρ_2, ϕ(ρ_i) ≠ ∞, i = 1, 2}, the rank of M(ρ) is 2, so the dimension of the fiber α^{−1}(ρ) is at most m − 2. Moreover, for a given ρ ∈ U, (14) is solvable if and only if rank(M|B) = 2, that is, if and only if d(ρ) := det(M|B) = 0; so the solution space (restricting to ρ ∈ U) lies in {d(ρ) = 0}.

It therefore suffices to prove that d(ρ) is a nonzero rational function in the variables ρ = (ρ_1, ρ_2). Indeed, this implies that dim(α(T)) ≤ dim{d(ρ) = 0} = 1, so dim T ≤ 1 + (m − 2) < m. Thus β(T) does not contain the generic point of A^m, which is A = (A_1, . . . , A_m), and hence (13) has no solution with distinct ρ ∈ Ω².
A straightforward calculation gives

d(ρ) = (ρ_1 − ρ_2)( 2c(ρ) + (ρ_1 − ρ_2)(ϕ(ρ_1) + ϕ(ρ_2)) ).

If m ≥ 3, then the coefficient of A_3 in 2c(ρ) + (ρ_1 − ρ_2)(ϕ(ρ_1) + ϕ(ρ_2)) is 2(ρ_1³ − ρ_2³) − 3(ρ_1 − ρ_2)(ρ_1² + ρ_2²) = −(ρ_1 − ρ_2)³, which is nonzero in any characteristic, and we are done.
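The 3 × 3 determinant behind d(ρ) can be verified symbolically. In the sketch below (my own check), phi1, phi2 and c stand for ϕ(ρ_1), ϕ(ρ_2) and c(ρ), and the rows of the augmented matrix encode the two critical-point equations Ψ′(ρ_j) = 0 together with Ψ(ρ_1) = Ψ(ρ_2).

```python
# Symbolic check that det(M|B) factors as (rho1 - rho2)(2c + (rho1 - rho2)(phi1 + phi2)).
import sympy as sp

r1, r2, phi1, phi2, c = sp.symbols('rho1 rho2 phi1 phi2 c')
MB = sp.Matrix([
    [2*r1,           1,        phi1],
    [2*r2,           1,        phi2],
    [r2**2 - r1**2,  r2 - r1,  c   ],
])
d = sp.expand(MB.det())
claimed = sp.expand((r1 - r2)*(2*c + (r1 - r2)*(phi1 + phi2)))
assert sp.simplify(d - claimed) == 0
```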
To this end assume m = 2. If p = 2, then 2c(ρ) = 0; since ϕ is not constant in this case, we have ϕ(ρ_1) + ϕ(ρ_2) ≠ 0 and we are done.
Finally assume m = 2 and p ≠ 2. Then c(ρ) = ψ(ρ_1) − ψ(ρ_2) and ϕ = −ψ′. We may assume without loss of generality that f(0) = 0 (and hence ψ(0) = 0). Since f(t)/t + g(t)(A_2 t + A_1) is separable (Lemma 3.2), we can replace A_1 and A_2 by A_1 + α_1 and A_2 + α_2, respectively, and f by f(t) + g(t)(α_2 t² + α_1 t), for suitably chosen α_1, α_2 ∈ F, to assume that f(t)/t is separable. Since deg f > deg g + m ≥ 2, this implies that f(t) has at least one simple root, say α. Then α is a simple root of ψ = f/g, so ψ′(α) ≠ 0. Let β ≠ α be another root of f, hence of ψ.

If ψ′(β) = 0, then c(α, β) = ψ(α) − ψ(β) = 0, so

d(α, β) = −(α − β)² ψ′(α) ≠ 0

and we are done. If ψ′(β) ≠ 0, then β is a simple root of ψ, hence of f. But deg f > 2, so there must be another root γ of ψ. If d = 0 identically, then we must have

d(α, β)/(−(α − β)²) = 0 = ψ′(α) + ψ′(β),
d(α, γ)/(−(α − γ)²) = 0 = ψ′(α) + ψ′(γ),
d(γ, β)/(−(γ − β)²) = 0 = ψ′(γ) + ψ′(β).

So 2ψ′(α) = 0, a contradiction since p ≠ 2 and ψ′(α) ≠ 0. Hence d ≠ 0, as needed.
Proposition 3.6. Let F be a field of characteristic p ≥ 0, let 1 ≤ m < k, let A = (A 0 , . . . , A m ) an (m + 1)-tuple of variables, and let f, g ∈ F [t] be relatively prime polynomials with deg g + m < k = deg f . Assume
1. 2 ≤ m if deg g > 0, 2. 2 ≤ m if p | k(k − 1), and 3. (f /g) ′ is not constant if p = m = 2.
Then the Galois group of F(A, t) = f(t) + g(t)( Σ_{i=0}^m A_i t^i ) over F(A) is Gal(F, F(A)) = S_k.
Proof. Let F̄ be an algebraic closure of F. Since Gal(F, F̄(A)) ≤ Gal(F, F(A)) ≤ S_k, we may replace, without loss of generality, F by F̄ to assume that F is algebraically closed.
If p ∤ k(k − 1) and deg g = 0, the result follows from [3, Theorem 1] (note that F(A_0, . . . , A_m) = F(A_2, . . . , A_m)(A_0, A_1), hence the result for m = 1 in loc. cit. extends to m > 1).
Assume that 2 ≤ m. Then G = Gal(F , F (A)) ≤ S k is doubly transitive by Lemma 3.3.
Let Ω be an algebraic closure of F(A_1, . . . , A_m) and consider the map Ψ : P¹_Ω → P¹_Ω defined locally by

t ↦ −A_0 := f(t)/g(t) + Σ_{i=1}^m A_i t^i.

The numerator of Ψ′ = (f′g − g′f)/g² + Σ_{i=1}^m iA_i t^{i−1} is f′g − g′f + g²(· · · + 2A_2 t + A_1). If m ≥ 3 or if p ≠ 2, this numerator has positive degree. If p = m = 2, then this numerator is f′g − g′f + g²A_1, so it is not constant by (3). In any case, the numerator of Ψ′, hence Ψ′, has a root, say α ∈ Ω. Then Ψ is ramified at t = α.

Lemma 3.4 says that the orders of ramification are at most 2, so the equation Ψ(t) = Ψ(α) has at most double roots in Ω. Lemma 3.5 says that the critical values are distinct, so Ψ(t) = Ψ(α) has at least k − 1 solutions. Since α is a ramification point, the fiber over Ψ(α) has exactly one double point. Hence the inertia group over Ψ(α) permutes two roots of F(A, t) = g(t)(Ψ(t) + A_0) and fixes the other roots (cf. [1, Proposition 2.6]). In other words, G contains a transposition. Therefore G = S_k [14, Lemma 4.4.3].
Proof of Theorems 2.1 and 2.3
Since Theorem 2.1 is a special case of Theorem 2.3 it suffices to prove the latter. Let k be a positive integer, λ a partition of k, q = p ν a prime power, 1 ≤ m < k, and I = I(f, m) a short interval around f ∈ M(k, q). Assume 2 ≤ m if p | k(k − 1) and assume 3 ≤ m if p = 2 and deg f ′ ≤ 1. Let F be an algebraic closure of F q .
Let F = f + m i=0 A i t i . Then F satisfies the assumptions of Proposition 3.6, so Gal(F , F(A 0 , . . . , A m )) = S k .
Since deg F = deg t F = deg f = k and m < k, by Proposition 3.1, the number N of (a 0 , . . . , a m ) ∈ F m+1 q such that f + m i=0 a i t i has factorization type λ satisfies
N − P (λ)q m+1 ≤ c(k)q m+1/2 ,
where c(k) > 0 is a constant depending only on k (and not on f , q). This finishes the proof since by definition N = π q (I(f, m); λ).
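The estimate can be spot-checked numerically for tiny parameters. The sketch below (my own illustration, not from the paper) takes k = 3, m = 1, q = 11, f = t³ and λ = (3): it counts the irreducible members of the short interval {t³ + a₁t + a₀} and compares with the prediction P(λ)q^{m+1} = q²/3.

```python
# Count irreducible polynomials in I(t^3, 1) over F_11; the expected proportion
# of factorization type lambda = (3) (i.e. irreducible) is P(lambda) = 1/3.
from sympy import Poly, symbols

t = symbols('t')
q = 11
count = sum(
    1
    for a0 in range(q)
    for a1 in range(q)
    if Poly(t**3 + a1*t + a0, t, modulus=q).is_irreducible
)
assert count == 40   # prediction q^2/3 = 40.33...
```

The agreement is exact up to rounding here because depressed cubics inherit the factorization statistics of all monic cubics via the shift t ↦ t − b/3.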
Proof of Theorems 2.2 and 2.4
Since Theorem 2.2 is a special case of Theorem 2.4 it suffices to prove the latter. Let k be a positive integer, λ a partition of k, q = p ν a prime power, 2 ≤ m < k,
D ∈ F q [t] monic with deg D = k − m − 1 and f ∈ F q [t].
We are interested in the number of primes in the arithmetic progression g ≡ f mod D, so we may replace f by f − QD, for some polynomial Q to assume that deg f < deg D. Let F be an algebraic closure of F q .
Let

F = f + D( t^{m+1} + Σ_{i=0}^m A_i t^i ) = f̃ + D Σ_{i=0}^m A_i t^i,   f̃ = f + D · t^{m+1},

where A = (A_0, . . . , A_m) is an (m+1)-tuple of variables. Since deg f̃ = m + 1 + deg D = k > deg D + m, Proposition 3.6 gives that Gal(F, F(A)) = S_k. Since deg F = deg_t F = k, Proposition 3.1 implies that the number N of (a_0, . . . , a_m) ∈ F_q^{m+1} such that f + D(t^{m+1} + Σ_{i=0}^m a_i t^i) has factorization type λ satisfies

|N − P(λ)q^{m+1}| ≤ c_1(k) q^{m+1/2},
where c_1(k) > 0 is a constant depending only on k (and not on f, q). Finally, φ(D) = |D| Π_{P | D} (1 − 1/|P|), where the product runs over the distinct prime polynomials P dividing D; since |P| ≥ q, we have

φ(D) = q^{deg D} (1 + O(1/q)) = q^{k−m−1} + O_k(q^{k−m−2}).
By Theorem 2.2 applied to the interval I(t^k, k − 1),

π_q(k; λ) = P(λ)q^k + O_k(q^{k−1/2}).

Thus

|π_q(k; λ)/φ(D) − P(λ)q^{m+1}| ≤ c_2(k) q^{m+1/2}

and

|N − π_q(k; λ)/φ(D)| ≤ |N − P(λ)q^{m+1}| + |π_q(k; λ)/φ(D) − P(λ)q^{m+1}| ≤ c(k) q^{m+1/2},
where c = c 1 + c 2 . This finishes the proof since by definition N = π q (k; D, f ; λ).
Small m
In this section we show that (3) fails in the cases excluded by Theorem 2.1, except possibly in the case p = m = 2 and deg f′ ≤ 1 (where we do not know whether (3) holds or not).
m = 0
We denote Euler's totient function by φ(k) = |(Z/kZ) * |.
Proposition 6.1. For k > 1 we have
π_q(I(t^k, 0)) = 0 if q ≢ 1 (mod k), and π_q(I(t^k, 0)) = (φ(k)/k)(q − 1) if q ≡ 1 (mod k).
In particular, if k > 2, |π q (I(t k , 0)) − q/k| ≫ q.
Proof. We separate the proof into cases.
Case I. gcd(q, k) > 1.
In this case t^k − a is inseparable for any a ∈ F_q. Since F_q is perfect, this implies that t^k − a is reducible. So π_q(I(t^k, 0)) = 0.
Case II. gcd(q(q − 1), k) = 1.
In this case k ≠ 2 and 1 − q is invertible modulo k. Assume, by contradiction, that there exists a ∈ F_q such that f = t^k − a is irreducible in F_q[t]. Then the Frobenius map ϕ : x ↦ x^q acts transitively on the roots of f. Let α be a root of f; then α^q = ζα, where ζ is a primitive k-th root of unity. We get that the orbit of α under ϕ is
α → α q = ζα → (ζα) q = ζ 1+q α → · · · → ζ 1+q+···+q k−1 α = α.
On the other hand, this orbit equals to the set of roots of f which is {ζ i α | i = 0, . . . , k − 1}. So for every i mod k there is a unique 1 ≤ r ≤ k such that
i ≡ 1 + q + · · · + q r−1 ≡ (1 − q) −1 (1 − q r ) (mod k).
This is a contradiction since there are at most φ(k) < k powers of q mod k, hence #{(1 − q) −1 (1 − q r ) mod k} < k = #{i mod k}.
Case III. gcd(q, k) = 1 and q ≡ 1 mod k.
Let g = gcd(q − 1, k); then l = k/g > 1 and gcd(q(q − 1), l) = 1. Let a ∈ F_q, and let α be a root of f = t^k − a. Then the polynomial f_1 = t^l − α^l ∈ F_q[α^l][t] is reducible by Case II. Since α is a root of f_1 and since α^l is a root of f_2 = t^g − a, we get that

[F_q[α] : F_q] = [F_q[α] : F_q[α^l]] · [F_q[α^l] : F_q] < l · g = k.

In particular f is reducible.
Case IV. q ≡ 1 mod k.
In this case F_q contains a primitive k-th root of unity. By Kummer theory, t^k − a is irreducible over F_q if and only if the order of a(F_q^*)^k in C = F_q^*/(F_q^*)^k is k. Since F_q^* is cyclic of order q − 1, C is also cyclic, of order k, hence there are exactly φ(k) cosets of order k in C. Each coset contains (q − 1)/k elements, so there are exactly (φ(k)/k)(q − 1) values of a for which t^k − a is irreducible.
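These counts are easy to confirm by brute force for small parameters. The sketch below (my own check, not from the paper) verifies the two cases of Proposition 6.1 for k = 4, with q = 13 ≡ 1 (mod 4), where the count should be (φ(4)/4)(13 − 1) = 6, and with q = 7 ≢ 1 (mod 4), where it should vanish.

```python
# Brute-force check of Proposition 6.1 for k = 4 over F_13 and F_7.
from sympy import Poly, symbols

t = symbols('t')

def count_irreducible_binomials(q, k):
    return sum(1 for a in range(q)
               if Poly(t**k - a, t, modulus=q).is_irreducible)

assert count_irreducible_binomials(13, 4) == 6   # phi(4)/4 * (q - 1)
assert count_irreducible_binomials(7, 4) == 0    # q != 1 mod 4
```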
m = 1 and p | k
In this case we study the interval I(t^{p²}, 1) = {t^{p²} − at + b | a, b ∈ F_q} for q = p^{2n}.

Proposition 6.2. For q = p^{2n} we have π_q(I(t^{p²}, 1)) = 0.

In particular, |π_q(I(t^{p²}, 1)) − q²/p²| ≫ q.
Proof. Let F = F_{p²}, and let E be the splitting field of F = t^{p²} − At + B over K = F_q(A, B). Then, by [15, Theorem 2],

G = Gal(F, K) ≅ Gal(E/K) ≅ Gal(E · F̄, F̄(A, B)) ≅ Aff(F),

as permutation groups. Here F̄ is an algebraic closure of F_q and Aff(F) is the group of transformations of the affine line A¹(F) = F of the form

M_{c,d} : x ↦ cx + d,   0 ≠ c, d ∈ F.
Since |G| = p²(p² − 1) and since the group of translations T = {x ↦ x + d} ≅ F_{p²} is of order p², we get that T is a p-Sylow subgroup of G. But T is of exponent p, hence there are no p²-cycles in G.
For every a, b ∈ F_q, the Galois group G_{a,b} of f = t^{p²} − at + b is a cyclic sub-quotient of G, hence contains no p²-cycle. In particular G_{a,b} acts intransitively on the roots of f, hence f is reducible.
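The smallest instance of Proposition 6.2 (p = 2, q = 4) can be confirmed by exhaustive search. The following self-contained sketch (my own check; F_4 arithmetic is hand-rolled since common libraries only expose prime fields) finds no irreducible t⁴ + at + b over F_4.

```python
# Brute-force check of Proposition 6.2 for p = 2, q = 4 = p^2.  F_4 is realized
# as F_2[w]/(w^2 + w + 1), encoded as the integers 0..3 with bit 1 = w; addition is XOR.
def mul(x, y):
    # schoolbook product in F_2[w], then reduce with w^2 = w + 1
    r = 0
    for i in range(2):
        if (y >> i) & 1:
            r ^= x << i
    if r & 4:
        r ^= 0b111   # drop the w^2 term, add w + 1
    return r

def poly_eval(coeffs, x):
    # Horner evaluation over F_4; coeffs[i] is the coefficient of t^i
    acc = 0
    for c in reversed(coeffs):
        acc = mul(acc, x) ^ c
    return acc

def divisible(f, g):
    # does the monic polynomial g divide f over F_4?  (schoolbook remainder)
    f = f[:]
    while len(f) >= len(g) and any(f):
        if f[-1] == 0:
            f.pop()
            continue
        lead, shift = f[-1], len(f) - len(g)
        for i, c in enumerate(g):
            f[shift + i] ^= mul(lead, c)
        f.pop()
    return not any(f)

F4 = range(4)
# the monic irreducible quadratics over F_4 are exactly the rootless ones
quads = [[d, c, 1] for c in F4 for d in F4
         if all(poly_eval([d, c, 1], x) != 0 for x in F4)]
assert len(quads) == 6   # (q^2 - q)/2 of them

count = 0
for a in F4:
    for b in F4:
        f = [b, a, 0, 0, 1]   # t^4 + a*t + b
        if not any(poly_eval(f, x) == 0 for x in F4) \
           and not any(divisible(f, g2) for g2 in quads):
            count += 1
assert count == 0   # no irreducible member, as Proposition 6.2 predicts
```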
6.3. m = 1 and p | k − 1
The details of this case are nearly identical to Section 6.2, with the distinction that the group Aff(F) is replaced by the group of transformations of the projective line, cf. [15, Theorem 2]. Hence we state the result but omit the details.

A. Primes in almost all intervals

A.1. Generalities

Definition 1. Let Q be an infinite set of positive integers, and assume that for all q ∈ Q we have a sequence S(q) = {a_1(q), . . . , a_{n(q)}(q)} of non-negative real numbers. We say that S(q)
1. converges on average to 0 if (1/n(q)) Σ_{i=1}^{n(q)} a_i(q) → 0 as q → ∞.
2. converges pointwise to 0 if for any choice of a sequence of indices i(q) ∈ [1, n(q)] we have lim_{q→∞} a_{i(q)}(q) = 0.
3. converges almost everywhere to 0 if for every q ∈ Q there is a subset J(q) ⊆ [1, n(q)] such that lim_{q→∞} #J(q)/n(q) = 1 and for any choice of indices i(q) ∈ J(q) we have lim_{q→∞} a_{i(q)}(q) = 0.
It is standard that convergence on average implies convergence almost everywhere:
Lemma A.1. In the notation of Definition 1, if S(q) converges on average to 0, then S(q) converges almost everywhere to 0.
Proof. Let ǫ > 0. Since lim_{q→∞} (1/n(q)) Σ_{i=1}^{n(q)} a_i(q) = 0, there exists N_0(ǫ) > 0 such that for any q > N_0(ǫ) we have

(1/n(q)) Σ_{i=1}^{n(q)} a_i(q) < ǫ².    (15)
Denote by
J(q) = {1 ≤ i ≤ n(q) | a i (q) < ǫ}.
Then, by (15), we have
ǫ² > (1/n(q)) Σ_{i=1}^{n(q)} a_i(q) ≥ (1/n(q)) Σ_{i ∈ [1,n(q)] ∖ J(q)} a_i(q) ≥ ((n(q) − #J(q))/n(q)) · ǫ.
Thus |1 − #J(q)/n(q)| < ǫ, so lim q→∞ #J(q)/n(q) = 1.
Let i(q) ∈ J(q). If q > N_0(ǫ), then 0 ≤ a_{i(q)}(q) < ǫ, by the definition of J(q), and hence lim_{q→∞} a_{i(q)}(q) = 0.
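The mechanism is easy to see on a toy sequence (my own illustration, not from the paper): take n(q) = q entries with a_i(q) = 1 for i ≤ √q and 0 otherwise. The averages tend to 0, and accordingly the index set J(q) of ǫ-small entries occupies a proportion tending to 1.

```python
# Toy illustration of Lemma A.1: averages -> 0 forces #J(q)/n(q) -> 1.
import math

def sequence(q):
    # n(q) = q entries: the first isqrt(q) are 1, the rest are 0
    r = math.isqrt(q)
    return [1.0] * r + [0.0] * (q - r)

eps = 0.5
q = 10_000
S = sequence(q)
mean = sum(S) / len(S)
J = [i for i, a in enumerate(S) if a < eps]   # the index set J(q) from the proof
assert mean == 0.01              # the averages tend to 0 ...
assert len(J) / len(S) == 0.99   # ... so #J(q)/n(q) tends to 1
```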
A.2. Number of primes in short intervals
In the terminology of Definition 1, Theorem 2.1 says that

E(q) = ( |π_q(I(f, m))/(q^{m+1}/k) − 1| : f ∈ M(k, q) )
converges pointwise to 0 (under the restrictions there on m). In what follows we show how to derive an almost everywhere convergence, including small m, from a result of Keating and Rudnick [9].
Definition 2. Let f ∈ F q [t].
The von Mangoldt function Λ(f) is defined by

Λ(f) = deg(P), if f = cP^j with P a prime polynomial, c ∈ F_q^*, and j ≥ 1; and Λ(f) = 0 otherwise.
If f ∈ M(k, q) and 1 ≤ m < k, we let
ν(f; m) = Σ_{g ∈ I(f,m), g(0) ≠ 0} Λ(g).
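Both definitions are quick to spot-check numerically. The sketch below (my own examples, not from the paper) computes Λ over F_5 with sympy's finite-field factorization: a prime square, a split quadratic, and an irreducible quadratic.

```python
# Tiny checks of the polynomial von Mangoldt function over F_5.
from sympy import symbols, factor_list, degree

t = symbols('t')

def Lambda(f, q):
    _, factors = factor_list(f, t, modulus=q)
    primes = [p for p, _ in factors if degree(p, t) > 0]
    return degree(primes[0], t) if len(primes) == 1 else 0

assert Lambda(t**2, 5) == 1        # t^2 = (t)^2, a prime power: Lambda = deg t = 1
assert Lambda(t**2 + 1, 5) == 0    # (t - 2)(t + 2) mod 5: two distinct primes
assert Lambda(t**2 + 2, 5) == 2    # irreducible over F_5
```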
We denote the mean value and variance of ν(·; m) by

⟨ν(·; m)⟩ = (1/q^k) Σ_{f ∈ M(k,q)} ν(f; m),
Var ν(·; m) = (1/q^k) Σ_{f ∈ M(k,q)} |ν(f; m) − ⟨ν(·; m)⟩|²,

respectively. The last corollary says that ν(·; m) ∼ q^{m+1} (as long as 1 ≤ m < k − 3) almost always. It remains to explain how to deduce from this a similar result for the prime counting function.
For a short interval I = I(f, m) with f ∈ M(k, q) and for d | k we let
I 1/d = {g ∈ M(k/d, q) | g d ∈ I}.
Lemma A.4. Let f ∈ M(k, q), 1 ≤ m < k, I = I(f, m) and d | k, d > 1. Then
#(I 1/d ) ≤ q m .
Proof. Let J = I^{1/d}. If J = ∅, we are done. Otherwise there is a monic g ∈ M(k/d, q) such that g^d ∈ I. Then I = I(g^d, m), so without loss of generality we may assume that g^d = f. If g̃ ∈ J, then deg(g̃^d − f) ≤ m. Moreover g̃ is monic, so g̃ = g + h for some h with deg h < k/d = deg g. It suffices to show that deg h < m, since there are only q^m such polynomials.
If d = p^a, where p = char(F_q), then I ∋ g̃^d = g^d + h^d = f + h^d. So deg h ≤ m/d < m and we are done.
Assume d = p^a D with D > 1 and gcd(p, D) = 1. Write g_1 = g^{p^a} and h_1 = h^{p^a}. Then deg h_1 < deg g_1, g_1^D = f, and

g̃^d − f = (g + h)^d − f = (g_1 + h_1)^D − f = g_1^D + Σ_{i=1}^D (D choose i) g_1^{D−i} h_1^i − f = D g_1^{D−1} h_1 + (D(D−1)/2) g_1^{D−2} h_1² + · · · .

Since p ∤ D and deg h_1 < deg g_1, we get that

m ≥ deg(g̃^d − f) = deg(g_1^{D−1} h_1) = k(D − 1)/D + deg h_1 > deg h_1,
as needed.
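The rigidity behind this bound shows up already in tiny examples. The brute-force sketch below (my own check, with q = 3, k = 6, d = 2, m = 2, f = t⁶) finds that h = t³ is the only monic cubic whose square lies in the interval, well within the bound q^m = 9.

```python
# Brute-force check of Lemma A.4 for q = 3, k = 6, d = 2, m = 2, f = t^6:
# count monic h of degree 3 over F_3 with deg(h^2 - t^6) <= 2.
from sympy import Poly, symbols

t = symbols('t')
q, m = 3, 2
count = 0
for b in range(q):
    for c in range(q):
        for e in range(q):
            h = Poly(t**3 + b*t**2 + c*t + e, t, modulus=q)
            rem = h*h - Poly(t**6, t, modulus=q)
            if rem.degree() <= m:   # the zero polynomial has degree -oo
                count += 1
assert count == 1          # only h = t^3 itself
assert count <= q**m       # the bound of Lemma A.4
```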
Finally we prove (3) almost everywhere.
Corollary A.5. Let 1 ≤ m < k − 3 be integers and for each prime power q let

E(q) = ( |π_q(I(f, m))/(q^{m+1}/k) − 1| : f ∈ M(k, q) ).
Then E(q) converges almost everywhere to 0.
Proof. For f ∈ M(k, q) and for d | k let Π_d(f) ⊆ I(f, m)^{1/d} be the subset of monic prime polynomials (each of degree k/d), and let ǫ = 1 if t^k ∈ I(f, m) and ǫ = 0 otherwise. Then

ν(f; m) = Σ_{d|k} (k/d) π_q(I(f, m)^{1/d}) − ǫ.

By Lemma A.4 we have π_q(I(f, m)^{1/d}) ≤ #(I(f, m)^{1/d}) ≤ q^m for d > 1. So ν(f; m) = k π_q(I(f, m)) + O(c(k)q^m), where c(k) = σ(k), the sum of the divisors of k. Hence

π_q(I(f, m))/(q^{m+1}/k) − 1 = ν(f; m)/q^{m+1} − 1 + O_k(q^{−1}) = (ν(f; m) − ⟨ν(·; m)⟩)/q^{m+1} − 1/q^k + O_k(q^{−1}).

Thus Corollary A.3 gives the convergence of E(q) to 0 almost everywhere.
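The prime-power correction is genuinely tiny. In the sketch below (my own example, not from the paper) with q = 5, k = 4, m = 1 and f = t⁴, no proper prime power with nonzero constant term lands in the interval, so ν(f; m) equals k · π_q(I(f, m)) exactly.

```python
# Compare nu(f;m) = sum of Lambda(g) over I(t^4, 1), g(0) != 0, with
# k * (number of primes in the interval) over F_5.
from sympy import Poly, symbols, factor_list, degree

t = symbols('t')
q, k = 5, 4

def Lam(expr):
    _, factors = factor_list(expr, t, modulus=q)
    primes = [p for p, _ in factors if degree(p, t) > 0]
    return int(degree(primes[0], t)) if len(primes) == 1 else 0

interval = [t**4 + a1*t + a0 for a0 in range(q) for a1 in range(q)]
nu = sum(Lam(g) for g in interval if g.subs(t, 0) % q != 0)
pi = sum(1 for g in interval if Poly(g, t, modulus=q).is_irreducible)
assert nu == k * pi   # no P^d (d > 1) with nonzero constant term lies in I here
```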
Conjecture 1.1. If Φ(x) > x^ǫ then (1) holds.

But even for Φ(x) = √x Granville says [5, p. 73]: "we know of no approach to prove that there are primes in all intervals [x, x + √x]."
For p = (a_0, . . . , a_m) ∈ V(F_q) ⊆ F_q^{m+1} we have p^* ∈ Θ if and only if the orbit type of p^* is λ (in the sense of [2, p. 859]). Thus λ_{F(a_0,...,a_m,t)} = λ if and only if p^* ∈ Θ ([2, Lemma 2.1]). Let X = {p = (a_0, . . . , a_m) ∈ F_q^{m+1} | λ_{F(a_0,...,a_m,t)} = λ and D(a_0, . . . , a_m) ≠ 0}.
Proposition 6.3. For q = p^{2n} we have
Theorem A.2 (Keating–Rudnick). Let 1 ≤ m < k be integers. Then

⟨ν(·; m)⟩ = q^{m+1}( 1 − 1/q^k ).    (16)

If in addition m < k − 3, then

lim_{q→∞} (1/q^{m+1}) Var ν(·; m) = k − m − 2.    (17)

Proof. See [9, Lemma 4.3] for (16) and Theorem 2.1 in loc. cit. for (17).

Corollary A.3. Let 1 ≤ m < k − 3 and for each prime power q let V(q) = ( |ν(f; m) − ⟨ν(·; m)⟩|/q^{m+1} : f ∈ M(k, q) ). Then V(q) converges almost everywhere to 0.

Proof. By (17) and the Cauchy–Schwarz inequality, the mean of V(q) is at most (Var ν(·; m))^{1/2}/q^{m+1} → 0 as q → ∞. So V(q) converges to 0 on average. By Lemma A.1, V(q) converges almost everywhere to 0.
Conjecture 1.2. For every δ > 0, (2) holds in the range d^{1+δ} < x.

Concerning results on this conjecture Granville says [5, p. 69]:
AcknowledgmentsWe thank Zeev Rudnick for helpful remarks on earlier drafts of this paper and for the suggestions to consider arithmetic progressions and different factorization types.The first two authors were supported by a Grant from the GIF, the German-Israeli Foundation for Scientific Research and Development. The last author was supported by the Göran Gustafsson Foundation (KVA).
[1] L. Bary-Soroker. Dirichlet's theorem for polynomial rings. Proc. Amer. Math. Soc., 137(1):73–83, 2009.
[2] L. Bary-Soroker. Irreducible values of polynomials. Adv. Math., 229(2):854–874, 2012.
[3] S. D. Cohen. The Galois group of a polynomial with two indeterminate coefficients. Pacific J. Math., 90(1):63–76, 1980.
[4] A. Granville. Unexpected irregularities in the distribution of prime numbers. Proceedings of the International Congress of Mathematicians, Vol. 1, 2 (Zürich, 1994), 388–399, Birkhäuser, Basel, 1995.
[5] A. Granville. Different approaches to the distribution of primes. Milan J. Math., 78(1):65–84, 2010.
[6] D. R. Heath-Brown. The number of primes in a short interval. J. Reine Angew. Math., 389:22–63, 1988.
[7] D. R. Heath-Brown and D. A. Goldston. A note on the differences between consecutive primes. Math. Ann., 266(3):317–320.
[8] M. N. Huxley. On the difference between consecutive primes. Invent. Math., 15:164–170, 1972.
[9] J. P. Keating and Z. Rudnick. The variance of the number of prime polynomials in short intervals and in residue classes. Int. Math. Res. Not. IMRN, 30 pp., April 2012.
[10] S. Lang and A. Weil. Number of points of varieties in finite fields. Amer. J. Math., 76:819–827, 1954.
[11] H. L. Montgomery and R. C. Vaughan. Multiplicative Number Theory I. Classical Theory. Cambridge Studies in Advanced Mathematics, 97. Cambridge University Press, Cambridge, 2007.
[12] R. A. Rankin. The difference between consecutive prime numbers. J. London Math. Soc., 13:242–247, 1938.
[13] A. Selberg. On the normal density of primes in small intervals, and the difference between consecutive primes. Arch. Math. Naturvid., 47(6):87–105, 1943.
[14] J.-P. Serre. Topics in Galois Theory (Research Notes in Mathematics). A. K. Peters, Ltd., 2nd edition, 2008.
[15] K. Uchida. Galois group of an equation X^n − aX + b = 0. Tohoku Math. J. (2), 22(4):670–678, 1970.
doi: 10.1088/0253-6102/48/3/025
arXiv:hep-ph/0510146v1 12 Oct 2005 Study on production of exotic 0 + meson D * sJ (2317) in decays of ψ(4415)
October 30, 2018
Xin-Heng Guo
Institute of Low Energy Nuclear Physics
Beijing Normal University
100875BeijingChina
Hong-Wei Ke
Department of Physics
Nankai University
300071TianjinChina
Xue-Qian Li
Department of Physics
Nankai University
300071TianjinChina
Xiang Liu
Department of Physics
Nankai University
300071TianjinChina
Shu-Min Zhao
Department of Physics
Nankai University
300071TianjinChina
PACS numbers: 13.25.Gv, 12.39.Hg, 12.38.Lg, 14.40.Lb
The newly observed D * sJ family containing D * sJ (2317), D sJ (2460) and D sJ (2632) attracts great interests. Determining their structure may be important tasks for both theorists and experimentalists. In this work we use the heavy quark effective theory (HQET) and a non-relativistic model to evaluate the production rate of D * sJ (2317) from the decays of ψ(4415), and we find that it is sizable and may be observed at BES III and CLEO, if it is a p-wave excited state of D s (1968). Unfortunately, the other two members of the family cannot be observed through decays of charmonia, because of the constraints from the final state phase space.
Introduction
The recently observed exotic mesons D * sJ (2317), D sJ (2460) and D sJ (2632) [1] seem to constitute a new family of mesons which are composed of charm and strange flavors. The mesons possess spin-parity structures of 0 + , 1 + and 0 + respectively. This new discovery draws great interests of both theorists and experimentalists of high energy physics. Some authors [2] suppose that D * sJ (2317) and D sJ (2460) are the chiral partners of the regular D s and D * s , while D sJ (2632) may be a radially excited state of D * sJ (2317). They may also be considered to be p-wave excited states of D s [3]. Alternatively, many authors suggest that they can possiblly be four-quark states or molecular states [4,5]. The most peculiar phenomenon is that in some experiments the three resonances are observed with clear signals [1], whereas not by other prestigious experimental groups. One would ask if the observed resonances actually exist or the background was misidentified as a signal. It is noted that similar situations exist for pentaquarks [6]. The goal of the research is to help designing experiments which can help clarifying the mist.
The key point is to experimentally explore the resonances and find a convincing explanation of why they are observed in certain experiments but not in others. Before that, however, one needs to design experiments to confirm the existence of D*_sJ(2317), D_sJ(2460) and D_sJ(2632) and to determine their hadronic structures. As mentioned above, there are several different postulates; measurements may tell which one is more realistic.
Because of the constraint of the final-state phase space, observing a final state which involves any of the exotic states can only be realized via decays of higher excited states in the ψ family. From the data booklet [7], we can see that the lowest excited state which can offer sufficient energy to produce D*_sJ(2317) + D̄_s(1968) is ψ(4415), but even that is not enough for D*_sJ(2317) + D̄*_sJ(2317). However, since D*_sJ(2317) is a 0⁺ meson and D_s(1968) is a 0⁻ meson, a careful analysis of the total angular momentum and parity indicates that the decay ψ(4415) → D*_sJ(2317) + D̄_s(1968) is forbidden. Moreover, considering only the central value of the ψ(4415) mass, the phase space is not sufficient for D*_sJ(2317) + D̄_s(1968) + π or D*_sJ(2317) + D̄*_s(2112), which could be produced via the pure strong interaction, and the only possible mode is the radiative decay ψ(4415) → D*_sJ(2317) + D̄_s(1968) + γ. That is a decay with a three-body final state and an electromagnetic process, where a p-wave is necessary to conserve the total angular momentum and parity. This observation tells us that the corresponding branching ratio must be very suppressed, so it is a rare decay. Recently, Barnes et al. [8] suggested observing D*_sJ(2317) via the process
ψ(4415) → D*_sJ(2317) + D̄*_s(2112).
The advantage is that this is a strong decay with a two-body final state, so the amplitude may be large; but meanwhile, by the central values, m_{D*_sJ(2317)} + m_{D̄*_s(2112)} > 4415 MeV, so this reaction can only occur via the threshold effect and suffers a corresponding suppression. If its rate is sizable (we will estimate it later in this work), the decays ψ(4415) → D*_sJ(2317) + D̄_s(1968) + π and ψ(4415) → D*_sJ(2317) + D̄_s(1968) + γ can also be realized via the secondary decays D̄*_s(2112) → D̄_s(1968) + π and D̄*_s(2112) → D̄_s(1968) + γ, and these dominate over the direct three-body decay modes ψ(4415) → D*_sJ(2317) + D̄_s(1968) + π and ψ(4415) → D*_sJ(2317) + D̄_s(1968) + γ.
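The threshold bookkeeping above is simple arithmetic on the central masses. The check below is my own (not from the paper), with masses in MeV read off the particle labels used in the text.

```python
# Central masses in MeV, as encoded in the particle labels used in the text.
m_psi  = 4415.0   # psi(4415)
m_DsJ  = 2317.0   # D*_sJ(2317)
m_Dsst = 2112.0   # D*_s(2112)
m_Ds   = 1968.0   # D_s(1968)

# Two-body strong mode: closed at central values, so it proceeds only via the
# threshold effect (the finite psi(4415) width).
assert m_DsJ + m_Dsst > m_psi   # 4429 > 4415

# Pair production of D*_sJ(2317) with its antiparticle: far out of reach.
assert 2 * m_DsJ > m_psi        # 4634 > 4415

# Direct radiative three-body mode: open at central values.
assert m_DsJ + m_Ds < m_psi     # 4285 < 4415
```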
The picture is that the charmonium ψ(4415) dissolves into a cc̄ pair with both c and c̄ free and on mass shell, and the soft gluons emitted from the cc̄ can excite the physical vacuum to create an ss̄ pair. The ss̄ pair creation is quantitatively described by the quark-pair-creation (QPC) model [9,10]. Then the s and s̄ join the corresponding c̄ and c to compose charmed-strange mesons. Indeed, the creation process is fully governed by non-perturbative QCD effects, so the rate is not reliably calculable so far and can only be estimated in terms of models. In this work, we use the QPC model [9,10] to evaluate the rates of ψ(4415) → D*_sJ(2317) + D̄*_s(2112) and of the direct decay ψ(4415) → D*_sJ(2317) + D̄_s(1968) + γ, where a photon is emitted during the process.
In this work, we consider the transition ψ(4415) → D*_sJ(2317) + D̄*_s(2112) and the subsequent observable modes ψ(4415) → D*_sJ(2317) + D̄*_s(2112) → D*_sJ(2317) + D̄_s(1968) + γ (or D*_sJ(2317) + D̄_s(1968) + π). We also calculate the rate of the direct radiative decay ψ(4415) → D*_sJ(2317) + D̄_s(1968) + γ, which does not proceed via the resonance D̄*_s(2112).
The key point is how to evaluate the hadronic matrix elements, and here we must adopt suitable models to do the job. Since the mesons involved are all heavy, one can expect that the heavy quark effective theory (HQET) applies for evaluating the hadronic matrix elements. For completeness, we keep the 1/m_c corrections in the formulation; however, such corrections are practically negligible in the concerned case, so we do not include them in the numerical calculations. As a check, we employ a non-relativistic model to re-evaluate the hadronic matrix elements and compare the results obtained in the two approaches.
To obtain the concerned parameters and test the applicability of the model, we calculate the branching ratios of ψ(4040) → D^(*) + D̄^(*) and D_s + D̄_s. By fitting data, we determine the vacuum production rate of the quark pairs in HQET. Moreover, when using the non-relativistic model, we also need to determine the concerned parameters in the wavefunctions. More concretely, there are several decay channels of ψ(4040) with c and c̄ in the final states (D^{0(*)}D̄^{0(*)}, D^{±(*)}D̄^{∓(*)}), and their branching ratios are experimentally measured. Actually, in HQET the only free parameter is the rate of quark-pair creation from vacuum, i.e. γ_q, so one mode is enough to fix it. We can check the obtained model and parameter by applying them to evaluate other modes which have also been experimentally measured; our numerical results respect the pattern determined by the experiments. For ψ(4415) more channels are available, such as D_s D̄_s (or D_s⁺ D_s⁻). We may naively consider that the production of D_s D̄_s in ψ(4415) → D_s + D̄_s is somehow related to ψ(4040) → D^(*) + D̄^(*); then all the parameters obtained from decays of ψ(4040) can be applied to study decays of ψ(4415), assuming the parameters are not very sensitive to the energy scale.
In both HQET and the non-relativistic model, we derive the formulation for the branching ratio of ψ(4415) → D*_sJ(2317) + D̄*_s(2112) and obtain the final numerical results. We also formulate the direct process ψ(4415) → D*_sJ(2317) + D̄_s(1968) + γ, which is the only channel allowed by the phase space if threshold effects are neglected. Even though these results with the aforementioned approximations cannot be very accurate, one expects that the order of magnitude of the calculated result is right.
If the exotic state D*_sJ(2317) is of the four-quark structure as suggested [4,5], at least three pairs of quarks must be created from vacuum in the production process, and the final state would involve more quarks and anti-quarks; the integration over the final-state phase space would then greatly suppress the rate. By our rough numerical evaluation, a suppression of at least four orders of magnitude would result if the exotic meson D*_sJ(2317) is a four-quark state. Thus, by measuring the branching ratio of ψ(4415) → D*_sJ(2317) + D̄*_s(2112), we may judge (1) whether the exotic meson D*_sJ(2317) indeed exists, and (2) what quark structure it possesses.
This work is organized as follows: after the introduction, in Sect. 2 we formulate the decay rates of ψ(4415) → D*_sJ(2317) + D̄*_s(2112) and of the direct process ψ(4415) → D*_sJ(2317) + D̄_s(1968) + γ. In Sect. 3, we present our numerical results along with all the input parameters. Finally, Sect. 4 is devoted to discussion and conclusion. Some detailed expressions are collected in the Appendix.
Formulation
The QPC model, in which a pair of quarks with quantum numbers J^{PC} = 0^{++} is created from vacuum, was first proposed by Micu [9] in 1969. In the 1970s the QPC model was developed by Le Yaouanc et al. [10,11,12,13] and applied extensively to study hadron decays. Recently there have been some works [14,15] studying the QPC model and its applications [16]. In the QPC model, the interaction which represents the mechanism of quark-pair creation from vacuum can be written as [15]
S_vac = g_Iq ∫ d⁴x ψ̄_q ψ_q,   with   L = g_Iq ψ̄_q ψ_q,   (1)
where g_Iq = 2 m_q γ_q. Here γ_q is a dimensionless constant denoting the strength of quark-pair creation from the vacuum, which can only be obtained by fitting data, and m_q (q = u, d, s) are the light-quark masses. In the non-relativistic approximation [13], the interaction Hamiltonian (1) can be expressed as
H vac → H non vac = i,j dp q dpq[3γ q δ 3 (p q + pq) m 1, 1; m, −m|0, 0 ×Y m 1 (p q − pq)(χ −m 1 ϕ 0 ω 0 ) i,j ]b + i (p q , s)d + j (pq, s ′ ),(2)
where i and j are SU(3)-color indices of the created quark and antiquark; s and s′ are spin polarizations; ϕ₀ = (uū + dd̄ + ss̄)/√3 and (ω₀)_ij = δ_ij are the flavor and color singlets respectively; χ^{−m}_1 is a spin-triplet state, and Y^m_1 is a solid harmonic polynomial corresponding to the p-wave quark pair.
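As an illustrative check (not part of the original derivation) of the solid harmonics Y^m_1 appearing in Eq. (2), the following sketch verifies the rotational-invariance identity Σ_m |Y^m_1(p)|² = (3/4π)|p|²; the Cartesian phase conventions used below are our assumption.

```python
import math

# Solid harmonics Y^m_1(p) = |p| Y^m_1(p_hat): degree-1 harmonic
# polynomials in the Cartesian components of p (phase convention assumed).
def solid_Y1(m, p):
    x, y, z = p
    if m == 0:
        return complex(math.sqrt(3.0 / (4.0 * math.pi)) * z)
    if m == 1:
        return -math.sqrt(3.0 / (8.0 * math.pi)) * complex(x, y)
    if m == -1:
        return math.sqrt(3.0 / (8.0 * math.pi)) * complex(x, -y)
    raise ValueError("m must be -1, 0 or 1")

# Addition-theorem check: sum_m |Y^m_1(p)|^2 = (3/4pi) |p|^2, so the
# p-wave vertex of Eq. (2) is rotationally invariant, as it must be.
p = (0.3, -0.4, 1.2)
total = sum(abs(solid_Y1(m, p)) ** 2 for m in (-1, 0, 1))
p2 = sum(c * c for c in p)
print(total, 3.0 / (4.0 * math.pi) * p2)
```

The same functions could be reused to evaluate the p-wave factor in the vacuum-creation vertex numerically.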
The transition amplitude of ψ(4415) in the QPC model.
In this work we study the strong decay ψ(4415) → D̄*_s(2112) + D*_sJ(2317) and the direct radiative decay ψ(4415) → γ + D̄_s(1968) + D*_sJ(2317). In the QPC model, during these transitions the charm quark and antiquark from ψ(4415) combine with the s s̄ pair created from the vacuum to form the final-state particles. The Feynman diagrams of these transitions are depicted in Fig. 1.
The transition matrix element of ψ(4415) →D * s (2112) + D * sJ (2317) is T strong = D * s (2112)D * sJ (2317)|H vac (x)|ψ(4415) .(3)
For the direct radiative decay ψ(4415) → γ +D s (1968) + D * sJ (2317), the transition matrix element reads as
T dir = D s (1968)D * sJ (2317)γ|T dxdy[L vac (x)L em (y)]|ψ(4415) ,(4)
where L_em(y) is the electromagnetic interaction Hamiltonian and has the following form
L_em(y) = ± (2e/3) ∫ d⁴x Ψ̄ γ_µ Ψ A^µ(y).   (5)

Figure 1: (a) is the Feynman diagram of the strong decay ψ(4415) → D̄*_s(2112) + D*_sJ(2317); (b)-(f) are the Feynman diagrams for the direct decay ψ(4415) → γ + D̄_s(1968) + D*_sJ(2317).
where the signs ± correspond to the charges of c and c̄ respectively. In the weak-binding approximation, |ψ(4415)⟩ can be expressed as
|ψ(4415) → N Ψ(0)cǫ /c|0 ,(6)
where Ψ(0) is the wave function at the origin, ǫ denotes the polarization vector, and N is the normalization constant.
It is also noted that for the decays ψ(4040) → D^(*) D̄^(*), the Feynman diagrams are similar to that in Fig. 1 (a).
Evaluation of the hadronic matrix elements in HQET.
(i) The strong decay ψ(4415) →D * s (2112) + D * sJ (2317). The diagram in Fig. 1 (a) involves the q-meson-Q vertices. In refs. [17,18], the effective Lagrangian for these vertices has been constructed based on the heavy quark symmetry and chiral symmetry
L HL =h v (iv · ∂)h v − [gχ(H +S)h v + H.c.] + g ′ [T r(HH) + T r(SS)],(7)
where the first term is the kinetic term of the heavy quark, with v̸ h_v = h_v; H is the super-field corresponding to the negative-parity doublet (0⁻, 1⁻), with explicit matrix representation H = (1 + v̸)/2 (P*_µ γ^µ − P γ₅); P and P*_µ are the annihilation operators of the pseudoscalar and vector mesons, normalized as ⟨0|P|M(0⁻)⟩ = √M_H and ⟨0|P*_µ|M(1⁻)⟩ = √M_H ǫ_µ; S is the super-field related to (0⁺, 1⁺), S = (1 + v̸)/2 [P*′_{1µ} γ^µ γ₅ − P₀]; and χ = ξ q, where q = u, d, s is the light-quark field and ξ = e^{iπ/f} (here we take only the leading order, ξ ≈ 1).
The effective Lagrangian in Eq. (7) contains only the leading order of the meson-quark coupling. We may also include the 1/m_Q corrections. The heavy-light quark interaction Lagrangian given in [18] is
L HL = Q(i∂ / − m Q )Q − 2g 2 Λ 2 Qγ µ λ A 2 Qψγ µ λ A 2 ψ,(8)
where Q = (b, c), ψ = (u, d, s), and m_Q is the heavy-quark mass. The 1/m_Q corrections arise from two sources. The first comes from the quark wave function [19],

Q(x) = e^{−i m_Q v·x} (1 + i D̸_⊥/(2m_Q) + ···) h_v(x),   D^µ_⊥ = D^µ − v^µ v·D,   (9)

so this correction is obtained by replacing h_v with (1 + i D̸_⊥/2m_Q) h_v in (7). Secondly, the super-fields H and S in (7) should also receive 1/m_Q corrections. Falk et al. [20] presented the changes as

H → H + (1/2m_Q) [γ^µ, i D_µ H],   S → S + (1/2m_Q) {γ^µ, i D_µ S}.
Then we can include the 1/m_Q corrections in (7). Now let us write down the transition amplitude for the decay ψ(4415) → D̄*_s(2112) + D*_sJ(2317):
M(ψ(4415) →D * s (2112) + D * sJ (2317)) = Ψ(0) 6 √ M A Tr g ǫ / D * s M C i p / C − p / A /2 − m s g Is i p / A /2 − p / B − m s g M B ×(M A + p / A )ǫ / A = 2Ψ(0)g 2 g Is √ M B M C 3 √ M A M A m 2 q − 1 4 M 3 A + M A (p A · p C ) − M A M 2 C (ǫ D * s · ǫ A ) × 1 [(p A /2 − p B ) 2 − m 2 s ][(p A /2 − p C ) 2 − m 2 s ] ,(10)
where p_A, p_B and p_C represent the four-momenta of ψ(4415), D*_sJ(2317) and D*_s(2112); M_A and ǫ_A are the mass and polarization vector of ψ(4415); M_B and M_C are the masses of the two produced mesons; and g is the coupling constant of the Q-meson-q vertex, given in the literature [17]. It is noted that with the central values

m_{D*_sJ(2317)} + m_{D*_s(2112)} > 4415 MeV,

so the process can only occur through threshold effects. The resonance ψ(4415) has a total width Γ_A. To account for the mass distribution we adopt the Gaussian form suggested by the data group [7], set the lower and upper bounds of the final-phase-space integration as M_A − δ < M < M_A + δ, and let the delta function guarantee energy-momentum conservation.
Finally we obtain the width
Γ(ψ(4415) →D * s (2112) + D * sJ (2317)) = 2 (1 − β) √ 2π Γ A M A +δ M A −δ 1 6M d 3 p B d 3 p C (2π) 3 2E B (2π) 3 2E C |M(ψ(4415) →D * s D * sJ (2317))| 2 ×(2π) 4 δ 4 (M − p B − p C ) exp − (M − M A ) 2 2(Γ A /2) 2 dM,(11)
where

|p_B| = √{[M² − (M_B + M_C)²][M² − (M_B − M_C)²]} / (2M),   (12)

E_B = √(M_B² + p_B²),  E_C = √(M_C² + p_B²);  δ = 1.64 Γ_A/2,  β = 10% [7].   (13)

(ii) The direct radiative decay ψ(4415) → γ + D̄_s(1968) + D*_sJ(2317). This process is much more complicated than the strong decay discussed in (i). In the QPC model a pair of s s̄ quarks is created from the vacuum, the underlying mechanism being soft-gluon exchanges which excite the vacuum sea. The momentum of the light-quark pair created from the vacuum is small, so the photon can hardly be emitted from a light quark. We can therefore ignore the contributions of Fig. 1 (d) and (f) to the direct ψ(4415) → γ + D̄_s(1968) + D*_sJ(2317) process. The amplitude for the radiative decay includes several pieces. For Fig. 1 (b),

M^(b) = Q_c (1/m_c) ∫ dq/(2π)^{3/2} ψ_{s₁s₂}(q) [v̄(p₂, s₂) O^(b) u(p₁, s₁)],   (16)

O^(b) = γ₅ g M_C [i/(p̸_C − p̸_A/2 − m_s)] · g_Is [i/(−p̸_B + p̸_A/2 − k̸ − m_s)] g M_B × [i/(p̸_A/2 − k̸ − m_c)] ǫ̸_k.
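The kinematics of Eqs. (12)-(13) can be checked numerically with the short sketch below. The width Γ_A ≈ 62 MeV for ψ(4415) is an assumed illustrative input (the text takes Γ_A from [7]), and the factor 1.64 is tied to β = 10% through the Gaussian error function.

```python
import math

# Two-body momentum of Eq. (12) and the Gaussian window of Eq. (13).
# Masses in GeV; Gamma_A ~ 62 MeV for psi(4415) is an assumed input.
M_A, Gamma_A = 4.415, 0.062
M_B, M_C = 2.317, 2.112   # D*_sJ(2317), D*_s(2112)

def p_two_body(M, mb, mc):
    """|p_B| from Eq. (12); returns None below threshold."""
    arg = (M * M - (mb + mc) ** 2) * (M * M - (mb - mc) ** 2)
    return math.sqrt(arg) / (2.0 * M) if arg >= 0.0 else None

# At the central mass the channel is closed (threshold effect only):
print(p_two_body(M_A, M_B, M_C))            # None: M_A < M_B + M_C

# ...but inside the window M_A - delta < M < M_A + delta part of the
# Gaussian tail lies above threshold, so the smeared width is nonzero:
delta = 1.64 * Gamma_A / 2.0
print(p_two_body(M_A + delta, M_B, M_C) is not None)

# delta = 1.64 * (Gamma_A/2) covers 1 - beta of a Gaussian of width
# Gamma_A/2, consistent with beta = 10% in Eq. (13):
coverage = math.erf(1.64 / math.sqrt(2.0))
print(round(coverage, 3))
```

This makes explicit why the decay proceeds only through the threshold effect: |p_B| is real only in the upper tail of the resonance mass distribution.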
for Fig. 1 (c), the transition amplitude reads as
M (c) = −Q c 1 m c dq (2π) 3/2 ψ s 1 s 2 (q)[v(p 2 , s 2 )O (c) u(p 1 , s 1 )],(17)O (c) = ǫ / k i k / − p / A /2 − m c γ 5 g M C i p / A /2 − p / B − m s g Is × i p / A /2 − p / B − m s g M B
where p 1 , p 2 and s 1 , s 2 are the momenta and spin projections of the charm quark and anti-charm quark, and the following relations hold
p 1 + p 2 = p A = (M A , 0), p 1 − p 2 = 2q = (0, 2q), s 1 ,s 2 dq|ψ s 1 s 2 (q)| 2 = 1.
Using the method of [21], we have
1 m c dq (2π) 3/2 ψ s 1 s 2 (q)v(p 2 , s 2 )O (i) u(p 1 , s 1 ) = 1 m c dq (2π) 3/2 ψ s 1 s 2 (q)T r[(M A O (i) 0 + {O (i) 0 , q /} + + M A q ·Ô (i) ) 1 + γ 0 2 √ 2 (−ǫ / A )],(18)
where O^(i)_0 = O^(i)|_{q≡0} and Ô^(i) ≡ ∂O^(i)/∂q_µ|_{q=0}, with i denoting (b) or (c). Dissociation of the charmonium into c c̄ can be well described by the non-relativistic model, where the wave function at the origin Ψ(0) accounts for the binding effect. Eqs. (16) and (17) can then be further expressed as
M (b) = Ψ(0)Q c 6 √ M A T r γ 5 g M C i p / C − p / A /2 − m s · g Is · i −p / B + p / A /2 − k / − m s ×g M B i p / A /2 − k / − m c ǫ / k (M A + p / A )ǫ / A ,(19)M (c) = − Ψ(0)Q c 6 √ M A T r ǫ / k i k / − p / A /2 − m c γ 5 g M C i p / A /2 − p / B − m s ×g Is i p / A /2 − p / B − m s g M B (M A + p / A )ǫ / A ,(20)
where p A , p B , p C and k correspond to the four momenta of ψ(4415), D * sJ (2317), D s (1968) and photon respectively; ǫ k is the polarization vector of the emitted photon.
The decay width for the radiative decay ψ(4415) → γ + D̄_s(1968) + D*_sJ(2317) is expressed as
Γ = 1 6M A i d 3 p i (2π) 3 2E i (2π) 4 δ 4 (M A − p B − p C − k)|M (b) + M (c) | 2 .(21)
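Eq. (21) integrates |M^(b) + M^(c)|² over the three-body phase space. As a minimal sketch of that measure in its Dalitz-plot form, the following uses the masses quoted in the text and sets |M|² = 1, a placeholder rather than the actual amplitude of Eqs. (19)-(20).

```python
import math

# Flat three-body phase space for psi(4415) -> D*_sJ(2317) + D_s(1968) + gamma,
# sketching the integration measure of Eq. (21) with |M|^2 set to 1.
M  = 4.415          # psi(4415)
m1 = 2.317          # D*_sJ(2317)
m2 = 1.968          # D_s(1968)
m3 = 0.0            # photon

def m23_range(m12sq):
    """Dalitz-plot boundary: allowed m23^2 band at fixed m12^2."""
    m12 = math.sqrt(m12sq)
    e2 = (m12sq - m1 * m1 + m2 * m2) / (2.0 * m12)
    e3 = (M * M - m12sq - m3 * m3) / (2.0 * m12)
    p2 = math.sqrt(max(e2 * e2 - m2 * m2, 0.0))
    p3 = math.sqrt(max(e3 * e3 - m3 * m3, 0.0))
    lo = (e2 + e3) ** 2 - (p2 + p3) ** 2
    hi = (e2 + e3) ** 2 - (p2 - p3) ** 2
    return lo, hi

# Dalitz area = integral of the m23^2 band over m12^2 (midpoint rule);
# dGamma = |M|^2 / (32 (2pi)^3 M^3) dm12^2 dm23^2 for a spin-averaged decay.
a, b, n = (m1 + m2) ** 2, (M - m3) ** 2, 2000
area = 0.0
for i in range(n):
    s12 = a + (i + 0.5) * (b - a) / n
    lo, hi = m23_range(s12)
    area += (hi - lo) * (b - a) / n
print(area > 0.0)
```

With the real squared amplitude inserted in place of the flat weight, the same loop yields the multiple integration of Eq. (21).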
In the next section, we carry out the multiple integration to obtain the numerical results.
Evaluation of the hadronic matrix elements in the non-relativistic model.
As discussed above, for comparison¹ we employ a non-relativistic model, i.e. the harmonic-oscillator model, to repeat the calculations made in terms of HQET. Application of such a model should be reasonable in this case.
(i) Strong decay ψ(4415) → D̄*_s(2112) + D*_sJ(2317). We calculate this decay using the QPC model in the non-relativistic approximation. The decay width is
Γ(ψ(4415) →D * s (2112) + D * sJ (2317)) = 2 (1 − β) √ 2π Γ A M A +δ M A −δ 2π E B E C |k| M l,s |M ls | 2 exp − (M − M A ) 2 2(Γ A /2) 2 dM,(22)
and the concrete expression of Σ_{l,s}|M_ls| is collected in the Appendix. The definitions of δ and β here are exactly the same as those given in Eq. (13).
(ii) Direct radiative decay ψ(4415) → γ + D̄_s(1968) + D*_sJ(2317). Following the traditional method [13], the matrix element of this radiative decay in the non-relativistic approximation can be written as
Ψ Ds (p ′ 2 , s ′ 2 ; p ′ 4 , s ′ 4 )Ψ Ds(2317) (p ′ 1 , s ′ 1 ; p ′ 3 , s ′ 3 )Ψ γ (k, ǫ(k))|T[H non vac ·H em ]|Ψ ψ(4415) (p 1 , s 1 ; p 2 , s 2 ) = γ F C n i χ n ψ s 1 ,s 2 χ n D sJ s ′ 1 ,s ′ 3 χ n Ds s ′ 2 ,s ′ 4 χ n 1 4 a=1 dp a 4 b=1 dp ′ b ×δ 3 (p 1 + p 2 − p ψ )δ 3 (p ′ 2 + p ′ 4 − p Ds )δ 3 (p 3 + p 4 )δ 3 (p ′ 1 + p ′ 3 − p D sJ ) × 1, 1; −n D sJ , n D sJ |0, 0 1, 1; n, −n|0, 0 Y n 1 (p 3 − p 4 ) ×ϕ ψ (p 1 − 1 2 P ψ , p 2 − 1 2 p ψ )ϕ D sJ (p ′ 1 − 1 2 p D sJ , p ′ 3 − 1 2 p D sJ ) ×ϕ Ds (p ′ 2 − 1 2 p Ds , p ′ 4 − 1 2 p Ds ) 0|b p ′ 1 d p ′ 3 d p ′ 2 b p ′ 4 a k d 3 x 2e 3Ψ c γ µ Ψ c A µ (x) ×b † p 4 d † p 3 d † p 2 b † p 1 |0 ,(23)
where F and C correspond to the flavor and color factors of this transition; the χ's are the spin wave functions; p_ψ, p_Ds and p_DsJ are the three-momenta of ψ(4415), D_s(1968) and D*_sJ(2317); and ϕ_ψ, ϕ_DsJ and ϕ_Ds are the harmonic-oscillator wave functions of ψ(4415), D*_sJ(2317) and D_s(1968) respectively.

¹ There is another reason to employ the non-relativistic harmonic-oscillator model. If D*_sJ(2317) has a four-quark structure, HQET no longer applies, and the only model we can use for the multi-constituent structure is the harmonic-oscillator model. A comparison of the results in this model with those obtained in HQET is therefore meaningful: HQET is believed to be applicable in this case, so consistency of the results obtained in the two approaches confirms the applicability of the harmonic-oscillator model, which we can then use to calculate the production rate if D*_sJ(2317) is a four-quark state.
Neglecting some technical details, one can finally derive the decay amplitude of Fig. 1
(b) M (b) (ψ(4415) → γ + D * sJ (2317) +D s (1968)) = iγSΨ(0) 4R 3 A √ 35 R 2 B π 3/4 √ 2 9 R 5/2 C π 1/4 − 2e 3 1 2π 2/3 1 √ 2E k (2π) 4 × dp 2 exp − 1 8 R 2 A (2p 2 ) 2 − 1 8 R 2 C (p C + 2p B + 2p 2 ) 2 − 1 8 R 2 B (2p 2 + p B ) 2 ×Y −n B 1 (−2p 2 − p B )Y n 1 (−2p B − 2p A + 2p 2 )v(p 2 , s 2 )γ µ v(p C + p B − p A + p 2 , s 2 ′ )ε µ γ (k) × −105 + 210R 2 A p 2 2 − 84R 4 A p 4 2 + 8R 6 A p 6 2 12 √ 35 ,(24)
where S is a spin factor and Ψ(0) is the wave function of ψ(4415) at the origin. The indices A, B and C refer to ψ(4415), D*_sJ(2317) and D_s(1968) respectively. With the same treatment we also obtain the amplitude M^(c) of Fig. 1 (c); to save space, its expression is given in the Appendix.
A rough estimation of the production rate of D*_sJ(2317) in ψ(4415) decays, if D*_sJ(2317) is of a four-quark structure.
Some works suggest that D*_sJ(2317) has a four-quark structure [4,5]; in that case the situation would be completely different. We draw a possible Feynman diagram in Fig. 2, in which one can notice that three quark pairs are created from the vacuum. As more particles are produced, the final-state phase space greatly reduces the rate.
Thus the decay width reads as Γ(ψ(4415) → D * sJ (2317) +c + s + q +q)
= 1 6M A d 3 p 1 (2π) 3 2ω 1 4 i=1 d 3 k i (2π) 3 m i E i (2π) 4 δ 4 (M A − p 1 − 4 i=1 k i ) ×|M(ψ(4415) → D * sJ (2317) +c + s + q +q)| 2 .(26)
Two points are noted. First, this amplitude cannot be evaluated in the framework of HQET, but only in the non-relativistic model, because of the complicated quark structure. Secondly, Fig. 2 depicts an inclusive process ψ(4415) → D*_sJ(2317) + 4 free quarks; taking hadronization into account, the observable processes can only be ψ(4415) → D*_sJ(2317) + D̄_s(1968) + π and ψ(4415) → D*_sJ(2317) + D̄_s(1968) + γ, where q q̄ annihilate into a photon. As discussed above, such direct processes are much suppressed.
Because the inclusive decay ψ(4415) → D*_sJ(2317) + X involves multi-body final states, the calculation is very complicated; the multiple integration over the phase space is difficult even with Monte-Carlo methods. Generally the rate is proportional to

α ∼ (γ_q)³ [4π/(2π)³]⁴,

which is a remarkable suppression factor.
Thus, if D*_sJ(2317) has a four-quark structure, one can expect the inclusive decay width of ψ(4415) → D*_sJ(2317) + X to be at least four orders of magnitude smaller than the corresponding value if D*_sJ(2317) is a p-wave excited state of the regular D_s meson.
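The order of magnitude of this suppression can be made concrete. Reading the text's factor as α ∼ (γ_q)³ [4π/(2π)³]⁴ (our reading of the expression), and using γ_q ≈ 2.95 as fitted from data in Sect. 3:

```python
import math

# Order-of-magnitude suppression for a four-quark D*_sJ(2317): each extra
# vacuum-created pair contributes ~ gamma_q, and each extra final-state
# particle a phase-space factor ~ 4*pi/(2*pi)^3 (interpretation assumed).
gamma_q = 2.95                      # vacuum pair-creation strength (fitted below)
alpha = gamma_q ** 3 * (4.0 * math.pi / (2.0 * math.pi) ** 3) ** 4
print("%.1e" % alpha)               # prints 1.7e-04
```

The result, of order 10⁻⁴, is consistent with the "at least four orders of magnitude" suppression quoted in the text.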
Numerical results
(1) Determination of the relevant parameters in the two approaches.
(a) The parameters for the transition in HQET.
The coefficient G = γ g², introduced in the transition amplitude, is obtained by fitting the decay width of ψ(4040) → D D̄. In the Appendix we present the formula for the decay rate of a charmonium into two charmed pseudoscalar mesons in HQET. For calculating G, the value of Ψ_{ψ(4040)}(0) is obtained by fitting the experimental data on ψ(4040) → e⁺e⁻ [22]. We get Ψ_{ψ(4040)}(0) = 0.101 GeV^{3/2} and G = 12.3 GeV⁻¹. In ref. [17] the value g² = 4.17 GeV⁻¹ was obtained, so we find γ_q = 2.95. Le Yaouanc et al. employed the harmonic oscillator to evaluate such processes and obtained γ_q ≈ 3 [13], very close to the value we obtain with HQET. It is noted, however, that γ_q is a purely phenomenological parameter and its value may vary within a reasonable range; for example, in later work Le Yaouanc et al. took γ_q to be 4 when fitting data [12].
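The quoted value of γ_q follows from simple arithmetic on the fitted couplings:

```python
# Consistency of the fitted couplings quoted above: G = gamma_q * g^2,
# with G = 12.3 GeV^-1 from psi(4040) -> D Dbar and g^2 = 4.17 GeV^-1 [17].
G, g2 = 12.3, 4.17
gamma_q = G / g2
print(round(gamma_q, 2))    # prints 2.95
```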
Since there are not enough data to determine γ_s, we adopt the relation [12]

γ_s = γ_q/√3

for the later numerical computations.
(b) The parameters in the non-relativistic model.
In this scenario the non-relativistic approximation is taken and the expressions are no longer Lorentz invariant, so the relevant parameters may differ somewhat from their HQET values, especially γ_q, which governs the vacuum creation of a quark pair. However, as pointed out above, the values obtained in the two approaches are very close, so we use γ_q = 2.95 in the later calculations.
Using the experimental results on the ψ(4040) → D D̄, ψ(4040) → D* D̄* and ψ(4040) → D_s⁺ D_s⁻ decays [23]², we obtain all the parameters needed for the later numerical computations; for the reader's convenience the relevant formulas [12] are collected in the Appendix. With all this information we obtain the values of R in the harmonic-oscillator wave functions: R²_ψ = 6.00 GeV⁻², R²_D = 5.25 ± 0.22 GeV⁻², R²_{D*} = 6.70 ± 0.67 GeV⁻² and R²_{Ds} = 5.20 ± 0.58 GeV⁻². However, R_{D*_sJ(2317)} and R_{D*_s} cannot be obtained by fitting data, because the corresponding decay modes are closed by the final-state phase space. As assumed a priori, D*_sJ(2317) is a p-wave excited state of D_s [3]; the difference between the R of D*_sJ(2317) and that of D*_s is therefore due to the L·S coupling, which is proportional to 1/m_c. Furthermore, by heavy-quark symmetry, the difference between the R of D*_s(2112) and that of D_s is also of order 1/m_c. Thus R²_{D*_sJ(2317)} ≈ R²_{D*_s} ≈ R²_{Ds} is employed in the calculations of ψ(4415) → D̄*_s(2112) + D*_sJ(2317) and of the direct decay ψ(4415) → γ + D̄_s(1968) + D*_sJ(2317). This approximation is believed to be reasonable for estimating the order of magnitude of these transitions. By fitting the data on ψ(4415) → e⁺e⁻ [7] available at present, we obtain Ψ_{ψ(4415)}(0) = 0.088 GeV^{3/2}.
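The extraction of Ψ(0) from a leptonic width can be sketched with the Van Royen-Weisskopf relation [22]. The input Γ_ee ≈ 0.58 keV for ψ(4415) is an assumed illustrative number, and QCD corrections are neglected, so the output only reproduces the ballpark of the quoted 0.088 GeV^{3/2}.

```python
import math

# Van Royen-Weisskopf relation [22] for a vector quarkonium:
#   Gamma(V -> e+e-) = 16*pi * alpha^2 * e_c^2 * |Psi(0)|^2 / M_V^2.
# Gamma_ee ~ 0.58 keV for psi(4415) is an assumed illustrative input.
alpha_em = 1.0 / 137.036
e_c = 2.0 / 3.0                   # charm-quark charge
M_V = 4.415                       # GeV
Gamma_ee = 0.58e-6                # GeV
Psi0_sq = Gamma_ee * M_V ** 2 / (16.0 * math.pi * alpha_em ** 2 * e_c ** 2)
Psi0 = math.sqrt(Psi0_sq)
print(round(Psi0, 3))             # ~0.1 GeV^(3/2), same ballpark as 0.088
```

The leading-order relation overshoots slightly; the neglected QCD correction factor lowers the extracted Ψ(0).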
We now present the numerical results obtained with the two approaches in Table 1.
                                                         I             II
Br(ψ(4415) → D̄*_s(2112) + D*_sJ(2317))                 9.16%         9.58%
Br(ψ(4415) → γ + D̄_s(1968) + D*_sJ(2317)) (ind)        8.63%         9.03%
Br(ψ(4415) → π + D̄_s(1968) + D*_sJ(2317)) (ind)        5.31 × 10⁻³   5.56 × 10⁻³
Br(ψ(4415) → γ + D̄_s(1968) + D*_sJ(2317)) (dir)        8.46 × 10⁻⁵   2.29 × 10⁻⁵

Table 1: Columns I and II correspond to the numerical results obtained in HQET and the non-relativistic model respectively.

² The BES measurements of the inclusive charm cross sections at 4.03 GeV are [23]: σ_{D⁰} + σ_{D̄⁰} = 19.9 ± 0.6 ± 2.3 nb, σ_{D⁺} + σ_{D⁻} = 6.5 ± 0.2 ± 0.8 nb and σ D +
Discussion and conclusion
It is obvious that the newly discovered D_sJ family may be very significant for a better understanding of hadronic structure and low-energy QCD. The members of the family, D*_sJ(2317), D_sJ(2460) and D_sJ(2632), all have positive parity, so they cannot fit into an s-wave c s̄ (c̄ s) structure. The literature suggests that they may be p-wave excited states, namely chiral partners of D_s, D*_s etc., or four-quark states, or molecular states. It is necessary to look for a plausible way to determine their configurations, i.e. to design experiments to clarify the picture. At least we would like to find an experiment to judge (1) whether such states indeed exist, and (2) their quark configuration (p-wave excited states or four-quark states).
To have a larger production rate, it is reasonable to look for D*_sJ(2317) in the strong decays of higher excited charmonia; the most readily available is ψ(4415). In this work we have carefully studied the production of D*_sJ(2317) in ψ(4415) decays and evaluated its production rate. The process proceeds as follows: the charmonium ψ(4415) dissolves into a c c̄ pair, which then combines with the s s̄ pair created from the vacuum through non-perturbative QCD effects to constitute two mesons. The first step is determined by the wave function of ψ(4415) at the origin, and the light-quark-pair creation is described by the QPC model [11]. To evaluate the hadronic transition matrix elements we employ two approaches, HQET and the non-relativistic model. Our final numerical results in the two approaches are reasonably consistent with each other, confirming the applicability of both.
To guarantee the plausibility of the results, we obtain all the necessary parameters by fitting data. It is understood, however, that there are errors from both theoretical and experimental sources, so the parameters carry uncertainties, especially the vacuum creation rate of the light-quark pair. Thus the real rates may lie within a range around the values estimated with these input parameters and theoretical approaches, but the order of magnitude should be correct and trustworthy.
For a comparison, we have also evaluated the transition rate of the direct radiative decay ψ(4415) → D * sJ (2317) +D s (1968) + γ in the same approaches and find that the resultant rate is two orders smaller than that through the intermediate state D * s (2112), even though it is realized via the threshold effects.
k 2 R 2 A (R 2 B + R 2 C ) 8(R 2 A + R 2 B + R 2 C )
R 6 A − 48a 5 (2ζ + 1)η 3 k 6 +8a 6 η 3 2ζ(ζ + 1)ηk 2 + 3 k 6 − 336a 3 (2ζ + 1)η 2 k 4 + 12a 4 η 2 14ζ(ζ + 1)ηk 2 +8 √ 3 + 3 k 4 − 420a(2ζ + 1)ηk 2 + 42a 2 η 10ζ(ζ + 1)ηk 2 + 8 √ 3 + 3 k 2 +105 2ζ(ζ + 1)ηk 2 + 9 − 84ηR 4 A − 16a 3 (2ζ + 1)η 2 k 4 + 4a 4 η 2 2ζ(ζ + 1)ηk 2 + 3 k 4 −40a(2ζ + 1)ηk 2 − 4a 2 η 2ζ(ζ + 1)ηk 2 − 4 √ 3 + 21 k 2 + 15 2ζ(ζ + 1)ηk 2 + 7 +1680η 2 R 2 A 6ζ(ζ + 1)ηk 2 − 4a(2ζ + 1)ηk 2 + 2a 2 η 2ζ(2ζ + 1)ηk 2 + 3 k 2 + 15 −6720η 3 2ζ(ζ + 1)ηk 2 + 3 ,
where ζ = R²_A/(R²_A + R²_B + R²_C), η = (R²_A + R²_B + R²_C)/8, and a = 1 + ζ.
1 √ 2E k (2π) 4 × dp 1 exp − 1 8 R 2 A (2p 1 ) 2 − 1 8 R 2 B (p B + 2p C + 2p 1 ) 2 − 1 8 R 2 C (2p 1 + p C ) 2 ×Y −n B 1 (p B + 2p C + 2p 1 )Y n 1 (−2p C − 2p 1 )ū(p B + p C + p 1 , s 2 )γ µ u(p 1 , s 2 ′ )ε µ γ (k) × −105 + 210R 2 A p 2 2 − 84R 4 A p 4 2 + 8R 6 A p 6 2 12 √ 35 .(28)
(c) The amplitude of ψ(4040) decay into two pseudoscalar mesons in HQET is
M(ψ(4040) → P + P ) = 2ig Iq g 2 m q √ M A M B M C Ψ(0) 3[(p A /2 − p B ) 2 − m 2 q ] 2 ǫ A · (p B − p C ).(29)
(d) In ref. [12], the authors gave a general expression for calculating the decays ψ(4040) → D D̄, ψ(4040) → D D̄* + D̄ D* and ψ(4040) → D* D̄*:

Γ(ψ(4040)) = C k³ N²(k²),
where C is a spin-SU(3) factor for the particular channel under consideration (C = 1/3 for D D̄, C = 4/3 for D D̄* + D̄ D*, and C = 7/3 for D* D̄*), k is the three-momentum of the final particles in the CM frame of ψ(4040), and N²(k²) is a normalization factor with the expression
N 2 (k 2 ) = R 3 γ 2 M 43740π 3/2 [L 3/2 2 (4ξ) exp(−ξ)] 2 ,(31)
where L^{3/2}_2 is a generalized Laguerre polynomial and ξ = k²R²/6.
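The normalization factor of Eq. (31) is easy to evaluate once the Laguerre polynomial is written out explicitly. The momentum k ≈ 0.76 GeV for ψ(4040) → D D̄ used below is an assumed round number, and the other inputs are the fitted parameters quoted in the text.

```python
import math

# Normalization factor N^2(k^2) of Eq. (31), entering
# Gamma(psi(4040)) = C k^3 N^2(k^2), with xi = k^2 R^2 / 6.
def laguerre_2_32(x):
    """Generalized Laguerre polynomial L^{3/2}_2(x) written out:
    L_2^a(x) = x^2/2 - (a+2) x + (a+1)(a+2)/2 with a = 3/2."""
    return 0.5 * x * x - 3.5 * x + 35.0 / 8.0

def N2(k2, R2, gamma, M):
    xi = k2 * R2 / 6.0
    R3 = R2 ** 1.5
    return (R3 * gamma ** 2 * M / (43740.0 * math.pi ** 1.5)
            * (laguerre_2_32(4.0 * xi) * math.exp(-xi)) ** 2)

# Illustrative evaluation with R^2 = 6.00 GeV^-2, gamma_q = 2.95
# at an assumed D Dbar momentum k ~ 0.76 GeV for psi(4040):
k = 0.76
print(N2(k * k, 6.00, 2.95, 4.04) > 0.0)
```

The exponential times the Laguerre polynomial encodes the 3S-state radial wave function overlap; its zeros produce the node structure of the decay rates.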
For the indirect subsequent decays ψ(4415) → D*_sJ(2317) + D̄*_s(2112) → D*_sJ(2317) + D̄_s(1968) + γ and ψ(4415) → D*_sJ(2317) + D̄*_s(2112) → D*_sJ(2317) + D̄_s(1968) + π, the rates are obtained as

Γ_ind(ψ(4415) → D*_sJ(2317) + D̄_s(1968) + γ) = Γ(ψ(4415) → D*_sJ(2317) + D̄*_s(2112)) × BR(D*_s(2112) → D̄_s(1968) + γ),   (14)

Γ_ind(ψ(4415) → D*_sJ(2317) + D̄_s(1968) + π) = Γ(ψ(4415) → D*_sJ(2317) + D̄*_s(2112)) × BR(D*_s(2112) → D̄_s(1968) + π).   (15)
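Eqs. (14)-(15) simply chain the two-body rate with the D*_s(2112) branching fractions. Taking the HQET entry Br = 9.16% from Table 1 and PDG-like BR(D*_s → D_s γ) ≈ 94.2%, BR(D*_s → D_s π⁰) ≈ 5.8% (assumed inputs) reproduces the indirect entries of Table 1:

```python
# Chaining Eqs. (14)-(15): indirect rates = two-body rate times the
# D*_s(2112) branching fractions (~94.2% to D_s gamma, ~5.8% to D_s pi0,
# assumed PDG-like inputs).
br_two_body = 0.0916            # Br(psi(4415) -> D*_sJ(2317) + D*_s(2112)), HQET
br_gamma, br_pi = 0.942, 0.058
br_ind_gamma = br_two_body * br_gamma
br_ind_pi = br_two_body * br_pi
print(round(br_ind_gamma, 4), round(br_ind_pi, 5))
```

The products match the 8.63% and 5.31 × 10⁻³ indirect entries of Table 1, which confirms how those rows were obtained.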
Figure 2: The Feynman diagram describing the production of D*_sJ(2317) in ψ(4415) inclusive decay, considering the four-quark structure of D*_sJ(2317) in the QPC model; the ellipsis denotes diagrams for other possible quark combinations. The inclusive transition matrix element can be written as ⟨c̄, s, q, q̄, D*_sJ(2317)| T[H^non_vac(x₁) H^non_vac(x₂) H^non_vac(x₃)] |ψ(4415)⟩.
(2) Our numerical results for D*_sJ(2317) production. In the calculations of ψ(4415) → D̄*_s(2112) + D*_sJ(2317) and of the direct decay ψ(4415) → γ + D̄_s(1968) + D*_s(2317) in the two approaches, we employ the following parameters as inputs: M_{ψ(4415)} = 4.415 GeV, M_{D±_s} = 1.968 GeV, M_{D*_s} = 2.112 GeV, M_{D*_sJ(2317)} = 2.317 GeV
.81 ± 0.16 ± 0.27 nb. Considering the relation [13]: Γ(D * 0D * 0 ) : Γ(D * 0D0 + D 0D * 0 ) : Γ(D 0D0 ) ≈ 1 : 7 : 9, we obtain the following decay widths of ψ(4040): Γ(ψ(4040) → DD) = 2.97 ± 0.68 MeV, Γ(ψ(4040) → D * D * ) = 26.73 ± 6.13 MeV and Γ(ψ(4040) → D + s D − s ) = 1.55 ± 0.69 MeV.
Since D*_sJ(2317) has positive parity, the decay mode ψ(4415) → D*_sJ(2317) + D̄_s(1968) is forbidden, and, taking the central values of the masses of the particles concerned and the constraints from the final-state phase space, only ψ(4415) → D*_sJ(2317) + D̄_s(1968) + γ is allowed. This direct radiative decay must be much suppressed, as discussed in the introduction. Barnes et al. [8] suggested observing the decay ψ(4415) → D*_sJ(2317) + D̄*_s(2112), which can occur via threshold effects; the subsequent decays ψ(4415) → D*_sJ(2317) + D̄*_s(2112) → D*_sJ(2317) + D̄_s(1968) + γ and ψ(4415) → D*_sJ(2317) + D̄*_s(2112) → D*_sJ(2317) + D̄_s(1968) + π can then be observed. Even though such processes occur only via threshold effects and should be suppressed, m_{D*_sJ(2317)} + m_{D*_s(2112)} is only slightly above 4415 MeV, so one can expect the suppression not to be very strong.
(a) In Eq. (22), Σ_{l,s}|M_ls| is
(b) The concrete expression of M^(c) is M^(c)(ψ(4415) → γ + D*_s(2317) + D̄_s(
We find that even though the threshold effects suppress the rate of ψ(4415) → D*_sJ(2317) + D̄*_s(2112), it is still sizable if D*_sJ(2317) is a p-wave excited state. If so, it can be observed in future experiments at BES III and CLEO, and maybe at BaBar or even LHC-b. However, our calculations indicate that if D*_sJ(2317) has a four-quark structure, its production rate is much more suppressed and cannot be observed in charmonium decays. Unfortunately, in such decays one can only expect to observe D*_sJ(2317), not the two other members of the new family. However, once the existence and structure of D*_sJ(2317) are definitely confirmed, we have reason to believe in the existence of the other two; moreover, we would then have more knowledge of the hadronic structure and could design experiments to test the other two. We look forward to new experimental results clarifying this theoretical problem. In a recent work, some authors [24] calculated the decay rates of D*_sJ(2317) and D_sJ(2460); they claim that their results prefer the ordinary c s̄ (c̄ s) quark structure for these mesons. However, a decisive conclusion must be drawn from a deterministic experiment, and ψ(4415) → D*_sJ(2317) + D̄*_s(2112), suggested by Barnes et al., together with the subsequent observable modes D*_sJ(2317) + D̄_s(1968) + γ and D*_sJ(2317) + D̄_s(1968) + π, would provide an ideal possibility for making this judgement.

Acknowledgment: This work is supported by the National Natural Science Foundation of China (NNSFC).

Appendix
. B Aubert, BABAR CollaborationPhys. Rev. Lett. B. 90Eur. Phys. J.B.Aubert et al., The BABAR Collaboration, Phys. Rev. Lett. B 90, 242001, 2003; F. Porter, Eur. Phys. J. C33, S219-S222, 2004;
The BABAR Collaboration. P Krokovny, CLEO Collaboration ; SLEX CollaborationPhys. Rev. Lett. 91242001Phys. Rev. Lett.P. Krokovny et al., The BABAR Collabo- ration, Phys. Rev. Lett. 91, 262002, 2003; D. Besson et al., The CLEO Collaboration, Phys. Rev. D68, 032002, 2003; A. Evdokimov et al., The SLEX Collaboration, Phys. Rev. Lett. 93, 242001, 2004.
. W A Bardeen, E J Eichten, C T A Hill ; M, M Nowak, I Rho, Zahed, Acta Phys.Polon. 68Phys. Rev. DW.A. Bardeen, E.J. Eichten and C.T. Hill, Phys. Rev. D 68 , 054024, 2003; M.A. Nowak, M. Rho and I. Zahed, Acta Phys.Polon. B35, 2377-2392, 2004.
. K D Chao, Phys. Lett. 599K.D. Chao, Phys. Lett. B599, 43-47, 2004.
. T Barnes, F Close, H P Lipkin ; A, ; E Szczepaniak, G Beveren, Rupp, hep-ph/0305035Phys. Rev. D68. H.Y. Cheng and W.S. Hou054006Phys. Lett.T. Barnes, F. Close and H. Lipkin, Phys. Rev. D68, 054006, 2003; A.P. Szczepaniak, Phys. Lett. B567, 23-26, 2003; E. Beveren and G. Rupp, hep-ph/0305035; H.Y. Cheng and W.S. Hou, Phys. Lett. B566, 193-200, 2003.
. Y Q Chen, X Q Li, Phys. Rev. Lett. 93232001Y.Q. Chen and X.Q. Li, Phys. Rev. Lett. 93, 232001, 2004.
. H Lipkin, arXiv:hep-ph/0501209H. Lipkin, arXiv: hep-ph/0501209.
The Data Group. Phys. Lett. 5921The Data Group, Phys. Lett. B592, 1, 2004.
. T Barnes, S Godfrey, E S Swanson, arXiv:hep-ph/0505002T. Barnes, S. Godfrey and E.S. Swanson, arXiv: hep-ph/0505002.
. L Micu, Nucl. Phys. B. 10521L. Micu, Nucl. Phys. B 10, 521, 1969.
. H G Blundell, S Godfrey ; A. Le Yaouanc, L Oliver, O Pène, J R Raynal ; P, ; S Page, N Capstick, Isgur, Phys. Lett. B71. 37002809Phys. Rev. D34H.G. Blundell and S. Godfrey, Phys. Rev. D53, 3700, 1996; A. Le Yaouanc, L. Oliver, O. Pène and J. Raynal, Phys. Lett. B71, 397, 1977; B72, 57, 1977; P.R. Page, Nucl. Phys. B446, 189, 1995; S. Capstick and N. Isgur, Phys. Rev. D34, 2809, 1986.
. A Le Yaouanc, L Oliver, O Pène, J , Phys. Rev. D8. 957Phys. lett.A. Le Yaouanc, L. Oliver, O. Pène and J. Raynal, Phys. Rev. D8, 2223, 1973; D9, 1415, 1974; D11, 1272, 1975; Phys. lett. B71, 57(1977).
. A Le Yaouanc, L Oliver, O Pène, J , Phys. lett. B72. 57A. Le Yaouanc, L. Oliver, O. Pène and J. Raynal, Phys. lett. B72, 57, 1977.
Hadron Transitions in the Quark Model, Gordon and breach science publishers. A Le Yaouanc, L Oliver, O Pène, J , New YorkA. Le Yaouanc, L. Oliver, O. Pène and J. Raynal, Hadron Transitions in the Quark Model, Gordon and breach science publishers, New York, 1987.
. S Capstick, W Roberts, Phys. Rev. D49. 4570S. Capstick and W. Roberts, Phys. Rev. D49, 4570, 1994.
. E S Ackleh, T Barnes, Phys. Rev. D54. 6811E.S. Ackleh and T. Barnes, Phys. Rev. D54, 6811, 1996.
. R Ping, H Jiang, P Shen, B Zou, Chin. Phys. Lett. H.Q. Zhou, R.G. Ping and B.S. Zou19123Phys. Lett.R.G Ping, H.Q Jiang, P.N Shen and B.S Zou, Chin. Phys. Lett. 19, 1592, 2002; H.Q. Zhou, R.G. Ping and B.S. Zou, Phys. Lett. B611, 123, 2005.
. A D Polosa, Riv. Nuovo Cim. 23N11. 1A.D. Polosa, Riv. Nuovo Cim. 23N11, 1, 2000.
. W A Bardeen, C T Hill, Phys. Rev. D49. 409W.A. Bardeen and C.T. Hill, Phys. Rev. D49, 409, 1994.
. M Neubert, Phys. Rep. 245259M. Neubert, Phys. Rep. 245, 259 (1994).
. A F Falk, T Mehen, Phys.Rev. D. 53231A.F. Falk and T. Mehen, Phys.Rev. D 53, 231, 1996.
. J H Kühn, Nucl. Phys. L. Bergström and P. Ernström157111Phys. Lett.J.H. Kühn, Nucl. Phys. B157, 125, 1979. L. Bergström and P. Ernström, Phys. Lett. B267, 111, 1991.
. R V Royen, V F Weisskopf, Nuov. Cim. 50583R.V. Royen and V.F. Weisskopf, Nuov. Cim. 50, 617, 1967; 51, 583 (1967).
. Phys. Rev. D62. 12002BES Collaboration, Phys. Rev. D62, 012002, 2000.
. Wei Wei, Peng-Zhi Huang, Shi-Lin Zhu, arXiv:hep-ph/0510039Wei Wei, Peng-Zhi Huang and Shi-Lin Zhu, arXiv: hep-ph/0510039.
|
10.1051/0004-6361:200400076
|
[
"https://arxiv.org/pdf/astro-ph/0408471v2.pdf"
] | 14,938,593 |
astro-ph/0408471
|
3c65ed50c2a5824341e7f9fd9551d052081ed5e3
|
The HARPS survey for southern extra-solar planets ⋆ II. A 14 Earth-masses exoplanet around µ Arae
June 11, 2018
N C Santos
Centro de Astronomia e Astrofísica da Universidade de Lisboa
Observatório Astronómico de Lisboa
Tapada da Ajuda
1349-018LisboaPortugal
Observatoire de Genève
51 ch. des MaillettesCH-1290SauvernySwitzerland
F Bouchy
Traverse du Siphon
Laboratoire d'Astrophysique de Marseille
13013MarseilleFrance
M Mayor
Observatoire de Genève
51 ch. des MaillettesCH-1290SauvernySwitzerland
F Pepe
Observatoire de Genève
51 ch. des MaillettesCH-1290SauvernySwitzerland
D Queloz
Observatoire de Genève
51 ch. des MaillettesCH-1290SauvernySwitzerland
S Udry
Observatoire de Genève
51 ch. des MaillettesCH-1290SauvernySwitzerland
C Lovis
Observatoire de Genève
51 ch. des MaillettesCH-1290SauvernySwitzerland
M Bazot
Laboratoire d'Astrophysique
Observatoire Midi-Pyrénées
14 Avenue Edouard Belin31400ToulouseFrance
W Benz
Physikalisches Institut
Universität Bern
Sidlerstrasse 5CH-3012BernSwitzerland
J.-L Bertaux
Service d'Aéronomie du CNRS
BP 391371Verrières-le-BuissonFrance
G Lo Curto
European Southern Observatory
Santiago 1919001CasillaChile
X Delfosse
Laboratoire d'Astrophysique de l'Observatoire de Grenoble
414 rue de la piscine38400Saint Martin d'HèreFrance
C Mordasini
Physikalisches Institut
Universität Bern
Sidlerstrasse 5CH-3012BernSwitzerland
D Naef
Observatoire de Genève
51 ch. des MaillettesCH-1290SauvernySwitzerland
European Southern Observatory
Santiago 1919001CasillaChile
J.-P Sivan
Traverse du Siphon
Laboratoire d'Astrophysique de Marseille
13013MarseilleFrance
S Vauclair
Laboratoire d'Astrophysique
Observatoire Midi-Pyrénées
14 Avenue Edouard Belin31400ToulouseFrance
The HARPS survey for southern extra-solar planets ⋆ II. A 14 Earth-masses exoplanet around µ Arae
June 11, 2018Received / AcceptedarXiv:astro-ph/0408471v2 10 Sep 2004 Astronomy & Astrophysics manuscript no. santos˙inpress (DOI: will be inserted by hand later)Stars: individual: HD 160691 -planetary systems -Techniques: radial velocities
In this letter we present the discovery of a very light planetary companion to the star µ Ara (HD 160691). The planet orbits its host once every 9.5 days, and induces a sinusoidal radial velocity signal with a semi-amplitude of 4.1 m s −1 , the smallest Doppler amplitude detected so far. These values imply a mass of m2 sin i=14 M⊕ (earth-masses). This detection represents the discovery of a planet with a mass slightly smaller than that of Uranus, the smallest "ice giant" in our Solar System. Whether this planet can be considered an ice giant or a super-earth planet is discussed in the context of the core-accretion and migration models. Key words. Stars: individual: HD 160691 -planetary systems -Techniques: radial velocities Recently, with the installation of the new HARPS spectrograph (Pepe et al. 2002) at the 3.6-m ESO telescope (La Silla, Chile) a significant quantitative advance has been possible. This state of the art instrument is capable of attaining a Send offprint requests to: Nuno C. Santos, e-mail: [email protected] ⋆ Based on observations collected at La Silla Observatory, ESO, Chile, with the HARPS spectrograph, at the 3.6-m ESO telescope (programs 073.D-0578 and 072.C-0488).
Introduction
The discovery of giant planets around other solar-type stars has opened the way to a new era of planetary research. The new worlds present a wide variety of orbital characteristics and minimum masses, and 9 years after the first announcement (Mayor & Queloz 1995), some of their properties are still defying the theories of planetary formation. The increasing number of known systems is, however, making it possible to explore their properties from a statistical point of view (e.g. Santos et al. 2001; Zucker & Mazeh 2002; Udry et al. 2003; Eggenberger et al. 2004), and the observational and theoretical approaches are now starting to converge (e.g. Trilling et al. 2002; Alibert et al. 2004; Ida & Lin 2004a).
precision better than 1 m s−1. After only a few weeks of operation, it discovered a first "hot-jupiter" (Pepe et al. 2004) orbiting the K dwarf HD 330075. The level of precision in radial-velocity measurements achieved with HARPS now makes it possible, for the first time, to lower the detection limit to the "few-earth-mass" regime, provided that the signal induced by stellar oscillations can be reduced with the use of an appropriate observing strategy (Bouchy et al., in prep.).
In this letter we present the discovery of a ∼14-M ⊕ short period (P∼9.5 days) extra-solar planet orbiting the star µ Ara, a star that was already known to be orbited by a longer period giant planet (Butler et al. 2001). Together with the very low mass companion to 55 Cnc (McArthur et al. 2004), these are the only two sub-neptunian planets discovered to date. They are suspected to be earth-like rocky planets, orbiting solar-type stars.
Stellar characteristics of µ Ara
µ Ara (HD 160691, HR 6585, GJ 691) is a nearby V=5.12 magnitude southern G5V star in the constellation Ara, the Altar, and according to the Hipparcos catalog (ESA 1997), it has a parallax of 65.5±0.8 mas, which implies a distance from the Sun of 15.3 pc, and an absolute magnitude of M_v = 4.20. Its color index B−V is 0.694. From a HARPS spectrum with a S/N ratio of the order of ∼1000 (average of 275 individual spectra), we have derived the stellar parameters for µ Ara using a fully spectroscopic analysis. The resulting parameters, (T_eff, log g, V_t, [Fe/H]) = (5813±40 K, 4.25±0.07 dex, 1.30±0.05 km s−1, +0.32±0.05 dex), are in almost perfect agreement with the values published in Santos et al. (2004), Bensby et al. (2004), and Laws et al. (2003). The surface gravity derived using the Hipparcos parallax and an effective temperature of 5800 K is 4.25 dex (see e.g. Santos et al. 2004).
Using the temperature, [Fe/H], absolute magnitude and bolometric correction (Flower 1996), we derived a stellar mass of 1.10±0.05 M ⊙ for µ Ara, from an interpolation of the theoretical isochrones of Schaerer et al. (1993). This is in excellent agreement with the 1.08 and 1.14 M ⊙ derived by Butler et al. (2001) and Laws et al. (2003), respectively. Preliminary results from the asteroseismology analysis are also in excellent agreement with these values (Bazot et al., in prep.).
From the width of the CORALIE Cross-Correlation Function (CCF) we have computed a projected rotational velocity of 2.4 km s −1 for µ Ara (Santos et al. 2002). This value is in agreement with the low chromospheric activity level of the star, log R ′ HK =−5.034±0.006, obtained from the HARPS spectra. Similar values of −5.02 were obtained both from the CORALIE data (Santos et al. 2000) and by Henry et al. (1996) at different epochs. The inactivity of this star is further supported by its low (and non-variable) X-ray luminosity (Marino 2002), as well as by the lack of significant photometric variation in the Hipparcos data (ESA 1997).
From the observed value of log R ′ HK we can infer an age above ∼2 Gyr (Pace & Pasquini 2004) and a rotational period of ∼31 days (Noyes et al. 1984). This age is compatible with the 4.5 Gyr obtained from an interpolation of theoretical isochrones (e.g. Laws et al. 2003), and with the upper value for the lithium abundance log ǫ(Li)<0.86 dex derived by Israelian et al. (2004) for this dwarf.
Radial velocities
In June 2004, µ Ara was intensively measured over 8 consecutive nights with the HARPS spectrograph as part of an asteroseismology program (Bouchy et al., in prep). During each night, we obtained more than 250 spectra of this star, from which we derived accurate radial velocities. The average radial velocity for each night was then computed from a weighted average of each individual value, its precision being limited by the uncertainty in the wavelength calibration 1 .
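The nightly averaging described above amounts to an inverse-variance weighted mean of the individual velocities; a minimal sketch (illustrative numbers only, not the actual measurements):

```python
def weighted_mean(values, errors):
    """Inverse-variance weighted average and its formal uncertainty."""
    weights = [1.0 / e**2 for e in errors]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, wsum ** -0.5

# e.g. three radial-velocity estimates [m/s] with their individual errors:
rv, err = weighted_mean([-9.3, -8.8, -9.1], [0.8, 0.4, 0.6])
print(round(rv, 2), round(err, 2))   # the better-measured points dominate
```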
The main motivation of this program was to study the possibility that the high metal content of the planet-host stars (e.g. Gonzalez 1998; Santos et al. 2001, 2004, and references therein) is due to the engulfment of metal-rich planetary material into their convective envelopes. Although current studies seem to favor the view that the observed "excess" metallicity reflects a higher metal content of the cloud of gas and dust that gave origin to the star and planetary system, recent results have suggested that this matter may still be unsettled (e.g. Vauclair 2004). The asteroseismology technique provides us with a good tool to possibly solve this problem. As shown by Bazot & Vauclair (2004), precise stellar oscillation measurements may be able to determine if there is some metallicity gradient in the stellar interior, which could be a hint of strong stellar "pollution" events.

Table 1. Orbital elements of the fitted 9.5-days period orbit and main planetary properties.

P        9.55 ± 0.03        [d]
T        2453168.94 ± 0.05  [d]
e        0.00 ± 0.02
ω        4 ± 2              [deg]
K1       4.1 ± 0.2          [m s−1]
a1 sin i 0.5396             [Gm]
f1(m)    0.6869             [10−13 M⊙]
σ(O−C)   0.9                [m s−1]
N        24
m2 sin i 14                 [M⊕]
a        0.09               [AU]
Teq      ∼900 ⋆             [K]

⋆ Equilibrium temperature computed with an albedo of 0.35.

The results of the asteroseismology campaign will be presented in Bouchy et al. (in prep.) and Bazot et al. (in prep.). A first analysis of the data revealed what could be a periodic variation with an amplitude of about 4 m s−1 (see Figs. 1 and 2). As part of the HARPS GTO program, this star was then closely followed from July 14th to August 19th 2004 (16 radial-velocity measurements were obtained). Each night the radial velocity was measured from the average of about 15 consecutive independent radial velocity estimates (computed from different spectra) taken during a period of ∼20 minutes. This methodology makes it possible to average out the radial-velocity variations due to stellar oscillations (see also Bouchy et al., in prep.). As seen in Fig. 1, the measurements done during the first 8 nights (when the star was followed during the whole night) have a considerably lower rms around the best keplerian fit than the following measurements. This scatter results from the photon noise error (∼20 cm s−1), the calibration uncertainty (∼40 cm s−1), and from the stellar noise (∼80 cm s−1) that is not completely averaged out on the nights with only 15 radial velocity measurements (Bouchy et al., in prep.).
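As a rough cross-check, the equilibrium temperature quoted in Table 1 follows from the stellar effective temperature and radius given elsewhere in this letter; a sketch assuming full heat redistribution and the stated albedo of 0.35 (the authors' exact assumptions are not spelled out, so only order-of-magnitude agreement should be expected):

```python
import math

T_eff = 5813.0                 # stellar effective temperature [K] (Sect. 2)
R_star = 1.32 * 6.957e8        # stellar radius [m] (Sect. 4)
a = 0.09 * 1.496e11            # orbital semi-major axis [m] (Table 1)
albedo = 0.35

# equilibrium temperature with full day/night heat redistribution
T_eq = T_eff * math.sqrt(R_star / (2.0 * a)) * (1.0 - albedo) ** 0.25
print(round(T_eq))   # ~960 K, of the order of the quoted ~900 K
```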
µ Ara was previously announced to harbor a giant planet in a long period (∼740 days) orbit (Butler et al. 2001). This orbital solution has since been updated by Jones et al. (2002), who found that the residuals of the radial-velocity planetary fit followed a long term trend, due to the presence of a second body in the system. In Fig. 3 we plot the radial-velocity measurements of µ Ara obtained during the last 6 years using three different instruments (see figure caption), as well as the best 2-keplerian fit. The orbit of the ∼740-day period planet (actually with a period of ∼660 days) is confirmed. However, the orbital parameters of the second (longer period) companion are not well constrained; we find a strong degeneracy between the derived orbital period and the value of the orbital eccentricity, making it possible to fit the data with the former parameter varying between ∼3000 and 10000 days. Although not precisely determined, the mass of this companion probably remains in the planetary regime. Despite the still unconstrained long period of this outer companion, some stability studies of the system have been discussed (e.g. Gozdziewski et al. 2003).
A 9.5-days period planet with 14 Earth-masses
In Figs. 1 and 2 we present the HARPS radial-velocity measurements of µ Ara as a function of time. In these figures, the curve represents the best fit to the data, obtained with the sum of a keplerian function and a linear trend. The derived slope of this trend is in agreement with the expected effect due to the longer period companions (see Fig. 3). The analysis of the radial velocity measurements reveals a variation with a period of 9.5 days and a semi-amplitude of about 4 m s−1. These values can be explained by the presence of a m2 sin i = 14 M⊕ planet orbiting µ Ara in a circular orbit.
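The quoted minimum mass can be recovered from the fitted orbital elements and the stellar mass of Sect. 2 via the standard spectroscopic relation between the semi-amplitude K and m2 sin i; a minimal sketch (not the authors' actual fitting code):

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_sun = 1.989e30     # solar mass [kg]
M_earth = 5.972e24   # earth mass [kg]

def m2_sin_i(K, P_days, e, M_star):
    """Companion minimum mass [kg] from the RV semi-amplitude K [m/s],
    period P [days], eccentricity e and stellar mass M_star [kg].
    A short fixed-point iteration, since m2 also enters the total mass."""
    P = P_days * 86400.0
    m2 = 0.0
    for _ in range(10):
        m2 = K * math.sqrt(1.0 - e**2) * (P / (2.0 * math.pi * G)) ** (1.0 / 3.0) \
             * (M_star + m2) ** (2.0 / 3.0)
    return m2

m2 = m2_sin_i(K=4.1, P_days=9.55, e=0.0, M_star=1.10 * M_sun)
print(round(m2 / M_earth, 1))   # ~14 earth masses, as quoted in Table 1
```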
The residuals around the best fit to the HARPS data are flat, with an rms of only 0.9 m s−1. This rms decreases to the calibration level (0.43 m s−1) for the first 8 nights, attesting to the remarkable precision of this instrument. Despite the low amplitude of the radial velocity signal, the false alarm probability that it is due to random noise is lower than 1%, as derived through a Monte-Carlo simulation.
From the stellar luminosity and effective temperature we can derive a radius of ∼1.32 solar radii for µ Ara. Combined with the rotational period of 31 days (see Sect. 2), this implies a rotational velocity of the order of 2.2 km s −1 for µ Ara, close to the measured value v sin i=2.4 km s −1 . Supposing that the orbital plane is perpendicular to the stellar rotation axis, this means that the orbital inclination sin i is close to unity, and that the observed minimum mass for the planet is not very different from its real mass.
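The rotational-velocity consistency argument of this paragraph is a one-line computation; a sketch using the quoted radius and rotation period:

```python
import math

R_sun = 6.957e8             # solar radius [m]
R_star = 1.32 * R_sun       # stellar radius quoted in the text
P_rot = 31.0 * 86400.0      # rotational period [s], inferred from the activity level

# equatorial rotation velocity [km/s]
v_eq = 2.0 * math.pi * R_star / P_rot / 1000.0
print(round(v_eq, 1))   # ~2.2 km/s, close to the measured v sin i = 2.4 km/s
```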
Using the HARPS spectra we have derived both an activity index, based on Ca II H and K lines, and the bisector of the cross-correlation function from the individual spectra. No correlation is found between these quantities and the radial velocities within the measurement precision. Given the very low activity level of µ Ara and the inferred rotational period of ∼30 days, it is very unlikely that rotational modulation is capable of producing the observed stable periodic radial-velocity variation. Furthermore, to have a rotational period of 9.5 days, this star would have to rotate at about 7 km s −1 . Such a rotational velocity would imply a much younger age for µ Ara, not compatible with its low level of activity.
The presence of a 14 M ⊕ planet around µ Ara thus remains the only credible explanation for the observed 9.5-days period radial-velocity variation.
Discussion
As current planetary formation models are still far from being able to account for all the amazing diversity observed amongst the exoplanets discovered thus far, we can only speculate on the true nature of the present object.
First, given its location and the characteristics of the central star, it is unlikely that this object was in fact a much more massive giant planet which has lost a large fraction of its envelope over its lifetime. This is supported by the fact that more massive planets exist orbiting much closer to stars with similar characteristics, and by calculations by Baraffe et al. (2004) and Lecavelier des Etangs et al. (2004) which show that only planets significantly less massive than Jupiter would evaporate at 0.09 AU. Unless outward migration has occurred, we conclude that the mass of this object has always remained small.
To understand the consequences of this, it is necessary to recall that in the current paradigm of giant planet formation, a core is formed first through the accretion of solid planetesimals. Once this core reaches a critical mass (m crit ), accretion of gas in a runaway fashion becomes possible and the mass of the planet increases rapidly (e.g. Ida & Lin 2004b). This therefore implies that the current object has never reached the critical mass, for otherwise the planet would have become much more massive. Furthermore, recent giant planet formation models including disk evolution and migration (Alibert et al. 2004) have shown that these effects greatly shorten the formation time. Hence, it is unlikely that the planet has migrated over large distances before reaching its present location. It was thus probably formed inside the ice radius (∼3.2 AU -Ida & Lin 2004a), and its composition should be dominated by rocky (telluric) material. We note that the high [Fe/H] of µ Ara makes this case possible (Ida & Lin 2004a). Curiously, with 14 M ⊕ and a=0.09 AU, this planet is near the borderline of the massperiod desert defined by Ida & Lin (2004b), where no planets are supposed to exist.
The above considerations lead us towards the following scenario for the formation of the present planetary system. The more massive planet, with the present ∼660 days period orbit, begins to form first and migrates inwards while growing in mass. Towards the end of the lifetime of the disk, the smaller planet is formed inside the orbit of the larger one, probably at a distance not exceeding 3 AU. Thus, we expect this object to have a massive, essentially rocky core (as opposed to icy), surrounded by a gaseous envelope with ∼5-10% of its mass. It therefore probably qualifies as a super-Earth and not as a failed ice-giant.
The discovery of this extremely low-mass planet represents a new benchmark for planet surveys, and demonstrates the ability of instruments like HARPS to detect telluric planets with just a few times the mass of the Earth. In the future these detections will make it possible to study the low end of the planetary-mass distribution. This kind of planet may be relatively common: according to recent simulations (Ida & Lin 2004a), very low-mass planets may be more frequent than the previously found giant worlds. This is further supported by the recent detection of a first neptunian planet in a short period orbit around 55 Cnc (McArthur et al. 2004) 2 . Such planets will be preferential targets for space missions like the photometric satellites COROT and Kepler. Furthermore, the discovery of such low mass planets around stars that have at least one more giant exoplanet makes these systems very interesting cases to understand the processes of planetary formation and evolution.
Fig. 1. HARPS radial-velocity measurements of µ Ara as a function of time. The filled line represents the best fit to the data, obtained with the sum of a keplerian function and a linear trend, representing the effect of the long period companions to the system. The residuals of the fit, with an rms of only 0.9 m s−1, are shown in the lower panel.
Fig. 2. Phase-folded radial-velocity measurements of µ Ara after subtraction of the linear trend shown in the upper panel of Fig. 1. In both panels the error bars represent the rms around the weighted average of the individual measurements for a given night.
Fig. 3. Radial velocity measurements of µ Ara obtained during the past 6 years with the CORALIE (dots) and HARPS spectrographs (open triangles), and by Jones et al. (2002) (open circles). The curve represents the best 2-body keplerian fit to the data. In the lower panel we present the rms around the fit. For the longer period keplerian fit, the eccentricity was fixed to a value of 0.2.
The nightly average of the HARPS radial velocities will be available in electronic form at CDS
N.C.Santos et al.: The first 14 earth-mass exoplanet
A companion to the M-dwarf GJ 436 with a minimum mass m2 sin i = 21 M⊕ was also announced by Butler et al. (2004) after the submission of the current letter.
Acknowledgements. We would like to thank Y. Alibert and S. Randich for the fruitful discussion. We thank the Swiss National Science Foundation and the Portuguese Fundação para a Ciência e a Tecnologia for their support. S. Vauclair acknowledges a grant from Institut Universitaire de France. This study benefited from the support of the HPRN-CT-2002-00308 European programme.
Alibert, Y., Mordasini, C., & Benz, W. 2004, A&A, 417, L25
Baraffe, I., Selsis, F., Chabrier, G., et al. 2004, A&A, 419, L13
Bazot, M., & Vauclair, S. 2004, A&A, submitted (astro-ph/0407544)
Bensby, T., Feltzing, S., & Lundström, I. 2003, A&A, 410, 527
Butler, R.P., Vogt, S., Marcy, G., et al. 2004, ApJ, in press
Butler, R.P., Tinney, C.G., Marcy, G., et al. 2001, ApJ, 555, 410
Eggenberger, A., Udry, S., & Mayor, M. 2004, A&A, 417, 353
ESA 1997, The Hipparcos and Tycho Cat., ESA SP-1200
Flower, P.J. 1996, ApJ, 469, 355
Gonzalez, G. 1998, A&A, 334, 221
Gozdziewski, K., Konacki, M., & Maciejewski, A. 2003, ApJ, 594, 1019
Henry, T.J., Soderblom, D.R., Donahue, R.A., & Baliunas, S.L. 1996, AJ, 111, 439
Ida, S., & Lin, D.N.C. 2004a, ApJ, in press (astro-ph/0408019)
Ida, S., & Lin, D.N.C. 2004b, ApJ, 604, 388
Israelian, G., Santos, N.C., Mayor, M., & Rebolo, R. 2004, A&A, 414, 601
Jones, H.R.A., Butler, R.P., Marcy, G.W., et al. 2002, MNRAS, 337, 1170
Laws, C., Gonzalez, G., Walker, K., et al. 2003, AJ, 125, 2664
Lecavelier des Etangs, A., Vidal-Madjar, A., McConnell, J.C., & Hébrard, G. 2004, A&A, 418, L1
Marino, A., Micela, G., Peres, G., & Sciortino, S. 2002, A&A, 383, 210
Mayor, M., Pepe, F., Queloz, D., et al. 2003, The ESO Messenger, 114, 20
Mayor, M., & Queloz, D. 1995, Nature, 378, 355
McArthur, B., Endl, M., Cochran, W., et al. 2004, ApJ, in press
Noyes, R.W., Hartmann, L.W., Baliunas, S.L., et al. 1984, ApJ, 279, 763
Pace, G., & Pasquini, L. 2004, A&A, in press (astro-ph/0406651)
Pepe, F., Mayor, M., Queloz, D., et al. 2004, A&A, 423, 385
Pepe, F., Mayor, M., Rupprecht, G., et al. 2002, The ESO Messenger, 110, 9
Santos, N.C., Israelian, G., & Mayor, M. 2004, A&A, 415, 1153
Santos, N.C., Mayor, M., Naef, D., et al. 2002, A&A, 392, 215
Santos, N.C., Israelian, G., & Mayor, M. 2001, A&A, 373, 1019
Santos, N.C., Mayor, M., Naef, D., et al. 2000, A&A, 361, 265
Schaerer, D., Charbonnel, C., Meynet, G., Maeder, A., & Schaller, G. 1993, A&AS, 102, 339
Trilling, D., Lunine, J., & Benz, W. 2002, A&A, 394, 241
Udry, S., Mayor, M., & Santos, N.C. 2003, A&A, 407, 369
Vauclair, S. 2004, ApJ, 605, 874
Zucker, S., & Mazeh, T. 2002, ApJ, 568, L113
|
[] |
[
"Extended Higgs sector beyond the MSSM and the LHC",
"Extended Higgs sector beyond the MSSM and the LHC"
] |
[
"Rui Santos [email protected] \nCentro de Física Teórica e Computacional\nFaculdade de Ciências\nISEL -Instituto Superior de Engenharia de Lisboa\nUniversidade de Lisboa\nEdifício C81749-016Campo Grande, LisboaPortugal\n\nInstituto Politécnico de Lisboa\n1959-007LisboaPortugal\n"
] |
[
"Centro de Física Teórica e Computacional\nFaculdade de Ciências\nISEL -Instituto Superior de Engenharia de Lisboa\nUniversidade de Lisboa\nEdifício C81749-016Campo Grande, LisboaPortugal",
"Instituto Politécnico de Lisboa\n1959-007LisboaPortugal"
] |
[] |
One Higgs was found. Are there more? In this work we discuss simple extensions of the scalar sector of the Standard Model (SM) used as benchmark models by ATLAS and CMS in the searches for new scalars at the LHC. We discuss how much the discovered 125 GeV Higgs at the LHC resembles the SM Higgs and how our understanding of the Higgs nature will improve at future electron-positron colliders. Models with extended Higgs sectors provide very interesting scenarios, from the existence of charged Higgs bosons to CP-violating scalars, that can be probed by the experimental collaborations at the LHC. Comparison between the rates in the different models shows that in some cases the models could be distinguished.
|
10.22323/1.330.0059
|
[
"https://arxiv.org/pdf/1809.00234v1.pdf"
] | 118,992,511 |
1809.00234
|
62b432cdcb026ed4ebe31527b63f93454e6da3c9
|
Extended Higgs sector beyond the MSSM and the LHC
Rui Santos [email protected]
Centro de Física Teórica e Computacional
Faculdade de Ciências
ISEL -Instituto Superior de Engenharia de Lisboa
Universidade de Lisboa
Edifício C81749-016Campo Grande, LisboaPortugal
Instituto Politécnico de Lisboa
1959-007LisboaPortugal
Extended Higgs sector beyond the MSSM and the LHC
One Higgs was found. Are there more? In this work we discuss simple extensions of the scalar sector of the Standard Model (SM) used as benchmark models by ATLAS and CMS in the searches for new scalars at the LHC. We discuss how much the discovered 125 GeV Higgs at the LHC resembles the SM Higgs and how our understanding of the Higgs nature will improve at future electron-positron colliders. Models with extended Higgs sectors provide very interesting scenarios, from the existence of charged Higgs bosons to CP-violating scalars, that can be probed by the experimental collaborations at the LHC. Comparison between the rates in the different models shows that in some cases the models could be distinguished.
Introduction
After the discovery of the Higgs boson, the search for new scalars by the experimental groups at CERN further motivated the study of extensions of the Standard Model (SM). Besides supersymmetric models, the simplest extensions of the scalar sector of the SM provide an excellent framework for the interpretation of many searches and for motivating new ones. In this work we discuss a few extensions of the scalar sector of the SM. We will discuss how efficiently the parameter space of these simple extensions can be constrained through the measurements of the Higgs couplings, and how SM-like the SM-like Higgs boson is. We furthermore try to understand whether these models can be distinguished if a new scalar is found. All models have a limit where the 125 GeV Higgs looks exactly like the SM Higgs at tree level, and if all other particles are very heavy it will be hard to probe their existence. At this stage it is not clear if electroweak radiative corrections play an important role in the models' phenomenology. In fact, although the corrections may change significantly the tree-level couplings of the 125 GeV Higgs, there are large regions of the parameter space of the models where they are small enough to be inside the predicted error for the future LHC Higgs couplings measurements. If the future measurements of the 125 GeV Higgs couplings are compatible with the SM predictions with ever increasing precision, the models will approach their SM-like limit more and more closely, where they are all very similar, as we will show. Only if a new scalar is found can we start probing the different possibilities for the new models. If this is the case, some models show very interesting properties, some of which are very characteristic of specific models. We will discuss some particularly interesting scenarios of selected models.
Building the models
When building extensions of the SM there are some very general features which make the models comply with experimental results in a simpler way. However, while it is true that all models need to provide a 125 GeV Higgs boson that is not a pure CP-odd scalar, any other constraints should only be seen as a guide to build the simplest extensions compatible with the experimental results. Among the most relevant are:
• The ρ parameter, which is measured with great precision, can be written as a function of the SU(2)_L isospin T_i, the hypercharge Y, and the vacuum expectation values of the fields v_i, as
\rho = \frac{m_W^2}{m_Z^2 \cos^2\theta_W} = \frac{\sum_i \left[ 4T_i(T_i + 1) - Y_i^2 \right] |v_i|^2 c_i}{\sum_i 2 Y_i^2 |v_i|^2} \qquad (2.1)
where c i = 1(1/2) for complex (real) representations. The simplest representation with ρ = 1 is the singlet. The next one is the doublet and after that comes the septet. Hence, extended models with an arbitrary number of singlets and doublets have ρ = 1 at tree-level. Extensions with any other representations will need some kind of fine-tuning to comply with ρ = 1 at tree-level.
• Tree-level flavour changing neutral currents (FCNC) are experimentally very constrained. Models with more than one doublet can give rise to tree-level FCNC. This problem is usually fixed with the introduction of ad-hoc discrete symmetries imposed both on the scalar and on the fermion fields.
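The statements above about singlets, doublets and the septet can be checked directly against Eq. (2.1); a small numerical sketch (the septet hypercharge Y = 4 is the standard choice that makes it work, not spelled out in the text):

```python
def rho(reps):
    """Tree-level rho parameter of Eq. (2.1).
    reps: list of (T, Y, v, c) tuples, with c = 1 for complex and 1/2
    for real representations."""
    num = sum((4.0 * T * (T + 1.0) - Y**2) * abs(v) ** 2 * c for T, Y, v, c in reps)
    den = sum(2.0 * Y**2 * abs(v) ** 2 for T, Y, v, c in reps)
    return num / den

# SM doublet alone (T = 1/2, Y = 1):
print(rho([(0.5, 1.0, 246.0, 1.0)]))                          # 1.0
# doublet plus a real singlet (T = Y = 0): the singlet drops out of both sums
print(rho([(0.5, 1.0, 220.0, 1.0), (0.0, 0.0, 100.0, 0.5)]))  # 1.0
# a complex septet with Y = 4 also gives rho = 1 without any tuning
print(rho([(3.0, 4.0, 246.0, 1.0)]))                          # 1.0
```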
SM + singlet (RxSM and CxSM)
We now present a few of the simplest models that obey all the conditions above. The simplest extension of the scalar potential of the SM is the addition of either a real (RxSM) or a complex (CxSM) singlet. The complex field S = S + iA has zero isospin and zero hypercharge and therefore only enters the model via mixing with the scalar field from the SM doublet. The CxSM version of the potential is invariant under a global U(1) symmetry, softly broken by linear and quartic terms,
V = \frac{m^2}{2} H^\dagger H + \frac{\lambda}{4} (H^\dagger H)^2 + \frac{\delta_2}{2} H^\dagger H |S|^2 + \frac{b_2}{2} |S|^2 + \frac{d_2}{4} |S|^4 + \left( \frac{b_1}{4} S^2 + a_1 S + \text{c.c.} \right) , \qquad (2.2)
with the fields defined as
H = \begin{pmatrix} G^+ \\ \frac{1}{\sqrt{2}} (v + h + i G^0) \end{pmatrix} \quad \text{and} \quad S = \frac{1}{\sqrt{2}} \left[ v_S + s + i (v_A + a) \right] , \qquad (2.3)
where v ≈ 246 GeV is the vacuum expectation value (VEV) of the h field and v_S and v_A are the VEVs of the real and imaginary parts of the complex singlet field, respectively. Imposing invariance under S → S* (or A → −A) implies that a_1 and b_1 are real. The vacuum structure determines the number of stable particles (see the discussion in [1]). In the broken phase, where all three VEVs are non-zero and the three CP-even scalars mix, the mass eigenstates H_i are obtained via the rotation matrix R, which we parametrize as
R =
( c_1 c_2                    s_1 c_2                    s_2     )
( −(c_1 s_2 s_3 + s_1 c_3)   c_1 c_3 − s_1 s_2 s_3      c_2 s_3 )
( −c_1 s_2 c_3 + s_1 s_3     −(c_1 s_3 + s_1 s_2 c_3)   c_2 c_3 ) , (2.4)
where we have defined s i ≡ sin α i and c i ≡ cos α i , with the angles varying in the range −π/2 ≤ α i < π/2 and the masses of the neutral Higgs ordered as m H 1 ≤ m H 2 ≤ m H 3 . A detailed account of the models can be found in [1].
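Since the same parametrization of R is reused below for the C2HDM and N2HDM, a short numerical check that Eq. (2.4) indeed defines a proper rotation may be useful (the function name is ours):

```python
import numpy as np

def mixing_matrix(a1, a2, a3):
    """The 3x3 matrix R of Eq. (2.4), with s_i = sin(alpha_i), c_i = cos(alpha_i)."""
    s1, s2, s3 = np.sin([a1, a2, a3])
    c1, c2, c3 = np.cos([a1, a2, a3])
    return np.array([
        [c1 * c2,                    s1 * c2,                    s2],
        [-(c1 * s2 * s3 + s1 * c3),  c1 * c3 - s1 * s2 * s3,     c2 * s3],
        [-c1 * s2 * c3 + s1 * s3,    -(c1 * s3 + s1 * s2 * c3),  c2 * c3],
    ])

R = mixing_matrix(0.3, -0.7, 1.1)
# rows are the mass eigenstates H_1..H_3 expressed in the gauge basis,
# so R must be orthogonal with unit determinant
print(np.allclose(R @ R.T, np.eye(3)))     # True
print(np.isclose(np.linalg.det(R), 1.0))   # True: a proper rotation
```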
SM + doublet (2HDM and C2HDM)
The potential for the real (2HDM) and complex (C2HDM [2]) versions of the 2-Higgs-Doublet Model is chosen to be invariant under the Z_2 transformations Φ_1 → Φ_1 and Φ_2 → −Φ_2,

V = m_11^2 |Φ_1|^2 + m_22^2 |Φ_2|^2 − m_12^2 (Φ_1†Φ_2 + h.c.) + (λ_1/2) (Φ_1†Φ_1)^2 + (λ_2/2) (Φ_2†Φ_2)^2 + λ_3 (Φ_1†Φ_1)(Φ_2†Φ_2) + λ_4 (Φ_1†Φ_2)(Φ_2†Φ_1) + (λ_5/2) [ (Φ_1†Φ_2)^2 + h.c. ] . (2.5)
The 2HDM is defined with all parameters and VEVs real, while the C2HDM is built with real VEVs but complex m_12^2 and λ_5. The particle spectrum of the 2HDM includes two CP-even scalars, one CP-odd scalar and two charged Higgs bosons. The C2HDM has two charged scalars and three neutral scalar bosons H_i (i = 1, 2, 3) with no definite CP, ordered by ascending mass according to m_H1 ≤ m_H2 ≤ m_H3. In the C2HDM the neutral mass eigenstates are obtained via a rotation matrix which we again parametrise as R in Eq. (2.4), with the same allowed range for the mixing angles. The 2HDM has 8 independent parameters while the C2HDM has 9 free parameters. For both models we define the common parameters v = √(v_1^2 + v_2^2) ≈ 246 GeV and tan β = v_2/v_1. The remaining free parameters of the 2HDM are α, m_h, m_H, m_A, m_H± and m_12^2, where α is the rotation angle in the CP-even sector. For the C2HDM the remaining free parameters are α_{1,2,3}, m_Hi, m_Hj, m_H± and Re(m_12^2); the third neutral Higgs mass is obtained from the other parameters [3]. The 2HDM and C2HDM discussed in this work have no tree-level FCNCs due to the global Z_2 symmetry imposed on the scalar doublets, which is extended to the fermions, leading to the four independent Yukawa versions of the model: Type I, Type II, Flipped and Lepton Specific. All couplings for the C2HDM can be found in [4].
SM + doublet + singlet (N2HDM)
The potential chosen for the N2HDM [5] is invariant under the Z 2 symmetries
Φ_1 → Φ_1 , Φ_2 → −Φ_2 , Φ_S → Φ_S (2.6)
which is softly broken by the m 2 12 term and
Φ_1 → Φ_1 , Φ_2 → Φ_2 , Φ_S → −Φ_S (2.7)
broken spontaneously by the singlet VEV. We write the potential as
V = m_11^2 |Φ_1|^2 + m_22^2 |Φ_2|^2 − m_12^2 (Φ_1†Φ_2 + h.c.) + (λ_1/2) (Φ_1†Φ_1)^2 + (λ_2/2) (Φ_2†Φ_2)^2 + λ_3 (Φ_1†Φ_1)(Φ_2†Φ_2) + λ_4 (Φ_1†Φ_2)(Φ_2†Φ_1) + (λ_5/2) [ (Φ_1†Φ_2)^2 + h.c. ] + (1/2) m_S^2 Φ_S^2 + (λ_6/8) Φ_S^4 + (λ_7/2) (Φ_1†Φ_1) Φ_S^2 + (λ_8/2) (Φ_2†Φ_2) Φ_S^2 . (2.8)
This model is CP-conserving and has no dark matter candidate. The particle spectrum includes two charged Higgs bosons, one CP-odd boson and three CP-even scalars, which again we denote by H_i. One of the CP-even scalars is chosen to be the 125 GeV Higgs. The rotation from the gauge eigenstates to the mass eigenstates in the CP-even sector is again given by R, with the angles α_i varying in the same range as before. The model has 12 independent parameters: v, tan β, α_{1,2,3}, m_{H1,2,3}, m_A, m_H± and m_12^2. Extending the Z_2 symmetry to the Yukawa sector, we end up with the same four types of Yukawa models. A detailed study of the N2HDM was performed in [5].
The 125 GeV Higgs
Higgs couplings to gauge bosons
The tree-level couplings of the 125 GeV Higgs to gauge bosons are, in all the models discussed, smaller than the corresponding SM coupling. This is a consequence of unitarity: the sum over the CP-even Higgs bosons H_i of the squared couplings g_{HiVV}^2, where V = W, Z, has to equal the squared SM coupling (g^SM_hVV)^2. To simplify the discussion, we take the lightest Higgs in each model to be the 125 GeV one and call it h. The couplings to gauge bosons in the RxSM and in the 2HDM are then modified relative to the SM as
g^RxSM_hVV = cos(α_1) g^SM_hVV ; g^2HDM_hVV = sin(β − α) g^SM_hVV , (3.1)
while for the CxSM, C2HDM and N2HDM the couplings are modified relative to the RxSM and to the 2HDM, respectively, as
g^CxSM_hVV = cos(α_2) g^RxSM_hVV ; g^N2HDM_hVV = cos(α_2) g^2HDM_hVV ; g^C2HDM_hVV = cos(α_2) g^2HDM_hVV . (3.2)
However, the angle α 2 has very different meanings in these models, that is, it measures different contributions to the 125 GeV Higgs: the imaginary component of the singlet in the CxSM, the singlet component in the N2HDM and the CP-odd component in the C2HDM.
Higgs couplings to fermions
In the case of the singlet extensions, the 125 GeV Higgs Yukawa couplings are modified relative to the SM by the same factor that modifies the Higgs couplings to gauge bosons: cos(α_1) for the RxSM and cos(α_1) cos(α_2) for the CxSM,
Y^RxSM_hff = cos(α_1) Y^SM_hff ; Y^CxSM_hff = cos(α_2) Y^RxSM_hff , (3.3)
while for the N2HDM and for the C2HDM the couplings are modified relative to the 2HDM, respectively, as
Y^N2HDM_hff = cos(α_2) Y^2HDM_hff ; Re(Y^C2HDM_hff) = cos(α_2) Y^2HDM_hff . (3.4)
That is, they are modified exactly like for the gauge bosons, except for the C2HDM for which there is an imaginary component of the Yukawa coupling that may have one of the following forms
Im(Y^C2HDM_hff) = ±i sin(α_2) tan β Y^SM_hff ; Im(Y^C2HDM_hff) = ±i [sin(α_2)/tan β] Y^SM_hff , (3.5)
depending on the model type (see [4,6] for details). As discussed in [6], even if the angle α_2 that measures the amount of CP violation is small, the pseudoscalar component can still be large if tan β is large.
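To illustrate the last point numerically, assuming the tan β-enhanced pattern of Eq. (3.5) (the function name is ours):

```python
import math

def pseudoscalar_ratio(alpha2, tan_beta):
    """|Im(Y_hff)| / |Y^SM_hff| for the tan(beta)-enhanced pattern of Eq. (3.5)."""
    return abs(math.sin(alpha2)) * tan_beta

alpha2 = math.radians(5.0)   # a small CP-violating angle
for tb in (1.0, 10.0, 30.0):
    print(tb, round(pseudoscalar_ratio(alpha2, tb), 3))
# even for alpha_2 = 5 degrees, tan(beta) = 30 gives a pseudoscalar component
# more than twice the SM Yukawa strength
```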
Bounds on the h 125 components
In the models discussed in this work, the 125 GeV Higgs can be a combination of the two doublets as in the 2HDM, it can have a CP-even and a CP-odd admixture as in the C2HDM, or it can have a singlet admixture as in the singlet extensions and in the N2HDM. Using the ATLAS and CMS combined measurements [7] of the Higgs couplings after the LHC Run 1, we can derive the maximum allowed admixtures [8], shown in Table 1, where Σ (Ψ) stands for the singlet (pseudoscalar) admixture of the 125 GeV Higgs:

Model            | CxSM | C2HDM II | C2HDM I | N2HDM II | N2HDM I | NMSSM
(Σ or Ψ) allowed | 11%  | 10%      | 20%     | 55%      | 25%     | 41%

Results are shown for a few selected models, including the Next-to-Minimal Supersymmetric Standard Model (NMSSM). It is clear from the table that substantial admixtures are still allowed after Run 1. However, at a future electron-positron collider such as CLIC, the precise measurements of the couplings will reduce the allowed admixtures well below the percent level. Using the CLIC predictions for the measurements of the Higgs couplings [9,10], we found [11] that the bounds on the admixtures are completely dominated by the measurement of κ_HZZ for √s = 350 GeV and a luminosity of 500 fb^−1, and by κ_HWW for √s = 3 TeV and a luminosity of 2.0 ab^−1, where κ^2_Hii = Γ^BSM_Hii / Γ^SM_Hii. With very precise measurements of κ_ZZ,WW, and because the unitarity relation [11]
κ^2_ZZ,WW + Ψ + Σ ≤ 1 (3.6)
holds in all models and is independent of the Yukawa type, the bounds on the admixtures (assuming that the central values are the SM predictions) will be roughly the same for all models and are given by [11]:
• √s = 350 GeV and a luminosity of 500 fb^−1: Σ, Ψ < 0.85%, from κ_HZZ;
• √s = 3 TeV and a luminosity of 2.0 ab^−1: Σ, Ψ < 0.30%, from κ_HWW.
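The relation (3.6) can be illustrated numerically in the CP-conserving, N2HDM-like case, where Ψ = 0, the singlet admixture of H_i is Σ = R_i3^2, and κ_HiVV = cos β R_i1 + sin β R_i2; the κ expression is our assumption for this sketch, and the bound then follows from the orthonormality of the rows of R:

```python
import numpy as np

def mixing(a1, a2, a3):
    # same parametrization as Eq. (2.4)
    s1, s2, s3 = np.sin([a1, a2, a3]); c1, c2, c3 = np.cos([a1, a2, a3])
    return np.array([[c1*c2, s1*c2, s2],
                     [-(c1*s2*s3 + s1*c3), c1*c3 - s1*s2*s3, c2*s3],
                     [-c1*s2*c3 + s1*s3, -(c1*s3 + s1*s2*c3), c2*c3]])

rng = np.random.default_rng(7)
for _ in range(1000):
    a1, a2, a3 = rng.uniform(-np.pi/2, np.pi/2, 3)
    beta = rng.uniform(0.05, np.pi/2 - 0.05)
    M = mixing(a1, a2, a3)
    for i in range(3):
        kappa2 = (np.cos(beta)*M[i, 0] + np.sin(beta)*M[i, 1])**2  # (g_HiVV/g_SM)^2
        sigma = M[i, 2]**2                                          # singlet admixture
        assert kappa2 + sigma <= 1.0 + 1e-12
print("kappa^2 + Sigma <= 1 for every sampled point")
```

By the Cauchy-Schwarz inequality, κ² ≤ R_i1² + R_i2² = 1 − R_i3², which is exactly (3.6) with Ψ = 0.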
Non-SM like scenarios
There are many phenomenologically interesting scenarios in the extended Higgs models. New scalars are predicted, in particular charged Higgs bosons, which would definitely signal new physics beyond the SM. The most interesting signals, which would change our view of the scalar sector, are given by the C2HDM. In fact, as discussed in [12,13], if a new Higgs boson is found with substantial decays H_2 → h_125 Z and H_2 → ZZ, then, combined with the already observed h_125 → ZZ, it would strongly hint at a CP-violating scalar sector. However, only a detailed investigation of the model could confirm the CP nature of the new sector, because decays of the type A → ZZ are induced at one loop in CP-conserving scalar sectors. CP-violating sectors also allow for peculiar situations such as the one described in Figure 1. We present the allowed points for the Type II C2HDM with H_2 = 125 GeV. In the left panel, points that are pure pseudoscalar (in the b and τ couplings) are still allowed, while in the right panel we see that only points with a very small pseudoscalar component are allowed. Therefore, if direct detection concludes that the Higgs is mostly a scalar in the ttH coupling but mostly a pseudoscalar in the ττH coupling, this can be a sign of CP violation. Finally, we note that a decay of a scalar into two other scalars of different masses is sometimes one of the best search channels [14] in the models where such decays are allowed. This is not possible in the 2HDM but it is possible in all the other models presented in this work. Experimental searches for this type of decay are therefore important for the next LHC run. All points comply with the most relevant theoretical bounds and the most up-to-date experimental results.
Comparing models
If a new scalar is found, we need to understand whether its properties point to a particular model or whether there are models where it is excluded. We have compared several models in recent papers and found that event rates are sometimes enough to single out particular models in given regions of their parameter space. Furthermore, even the different Yukawa versions of a specific model can sometimes be distinguished. In Figure 2 we show the total rates for the production (in gg + bb) of a h_125 Higgs decaying into two lighter scalars of the same mass. In the left panel we show the results for Type I and Type II, and in the right panel for the Lepton Specific and Flipped models. We use the notation H↓ to identify the lightest (non-125 GeV) scalar in the model. Clearly, all versions of the C2HDM can be probed at the next LHC run. Also, if the cross sections are above 1 pb, some Yukawa versions are favoured [4]: Type II on the left and the Flipped model on the right.
Conclusions
In this work we have presented and discussed some simple extensions of the scalar sector of the SM. We have shown how the 125 GeV Higgs can deviate from a pure doublet structure by looking at the admixture with a singlet component and, in the case of the C2HDM, the admixture with a CP-odd component. We concluded that until the end of the LHC programme the bounds will be quite different in the models presented, of the order of tens of percent. At a future electron-positron collider such as CLIC, the bounds on the admixtures become very strong (below 1%) and all models have roughly the same bounds due to unitarity. In such a scenario, new physics can only be seen through the discovery of a new scalar.
Some of the extensions presented provide very interesting signals of new physics. Not only are charged Higgs bosons predicted in most models, but the C2HDM is particularly interesting if certain combinations of three decays are seen or if the CP nature of the scalars can be studied in different channels in direct searches.
There has been an effort to calculate electroweak radiative corrections to Higgs decays in these models, in particular for the singlet extension [15-17], for the 2HDM [18,19] and for the N2HDM [20]. Taking into account the uncertainties in those corrections and the very broad allowed parameter space of the models, no relevant conclusions can be drawn, except perhaps if a new particle is found.
Several codes based on HDECAY [21,22] for each of the models presented are available for the calculation of all Higgs branching ratios, including the state-of-the art higher order QCD corrections and possible off-shell decays:
• Singlet extension, both for the RxSM and for the CxSM in their symmetric and broken phases [14], named sHDECAY 1 .
• 2HDM [23] as part of the HDECAY release and C2HDM [4] named C2HDM_HDECAY 2 .
• N2HDM, named N2HDECAY 3 [5,24], which implements the N2HDM in several phases.

Finally, it can happen that the LHC will not show any signs of new physics in the next years. In that case, particle physicists can spend their time having fun building new models and exploring them. It could be that one of these models will finally answer the outstanding problems in particle physics and that it will predict signatures that were not searched for so far at the LHC. It is the right of a theorist to party! (see Figure 3 4)

1 The program sHDECAY can be downloaded from http://www.itp.kit.edu/~maggie/sHDECAY.
2 The program C2HDM_HDECAY can be downloaded from https://www.itp.kit.edu/~maggie/C2HDM.
3 The program N2HDECAY is available at https://gitlab.com/jonaswittbrodt/N2HDECAY.
4 Figure from Wikipedia.
Figure 1: Allowed points in the Type II C2HDM for the case when H_2 is the 125 GeV Higgs. We show the points in the plane of CP-odd versus CP-even Yukawa couplings. Left: b and τ couplings; right: t couplings. The Lagrangian is written as proportional to ψ̄_f [c_e(H_i ff) + i c_o(H_i ff) γ_5] ψ_f H_i.
Figure 2: Total rates for the production of a h_125 Higgs decaying into two lighter scalars of the same mass. Left: Type I and Type II; right: Lepton Specific and Flipped.
Figure 3: The right to party.
Table 1: Allowed singlet and pseudoscalar (for the C2HDM) admixtures after the LHC Run 1.
References

[1] R. Coimbra, M. O. P. Sampaio and R. Santos, Eur. Phys. J. C 73 (2013) 2428.
[2] I. F. Ginzburg, M. Krawczyk and P. Osland, in *Seogwipo 2002, Linear colliders*, 90-94 [hep-ph/0211371].
[3] A. W. El Kaffas, P. Osland and O. M. Ogreid, Nonlin. Phenom. Complex Syst. 10 (2007) 347.
[4] D. Fontes, M. Mühlleitner, J. C. Romão, R. Santos, J. P. Silva and J. Wittbrodt, JHEP 1802 (2018) 073.
[5] M. Mühlleitner, M. O. P. Sampaio, R. Santos and J. Wittbrodt, JHEP 1703 (2017) 094.
[6] D. Fontes, J. C. Romão, R. Santos and J. P. Silva, JHEP 1506 (2015) 060.
[7] G. Aad et al. [ATLAS and CMS Collaborations], Phys. Rev. Lett. 114 (2015) 191803.
[8] M. Mühlleitner, M. O. P. Sampaio, R. Santos and J. Wittbrodt, JHEP 1708 (2017) 132.
[9] E. Sicking [CLICdp Collaboration], Nucl. Part. Phys. Proc. 273-275 (2016) 801.
[10] H. Abramowicz et al. [CLIC Detector and Physics Study Collaboration], arXiv:1307.5288 [hep-ex].
[11] D. Azevedo, P. Ferreira, M. Mühlleitner, R. Santos and J. Wittbrodt, arXiv:1808.00755 [hep-ph].
[12] D. Fontes, J. C. Romão, R. Santos and J. P. Silva, Phys. Rev. D 92 (2015) no.5, 055014.
[13] S. F. King, M. Mühlleitner, R. Nevzorov and K. Walz, Nucl. Phys. B 901 (2015) 526.
[14] R. Costa, M. Mühlleitner, M. O. P. Sampaio and R. Santos, JHEP 1606 (2016) 034.
[15] F. Bojarski, G. Chalons, D. Lopez-Val and T. Robens, JHEP 1602 (2016) 147.
[16] S. Kanemura, M. Kikuchi, K. Sakurai and K. Yagyu, Phys. Rev. D 96 (2017) no.3, 035014.
[17] R. Costa, M. O. P. Sampaio and R. Santos, JHEP 1707 (2017) 081.
[18] M. Krause, R. Lorenz, M. Mühlleitner, R. Santos and H. Ziesche, JHEP 1609 (2016) 143.
[19] A. Denner, L. Jenniches, J. N. Lang and C. Sturm, JHEP 1609 (2016) 115.
[20] M. Krause, D. Lopez-Val, M. Mühlleitner and R. Santos, JHEP 1712 (2017) 077.
[21] A. Djouadi, J. Kalinowski and M. Spira, Comput. Phys. Commun. 108 (1998) 56.
[22] A. Djouadi, J. Kalinowski, M. Muehlleitner and M. Spira, arXiv:1801.09506 [hep-ph].
[23] R. Harlander, M. Mühlleitner, J. Rathsman, M. Spira and O. Stal, arXiv:1312.5571 [hep-ph].
[24] I. Engeln, M. Mühlleitner and J. Wittbrodt, arXiv:1805.00966 [hep-ph].
The four LEP collaborations, ALEPH, DELPHI, L3 and OPAL, have collected 2465 pb −1 of e + e − collision data at energies between 189 and 209 GeV, of which 542 pb −1 were collected above 206 GeV. Searches for the Standard Model Higgs boson have been performed by each of the LEP collaborations. Their data have been combined and examined for their consistency with the Standard Model background and various Standard Model Higgs boson mass hypotheses. A lower bound of 114.1 GeV has been obtained at the 95% confidence level for the mass of the Higgs boson. The likelihood analysis shows a preference for a Higgs boson with a mass of 115.6 GeV. At this mass, the probability for the background to generate the observed effect is 3.4%.
Search for the Standard Model Higgs Boson at LEP
ALEPH, DELPHI, L3 and OPAL Collaborations; The LEP working group for Higgs boson searches
EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH
arXiv:hep-ex/0107029v4, 17 Jul 2001
THE RESULTS QUOTED IN THIS PAPER ARE NOT FINAL
Contributed paper for EPS'01 in Budapest and LP'01 in Rome
Introduction
The Higgs mechanism [1] plays a central role in the unification of the electromagnetic and weak interactions by providing mass to the intermediate vector bosons, W and Z, without violating local gauge invariance. Within the Standard Model (SM) [2], the Higgs mechanism predicts a single neutral scalar particle, the Higgs boson. Its mass is arbitrary; however, self-consistency of the model up to a scale Λ imposes an upper [3] and lower bound [4]. If Λ is close to the Planck scale, the mass of the SM Higgs boson is confined between about 130 and 190 GeV [5]. A mass less than 130 GeV would indicate physics beyond the SM to set in below the Planck scale; for example, in the minimal supersymmetric extension of the SM the mass of the lightest neutral scalar h 0 is predicted to be less than 135 GeV [6]. Even stronger bounds are obtained using arguments of naturalness and fine-tuning [7].
Indirect experimental constraints are derived from precision measurements of electroweak parameters, whose interpretation depends logarithmically on the Higgs boson mass via radiative corrections. If the SM is assumed, the currently preferred mass value is m_H = 88 +53/−33 GeV, and the 95% confidence level upper bound on the mass is 196 GeV [8].
Direct searches carried out by the four LEP collaborations in data collected prior to the year 2000 did not reveal any signal for the SM Higgs boson. When the LEP data were statistically added, the observed event rate and their distributions have shown good agreement with the SM background processes [9,10,11].
The situation changed during summer 2000 with the advent of new LEP data, at centre-of-mass energies exceeding 206 GeV. At the session of the LEP Committee of September 5, 2000, ALEPH reported an excess of events suggesting the production of a SM Higgs boson with mass in the vicinity of 115 GeV [12] while DELPHI, L3 and OPAL did not support this observation. The quoted probabilities for the SM background to produce the observed event configuration (1 − CL_b, as defined below in Section 2.5) are listed in the first line of Table 1, where the LEP combined result is also quoted. Due to this ambiguous situation, the LEP shutdown planned for the end of September was postponed by one month, and all effort was made to maximize the LEP energy [13].
A rapid analysis which included the bulk part of the new data resulted in the probabilities listed in the second line of Table 1. These results were presented at the LEP Committee meeting of November 3, 2000 [13]. The ALEPH excess was slightly attenuated, as indicated by the increased background probability. On the other hand, L3 reported some candidates supporting the Higgs boson interpretation [14].
After a thorough review of the analysis procedures, the LEP collaborations have published their results [15,16,17,18], updating them to include all data. The L3 publication [17] is final. The review addressed many potential systematic errors, especially in the handling of a signal at the kinematic limit of the production process e + e − → ZH. Also, the uncertainties from Monte Carlo statistics were reduced and in some cases the search sensitivity has been improved. The published background probabilities at a test-mass of 115 GeV are reported in the last line of Table 1. The ALEPH [15] and L3 [17] excesses have decreased since the beginning of November.
In this paper we present combined results from LEP which are based on these recent publications. However, the inputs also include data collected before the year 2000. The c.m. energies (E_cm) thus span the range from 189 GeV to 209 GeV. The integrated luminosities by experiment and energy are given in Table 2.

At LEP energies, the SM Higgs boson is expected to be produced mainly in association with a Z boson through the Higgsstrahlung process, e+e− → HZ [19]. Small additional contributions are expected from t-channel W and Z boson fusion processes, which produce a Higgs boson and either a pair of neutrinos or electrons in the final state [20]. For masses in the vicinity of 115 GeV (the kinematic limit for Higgsstrahlung at E_cm ≈ 206 GeV), the SM Higgs boson is expected to decay mainly into bb quark pairs (74%), while decays to tau lepton pairs, WW*, gluon pairs (≈ 7% each), and to cc (≈ 4%) are all less important. The final-state topologies are determined by these decays and by the decay of the associated Z boson. The searches at LEP encompass the four-jet final state (H→bb)qq, the missing energy final state (H→bb)νν, the leptonic final state (H→bb)ℓ+ℓ−, where ℓ denotes an electron or a muon, and the tau lepton final states (H→bb)τ+τ− and (H→τ+τ−)(Z→qq).
Preselection cuts are applied to reduce the main background from two-photon processes and from radiative returns to the Z boson, e + e − →Zγ(γ). The remaining background, mainly from fermion pairs (possibly with photon or gluon radiation), WW, and ZZ, is reduced by applying cuts which make use of kinematic differences between the signal and the background processes and of the requirement of b-flavour, abundant in the decay of the Higgs boson. The detailed implementation of these selections and analysis procedures is different for each experiment [15]- [18]. In some search channels 1 , the selection depends explicitly upon the hypothesized Higgs boson mass.
2 Combination procedure and results
Input provided by the experiments
The information provided by the LEP experiments as input to the combination is in most cases binned in two discriminating variables: (i) the reconstructed Higgs boson mass m rec H , and (ii) a variable G which combines many features of the events and allows the analysis to distinguish on a statistical basis between events from the Higgs boson signal and events from background processes. This variable is typically the outcome of a likelihood analysis or the output of an artificial neural network. Variables which tag b-flavoured jets contribute in an essential way to the value of G.
In a given bin i of the plane defined by m rec H and G, the experiments provide the number N i of selected data events, the expected background rate b i , and the expected signal s i (m H ) for a set of hypothesized Higgs boson masses (test-mass m H hereafter). In those channels where the selection depends on m H , the values of N i and b i are also given for a set of m H values. For a given test-mass, a weight of s/b can thus be assigned to each selected candidate, depending on m H and the bin where it is reconstructed. The estimation of s i and b i makes use of detailed Monte Carlo simulations which take into account all known experimental features such as the c.m. energy and integrated luminosity of the data samples, cross-sections and decay branching ratios for the signal and background processes, selection efficiencies, experimental resolutions with non-gaussian contributions and systematic errors with their correlations. Since the simulation is done at fixed sets of E cm and m H , interpolation procedures such as [21] are applied to obtain the distributions which correspond to arbitrary energies and test-masses. In order to avoid problems which might arise in some bins due to low Monte Carlo statistics, smoothing procedures such as [22] are applied which use the corresponding information in the neighbouring bins.
Hypothesis testing
The observed data configuration in the [m rec H , G] plane is subjected to a likelihood test of two hypothetical scenarios. In the background scenario it is assumed that the data receive contributions from the SM background processes only while in the signal+background scenario the contribution from a Higgs boson of test-mass m H is assumed in addition. The expressions for the corresponding likelihoods, L b and L s+b , are given e.g. in Appendix A of Ref. [9]. The ratio
Q = L_{s+b} / L_b (1)
serves as the test-statistic, allowing one to rank any data configuration between the background and signal+background hypotheses. For convenience, the quantity
−2 ln Q = 2 s_tot − 2 Σ_i N_i ln[1 + s_i/b_i] (2)
is used since in the limit of high statistics it corresponds to the difference in χ 2 between the two hypotheses. In the above expression, s tot = i s i is the total expected signal rate. This test-statistic has been adopted since it makes the most efficient use of the information available in the observed event configuration of a search, similarly to the way the principle of maximum likelihood gives the most efficient estimators of parameters in a measurement. Figure 1 shows the test-statistic −2 ln Q as a function of the test-mass for the present combination of LEP data. The expected curves and their spreads are obtained by replacing the observed data configuration by a large number of simulated event configurations.
There is a minimum in the observed −2 ln Q at m H = 115.6 GeV (maximum of the likelihood ratio Q) indicating a deviation from the background hypothesis. The minimum coincides with the signal+background expectation for the same test-mass. The value of −2 ln Q at m H = 115.6 GeV is −2.88.
Another feature in Figure 1 is a persistent tail in the observation towards lower test-masses where the observed curve stays away from the prediction for background. This is interpreted as being due to a large extent to the experimental resolution. A test has been performed where the signal expected from a 115 GeV Higgs boson was injected in the background simulation and propagated through the likelihood ratio calculation at each m H value. Although the resulting curve (dotted line) reproduces the main feature of the observed tail 2 , local excess events due to statistical fluctuations can also contribute to the tail.
In Figures 2 and 3 the likelihood test is applied to subsets of the data, from individual experiments and final-state topologies. In the vicinity of m H = 115 GeV, the signal-like behaviour mainly originates from the ALEPH data and is concentrated in the four-jet final state. One should note that none of the four experiments, taken separately, have the statistical power to distinguish between the background and the signal+background hypotheses at the level of two standard deviations for a test mass of 115 GeV (see the intersection of the signal+background curve with the lower edge of the light-shaded bands). Among the final-state topologies, only the LEP combined four-jet channel is sufficiently powerful to do so.
Contributions from single events
The likelihood ratio −2 ln Q is built up from individual event weights ln(1 + s/b). The 20 candidates with the highest weights at m H = 115 GeV are listed in Table 3. Some of these candidates are discussed in detail in Ref's [15], [14], [17], [18] and [23]. For the events of each experiment with the highest weight at m H = 115 GeV, the evolution of ln(1 + s/b) with test-mass is shown in Figure 4. Due to the experimental resolution, candidate events with a given reconstructed mass are seen to have sizeable weights for a range of test-masses, with the maximum weight being for test-masses close to the reconstructed mass.
Table 3 columns: Expt, E_cm, decay channel, M_H^rec (GeV), ln(1 + s/b) at m_H = 115 GeV (individual entries not reproduced here).
The distribution of event weights for the test-mass fixed at m H = 115.6 GeV is shown in the upper part of Figure 5 (log 10 s/b is plotted for better visibility). For the purpose of this figure, a cut at s/b > 0.01 has been introduced. The upper right plot shows the integrals of these distributions, starting from high values of s/b (note that the bins are correlated). The data prefer slightly the signal+background hypothesis over the background hypothesis although the separation is weak. The two plots in the lower part show the corresponding distributions for a test-mass chosen arbitrarily at m H = 110 GeV. The data show clear preference for the background hypothesis in this case.
There is a general agreement between the observed and simulated rates, see Table 4.
Table 4 columns: m_H, ln(1 + s/b)_min, expected rate (individual entries not reproduced here).
Distributions of the reconstructed Higgs boson mass
The reconstructed Higgs boson mass m_H^rec is just one of several discriminating variables contributing to the separation of the signal and the background processes and the construction of the likelihood ratio Q. Since in some channels the event selection depends explicitly on the test-mass, the reconstructed mass distributions resulting from the standard combination procedure are biased. The distributions shown in Figure 6 are therefore obtained from special selections where the cuts are applied on quantities (e.g. b-tag variables) which introduce little bias into the m_H^rec distribution. Three such selections are shown, with increasing signal purity. In the loose/medium/tight selections the cuts are adjusted in each decay channel to obtain, for m_H = 115 GeV, a signal over background ratio of 0.5/1/2 in the reconstructed mass region above 109 GeV. These spectra are shown merely to illustrate the agreement between the data and the simulation in this important discriminating variable, and should not be used to draw conclusions regarding the significance of a possible signal. Most importantly, it is not claimed that the slight excess at high mass in the tight selection (4 events for an expected background of 1.25 events) is solely responsible for the result quoted below (see Table 5).
Confidence level calculation
It should be noted that these probabilities refer to local fluctuations of the background. To obtain the probability for such a fluctuation to appear anywhere within a given mass range of interest, a multiplicative factor has to be applied which is approximated by the width of the mass range divided by the mass resolution. In the present case the range of interest is limited from below by previous exclusion limits (107.9 GeV [9]) and from above by the kinematic limit of the production process e + e − → HZ (about 116 GeV). The mass resolution averaged over the final-state topologies and experiments is about 3.5 GeV.
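The recipe in this paragraph is simple arithmetic. A hypothetical back-of-the-envelope version, using only numbers stated in the text (the paper itself quotes only the local probability, so the global value below is an illustrative estimate, not a quoted result):

```python
# Illustrative estimate only: the trials factor is approximated, as stated
# above, by the width of the mass range divided by the mass resolution.
lo, hi = 107.9, 116.0   # GeV: previous exclusion limit and kinematic limit
resolution = 3.5        # GeV: mass resolution averaged over topologies

n_trials = (hi - lo) / resolution

p_local = 0.034         # 1 - CL_b at the preferred test-mass
p_global = 1.0 - (1.0 - p_local) ** n_trials  # ~ n_trials * p_local when small

print(round(n_trials, 1), round(p_global, 3))
```

With a trials factor of roughly two, a local fluctuation probability of a few percent is diluted only mildly when referred to the whole allowed mass range.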
Bounds for the Higgs boson mass and coupling
The ratio CL s = CL s+b /CL b as a function of the test-mass, shown in Figure 9, is used to derive a lower bound for the SM Higgs boson mass ([9], Appendix A). The test-mass corresponding to CL s = 0.05 defines the lower bound at the 95% confidence level. The expected and observed lower bounds obtained for the SM Higgs boson mass are listed in Table 6. The current lower bound from LEP is 114.1 GeV at the 95% confidence level.
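Reading the bound off the CL s curve amounts to locating where it crosses 0.05. A sketch of that step with invented scan points (not the LEP values), using linear interpolation between scanned test-masses:

```python
def lower_bound(masses, cls_values, cl=0.05):
    """Return the test-mass where CL_s crosses `cl`, by linear interpolation.

    Assumes CL_s rises with mass towards the kinematic limit, so the
    95% CL lower bound is the last upward crossing of `cl`.
    """
    bound = None
    points = list(zip(masses, cls_values))
    for (m0, c0), (m1, c1) in zip(points, points[1:]):
        if c0 < cl <= c1:  # upward crossing of the target confidence level
            bound = m0 + (cl - c0) * (m1 - m0) / (c1 - c0)
    return bound

# Hypothetical scan, NOT the LEP numbers:
masses = [112.0, 113.0, 114.0, 115.0]
cls = [0.010, 0.030, 0.048, 0.120]
print(lower_bound(masses, cls))  # crossing between 114 and 115 GeV
```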
The LEP data are used also to set 95% CL upper bounds on the square of the HZZ coupling in non-standard models which assume the same Higgs decay properties as in the SM but where the HZZ coupling may be different. Figure 10 shows the upper bound on ξ 2 = (g HZZ /g SM HZZ ) 2 , the square of the ratio of the coupling in such a model to the SM coupling, as a function of the Higgs boson mass. In deriving this limit, the data collected at E cm = 161, 172 and 183 GeV were also included.
Cross-checks, uncertainties

(i) It is legitimate to ask whether the excess at 115 GeV mass could be induced by an inadequate treatment of the data close to the kinematic limit of the process e + e − → HZ. To test this hypothesis, the −2 ln Q curves (the equivalents of Figure 1) have been produced separately for data at different c.m. energies, see Figure 11. In each plot, the vertical line indicates the test-mass m H = E cm − M Z, just at the kinematic limit.
In the 189 GeV data, an excess at m = 97 GeV has indeed been observed [25] (see the large negative value of −2 ln Q close to the signal+background prediction) which was due mainly to small excesses in ALEPH and OPAL data compatible with e + e − → ZZ, the dominant background in the vicinity of that mass. This excess still has a significance of about two standard deviations when LEP data from all energies are combined, and one cannot exclude a physics interpretation beyond the SM (e.g. MSSM with several neutral Higgs bosons). However, there is no evidence for a systematic effect at threshold in the data collected at the other energies below 206 GeV.
(ii) The LEP experiments quote systematic errors of typically 5% on their signal estimates and 10% on their background estimates. Most of the errors are estimated from calibration data (e.g. data taken at E cm = M Z to calibrate the b-tagging performance or to determine the level of non-b background) or from measurements of e + e − annihilations into fermion pairs, WW and ZZ processes. The current implementation of systematic uncertainties (see Refs. [15]-[18] for details) treats errors from the same source as fully correlated between experiments and errors from different sources as uncorrelated. Furthermore, all bins within the same channel have the same errors, and these errors are assumed to have Gaussian distributions. Several tests have been performed to assess the possible impact of this simplified treatment on the result.
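The stated correlation model, same-source errors fully correlated between channels and different sources uncorrelated, makes the combined covariance a sum of rank-one terms, one per error source. A sketch with invented numbers:

```python
def covariance(source_errors):
    """Covariance under the stated model: errors from one source are fully
    correlated across channels, different sources are uncorrelated.

    source_errors -- one list of per-channel absolute errors per source
    """
    n = len(source_errors[0])
    cov = [[0.0] * n for _ in range(n)]
    for errs in source_errors:
        # Each source contributes a rank-one outer product errs * errs^T.
        for i in range(n):
            for j in range(n):
                cov[i][j] += errs[i] * errs[j]
    return cov

# Two channels, two hypothetical sources (e.g. b-tag calibration affecting
# both channels coherently, background normalisation affecting only one):
cov = covariance([[0.05, 0.04],
                  [0.10, 0.00]])
print(cov)  # [[0.0125, 0.002], [0.002, 0.0016]]
```

The off-diagonal term comes entirely from the shared source, which is how a coherent calibration shift propagates to all experiments at once.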
Internal consistency
The excess at m H = 115.6 GeV has been examined in subsets obtained by dividing the data by experiment and by decay channel. It has also been analysed as a function of signal purity.
The first two subdivisions have been addressed in Figures 2 and 3 and in Table 5. The corresponding probability density distributions for m H = 115.6 GeV are shown in Figure 13. The largest difference occurs between the ALEPH and DELPHI subsets. Looking separately at the final-state topologies, the excess is mainly concentrated in the four-jet channel. Combining the four experiments while leaving out the four-jet channel, the lowest plot in Figure 13 is obtained.
As seen in Figure 5, the presence of a Higgs boson should affect a substantial part of the event weight distribution. If the data set is subdivided into high- and low-purity subsets by selecting s/b > 1 and s/b < 1, at which point the two subsamples have approximately equal expected sensitivity, the contributions to −2 ln Q are consistent, and slightly more signal-like (negative) in the low-purity subset. Hence, the observed excess is not due to a few events with exceptionally high weights only, but is reflected in the whole distribution of event weights.
Conclusion
Combining the data from the four LEP experiments, a new lower bound for the mass of the Standard Model Higgs boson has been derived: 114.1 GeV at the 95% confidence level. There is an excess which can be interpreted as the production of a Standard Model Higgs boson with a mass higher than the quoted limit. It is concentrated mainly in the data sets with centre-of-mass energies higher than 206 GeV. The likelihood test designates 115.6 GeV as the preferred mass. The probability for a fluctuation of the Standard Model background is 3.4%. This effect is mainly driven by the ALEPH data and the four-jet final state.
The expected distributions of −2 ln Q for a test-mass of 115.6 GeV (a slice of Figure 1 at m H = 115.6 GeV) are shown in Figure 7. The distributions for the background and the signal+background hypotheses are normalized and represent probability density functions. The vertical line indicating the observed value lies within the distribution for the signal+background hypothesis. The integral of the background distribution from −∞ to the observed value, 1 − CL b, measures the compatibility of the observation with the background hypothesis. Given a large number of background experiments, it is the probability to obtain an event configuration more signal-like than the one observed. Similarly, the integral from +∞ to the observed value of the signal+background distribution, CL s+b, is a measure of compatibility with the signal+background hypothesis. Calculating 1 − CL b for test-masses between 100 and 120 GeV, Figure 8 is obtained. At m H = 115.6 GeV, where −2 ln Q has its minimum (see Figure 1), one gets 1 − CL b = 0.034, which corresponds to about two standard deviations (Footnote 4). Values of 1 − CL b and CL s+b corresponding to m H = 115.6 GeV are listed in Table 5.
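The two integrals described here can be estimated with pseudo-experiments. A single-bin Poisson counting toy (rates invented, not the LEP channels) illustrates the machinery:

```python
import random
from math import exp, log

def poisson(lam, rng):
    # Knuth's algorithm: multiply uniforms until the product drops below e^-lam.
    limit, k, p = exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def q(n, s, b):
    # -2 ln Q for one counting channel with n observed events.
    return 2.0 * s - 2.0 * n * log(1.0 + s / b)

rng = random.Random(42)
s, b, n_obs = 5.0, 10.0, 17   # invented rates and observation
q_obs = q(n_obs, s, b)

toys_b = [q(poisson(b, rng), s, b) for _ in range(20000)]
toys_sb = [q(poisson(s + b, rng), s, b) for _ in range(20000)]

# 1 - CL_b: fraction of background toys at least as signal-like (low -2 ln Q);
# CL_s+b: fraction of signal+background toys at least as background-like.
one_minus_clb = sum(t <= q_obs for t in toys_b) / len(toys_b)
cl_sb = sum(t >= q_obs for t in toys_sb) / len(toys_sb)
print(one_minus_clb, cl_sb)  # roughly 0.027 and 0.75 for these rates
```

In the real combination the toys are generated over many channels with mass-dependent signal shapes, but the tail integrals are computed in exactly this way.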
(a) If the systematic errors are ignored, 1 − CL b decreases from 3.4% to 3.2%.

(b) The backgrounds in all channels would have to be increased coherently by 13% to reduce the excess at 115.6 GeV to the level of one standard deviation, and by 26% to obtain a typical background result (1 − CL b = 0.5). Such large coherent changes are not consistent with the quoted error estimates.

(c) In a test, the value of 1 − CL b for the observed data was recomputed 1000 times, each time with a set of signal and background estimations chosen randomly according to the assigned systematic uncertainties and their correlations. The distribution of 1 − CL b at m H = 115.6 GeV is shown in Figure 12. From the r.m.s. width of the distribution (about 50% of the mean value) and its asymmetry, one can conclude that the spread of results obtainable by varying the signal and background levels according to their errors is approximately ±0.2 standard deviations, when 1 − CL b is interpreted in terms of standard deviations. The systematic errors are already incorporated into the quoted result; this information is provided to demonstrate the limited sensitivity to the quoted systematic effects.

These tests do not address the question of completeness of the systematic errors provided by the experiments for the combination. Since only one of the experimental collaborations has published its final results, changes to the systematic errors provided by the other experiments cannot be excluded.

(iii) A technical uncertainty is ascribed to various approximations which are necessary to speed up the computations. This uncertainty is estimated by comparing the results from different software packages and by reproducing the −2 ln Q, 1 − CL b and CL s results of individual experiments prior to the combination. For the present paper, the value of 1 − CL b in the vicinity of m H = 115.6 GeV has been determined independently by four combiners; the results fall within a range of ±5% (relative).
The highest value, 1 − CL b = 0.034, is retained as the result.
THE RESULTS QUOTED IN THIS PAPER ARE NOT FINAL SINCE THEY COMBINE PRELIMINARY RESULTS FROM THREE EXPERIMENTS WITH FINAL RESULTS FROM ONE EXPERIMENT

Figure 1: Observed and expected behaviour of the likelihood ratio −2 ln Q as a function of the test-mass m H, obtained by combining the data of all four experiments. The solid line represents the observation; the dashed/dash-dotted lines show the median background/signal+background expectations. The dark/light shaded bands around the background expectation represent the ±1/±2 standard deviation spread of the background expectation obtained from a large number of background experiments. The dotted line is the result of a test where the signal from a 115 GeV Higgs boson has been added to the background and propagated through the likelihood ratio calculation.

Figure 2: Observed and expected behaviour of the test statistic (−2 ln Q) as a function of the test-mass m H, obtained when the combination procedure is applied to the data sets from single experiments (see Figure 1 for the notations).
Figure 3: Observed and expected behaviour of the test statistic (−2 ln Q) as a function of the test-mass m H, obtained when the combination procedure is applied to the inputs corresponding to separated decay channels (see Figure 1 for the notations).

Figure 4: Evolution of the event weight ln(1 + s/b) with test-mass m H, for the events with the largest contributions to −2 ln Q at m H = 115 GeV. The labels correspond to the numbering in the first column of Table 3.
Figure 5: Left hand side: expected and observed distributions of log10(s/b) for a test-mass of m H = 115.6 GeV (upper part) and 110 GeV (lower part). White/shaded histograms: expected distributions for the background/signal; points with error bars: selected data. Right hand side: the integrals, from right to left, of the distributions shown in the plots on the left hand side. Dash-dotted/dotted lines: expected for background/signal+background.

Figure 6: Distributions of the reconstructed Higgs mass, m rec H, from three special, non-biasing, selections with increasing purity of a signal from a 115 GeV Higgs boson.
Figure 7: Probability density functions corresponding to a test-mass m H = 115.6 GeV, for the background and signal+background hypotheses. The observed value of −2 ln Q, which corresponds to the data, is indicated by the vertical line. The light shaded region is a measure of the compatibility with the background hypothesis, 1 − CL b, and the dark shaded region is a measure of compatibility with the signal+background hypothesis, CL s+b.
Figure 8: The probability 1 − CL b as a function of the test-mass m H. Solid line: observation; dashed/dash-dotted lines: expected probability for the background/signal+background hypotheses. See Footnote 4 for the transformation of 1 − CL b values into standard deviations.
Figure 9: Confidence level CL s for the signal+background hypothesis. Solid line: observation; dashed line: median background expectation. The dark/light shaded bands around the median expected line correspond to the ±1/±2 standard deviation spreads from a large number of background experiments.
Figure 10: The 95% CL upper bound on ξ² as a function of m H, where ξ = g HZZ /g SM HZZ is the HZZ coupling relative to the SM coupling. The dark/light shaded bands around the median expected line correspond to the ±1/±2 standard deviation spreads from a large number of background experiments. The horizontal line corresponds to the SM coupling.
Figure 11: Behaviour of −2 ln Q in subsets collected at different c.m. energies. In each plot, the full curve shows the observed behaviour, the dashed/dotted lines show the expected behaviour for background/signal+background, and the vertical line indicates the test-mass m = E cm − M Z, just at the kinematic limit. (The subset labelled 208 GeV has very low statistics.)

Figure 12: Distribution of the background probability 1 − CL b for a test-mass of 115.6 GeV, obtained from 1000 simulated experiments where the expected signal and background have been varied randomly according to the systematic errors and their correlations.
Table 1: Background probabilities (1 − CL b) at a Higgs boson test-mass of m H = 115 GeV, for the individual experiments and for the LEP data combined. (*) The results presented at the Sept. 5 LEPC have been revised for the LEPC of Nov. 3; the values listed are the revised ones.

                            ALEPH        DELPHI   L3      OPAL   LEP
  LEPC, Sept 5 (*)          1.6 × 10⁻⁴   0.67     0.84    0.47   2.5 × 10⁻²
  LEPC, Nov 3 [13]          6.5 × 10⁻⁴   0.68     0.068   0.19   4.2 × 10⁻³
  Refs. [15, 16, 17, 18]    2.6 × 10⁻³   0.77     0.32    0.20
Table 2: Integrated luminosities (pb⁻¹) of the data samples provided by the four experiments for the present combination, and of the total LEP sample. Subsets taken at energies larger than 206 GeV are listed separately.

                     ALEPH   DELPHI   L3    OPAL   LEP
  E cm ≥ 189 GeV     629     610      627   599    2465
  E cm ≥ 206 GeV     130     142      139   130    542
Table 3: Properties of the 20 candidates contributing with the highest weight ln(1 + s/b) to −2 ln Q at m H = 115 GeV. The experiment, c.m. energy, decay channel, the reconstructed mass and the weight at m H = 115 GeV are listed. This list is obtained by requiring s/b > 0.2 or ln(1 + s/b) > 0.18 at m H = 115 GeV. The corresponding expected signal and background rates are 8.8 and 16.5 events, respectively.
Table 4: Expected signal rates (for a SM Higgs boson with a mass of 110, 115 and 115.6 GeV) and background rates, and the observed event count, for various cuts in ln(1 + s/b).
Table 5: The background probability 1 − CL b and the signal+background probability CL s+b at m H = 115.6 GeV, for subsets and for all LEP data. DLO/ALO designate subsets where the ALEPH/DELPHI data are left out of the combination.

                       1 − CL b      CL s+b
  ALEPH                2.0 × 10⁻³    0.94
  DELPHI               0.87          0.02
  L3                   0.24          0.47
  OPAL                 0.22          0.47
  DLO                  0.49          0.07
  ALO                  3.7 × 10⁻³    0.83
  Four-jet             0.016         0.74
  Missing energy       0.40          0.26
  All but four-jet     0.34          0.19
  LEP                  0.034         0.44
Table 6: Expected (median) and observed 95% CL lower bounds on the SM Higgs boson mass, for the individual experiments, for DLO (with ALEPH left out of the combination) and for all LEP data combined.
Footnote 1: In the following, the word channel designates any subset of the data where the Higgs boson search is carried out; these may correspond to different final-state topologies, to subsets of data collected at different c.m. energies or to subsets provided by different experiments.

Footnote 2: For a Higgs mass of 115.6 GeV, the outcome would follow closely the dotted curve, slightly displaced, so that its minimum coincides with the signal+background expectation (dash-dotted curve) at m H = 115.6 GeV.

Footnote 3: The signal-to-background ratio used in these selections is different from the ratio s/b describing event weights.

Footnote 4: For the conversion of 1 − CL b into standard deviations (σ), we adopt a Gaussian approximation [24] and use a "one-sided" convention where 1 − CL b = 2.7 × 10⁻³ would indicate a 3σ "evidence" and 1 − CL b = 5.7 × 10⁻⁷ a 5σ "discovery". The median expectation for pure background is 0.5; values smaller or larger than 0.5 indicate an excess or deficit, respectively. In this scheme, the current result, 1 − CL b = 0.034, corresponds to 2.1σ. The earlier LEP results quoted in the first and second lines of Table 1 correspond to 2.2σ and 2.9σ, respectively. This convention is also used in Figure 8 to indicate the levels of significance on the right-hand scale. The ±1 and ±2 standard deviation "bands" which show up e.g. in the −2 ln Q plots correspond to a slightly different, "two-sided", convention.
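The thresholds quoted in Footnote 4 correspond to the mapping 1 − CL b = erfc(n/√2) for a significance of n standard deviations, which reproduces 2.7 × 10⁻³ at 3σ, 5.7 × 10⁻⁷ at 5σ and 2.1σ for 0.034. A quick check and a simple inversion by bisection (a sketch, not the combination software):

```python
from math import erfc, sqrt

def one_minus_clb(n_sigma):
    """Convention of Footnote 4: 1 - CL_b = erfc(n / sqrt(2))."""
    return erfc(n_sigma / sqrt(2.0))

def significance(p, lo=0.0, hi=10.0):
    """Invert one_minus_clb by bisection (erfc is monotonically decreasing)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if one_minus_clb(mid) > p:
            lo = mid  # probability too large: significance must be higher
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(one_minus_clb(3.0))   # ~2.7e-3, the "evidence" threshold
print(one_minus_clb(5.0))   # ~5.7e-7, the "discovery" threshold
print(significance(0.034))  # ~2.1 sigma, the quoted result
```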
ACKNOWLEDGEMENTS

We congratulate our colleagues from the LEP Accelerator Division for the successful running in the year 2000 at the highest energies, and would like to express our thanks to the engineers and technicians in all our institutions for their contributions to the excellent performance of the four LEP experiments. The LEP Higgs working group acknowledges the fruitful cooperation between the experiments in developing the combination procedures and in putting them into application.
REFERENCES

P.W. Higgs, Phys. Lett. 12 (1964) 132; Phys. Rev. Lett. 13 (1964) 508; Phys. Rev. 145 (1966) 1156; F. Englert and R. Brout, Phys. Rev. Lett. 13 (1964) 321; G.S. Guralnik, C.R. Hagen and T.W.B. Kibble, Phys. Rev. Lett. 13 (1964) 585.

S. Weinberg, Phys. Rev. Lett. 19 (1967) 1264; Elementary Particle Theory, A. Salam, ed. N. Svartholm (Almquist and Wiksells, Stockholm, 1968), 367.

N. Cabibbo, L. Maiani, G. Parisi and R. Petronzio, Nucl. Phys. B158 (1979) 295; R. Dashen and H. Neuberger, Phys. Rev. Lett. 50 (1983) 1897.

M. Lindner, M. Sher and H.W. Zaglauer, Phys. Lett. B228 (1989) 139; M. Sher, Phys. Lett. B317 (1993) 159; ibid. B331 (1994) 448; G. Altarelli and I. Isidori, Phys. Lett. B337 (1994) 141; J.A. Casas, J.R. Espinosa and M. Quirós, Phys. Lett. B342 (1995) 89.

T. Hambye and K. Riesselmann, Phys. Rev. D55 (1997) 7255.

Y. Okada, M. Yamaguchi and T. Yanagida, Theor. Phys. 85 (1991) 1; H. Haber and R. Hempfling, Phys. Lett. 66 (1991) 1815; J. Ellis, G. Ridolfi and F. Zwirner, Phys. Lett. B257 (1991) 83; R. Barbieri and M. Frigeni, Phys. Lett. B258 (1991) 395; S. Heinemeyer, W. Hollik and G. Weiglein, Eur. Phys. Jour. C9 (1999) 343; M. Carena, M. Quirós and C. Wagner, Nucl. Phys. B461 (1996) 407; H. Haber, R. Hempfling and A. Hoang, Z. Phys. C75 (1997) 539.

Ch. Kolda and H. Murayama, The Higgs Mass and New Physics Scales in the Minimal Standard Model, hep-ph/0003170 (March 2000).

The LEP Electroweak Working Group, public page, http://lepewwg.web.cern.ch/LEPEWWG/ (updated July 10, 2001).

ALEPH, DELPHI, L3 and OPAL Collaborations, The LEP working group for Higgs boson searches, Searches for Higgs bosons: Preliminary combined results using LEP data collected at energies up to 202 GeV, CERN-EP/2000-055.

ALEPH, DELPHI, L3 and OPAL Collaborations, The LEP working group for Higgs boson searches, Searches for Higgs bosons: Preliminary combined results using LEP data collected at energies up to 209 GeV, ALEPH 2000-074 CONF 2000-051, DELPHI 2000-148 CONF 447, L3 Note 2600, OPAL Technical Note TN661, submitted to ICHEP'2000, Osaka, Japan, July 27-August 2, 2000.

Shan Jin, Search for Standard Model Higgs Boson at LEP2, Proc. ICHEP-2000, Ed. C.S. Lim, T. Yamanaka, Vol. II, p. 1105; P. Igo-Kemenes, Searches for New Particles and New Physics: Results from e+e− Colliders, ibidem, Vol. I, p. 133.

D. Schlatter for the ALEPH Collaboration, LEP Committee Open Session, 5.9.2000.

P. Igo-Kemenes for the LEP Higgs working group, LEP Committee Open Session, 3 November 2000, http://lephiggs.web.cern.ch/LEPHIGGS/talks/index.html.

L3 Collaboration, M. Acciarri et al., Phys. Lett. B495 (2000) 18.

ALEPH Collaboration, R. Barate et al., Phys. Lett. B495 (2000) 1.

DELPHI Collaboration, P. Abreu et al., Phys. Lett. B499 (2001) 23.

L3 Collaboration, M. Acciarri et al., Phys. Lett. B, submitted for publication.

OPAL Collaboration, G. Abbiendi et al., Phys. Lett. B499 (2001) 38.

J. Ellis, M.K. Gaillard and D.V. Nanopoulos, Nucl. Phys. B106 (1976) 292; B.L. Joffe and V.A. Khoze, Sov. J. Part. Phys. 9 (1978) 50; B.W. Lee, C. Quigg and H.B. Thacker, Phys. Rev. D16 (1977) 1519; J.D. Bjorken, Proc. 1976 SLAC Summer Inst. Part. Phys., ed. M.C. Zipf (SLAC report 198, 1977) 1.

F.A. Behrends and R. Kleiss, Nucl. Phys. B260 (1985) 32.

W. Kilian, M. Kramer and P.M. Zerwas, Phys. Lett. B373 (1996) 135.

A.L. Read, Linear interpolation of histograms, Nucl. Instr. Methods A 425 (1999) 357.

K.S. Cranmer, Kernel Estimation in High Energy Physics, Comput. Phys. Commun. 136 (2001) 198.

L3 Collaboration, Search for the SM Higgs boson with the L3 experiment at LEP, L3 Note 2688, June 2001.

D.E. Groom et al., Review of particle physics, Eur. Phys. Journ. C15 (2000) 1.

ALEPH, DELPHI, L3 and OPAL Collaborations, and the LEP Higgs Working Group, Searches for Higgs bosons: Preliminary combined results from the four LEP experiments at √s ≈ 189 GeV, ALEPH-CONF 99-052, DELPHI-CONF 327, L3 Note 2442, OPAL TN 6614 (July 1999).
First-principles calculations of spin and angle-resolved resonant photoemission spectra of Cr(110) surfaces at the 2p-3d resonance

F. Da Pieve (ALGC, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium)
P. Krüger (ICB, UMR 6303, CNRS, Université de Bourgogne, F-21078 Dijon, France)
A first principles approach for spin and angle resolved resonant photoemission is developed within multiple scattering theory and applied to a Cr(110) surface at the 2p-3d resonance. The resonant photocurrent from this non-ferromagnetic system is found to be strongly spin polarized by circularly polarized light, in agreement with experiments on antiferromagnetic and magnetically disordered systems. By comparing the antiferromagnetic and Pauli-paramagnetic phases of Cr, we explicitly show that the spin polarization of the photocurrent is independent of the existence of local magnetic moments, solving a long-standing debate on the origin of such polarization. New spin polarization effects are predicted for the paramagnetic phase even with unpolarized light, opening new directions for full mapping of spin interactions in macroscopically non-magnetic or nanostructured systems.

PACS numbers: 78.20.Bh, 75.20.Ls

In recent years, the theoretical description of absorption/photoemission spectroscopy in the X-ray region has been boosted by the merging of density functional theory (DFT) with many-body approaches such as dynamical mean field theory [1, 2] and many-body perturbation theory [3-5], and by the development of time-dependent DFT [6]. However, second-order processes, like resonant inelastic X-ray scattering (RIXS) and resonant photoemission (RPES), remain a major challenge for theory. For RPES, existing approaches are semiempirical [7-10], based on a well defined two-hole final state and on small clusters, and thus do not take into account the delocalization of intermediate states, the band structure of the system and multiple scattering effects in the propagation of photoelectrons. The huge experimental output from RPES on correlated materials [7, 11-16] and the intriguing quest for a determination of local magnetic properties put forward by pioneering experiments [14-16] call for advancements in the theoretical description of this spectroscopy.

In experiments on CuO and Ni, it was shown that the RPES photocurrent with circularly polarized light is spin polarized in antiferromagnets [14, 15] and Curie paramagnets [16]. It was claimed that a specific combination of spin-resolved spectra provides a direct measure of the local magnetic moments [14-16]. The issue is of fundamental importance in the search for a tool to access the local magnetic properties of antiferromagnetic, magnetically disordered and/or nanostructured systems at their crossover with the transition temperature. The interpretation was however rejected on the basis of symmetry analysis [17], but explicit calculations predicting the lineshape and intensity of such a fundamental signal are still lacking and remain highly desirable.

In this letter, we present the first ab-initio method for RPES in solids, based on a combined formulation within the real-space multiple scattering (RSMS) approach [18, 19] and DFT, and its application to Cr(110) at the 2p-3d resonance. By comparing the antiferromagnetic (AFM) and Pauli-paramagnetic (PM) phases of Cr, we solve the long-standing debate about the possibility to determine local magnetic moments in macroscopically non-magnetic systems by means of spin-resolved RPES with circularly polarized light. New interesting effects in the PM phase with unpolarized light suggest that other mechanisms are active and could be exploited for mapping the origin of the different spin polarization (SP) components in paramagnets and magnetically disordered systems.

Theoretical formulation. The cross section for valence band photoemission to a final state |v, k⟩, where v denotes a valence band hole and k a photoelectron state, is given by

σ(ω, q) ∝ Σ_v |T_kv(ω, q)|² δ(ε_v + ω − ε_k),

where ω and q are the photon energy and polarization. Here the independent-particle approximation has been assumed (i.e., all many-electron eigenstates are single Slater determinants corresponding to the same effective one-electron Hamiltonian).

According to the Heisenberg-Kramers formula [20], the transition matrix element T_kv(ω, q) is the sum of a direct and a resonant term. In the latter, photon absorption leads to an intermediate state |c, u⟩, with a core hole (c) and an electron in a formerly unoccupied state |u⟩, which decays to the final state |v, k⟩ through a participator Auger process [20, 21]. To lowest order in the autoionization process, the transition matrix element is given by

T_kv(ω, q) = ⟨k|D_q|v⟩ + Σ_cu ⟨kc|V (|vu⟩ − |uv⟩) ⟨u|D_q|c⟩ / (ω + ε_c − ε_u − iΓ),   (1)

where D_q is the dipole operator, V the Coulomb operator and Γ the width of the intermediate state.
DOI: 10.1103/physrevlett.110.127401
arXiv: 1302.7160
https://arxiv.org/pdf/1302.7160v1.pdf
28 Feb 2013 (Dated: March 1, 2013)
A first principles approach for spin and angle resolved resonant photoemission is developed within multiple scattering theory and applied to a Cr(110) surface at the 2p-3d resonance. The resonant photocurrent from this non-ferromagnetic system is found to be strongly spin polarized by circularly polarized light, in agreement with experiments on antiferromagnetic and magnetically disordered systems. By comparing the antiferromagnetic and Pauli-paramagnetic phases of Cr, we explicitly show that the spin polarization of the photocurrent is independent of the existence of local magnetic moments, solving a long-standing debate on the origin of such polarization. New spin polarization effects are predicted for the paramagnetic phase even with unpolarized light, opening new directions for full mapping of spin interactions in macroscopically non-magnetic or nanostructured systems.

In recent years, the theoretical description of absorption/photoemission spectroscopy in the X-ray region has been boosted by the merging of density functional theory (DFT) with many body approaches such as dynamical mean field theory [1,2] and many body perturbation theory [3][4][5], and by the development of time-dependent DFT [6]. However, second order processes, like resonant inelastic X-ray scattering (RIXS) and resonant photoemission (RPES), remain a major challenge for theory. For RPES, existing approaches are semiempirical [7][8][9][10], based on a well defined two-hole final state and on small clusters, and thus do not take into account the delocalization of intermediate states, the bandstructure of the system and multiple scattering effects in the propagation of photoelectrons.
The huge experimental output from RPES on correlated materials [7,[11][12][13][14][15][16] and the intriguing quest for a determination of local magnetic properties put forward by pioneering experiments [14][15][16] call for advancements in the theoretical description of this spectroscopy. In experiments on CuO and Ni, it was shown that the RPES photocurrent with circular polarized light is spin polarized in antiferromagnets [14,15] and Curie paramagnets [16]. It was claimed that a specific combination of spin resolved spectra provides a direct measure of the local magnetic moments [14][15][16]. The issue is of fundamental importance in the search for a tool to access the local magnetic properties in antiferromagnetic, magnetically disordered and/or nanostructured systems at their crossover with the transition temperature. The interpretation was however rejected on the basis of symmetry analysis [17], but explicit calculations predicting the lineshape and intensity of such fundamental signal are still lacking and remain highly desirable.
In this letter, we present the first ab-initio method for RPES in solids, based on a combined formulation within the real space multiple scattering (RSMS) approach [18,19] and DFT, and its application to Cr(110) at the 2p-3d resonance. By comparing the antiferromagnetic (AFM) and Pauli-paramagnetic (PM) phases of Cr, we solve the long-standing debate about the possibility to determine local magnetic moments in macroscopically non-magnetic systems by means of spin resolved RPES with circular polarized light. New interesting effects in the PM phase with unpolarized light suggest that other mechanisms are active and could be exploited for mapping the origin of the different spin polarization (SP) components in paramagnets and magnetically disordered systems.
Theoretical formulation. The cross section for valence band photoemission to a final state $|v, k\rangle$, where $v$ denotes a valence band hole and $k$ a photoelectron state, is given by
$$I(\omega, q, k) = \sum_v |T_{kv}(\omega, q)|^2\, \delta(\epsilon_k - \epsilon_v - \omega)$$
where $\omega$ and $q$ are the photon energy and polarization. Here the independent particle approximation has been assumed (i.e., all many-electron eigenstates are single Slater determinants corresponding to the same effective one-electron hamiltonian). According to the Heisenberg-Kramers formula [20], the transition matrix element $T_{kv}(\omega, q)$ is the sum of a direct and a resonant term. In the latter, photon absorption leads to an intermediate state $|c, u\rangle$, with a core hole ($c$) and an electron in a formerly unoccupied state $|u\rangle$, which decays to the final state $|v, k\rangle$ through a participator Auger process [20,21]. To lowest order in the autoionization process, the transition matrix element is given by
$$T_{kv}(\omega, q) = \langle k|D_q|v\rangle + \sum_{cu} \frac{\langle kc|V\,(|vu\rangle - |uv\rangle)}{\omega + \epsilon_c - \epsilon_u - i\Gamma}\,\langle u|D_q|c\rangle \qquad (1)$$

where $D_q$ is the dipole operator, $V$ the Coulomb operator and $\Gamma$ the width of the intermediate state. Spectator Auger decay leads to different, namely two-hole, final states and is not considered here. Participator and spectator channels can in principle be separated experimentally by using a photon bandwidth smaller than the core-hole lifetime, as they show different photon energy dependence (linear for the participator, and no photon energy dependence for the spectator). Here we focus on the physical effects at the origin of spin polarization and dichroism as well as their directional dependence in the "pure" participator channel.
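The interference between the direct and resonant amplitudes in Eq. (1) is what produces the Fano profile discussed later for Fig. 2b. A minimal numerical sketch (not part of the RSMS calculation; the asymmetry parameter `q_fano` and the detuning grid are made-up illustrative values):

```python
import numpy as np

# Classic Fano lineshape from interference of a direct amplitude <k|D_q|v>
# with a resonant (autoionization) amplitude, as in Eq. (1).
# eps is the reduced detuning (photon energy - resonance energy) / Gamma;
# q_fano is the Fano asymmetry parameter (hypothetical value here).
def fano_profile(eps, q_fano):
    return (q_fano + eps) ** 2 / (1.0 + eps ** 2)

eps = np.linspace(-10.0, 10.0, 2001)
q_fano = 2.0
sigma = fano_profile(eps, q_fano)
```

Destructive interference gives an exact zero at eps = −q_fano (the dip just below threshold, near 552.4 eV in the text), while the maximum 1 + q_fano² sits at eps = 1/q_fano.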
The RPES intensity can be written in a compact form as
$$I(\omega, q, k) = \sum_{ij}\sum_{LL'}\sum_{\sigma} M^{\omega,q}_{iL\sigma}(k)\; I^{ij}_{LL'}(\epsilon_v, \sigma)\; M^{\omega,q}_{jL'\sigma}(k)^{*}$$
Here, i, j label atomic sites, L ≡ (lm) angular momentum and σ spin quantum numbers.
$\epsilon_v = \epsilon_k - \omega$ is the energy of the valence hole. The quantity
$$I^{ij}_{LL'} \equiv -\frac{1}{2i\pi}\,(\tau - \tau^{\dagger})^{ij}_{LL'}$$
is essentially the imaginary part of the scattering path operator. It comes from the simplification of the sum over delocalized valence states through the so-called optical theorem in RSMS [22] and it contains the bandstructure information. The matrix elements $M^{\omega,q}_{iL\sigma}(k)$ are given by
$$M^{\omega,q}_{iL\sigma}(k) = \sum_{jL'} B^{*}_{jL'}(k)\, A_{jL',iL}(\epsilon_k \sigma_k, \epsilon_v \sigma)$$
The $B_{jL'}(k)$ are the key quantities in the RSMS approach and represent the multiple scattering amplitudes of the continuum state $k \equiv (k\sigma_k)$ [22]. The matrix elements $A_{jL',iL}(\epsilon_k \sigma_k, \epsilon_v \sigma)$ are given by the sum of the direct radiative process ($A_D$), the resonant process with direct Coulomb decay ($A_C$) and the resonant process with exchange decay ($A_X$), see Eq. (1). $A_D$ and $A_C$ are site- and spin-diagonal ($\sim \delta_{ij}\delta_{\sigma_k \sigma}$). We have
$$A_D = \langle i\epsilon_k L'\sigma|D_q|i\epsilon_v L\sigma\rangle$$

$$A_C = -\sum_{j' c L_u L'_u \sigma_u} \int^{E_F}\! d\epsilon_u\, \frac{I^{j'j'}_{L_u L'_u}(\epsilon_u \sigma_u)}{\omega + \epsilon_c - \epsilon_u - i\Gamma}\; \langle i\epsilon_k L'\sigma,\, j'c|V|i\epsilon_v L\sigma,\, j'\epsilon_u L_u \sigma_u\rangle\, \langle j'\epsilon_u L'_u \sigma_u|D|j'c\rangle$$

$$A_X = \sum_{c L_u L'_u} \int^{E_F}\! d\epsilon_u\, \frac{I^{ji}_{L_u L'_u}(\epsilon_u \sigma_k)}{\omega + \epsilon_c - \epsilon_u - i\Gamma}\; \langle j\epsilon_k L_k \sigma_k,\, ic|V|j\epsilon_u L_u \sigma_k,\, i\epsilon_v L\sigma\rangle\, \langle i\epsilon_u L'_u \sigma_k|D|ic\rangle$$
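The structure of the intensity formula — matrix elements contracted with the Hermitian, positive spectral part of the scattering path operator — guarantees a real, non-negative photocurrent. A toy numerical sketch with made-up matrices (placeholder channel count and values, not actual RSMS output):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # toy number of (site, L, spin) channels

# Made-up Hermitian, positive semi-definite stand-in for I^{ij}_{LL'},
# the spectral part -(tau - tau†)^{ij}_{LL'} / (2 i pi).
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
I_path = A @ A.conj().T

# Made-up complex matrix elements M^{omega,q}_{iL sigma}(k).
M = rng.normal(size=n) + 1j * rng.normal(size=n)

# I = sum_{ij, LL'} M_i I_ij M_j^*  (one spin channel shown)
intensity = M @ I_path @ M.conj()
```

Hermiticity of `I_path` makes the contraction real, and positive semi-definiteness makes it non-negative, as required for a photoemission intensity.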
The sums over unoccupied states $u$ have again been simplified through the optical theorem. The exchange term $A_X$ is not strictly site-diagonal because of the nonlocality of the exchange interaction together with the delocalized nature of the states $u$. In the RSMS approach the Coulomb matrix elements $\langle kc|V|vu\rangle$ and $\langle kc|V|uv\rangle$ can be exactly developed in one- and two-center terms. In metallic Cr, the Coulomb interaction is strongly screened. As a result, two-center terms are smaller than the one-center terms by at least one order of magnitude [23] and have been neglected here. In general, the 2p-3d excited intermediate states might display excitonic effects, which could be taken into account with a Bethe-Salpeter description [3,5]. For Cr metal, these effects are quite small because of the large 3d band width (∼ 7 eV) and the efficient metallic screening of the core hole by nearly free 4sp electrons, and are thus neglected here.

Photoemission spectra from Cr(110) are calculated in RSMS with a cluster of 151 atoms (see Fig. 1a) and selfconsistent spin polarized potentials, obtained by a scalar relativistic LMTO [24] calculation for bulk Cr in the local spin density approximation. Except for the 2p core level, all states entering the RPES calculation are developed in RSMS. The 2p orbital is obtained by solving the scalar relativistic Schrödinger equation with selfconsistent spin-polarized LMTO potentials. The 2p3/2 spin-orbit coupled states are then constructed using standard angular momentum algebra, and the spin-orbit coupling constant is taken from an atomic calculation [25]. We consider the AFM order of CsCl-type, which is a good approximation to the true spin density wave (SDW) ground state of Cr. The calculated magnetic moment is 0.74 µ_B, in reasonable agreement with experiment (0.62 µ_B). At the (110) surface, the transverse SDW propagates along [100] or [010] [26]. Therefore, we take e_z = [001] as magnetization and spin-quantization axis throughout this paper.
We also consider the Pauli PM state, corresponding to a non-magnetic calculation. Spin orbit (SO) coupling of the valence and continuum states is neglected (it is as small as 0.03 eV for Cr-3d [27]).
Results. The electronic structure of Cr(110) is well accounted for in the RSMS approach as can be seen from the comparison between the local density of states (DOS) of a Cr atom in the cluster and of bulk Cr (Fig. 1b). Nonresonant angle-resolved photoemission spectra (ARPES) are shown in Fig. 1c. Differences with respect to experiments [28] are expected as our approach does not contain local many-body interactions and layer-dependent potentials, which could play a role for a quantitative description of the peak renormalization and dispersion behaviour of the energetic structures [29]. However, the main features of the experimental spectra are reproduced in the calculation, confirming that RSMS provides a reasonably good description of valence band photoemission from metals as previously shown for Cu(111) [22].
Spin resolved, angle integrated PES and RPES spectra are shown in Fig. 2 for the AFM phase and several photon energies across the L3-edge absorption threshold. Left circular polarized light incident along the magnetization axis [001] is considered. In this "parallel" geometry, right circular polarized light produces the same spectra but with up and down spins exchanged. The maximum peak intensity as a function of photon energy is plotted in Fig. 2b and shows the expected Fano profile. The first photon energy (551.0 eV) is too low to excite the core electron, so only direct PES is possible. When the photon energy is raised to 552.4 eV, just below the absorption edge, the direct and resonant processes interfere destructively, giving rise to the dip in the Fano profile. Strong resonant enhancement is observed between 552 and 554.5 eV (see e.g. the spectrum for 554.4 eV), which corresponds to transitions from the 2p3/2 level into the unoccupied Cr 3d band. At hν = 585.1 eV, well above threshold, the resonant spectrum goes back to the nonresonant one.
The direct PES signal is non spin-polarized, as expected for the AFM phase. Appreciable spin-polarization is, however, found in RPES. This effect is here obtained for the first time through first-principles calculations, and confirms the experimental finding in CuO [14] that in AFM systems RPES at the 2p3/2-3d resonance is spin-polarized when circular polarized light is used.
We now turn to angle and spin resolved spectra at maximum resonance (hν=554.4 eV), focusing on their four "fundamental" combinations (and their relation to local magnetic properties), constructed by different choices of photoelectron spin (↑,↓) and light helicity (+, −)≡(left,right):
tot ≡ (↑+) + (↑−) + (↓+) + (↓−)   (total)
spr ≡ (↑+) + (↑−) − (↓+) − (↓−)   (spin-resolved)
dic ≡ (↑+) − (↑−) + (↓+) − (↓−)   (dichroic)
mix ≡ (↑+) − (↑−) − (↓+) + (↓−)   (mixed)
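The four combinations are simple sums and differences of the four measured channels. A sketch with hypothetical channel intensities, chosen to mimic a globally non-magnetic system in which left (+) light favors spin-down emission with a 5:3 weight that reverses with helicity:

```python
# Hypothetical channel intensities, indexed by (photoelectron spin, light helicity).
I_up_plus, I_up_minus = 3.0, 5.0      # (↑+), (↑-)
I_down_plus, I_down_minus = 5.0, 3.0  # (↓+), (↓-)

tot = I_up_plus + I_up_minus + I_down_plus + I_down_minus  # total
spr = I_up_plus + I_up_minus - I_down_plus - I_down_minus  # spin-resolved
dic = I_up_plus - I_up_minus + I_down_plus - I_down_minus  # dichroic
mix = I_up_plus - I_up_minus - I_down_plus + I_down_minus  # mixed
```

With these made-up values, spr and dic vanish (as for a globally non-magnetic sample in a non-chiral set up) while mix is non-zero, with mix/tot = −1/4.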
The "mixed" spectrum was the one considered in Refs [14,16] and claimed to be sensitive to local magnetic moments in non-ferromagnetic samples.
The normal emission RPES spectra for parallel geometry (Fig. 3a,b; total spectra) consist of a single peak at 0.8-0.9 eV binding energy, very similar to the low energy non resonant spectrum in Fig. 1c (θ = 0°). AFM and PM spectra are almost identical except for a small shift of ∼ 0.1 eV, which reflects the small exchange splitting of the AFM Cr-3d bands. The dichroic (dic) and spin-resolved (spr) signals vanish for both the PM and AFM phase, as expected since the system is globally non-magnetic in both cases and the set up is non chiral.
However, the mixed signal is non-zero with a large amplitude (∼ 1/3 of total), in agreement with the experimental results in AFM CuO [14]. Surprisingly, we find a non-zero mixed signal not only in the AFM, but also in the PM phase with nearly the same intensity. It is important to note that we are not considering a Curie paramagnet (such as Ni above T C [16]) with disordered and/or fluctuating magnetic moments, but a Pauli PM state, where the magnetization is strictly zero in all points of space. Therefore, our finding that the mixed signal is essentially unchanged when going from the AFM to the PM state unambiguously proves that it is unrelated to local magnetic moments, in contrast to the interpretation in Refs [14,16].
Rather than being of magnetic origin, the non-zero mixed signal is in fact induced by angular momentum transfer from the light helicity to the electron spin via SO coupling in the core shell, together with a strong exchange effect in the decay process. To see this, consider light with left (+) helicity and a non-magnetic ground state. The 2p3/2-3d optical transition has a larger amplitude for spin-up than for spin-down electrons because of the dominantly parallel alignment of spin and orbit in 2p3/2. For example, for an empty or spherically symmetric 3d shell the intensity ratio is 5:3. Consider now a spin-up electron transition. The RPES intermediate state has one extra spin-up electron in the 3d-shell (denoted u↑) and a 2p-hole of dominant spin-up character. This state decays through Coulomb interaction to the photoemission final state with one 3d-hole and the photoelectron. The direct Coulomb matrix element is of the form ⟨kσ, c↑|V|vσ, u↑⟩, which is independent of the photoelectron spin σ. So the direct decay alone would lead to a spin-balanced photocurrent. For the exchange decay, the matrix element is ⟨kσ, c↑|V|u↑, vσ⟩ ∼ δ(σ,↑). This is roughly as large as the direct Coulomb term for spin-up electrons (the radial matrix elements are exactly the same) but it is zero for spin-down electrons. Since the exchange matrix elements are subtracted from the direct terms in Eq. (1), the transition probability for spin-up electron emission is strongly reduced by the exchange process. This shows that a core-valence transition of a spin-up electron leads, through autoionization, to a strongly spin polarized photocurrent with a majority of spin-down electrons. As mentioned before, left circular polarized light promotes dominantly spin-up electrons in the 2p3/2-3d transition.
Therefore it produces a majority of spin-down photoelectrons. Under the assumption of complete cancellation between direct Coulomb and exchange matrix elements for parallel spins, and neglecting the direct valence photoemission, the ratio of spin-down to spin-up photoelectrons is 5:3, which corresponds to a spin-polarization (ratio of mixed over total signal) of −1/4. In angle integrated RPES at maximum resonance (Fig. 2a, hν = 554.4 eV) we find a SP of −0.21, in good agreement with this model estimate. These values also agree well with the measured spin-polarization in CuO [14] and Ni [16], which is 10-40% depending on binding energy. Our findings clarify the physical mechanism inducing the presence of the mixed signal in both phases, and point to a critical re-examination of experimental observations.
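The −1/4 model estimate is plain arithmetic on the 5:3 branching ratio; a one-line check (numbers taken from the argument above, not from the RSMS calculation):

```python
# After full cancellation of direct and exchange decay for parallel spins,
# the photocurrent carries the 5:3 weights of the 2p3/2 -> 3d transition
# with the spins swapped: a majority of spin-down photoelectrons.
n_up, n_down = 3, 5  # spin-up : spin-down photoelectrons for left (+) light
spin_polarization = (n_up - n_down) / (n_up + n_down)  # = -1/4
```

This model value of −0.25 is close to the computed angle-integrated value of −0.21 and to the 10-40% magnitudes measured in CuO and Ni.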
Interestingly, we find that, contrary to the previous set up, it is possible to have a net spin polarization signal in the PM phase. This is possible under appropriate geometrical conditions, and even with unpolarized light. Such SP can be of opposite sign and be due to different active mechanisms. In Fig. 3c, normal emission spectra are shown for light incident along [110], i.e. perpendicular to the spin-quantization axis s = e_z (perpendicular geometry). As before, the dichroic signal is zero, as the light incidence (p) and electron emission vector (n) lie in a mirror plane of the surface (see Fig. 1a). However, the set up (including spin resolution) is chiral, since the three vectors p, n and s form a right-handed frame. Thus SO-induced SP cannot be ruled out by symmetry, and a small, positive SP (in this case transverse to the scattering plane) is indeed observed in RPES, even for unpolarized light. A similar SP from PM surfaces with unpolarized light was theoretically predicted in direct PES [30] in a relativistic approach and confirmed by experiments [31,32]. It was ascribed to broken symmetry due to the off-normal light incidence together with SO in the initial states and phase shift differences. We do not observe this effect in non-resonant PES since the SO coupling in the Cr 3d valence states is very weak and neglected here. However, for RPES, such SP has to be related to the dynamical SP studied in atomic physics, which is known to be related to phase shift differences in the final outgoing waves, and to be generally small [33,34]. Our result confirms that such SP exists for an atom embedded in a solid and that it survives the multiple scattering effects.
A SP signal in the PM phase is also present for parallel geometry with off-normal emission (Fig. 3d). In this case, the system composed of the surface, the light incidence (along e_z) and the electron emission vector is chiral. Therefore a dichroic signal is observed even in non-resonant PES, known as circular dichroism in angular distribution [35]. In RPES, the angular momentum of the photon is partly transferred to the electron spin through the SO coupling in the 2p shell, leading to non-zero intensity also for the spin resolved and mixed signals. The spin polarization is negative, i.e. photoelectrons are mainly polarized antiparallel to their emission direction, because of the exchange process in the autoionization decay. This finding suggests a Fano-like effect in resonant processes for off-normal emission directions, which could be studied along the same lines as direct PES on paramagnets [36].
In conclusion, we have presented a first-principles approach for RPES in solids and its application to Cr(110). By comparing Pauli PM and AFM states, we have shown that the mixed signal is essentially independent of local magnetic properties and we have clarified its origin: contrary to previous interpretations, this effect is induced by an angular momentum transfer from the photon to the electron spin, through SO coupling in the core level and the exchange process in the autoionization decay. Our results show that caution must be taken in linking the spin polarized or mixed signal to local magnetic moments, all the more so as the photoelectron spin may have components along and across the light helicity. New effects in the SP suggest that a mapping of spin interactions in paramagnets and disordered magnetic structures could be obtained via full tomography experiments at the core resonances even with unpolarized light.
PACS numbers: 78.20.Bh, 78.70.-g, 75.20.Ls, 79.60.-i
FIG. 1: (a) Cr(110) cluster used in the RSMS calculations. The two magnetic sublattices of the AFM state are in red and blue. (b) DOS in the AFM phase for a bulk atom (LMTO) and a central atom in the cluster (RSMS). (c) ARPES spectra from Cr(110) along the [001] azimuth for different polar angles θ with respect to the surface normal. Unpolarized light along the [001] axis was considered.

FIG. 2: a) Spin-resolved, angle integrated RPES and PES spectra of AFM Cr(110) with circular polarized light incoming along the spin quantization axis [001] and photon energies across the L3-edge resonance. A Gaussian broadening of 0.27 eV FWHM was applied. Note the different intensity scale for hν = 554.4 eV. In PES, spin-up and down intensities are equal in all cases. b) Maximum peak intensity as a function of photon energy.
FIG. 3: Angle-resolved fundamental spectra of Cr(110). RPES as thick lines for hν = 554.4 eV. Normal (i.e., non resonant) ARPES as thin lines, intensity ×1000. All spectra are rescaled to equal peak height of RPES-tot. (a) AFM, (b-d) PM phase. (a-c) Normal emission. (d) Emission in the xy-plane, off-normal by 23° (vector e in Fig. 1a). Light incidence parallel (a,b,d) or perpendicular (c) to spin-quantization axis e_z. Light vector in (c) is shown as p in Fig. 1a.
[1] O. Šipr, J. Minár, A. Scherz, H. Wende, and H. Ebert, Phys. Rev. B 84, 115102 (2011).
[2] J. Braun, J. Minár, H. Ebert, M. I. Katsnelson, and A. I. Lichtenstein, Phys. Rev. Lett. 97, 227601 (2006).
[3] W. Olovsson, I. Tanaka, T. Mizoguchi, G. Radtke, P. Puschnig, and C. Ambrosch-Draxl, Phys. Rev. B 83, 195206 (2011).
[4] J. Vinson, J. J. Rehr, J. J. Kas, and E. L. Shirley, Phys. Rev. B 83, 115106 (2011).
[5] R. Laskowski and P. Blaha, Phys. Rev. B 82, 205104 (2010).
[6] O. Bunau and Y. Joly, Phys. Rev. B 85, 155121 (2012).
[7] S. R. Mishra, T. R. Cummins, G. D. Waddill, W. J. Gammon, G. van der Laan, K. W. Goodman, and J. G. Tobin, Phys. Rev. Lett. 81, 1306 (1998).
[8] C. F. Chang, D. J. Huang, A. Tanaka, G. Y. Guo, S. C. Chung, S.-T. Kao, S. G. Shyu, and C. T. Chen, Phys. Rev. B 71, 052407 (2005).
[9] O. Tjernberg, G. Chiaia, U. O. Karlsson, and F. M. F. de Groot, J. Phys.: Condens. Matter 9, 9863 (1997).
[10] H. Ogasawara, A. Kotani, P. Le Fèvre, D. Chandresris, and H. Magnan, Phys. Rev. B 62, 7970 (2000).
[11] M. Morscher, F. Nolting, T. Brugger, and T. Greber, Phys. Rev. B 84, 140406(R) (2011).
[12] M. C. Richter, J.-M. Mariot, O. Heckmann, L. Kjeldgaard, B. S. Mun, C. S. Fadley, U. Lüders, J.-F. Bobo, P. De Padova, A. Taleb-Ibrahimi, and K. Hricovini, Eur. Phys. J. Special Topics 169, 175 (2009).
[13] T. Ohtsuki, A. Chainani, R. Eguchi, M. Matsunami, Y. Takata, M. Taguchi, Y. Nishino, K. Tamasaku, M. Yabashi, T. Ishikawa, M. Oura, Y. Senba, H. Ohashi, and S. Shin, Phys. Rev. Lett. 106, 047602 (2011).
[14] L. H. Tjeng, B. Sinkovic, N. B. Brookes, J. B. Goedkoop, R. Hesper, E. Pellegrin, F. M. F. de Groot, S. Altieri, S. L. Hulbert, E. Shekel, and G. A. Sawatzky, Phys. Rev. Lett. 78, 1126 (1997).
[15] L. H. Tjeng, N. B. Brookes, and B. Sinkovic, J. Electron Spectrosc. Relat. Phenom. 117-118, 189 (2001).
[16] B. Sinkovic, L. H. Tjeng, N. B. Brookes, J. B. Goedkoop, R. Hesper, E. Pellegrin, F. M. F. de Groot, S. Altieri, S. L. Hulbert, E. Shekel, and G. A. Sawatzky, Phys. Rev. Lett. 79, 3510 (1997).
[17] G. van der Laan, Phys. Rev. Lett. 81, 733 (1998).
[18] J. J. Rehr and R. C. Albers, Rev. Mod. Phys. 72, 621 (2000).
[19] D. Sébilleau, R. Gunnella, Z.-Y. Wu, S. Di Matteo, and C. R. Natoli, J. Phys.: Condens. Matter 18, 175 (2006).
[20] A. Tanaka and T. Jo, J. Phys. Soc. Jpn. 63, 2788 (1994).
[21] C. Janowitz, R. Manzke, M. Skibowski, Y. Takeda, Y. Miyamoto, and K. Cho, Surf. Sci. Lett. 275, L669 (1992).
[22] P. Krüger, F. Da Pieve, and J. Osterwalder, Phys. Rev. B 83, 115437 (2011).
[23] For nearest-neighbor (NN) two-center Coulomb terms the average electron distance equals the NN distance d. For the one-center terms the average distance is about d/2. Here it is even smaller because of the strong localization of the core orbital near the nucleus. The screened Coulomb interaction is exp(−r/λ)/r, where λ is the screening length. The ratio between NN and on-site terms is therefore about χ = exp(−d/2/λ)/2. Using Thomas-Fermi theory and taking 1 nearly free electron (4s) for Cr, we get λ = 0.55 Å and χ = 0.052, i.e. NN Coulomb terms are by a factor of 20 smaller than on-site terms. Further-than-NN terms are obviously even much smaller.
[24] O. K. Andersen, Phys. Rev. B 12, 3060 (1975).
[25] R. D. Cowan, The Theory of Atomic Structure and Spectra (University of California Press, Berkeley, 1981).
[26] K.-F. Braun, S. Fölsch, G. Meyer, and K.-H. Rieder, Phys. Rev. Lett. 85, 3500 (2000).
[27] G. van der Laan and B. T. Thole, Phys. Rev. B 43, 13401 (1991).
[28] P. E. S. Persson and L. I. Johansson, Phys. Rev. B 34, 2284 (1986).
[29] J. Sànchez-Barriga et al., Phys. Rev. B 85, 205109 (2012).
[30] E. Tamura and R. Feder, Europhys. Lett. 16, 695 (1991).
[31] J. Kirschner, Appl. Phys. A 44, 3 (1987).
[32] N. Irmer, R. David, B. Schmiedeskamp, and U. Heinzmann, Phys. Rev. B 45, 3849 (1992).
[33] U. Hergenhahn and U. Becker, J. Electron Spectrosc. Relat. Phenom. 76, 225 (1995).
[34] B. Lohmann, J. Phys. B: At. Mol. Opt. Phys. 32, L643 (1999).
[35] J. Henk, A. M. N. Niklasson, and B. Johansson, Phys. Rev. B 59, 13986 (1999).
[36] J. Minár, H. Ebert, G. Ghiringhelli, O. Tjernberg, N. B. Brookes, and L. H. Tjeng, Phys. Rev. B 63, 144421 (2001).
arXiv:1812.04863
A First Look at Emoji Usage on GitHub: An Empirical Study
Xuan Lu ([email protected]), Yanbin Cao ([email protected]), Zhenpeng Chen, Xuanzhe Liu
Key Laboratory of High Confidence Software Technologies (Peking University), Ministry of Education, PRC
Index Terms: emoji, GitHub, developer, sentiment
Emoji is becoming a ubiquitous language and gaining worldwide popularity in recent years, including in the field of software engineering (SE). As nonverbal cues, emojis are widely used in user understanding tasks such as sentiment analysis, but little work has been done to study emojis in SE scenarios. This paper presents a large-scale empirical study on how GitHub users use emojis in development-related communications. We find that emojis are used by a considerable proportion of GitHub users. In comparison to Internet users, developers show interesting usage characteristics and have their own interpretation of the meanings of emojis. In addition, the usage of emojis reflects a positive and supportive culture of this community. Through a manual annotation task, we find that sentimental usage is a main intention of using emojis in issues, pull requests, and comments, while emojis are mainly used to emphasize important contents in README. These findings not only deepen our understanding about the culture of SE communities, but also provide implications on how to facilitate SE tasks with emojis, such as sentiment analysis.
I. INTRODUCTION
Emoji, defined as "a digital image that is added to a message in electronic communication in order to express a particular idea or feeling", 1 is emerging as a ubiquitous language with its compact visual and lively presentation, rich semantics, and understandability, and has gained worldwide popularity in recent years. As nonverbal cues, emojis have been widely adopted in online communication services such as Twitter and Facebook, and various efforts have been made around emojis in terms of sentiment analysis [1], [2], user profiling [3], personality assessment [4], and even culture difference analysis [5].
A noticeable trend is that emojis are not only widely adopted in regular online communication, but also increasingly used in software development practice, ranging from code in programs 2 3 to online forums such as Stack Overflow and GitHub, and even to requirements engineering [6]. Given that emojis are expected to provide a new way to enrich expression, increase interaction vitality, and improve communication efficiency, we are motivated to understand how and why emojis are used in software engineering practice.
In this paper, we conduct an empirical study of emoji usage in the software engineering community by exploring GitHub, 4 the most popular online community where millions of programmers host and manage projects and build software collaboratively, and study the way they communicate with one another in development activities. More specifically, we aim to answer the following research questions.

RQ1: How are emojis used by developers on GitHub? We begin by investigating the characteristics of emoji usage on GitHub. We establish a large-scale data set of communicational posts (i.e., issues, issue comments, pull requests, pull request comments, and README) collected from GitHub, spanning 66 months since January 2012. We conduct a descriptive analysis of the favored emojis and the density and position of emojis in GitHub posts, and derive some typical patterns. Domain-specific usage can be observed; e.g., emojis can be assembled to represent slang such as " " (i.e., dogfood).

RQ2: Do emojis have domain-specific meanings by developers on GitHub? Since the topics on GitHub can be quite different from daily communication due to the technical nature of the platform, we are interested in whether there is usage specific to software development activities. Prior work has shown that the meaning and sentiment of words can be domain specific [7]. Given that emojis are widely used as complements or surrogates of plain text, we leverage a state-of-the-art word embedding method to interpret emojis in the technical context. Interestingly, the semantics of emojis on GitHub can be quite different from those on Twitter. For example, emojis such as and can carry domain-specific meanings. Results also indicate a tendency to express positive sentiments through emojis on GitHub.

RQ3: What are the intentions of using emojis by developers on GitHub?
Given the increasing importance of sentiment analysis in software development and the lack of efficient tools in this field [8], [9], we expect that emojis can provide a new, complementary signal for understanding developers' sentiments in practice. We conduct an annotation task and build an eight-dimension taxonomy of emoji usage intentions. We find that sentimental usage is a main intention in issues, pull requests, and comments, while emojis are mainly used to emphasize important content in README. In addition, sentimental emojis can also be used with non-sentimental intentions, indicating the importance of intention recognition before further application of emojis in analysis tasks in the SE field.
Findings. By answering the preceding questions, we find that developers use a diversity of emojis in their communications, with domain-specific meanings and usage patterns. The use of emojis on GitHub reflects a positive and supportive culture of this community. Sentimental usage is still a main intention of using emojis on GitHub.

Contributions. To the best of our knowledge, this paper takes the first step toward exploring emoji usage in software engineering activities and further understanding the behavior of software developers through a new signal. Our findings and implications can help various stakeholders make better use of emojis on GitHub, so as to improve expressiveness, communication efficiency, and sentiment inference, and to promote better project collaboration, problem solving, and productivity.
The rest of the paper is organized as follows. Section II introduces related work. Section III describes the data used in this study and demonstrates the popularity and distribution of emojis in our data set. Section IV characterizes emoji usage patterns on GitHub. Section V develops an emoji interpretation method based on state-of-the-art embedding technique. Section VI proposes a taxonomy of intentions of using emojis with a manual annotation task. Section VII provides implications. Section VIII discusses threats to validity and future direction. Section IX concludes the paper.
II. RELATED WORK
A. Emoji Usage Analysis
Increasingly popular, emojis are becoming a ubiquitous language widely adopted by Internet users in recent years. Various studies have analyzed emoji usage across countries [10], across cultures [5], and across demographic groups [3]. Research on emoji usage has mainly been conducted on input methods [3], [5], [11], instant messaging apps such as WhatsApp [12] and WeChat [13], and social networks [14], [15]. So far, no study has analyzed emoji usage and its effects in a tech community. In addition, the prevalence of emojis has attracted many researchers to study the intentions and effects of using them. These studies demonstrated that besides replacing content words in text, emojis can also be used to provide emotional or situational information, adjust tone, express irony, engage the audience, decorate texts, etc. [16]-[18]. However, due to the lack of emoji studies in the tech community, we do not know what role emojis play and what functions they have in it. To bridge this knowledge gap, we make the first effort to measure the usage of emojis on GitHub.
B. Sentiment Analysis in Software Engineering
Understanding the sentiments of developers is a research focus in software communities [7], [9], [19]-[25]. The sentiments of developers can be a sensor of contributors' status and activity, as well as of the quality of projects. For example, sentiment analysis can be used to detect the psychological state and job satisfaction of developers [26], which are strongly associated with their productivity and task completion quality.
However, current sentiment analysis tools have been demonstrated to have strong limitations in the SE field [9]. In fact, to improve sentiment analysis techniques, many researchers in the natural language processing community have started to use emojis as weak sentiment labels [27], [28], an approach called distant-supervised learning. Such practice with emojis sheds light on the improvement of SE sentiment analysis tools. To explore the possibility of leveraging emojis for sentiment analysis in SE, we first describe how emojis are used, based on the data set collected from GitHub in this work.
III. DATA SET
To investigate emoji usage in communications on GitHub, we select five typical types of posts, i.e., issues, issue comments, pull requests, pull request comments, 5 and README. 6 For simplicity, we refer to issues, pull requests, and their comments as conversational posts, because they can be replied to by users.
A. Data Collection
Through the GHTorrent project [29], we collect the conversational posts spanning 66 months from January 2012 to June 2017. After a data cleaning process to filter duplicates, spam, and expired URLs, the data set covers 3,088,360 projects and 3,952,924 users. We crawled the README files using the official GitHub API [30]. Due to the rate limits for API requests, we include only README files of projects with no less than 10 stars. 7 Table I summarizes the data set.
Note that the data on emoji reactions, 8 a feature provided by GitHub since March 2016 to respond to conversational posts, are not used in this work. Instead of the six emojis (i.e., , , , , , and ) provided by the reaction function, we collect the emojis that users spontaneously typed into free text, because they exhibit greater variety in both adopted types and usage patterns.
B. Emoji Popularity
Before looking at the emojis themselves, we measure their popularity through the proportions of posts and users that adopted emojis in the studied period. We include only the conversational posts here because the README posts do not carry timestamps.
The fraction of emoji posts remained near zero in the first few years of our data set, increased slowly, and has shown a sharp increase since March 2016. Statistics show that in June 2017, 0.58% of issues, 0.75% of pull requests, 1.32% of comments on issues, and 3.23% of comments on pull requests contained emojis. Although the proportion is much lower than the coverage of emojis in nontechnical communities such as Twitter (13.69% as reported in [31]), possibly due to the nature of technical discussions, it kept increasing over the studied period. We also find a quite high proportion of users who actively use emojis in their conversations. Figure 1 illustrates the proportion of users who used at least one emoji in each month, over all users involved in each type of post in the same month. The proportion of emoji users started from below 1% at the beginning of 2016 and increased over time. Users are more likely to use emojis in comments than in the original posts of issues and pull requests. In all four categories of conversational posts, there was a significant increase in emoji users from March to April of 2016, which coincides with the release of the emoji reaction feature. Among users who commented on pull requests, this proportion increased nearly sevenfold in April. 10.29% of users who commented on pull requests in June 2017 used emojis in their comments, a surprisingly large proportion compared to the proportion of emoji posts. As emoji reactions are not included, the proportion of emoji users may be underestimated.
C. Emoji Distribution
Emojis are not evenly distributed across posts. For example, 0.20% of the issues in our data set contain at least one emoji. After grouping the issues by number of comments, we find that the proportion of emoji issues is 0.40% for the top 10% of issues, significantly higher than 0.20% (p < 0.001). Interestingly, such a doubled proportion can be observed in most months of our data set. This finding implies a positive relation between emoji usage and responses to issues.
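The "significantly higher" comparisons above are proportion tests; a minimal sketch of a one-sided two-proportion z-test, with hypothetical counts (the paper does not report the raw counts behind its percentages):

```python
import math

def two_proportion_z_test(k1, n1, k2, n2):
    """One-sided two-proportion z-test: is p1 greater than p2?"""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)  # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 0.5 * math.erfc(z / math.sqrt(2))  # upper-tail p-value
    return z, p_value

# Hypothetical counts: 400 emoji issues among 100,000 top-commented
# issues (0.40%) vs. 2,000 among 1,000,000 issues overall (0.20%).
z, p = two_proportion_z_test(400, 100_000, 2_000, 1_000_000)
```

With counts of this magnitude the test rejects the null hypothesis decisively, consistent with the p < 0.001 reported in the text.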
Similar findings can be made for users. In general, 3.66% of users have used at least one emoji in conversational posts. When restricting the scope to users who have posted issues, this proportion is significantly higher, at 4.71% (p < 0.001). Further, we rank users in descending order of the number of issues they posted, and find that this proportion is 23.39% for the top 10% of users, significantly higher than 4.71% (p < 0.001). These findings indicate a potential correlation between emoji usage and user activeness.
Considering that the use of emojis can be influenced by multiple factors, we leave further research of the possible effect of using emojis in the participation and activeness of users for future work. In this paper we focus only on the posts with emojis.
IV. USAGE CHARACTERISTICS
In this section we address RQ1: How are emojis used by developers on GitHub? by looking at the top emojis and typical usage patterns on GitHub.
A. Top Emojis
In total, 1,271 emojis have been used on GitHub. Ranked by occurrence, the 10 most used emojis are , , , , , , , , , and . For comparison, the 10 most used emojis on Twitter tracked by EmojiTracker 9 are (21), (47), (11), (9), (57), (41), (152). The number in parentheses shows the ranking of the corresponding emoji on GitHub. Interestingly, only one emoji ( ) is shared by the two sets of top 10 emojis. Most of the remaining 9 popular emojis on Twitter rank quite low on GitHub. The most popular emoji on Twitter, (face with tears of joy), which was even elected as "Oxford Dictionaries word of 2015", 10 is only the 21st most used emoji on GitHub.
We conduct a Wilcoxon signed-rank test [32] on the rankings of emojis on the two platforms and find a significant difference (p = 0.002 for the top 10 emojis, p = 0.000 for the top 50 emojis), indicating domain-specific preferences for emojis on this technical platform.
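The Wilcoxon signed-rank test on paired platform rankings can be sketched as follows. The implementation below uses the normal approximation, and the rank lists are hypothetical stand-ins for the paper's actual data:

```python
import math

def wilcoxon_signed_rank(x, y):
    """Wilcoxon signed-rank test, two-sided, normal approximation."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:  # assign average ranks over ties in |diff|
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return w_plus, p

# Hypothetical paired rankings of the same 10 emojis (1 = most used);
# the paper's actual rank lists are not reproduced here.
github_ranks = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
twitter_ranks = [21, 47, 11, 9, 57, 41, 152, 33, 70, 18]
w_plus, p = wilcoxon_signed_rank(github_ranks, twitter_ranks)
```

Because every hypothetical GitHub rank is smaller than its Twitter counterpart, the positive-rank sum is zero and the test rejects the null hypothesis at the usual 0.05 level.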
In the top 10 emojis on GitHub, most emojis can be used to show positivity such as happiness, congratulations, praise, and appreciation, implying a positive atmosphere, which can be of significance to communication and project collaboration. Another interesting observation is the popularity of , , and , which can be used as "bullets" for a list of items. When decomposed by type of post (see Table II), greater variety and more interesting usage can be found among the top emojis. For example, the ship ( ), whose popularity ranks No. 609 on Twitter, is the ninth most popular emoji in pull request comments. To understand the popularity of such emojis, we will interpret their meanings and intentions in the following sections.
B. Emojis in Text
Used as complements or surrogates of plain text, emojis can generally make texts more vivid, expressive, and easy to read. How frequently are emojis used, and where are they placed in a post? We investigate the density and position of emojis in English texts.
1) Density:
We define the density of emojis as the number of emojis normalized to the length of the post, and present the density distribution with frequency histograms in Fig. 2. Comparing the five distributions, we find that in README the emoji density is particularly close to zero, possibly due to the relatively long texts. For the four conversational post types, the densities of emojis are mainly distributed between 0.0 and 0.2, with the largest peak near 0.0, indicating that emojis are mostly used alongside textual words and account for a small fraction of the length of the post. Interestingly, there are discrete peaks at densities of 0.25, 0.33, 0.5, and 1, especially in comments for issues and comments for pull requests. Tracing back, we find many emoji posts with 3 words (e.g., "hooray for tests "), 2 words (e.g., "Whoops . Merging!"), 1 word (e.g., "Thanks! "), and even no word (e.g., " "). Such use of emojis helps explain the discrete peaks.
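The density measure above (emojis normalized to post length) can be sketched as follows. The emoji regex covers only a few common Unicode blocks and is our own simplification, not the paper's actual matcher; full emoji detection (ZWJ sequences, keycaps, flags) needs a dedicated library:

```python
import re

# Simplified matcher over a few common emoji Unicode blocks.
EMOJI_RE = re.compile(
    "[\U0001F300-\U0001FAFF\U00002600-\U000027BF\U0001F1E6-\U0001F1FF]"
)

def emoji_density(post: str) -> float:
    """Number of emojis normalized to post length in tokens (words + emojis)."""
    emojis = EMOJI_RE.findall(post)
    words = [t for t in EMOJI_RE.sub(" ", post).split() if t]
    total = len(words) + len(emojis)
    return len(emojis) / total if total else 0.0
```

Under this definition, a 3-word post with one emoji such as "hooray for tests 🎉" has density 0.25, and a bare-emoji post has density 1, matching the discrete peaks described above.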
2) Position: Emojis tend to come at the end of messages, providing cues about how to understand the words that came before them [33]. To verify this rule for emojis on GitHub, we extract emoji sentences from conversational posts and find that most of them end with emojis. In particular, emojis appear at the end of 66.77% of emoji sentences in issue comments. At the post level, we segment the sentences of a post into three position classes [34], i.e., the first sentence, the last sentence, and the rest in the middle. The distribution of emojis in posts containing no less than three sentences is shown in Fig. 3. It can be observed that emojis are mostly used in the middle of issues (75.34%) and pull requests (53.83%). However, in the two types of comments, emojis are more likely to be used at the end. Considering that users often express attitudes or emotions in comments, they can use emojis at the end of comments to enrich such expression.
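The three position classes can be sketched with a naive sentence splitter (our own assumption; the paper's segmentation method is not specified):

```python
import re

def sentence_position(post: str, emoji: str) -> str:
    """Classify the position of the first sentence containing `emoji`
    as 'first', 'last', or 'middle' (defined for posts with >= 3 sentences)."""
    # Naive splitter: break on whitespace following ., !, or ?
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", post.strip()) if s]
    assert len(sentences) >= 3, "position classes defined for >= 3 sentences"
    for i, s in enumerate(sentences):
        if emoji in s:
            if i == 0:
                return "first"
            if i == len(sentences) - 1:
                return "last"
            return "middle"
    return "absent"
```

For instance, an emoji closing a comment like "It fails. I tried again. Thanks! 👍" falls into the "last" class, the pattern the text reports as dominant in comments.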
C. Appearance Patterns
When emojis appear in posts on GitHub, three typical patterns including using an emoji as a post, repeating the same emoji, and assembling different emojis, can be observed. We study such patterns and illustrate them with examples.
• A single emoji as a post. We are interested in posts containing only one emoji and no plain text, because the understanding of such a post relies entirely on the emoji. We suppose that emojis in such cases convey relatively clear meanings; otherwise they would hinder the communication. Ranking emojis by their likelihood of independently constituting a post, the top ones are (26.33%), (23.76%), (16.67%), (16.13%), (15.23%), (13.24%), (12.50%), and (10.82%). According to our interpretation approach in Section V, and can indicate the "launch" of something such as a new project or a new feature, and the rest show attitudes or emotions. In contrast, emojis such as and are seldom used independently, because they need supplementary words to express a complete idea.
• Emoji repetition. Emojis are often used repetitively to communicate a particular type of effect such as emphasis [35]. On GitHub, , , , , and have the highest likelihood of being used repetitively. Interestingly, these emojis tend to convey positive attitudes or sentiments in most cases.
For example, the comment " " in an issue comment expresses confirmation and praise to the issue assignee, after which the corresponding commit was merged to master. In contrast, the emojis least likely to be used in repetition include , , , , and , which often mean warning, failure, disappointment, disagreement, and sadness. This finding implies a friendly atmosphere on GitHub, confirming the observation in Section IV-A.
• Emoji assembling. In some cases emojis are assembled to express a complete and even complicated meaning in a post. For example, often occurs together with , , , , or . One typical example of such co-occurrences comes from an issue comment saying "
". The writer of this comment is the owner of the project and he merged the corresponding commit after the comment. The two emojis are combined to make up a sentence, which can be inferred to mean "I have reviewed this commit, it's perfect!" Another example is "Thanks for this! We're checking it out.
" in an issue comment. The comment writer, a member of the project, added a label to the commented issue and reviewed the committed code after the comment. In this example, the three emojis are combined to retell the meaning of the plain texts, making the expression more vivid.
Understanding assembled emojis may not be as easy as understanding single or repeated ones, yet such usage can spice up the communication, as the assembling can be quite creative, especially when combined with domain knowledge or slang. For example, in the README of the project Releasor, i.e., "
Releasor is used in ProgressBar.js, git-hours, arrmutations, and many others", the dog food indicates that the Releasor tool has been used in the developers' own projects and demonstrates confidence. 11

Summary. This section characterizes emoji usage on GitHub in terms of the top emojis, the density and position of emojis in posts, and typical appearance patterns. We find that the favored emojis on GitHub are quite different from those on Twitter, indicating domain-specific usage such as technical discussions. In comments on issues and pull requests, emojis are often used together with only a few words, and often come at the end of long comments. Additionally, emojis show different potentials to independently constitute a post, to emphasize sentiments through repetition, and to illustrate complicated meanings through assembling. It should be noted that these findings are not rules but usage patterns learned from data.
V. EMOJI INTERPRETATION
Although emojis are regarded as a ubiquitous language across different countries and user groups [5], we propose that the interpretation of emojis, like that of textual words [7], can be specific to a technical field. In fact, we have already obtained some clues about domain-specific usage of emojis. To address this research question (i.e., RQ2: Do emojis have domain-specific meanings by developers on GitHub?), we develop an embedding-based approach to interpret the meanings of emojis and study the sentiment distribution of emojis on GitHub.
A. Semantic Understanding
Developers on GitHub have spontaneously proposed standards for using emojis to fit the functions and scenarios of this community. For example, gitemoji 12 is an initiative to standardize and explain the use of emojis in GitHub commit messages. However, how an emoji is defined does not necessarily determine how it is used. One example is that the emoji defined as party popper in Unicode 13 represents initial commit in gitemoji, yet this emoji is often used to express congratulations.
We therefore study the interpretations of emojis by developers on GitHub with state-of-the-art text representation learning methods. By projecting language tokens into a semantic space, we are able to directly assess the meanings of emojis. We extract all emojis and English words from the markdown 14 texts, replace each code block with "[code]" and each URL with "[url]", and filter out punctuation and special characters. We use the NLTK package 15 to tokenize and stem the words.
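The preprocessing just described can be sketched as follows. For brevity this sketch uses a plain lowercase split instead of NLTK tokenization and stemming, and the regexes are simplified assumptions rather than the paper's actual patterns:

```python
import re

def preprocess(markdown_text: str) -> list:
    """Normalize a GitHub post before embedding training: mask code blocks
    and URLs, drop punctuation, lowercase, and split into tokens."""
    text = re.sub(r"```.*?```", " [code] ", markdown_text, flags=re.S)  # fenced code
    text = re.sub(r"`[^`]+`", " [code] ", text)                         # inline code
    text = re.sub(r"https?://\S+", " [url] ", text)                     # URLs
    text = re.sub(r"[^\w\[\]\s]", " ", text)                            # punctuation
    return text.lower().split()
```

A post such as "Fix at https://example.com ```x=1```" becomes the token list ["fix", "at", "[url]", "[code]"], ready for word2vec training.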
With the processed texts, we use word2vec [36] to train a 300-dimension embedding for each token, including words and emojis. Such a semantic space reflects developers' interpretations of the meanings of both words and emojis. Based on cosine similarities between embedding vectors, we can find the closest neighbors of any emoji in the semantic space. These neighbors, either words or emojis, help us infer the meaning of the target emoji. We rank the word tokens closest to a given emoji in the semantic space. For comparison, we also report the closest neighbors of the emoji in a different semantic space, 16 in which the embeddings of emojis were trained on 10 million Tweets posted by USA users [14]. This alternative semantic space represents the interpretations of emojis in common Internet communications.
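Once embeddings are trained, neighbors are read off via cosine similarity. A minimal sketch over toy 4-dimension vectors (all values are invented for illustration, standing in for the 300-dimension word2vec embeddings):

```python
import numpy as np

# Toy embeddings: the bug emoji is placed near code-defect vocabulary.
emb = {
    "🐛":     np.array([0.9, 0.1, 0.0, 0.2]),
    "bug":    np.array([0.8, 0.2, 0.1, 0.1]),
    "fix":    np.array([0.7, 0.3, 0.0, 0.2]),
    "cruise": np.array([0.0, 0.1, 0.9, 0.8]),
}

def nearest(token, k=2):
    """Return the k nearest neighbors of `token` by cosine similarity."""
    v = emb[token]
    sims = {
        w: float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
        for w, u in emb.items()
        if w != token
    }
    return sorted(sims, key=sims.get, reverse=True)[:k]
```

In this toy space, nearest("🐛") returns ["bug", "fix"], mirroring how the paper's GitHub-trained space places the bug emoji near defect-related words rather than animal-related ones.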
Following the word2vec results, Table III reports similar words of two typical emojis, and , on GitHub. For the emoji , the most similar words on GitHub are obviously different from those on Twitter. For example, the similar words on GitHub include bug, smell, worrisom, fix, and report. Such word neighbors reflect the domain-specific meaning of on GitHub. That is, represents bugs in code on GitHub, but not on platforms such as Twitter.
Similar conclusions can be derived for the emoji . On Twitter, this emoji appears in similar contexts with words like cruise, ship, and sail. However, on GitHub, it has meanings similar to words like merge, lgtm (look good to me), ship, approv, and land, indicating a status in which the code is ready for use (to be "shipped").
In addition to the closest words to an emoji, we can also discover its closest emojis in the semantic space. For example, given that is referred to as a code bug rather than an animal bug, one may expect that has different emoji neighbors on GitHub than on Twitter. The 10 emojis that have the most similar embeddings with on GitHub are , , , , , , , , , and . In comparison, the 10 closest emojis on Twitter are , , , , , , , , , and . The most interesting differences are the existence of , , and , which point to the meanings of debugging or fixing issues.
B. Sentiment Distribution
The observation of domain-specific preferences for emojis and the implied positive atmosphere motivates us to look at the sentiments expressed by emojis on this platform. Comparing the sentimental emojis on GitHub and Twitter may help us understand the culture of the developer community.
Given that emojis can have domain-specific meanings rather than their original definitions or interpretations by the public, their sentiments may also change. Instead of directly using the reported sentiments of certain emojis, we carefully extract the sentiment score of each emoji based on the sentiments of its neighboring words.
Specifically, we calculate the sentiment score for each word with the SentiStrength-SE tool [7], which was designed for sentiment analysis in the software engineering domain. Each word has a score in {-100, 0, 100}, corresponding to {negative, neutral, positive}. The sentiment score of a given emoji is the weighted average of the sentiment scores of its 100 nearest neighboring words, weighted by the cosine similarity between the embeddings of the emoji and the word.
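The emoji sentiment score just defined is a similarity-weighted average; a minimal sketch with hypothetical neighbor scores and similarities:

```python
def emoji_sentiment(neighbors):
    """Similarity-weighted average of neighboring word sentiments.
    `neighbors` is a list of (sentiment_score, cosine_similarity) pairs;
    scores are in {-100, 0, 100} as produced by SentiStrength-SE."""
    num = sum(score * sim for score, sim in neighbors)
    den = sum(sim for _, sim in neighbors)
    return num / den

# Hypothetical neighbors: two positive words, one neutral, one negative,
# each paired with its cosine similarity to the emoji.
score = emoji_sentiment([(100, 0.9), (100, 0.8), (0, 0.5), (-100, 0.3)])  # 56.0
```

The paper uses the 100 nearest neighboring words per emoji; the four-neighbor example here only illustrates the arithmetic, yielding a moderately positive score of 56.0.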
After deriving the sentiment scores of emojis used on GitHub, we group the emojis into different score intervals and plot the total frequency of emojis used in GitHub posts against the sentiment scores in Fig. 4. The distribution has a clear tendency towards positive sentiments, which suggests that programmers on GitHub tend to express positive sentiments through emojis, as support or appreciation for each other. To understand the difference between developers and common Internet users, we also measure the sentiment scores of emojis used on Twitter and plot the same distribution (the frequencies of emojis used on Twitter are obtained through EmojiTracker). Sentiment scores of the neighboring words of emojis on Twitter are calculated with the LIWC package. 17 Clearly, the entire distribution of Twitter emoji sentiment shifts to the left (p < 0.001 for the difference in means), which suggests that emojis are more frequently used to express positive sentiments on GitHub than on Twitter.

Summary. The analysis in this section evidences that emojis can have domain-specific interpretations by developers on GitHub in comparison with common Internet users. Hence, the technical context should be considered for accurate understanding of emojis before leveraging them for further research. In addition, the developer community tends to use more positive emojis, presenting a positive and supportive culture compared to the public.
VI. INTENTION UNDERSTANDING
Intentions of emoji usage in daily communication have been studied [16], while an understanding of intentions on a technical platform such as GitHub is lacking. In this section, we address the research question RQ3: What are the intentions of using emojis by developers on GitHub? by proposing a taxonomy of intentions through a manual annotation task.
A. Taxonomy
Based on existing studies about emoji usage intentions, we develop an initial set of intention categories and adapt them to emoji usage on GitHub with a subset of posts. The final taxonomy of intentions is as follows.
1) Sentimental usage. Emojis can help express the sentiments of users, including their feelings, emotions, and attitudes. Considering the final effect of the expression (with the emoji) and the original sentiment of the plain text (without the emoji), the sentimental intention of using emojis can be decomposed into two sub-categories, i.e., sentiment expression and sentiment strengthening. Note that we initially also considered sentiment weakening and sentiment reversing, yet no such examples were found. This intention category can be extended if necessary.
a) Emojis conveying sentiment in a non-sentimental textual context have the intention of sentiment expression. For example, in the issue "Even the Guardian has TLS now.... ", the attitude of appreciation is expressed through the emoji. b) In the case that the plain text has already expressed some sentiment, the emoji often makes the post more expressive. We name this intention as sentiment strengthening. For example, in the issue "(that's right, it's blurring the actual content of the table cell! ", the negative emotion is expressed by the text and strengthened by the emoji.
2) Statement enriching. Emojis can make expressions, which are not limited to sentimental expressions, more vivid. With this intention, emojis are often used to replace or illustrate contents such as concepts and objects in text. For example, in the issue "This should not happen, I would even go as far as to call it a .", the emoji enriches this statement by representing the word bug.
3) Content organization. The pictographic nature of emojis makes them a good choice to assist the organization of contents in a post for readability improvement. When multiple items are listed, emojis such as and can be item bullets as alternatives of symbols like • and . Additionally, some emojis such as and can be used in checklists with the semantics they conveyed. 4) Content emphasis. To avoid being overwhelmed by massive content, the important points can be demonstrated with emojis to attract more eyeballs. Emojis with this intention are not necessarily semantically connected with the text but contribute with their pictographic characteristics. Typical emojis often used with such intention include , , , and . 5) Atmosphere adjustment. Two main scenarios are included in this intention category. First, emojis can be used to adjust tone, making the messages less serious and more friendly. An example is "Please do not be terrified " in a pull request.As nonverbal cues, emojis especially facial expressions can be used with this intention. Although facial emojis are often combined with emotion, they do not show obvious emotion but often politeness and kindness in its context when categorized to this intention. Second, when one intends to say something but has no idea of a specific topic, they can use emojis, which can even be not semantically related to the context. Emojis include hearts, gestures, animals, and objects can often be with this intention. 6) Unintentional usage. Emojis can be unintentionally used in posts when they are not input by the writer. Emojis in pasted contents including codes and logs are categorized to this intention. 7) Emoji. In some cases, emojis are just emojis, e.g., " is for upgrading dependencies, from other libraries, where is for things like binaries, from the local project."
B. Annotation
Based on the proposed taxonomy, we manually annotated the intention of emoji usage in 2,000 emoji posts. To achieve a 95% confidence level with a 5% confidence interval, we randomly select 400 posts in English for each of the five post types for annotation. For multiple appearances of emojis in a post, only the first one is annotated, except when it is used in combination with others (e.g., " Work in Progress "). We discuss the annotation process in detail before reporting the results.
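The choice of 400 posts per type is consistent with the standard sample-size formula for estimating a proportion at a 95% confidence level with a 5% margin of error; a quick check under the usual worst-case assumption p = 0.5:

```python
import math

def sample_size(z=1.96, margin=0.05, p=0.5):
    """Minimum sample size for estimating a proportion within `margin`
    at the confidence level given by z (1.96 for 95%)."""
    return math.ceil(z * z * p * (1 - p) / (margin * margin))

n = sample_size()  # 385, which 400 posts per type comfortably exceeds
```

This yields n = 385, so sampling 400 posts per post type satisfies the stated confidence requirements with some slack.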
1) Discrepancies: Understanding the meaning beyond language is often subjective and has always been challenging; discrepancies can occur when different individuals interpret the same expression. The two authors who annotated the intentions of emoji usage completed the task independently and discussed the discrepancies until they agreed. For example, for the statement "I've tried to reinstall the game, but the error keeps happening " in an issue, one of the two authors classified the intention of the emoji as sentiment strengthening while the other believed it expressed a sentiment. After discussion, they agreed that the text "I've tried to reinstall the game, but the error keeps happening" states a fact without obvious sentiment, but the emoji brings an emotion of sadness and disappointment to the expression; hence it was finally categorized as sentiment expression.
2) Multiple intentions: In some cases an emoji cannot be categorized under only one intention. For emojis showing complicated intentions, we allocate multiple labels and select a primary intention for further study. For example, in the sentence " Please review the guidelines for contributing to this repository." of a pull request, the is effective in drawing attention (i.e., the content emphasis intention) to this warning. Meanwhile, this police car light emoji carries the semantics of emergency, which enriches the context of warning and makes the statement more expressive. We label this emoji with a content emphasis intention for its main effect. Another example is "Also, for data-driven styles!" in an issue, where the clapping hands represent a round of applause (i.e., the sentiment expression intention) and meanwhile improve the expressiveness of the statement by replacing the corresponding textual words. After comparison, we adopt sentiment expression as the main intention of in this post.
3) Scope of context: To determine the role of an emoji in a post, it is not sufficient to study only the sentence containing the emoji, because the content can be loosely organized and the emoji can be connected to a sentence far from it. For example, in the first paragraph of an issue, i.e., "Sorry, but I'm unfamiliar with Javascript apps. How do you install the cypher plugin?
Step-by-step instructions might be useful for people like me who are not familiar with how this works and too dumb to figure this out. ", the confounded face on the verge of tears is used to strengthen the sentiment expressed by the word "sorry" at the beginning.
Additionally, due to the orientation toward project development and collaboration, the context of emojis on GitHub consists of not only texts but also operational behaviors. A typical scenario is that a single emoji constitutes a post without any textual words, such as the pull request comment " ", which shows sparkles. By investigating the original pull request, the previous replies, and behaviors around this post, including adding labels and merging a commit, we find that the emoji represents a positive attitude toward a commit, and we categorize its intention as sentiment expression.
4) Intention vs. position: The position of the emoji in the text can also affect the intention categorization. Specifically, when a sentimental emoji appears at the beginning of an expression followed by sentimental text (e.g., "
LGTM, thanks!"), it could be labelled as either of the two sub-categories of sentimental usage. In this work, we decide to label it as sentiment expression, because no sentiment has been conveyed before the appearance of the emoji. By comparison, the thumbs-up gesture in "LGTM, thanks! " will be labelled with a sentiment strengthening intention.
5) Definition vs. context-based understanding:
The embedding-based analysis in Section V revealed domain-specific semantics of emojis determined by the technical context on GitHub. In the manual annotation process, we confirmed such usage of emojis (such as , , and ) in our sampled posts. However, we also find that such emojis can have other meanings. For example, the rocket can mean launch ("Launching soon ") as well as acceleration ("It uses the new osmium and its nodejs bindings for performance."), which is not domain-specific to GitHub. Additionally, an emoji can be used as a symbol regardless of its meaning (e.g., "
gym has moved to the fastlane main repo "). Another observation comes from the emoji , which is known as the clear button by definition and is often used to mean "clear". This emoji can also be interpreted as "change log" or "change list" in the version control system. For example, a list of change logs is wrapped by " " and "/ " in a pull request, where we annotate its intention as statement enriching.
Such observations indicate the ambiguity of emojis on GitHub; the meaning should be determined by both the contextual information and the intention of using the emoji.
6) Sentiment expression vs. atmosphere adjusting: Distinguishing these two intentions in practice is challenging, especially for facial emojis. It is not rare that such emojis are used merely to make the expression more friendly instead of showing emotions or attitudes. In the pull request "... Please take a look and check if it's worth it having it here ", the smiling face with open mouth and smiling eyes is used to soften the tone so as not to sound too serious, just like a facial expression in face-to-face communication. The intention of this emoji is categorized as atmosphere adjusting, while in the following example, the pull request "... All covered with new tests ", it is sentiment expression, because the emoji expresses happiness about adding new methods covered by new tests. In short, the contextual information is leveraged to determine the real intention of such emojis.
7) Unusual usage: Emojis can be used in unusual ways, which is understandable because there are no strict rules and people are still experimenting with emojis. However, unusual usage can introduce obstacles to understanding the emoji and even the post. For example, the thumbs-up emoji in the issue "I have tried to implement the ABC algorithm, but am getting problem with " followed by a line of code was challenging for us to figure out. As the other intention categories do not apply to this case, we suppose the emoji is used to refer to the following code as the mentioned problem, where and would be more suitable, and thus to enrich the statement.
8) Criteria consistency: Considering that some of the criteria emerged during the annotation process, we revisited the posts after the process and made necessary adjustments to ensure the consistency of the categorization.
C. Intention Distribution
The intentions of using emojis in the sampled posts are presented in Fig. 5. We next briefly report the distribution of intentions in each type of post.
1) Issue: In issues, emojis are most often used unintentionally, with a proportion of 21.50%. This is because emojis often occur in pasted logs and code in raised issues, which echoes the extremely dense usage of emojis in the middle of issue posts (see Fig. 3). This finding also evidences that emojis are widely adopted in multiple scenarios in software engineering. The next most common intentions are sentiment strengthening (20.00%), sentiment expression (18.50%), atmosphere adjusting (17.00%), and statement enriching (14.00%).
The intention of representing emojis, although the second least common with a 3.75% proportion, is mostly seen in issues as well as README. This intention often occurs in emoji-related contexts and in opinion consultations such as "Click on + to add your votes", where is one of the six reactions provided by GitHub.
2) Pull request: The four main intentions of using emojis in pull requests are atmosphere adjusting (28.75%), statement enriching (23.50%), sentiment expression (19.75%), and sentiment strengthening (14.00%). In comparison with issues, unintentional use is significantly reduced, and emojis are used more to enrich statements and adjust the atmosphere. One possible explanation for this difference is the relatively narrow target audience (i.e., the project owner) and the explicit request (i.e., to get the contributed code pulled and merged into the repository) of pull requests.
3) Comments: Sentimental usage of emojis is dominant in the two types of comments, especially in pull request comments, where more than half of the emojis (i.e., 51.75%) are used to express sentiments and 28.25% to strengthen sentiments. This is understandable because opinions are needed in response to raised questions, discussions, plans, and implementations. In issue comments, another main intention of using emojis is atmosphere adjusting (29.00%), possibly due to the demand for smoothing communications in various discussion contexts.
4) README: Instead of sentimental usage, the main intention of using emojis in README is content emphasis, accounting for 34.25% of emoji occurrences. Because of their nature of communicating expectations for projects, README files often contain long texts and multiple points. Using emojis can effectively help attract attention to the emphasized points within the massive content. Another observation is that emojis contribute to hierarchical lists or checklists with a proportion of 5.50%. These two intentions of using emojis in README are significantly more common than in other posts.
Summary. Knowing why an emoji is used is important for understanding the real meaning of the emoji and of the expression containing it. In this section, we proposed a taxonomy of emoji usage intentions on GitHub based on our understanding and a manual annotation task conducted with 2,000 posts. The ambiguity of emojis, the diversity of emoji selection, and the complicated context of the emoji can affect the intention determination process and make it challenging. The generated taxonomy, together with the demonstrated usage examples, can provide guidance for intention recognition before leveraging emojis in further analysis. By completing the annotation task, we obtained the distribution of emoji usage intentions in different posts.
We find that emojis are heavily used to express or strengthen sentiments in conversational posts, especially in comments, while emojis are mainly used to emphasize content in README, which often contains a mass of information. We also find that emojis are widely used to smooth communications and to develop a positive and friendly atmosphere on the open-source collaboration platform.
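For reference, the per-type shares reported above can be collected programmatically, e.g. to compare the dominant intention across post types. The numbers below are transcribed from the text; `dominant` simply returns the largest share:

```python
# Intention shares (%) per post type, transcribed from the annotation results.
issue = {"unintentional": 21.50, "sentiment strengthening": 20.00,
         "sentiment expression": 18.50, "atmosphere adjusting": 17.00,
         "statement enriching": 14.00}
pull_request = {"atmosphere adjusting": 28.75, "statement enriching": 23.50,
                "sentiment expression": 19.75, "sentiment strengthening": 14.00}
readme_top = ("content emphasis", 34.25)  # dominant intention in README

def dominant(dist):
    """Return the intention with the largest share."""
    return max(dist, key=dist.get)
```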
VII. IMPLICATIONS
Supplement for sentiment analysis. Understanding the thoughts and sentiments of developers has always been an active direction in software engineering research. However, sentiment analysis in SE tasks is challenging due to the unreliable results provided by existing tools [8], [9], [34], [37]. One difficulty in automated sentiment analysis with lexical approaches such as SentiStrength [38] in the SE domain is the lack of sentimental words in the dictionary. In our study, emojis are found to be widely leveraged in GitHub posts not only to strengthen but also to express sentiments as a substitute for plain text, which implies the necessity of enlarging the dictionary of sentimental words with emojis. In addition, it is also possible to use emojis as weak sentiment labels [27], [28] in distant-supervised learning.
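As a sketch of this implication, a lexical tool's dictionary could be extended with emoji entries so that emoji-only posts are no longer scored as neutral. All tokens and scores below are illustrative placeholders, not entries of any published lexicon:

```python
# Word-level sentiment scores (illustrative placeholders).
LEXICON = {"great": 2, "thanks": 2, "broken": -2, "error": -1}

# Emoji entries added to the dictionary (scores are placeholders).
EMOJI_LEXICON = {
    "\U0001F44D": 2,   # thumbs up
    "\U0001F389": 2,   # party popper
    "\U0001F622": -2,  # crying face
}

def score(tokens):
    """Sum the sentiment scores of known words and emojis in a token list."""
    table = {**LEXICON, **EMOJI_LEXICON}
    return sum(table.get(t, 0) for t in tokens)

# "LGTM" is out-of-vocabulary, but the thumbs-up emoji makes the post positive.
post_score = score(["LGTM", "\U0001F44D"])
```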
Status sensor for contribution activeness management. The activity of contributors to open source projects can be affected by their emotional and psychological status. For example, contributors are more likely to become inactive when they express strong positive or negative emotions [39]. Vidal et al. [1] investigated food-related emotional experiences by analyzing emoji usage on Twitter, and found that emoji usage can reflect the status of users, such as being accompanied or not. Emojis can similarly be leveraged as a sensor of contributors' status on open source platforms such as GitHub. In addition, developers can add emojis to their conversations to promote a mild and friendly atmosphere on the platform [40].
Visual design for problem solving and project collaboration. The analysis in Section III-C implies that emoji usage can be related to participation in issues and the activeness of developers, possibly due to the eye-catching visual design which makes an issue more readable and less boring. Such initial findings encourage further research on the role of emojis in helping problems get solved and in promoting project collaboration in the open source community. Broadly, adding visual designs to traditionally text-heavy tasks not only adds fun to the work, but may also help engage users in the tasks and even improve the quality of work [41]. On the other side, GitHub and other developer communities like StackOverflow may consider adding more visual features to attract users into discussions, such as animations or GIF images.
In fact, the current low ratio of posts containing emojis indicates a great opportunity for the GitHub community to promote emojis in conversation, through the design of recommender systems or specialized interfaces.
Community-specific and identity-based design. Although emojis have evolved into a ubiquitous language, we have seen that a specific user group tends to use and interpret emojis in its own way. Community-specific interpretations of emojis have become norms of the social group and are strongly tied to the social identity of the community [42]. This indicates the rationale for, and the opportunity of, designing community-specific emojis: emojis that advocate for the unique identity and culture of the community.
VIII. DISCUSSION
A. Threats to Validity
The quality of the data is determined by the design of the GitHub API and the collection strategy of GHTorrent. For example, some links cannot be reached because users changed their usernames. 18 The data pre-processing, during which we extract texts from the crawled markdown files, replace code, URLs, and pictures with labels, and filter duplicates and spam posts containing obviously redundant emojis, can also lead to loss of information.
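The pre-processing step described above (replacing code, URLs, and pictures with labels) can be sketched with simplified regular expressions; the patterns and placeholder labels below are illustrative, not the exact ones used in our pipeline:

```python
import re

def preprocess(markdown: str) -> str:
    """Replace fenced code, inline code, images, and URLs with labels."""
    text = re.sub(r"`{3}.*?`{3}", " [CODE] ", markdown, flags=re.DOTALL)  # fenced code blocks
    text = re.sub(r"`[^`]+`", " [CODE] ", text)                           # inline code
    text = re.sub(r"!\[[^\]]*\]\([^)]*\)", " [IMG] ", text)               # images (before URLs)
    text = re.sub(r"https?://\S+", " [URL] ", text)                       # bare links
    return re.sub(r"\s+", " ", text).strip()                              # collapse whitespace
```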
The proposed intention taxonomy is based on our understanding and observation of the selected posts, which can be affected by three factors. First, although we select 400 posts of each post type for statistical significance, we cannot guarantee full coverage of emoji usage types. We claim the extensibility of the taxonomy for the occurrence of new uses. Second, the selected posts are all in English, so the derived distribution of emoji usage intentions may not represent that in non-English posts. Third, although the annotation task was conducted by two of the authors independently, followed by a discussion to address discrepancies, mistakes may still occur because a gap between reader and writer can always exist. The results can be influenced by the subjective opinions of the annotators and their language, education, and culture backgrounds.
The findings and the proposed taxonomy cannot be directly generalized to the whole field of software engineering, for two reasons. On the one hand, the studied emojis come from free communication texts on GitHub, excluding other artifacts such as code and documentation. On the other hand, emojis in communications on other platforms can show different characteristics and hence need to be studied within their specific contexts.
B. Future Work
The use of emojis can be affected by multiple factors, including the popularity of the project, the topic and atmosphere of the discussion, the background and personality of the user, the time of posting, etc. We plan to study emoji usage in depth by modeling these factors to uncover such relations. Another direction is to automate the intention categorization by developing machine learning models.
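As a starting point for automating intention categorization, the position-based criterion from Section VI-B could be encoded as a rule-based baseline before training a learned model; the rules and keyword list below are illustrative only:

```python
def classify_intention(text_before_emoji: str) -> str:
    """Toy baseline mirroring the position criterion from the annotation:
    sentiment words before the emoji -> strengthening; no text -> expression."""
    t = text_before_emoji.lower()
    sentiment_words = ("thanks", "great", "sorry", "lgtm")  # illustrative list
    if any(w in t for w in sentiment_words):
        return "sentiment strengthening"  # sentiment already conveyed in text
    if not t.strip():
        return "sentiment expression"     # the emoji carries the sentiment alone
    return "statement enriching"          # fallback for neutral text
```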
IX. CONCLUSION
We presented a large-scale empirical study on how developers use emojis in their communications on GitHub. The data set involves free texts, including conversations and README files, from 3.09 million GitHub projects covering 3.95 million users, spanning from January 2012 to June 2017. We found that developers show domain-specific preference, usage, and understanding of emojis in their communications in comparison to common Internet users. We conducted a manual annotation task with 2,000 posts to propose a taxonomy of emoji usage intentions for GitHub users. Results show that, in addition to being widely used to smooth communications, emojis are heavily used to help express sentiments in conversations, especially in comments, and are mainly used as eye-catching symbols in README files. Emojis may be used as sentiment sensors of developers and should be considered in textual analysis in the software engineering field.
18 https://help.github.com/articles/what-happens-when-i-change-my-username/, retrieved in August 2018
Fig. 1. Proportion of emoji users in GitHub conversations grows over the year 2016, with a sharp increase after the release of the emoji reaction feature.
Fig. 2. Emoji density in different posts.
Fig. 3. Emoji position in a post.
Fig. 4. Sentiment distribution of emojis.
Fig. 5. Distribution of intentions.
TABLE I
SUMMARY OF DATA SET.

            #issue       #issue comment  #pull request  #pull request comment  #README
Non-emoji   30,805,343   49,173,170      12,696,474     23,950,062             744,660
Emoji       60,742       307,298         34,576         407,302                21,444
Total       30,866,085   49,480,468      12,731,050     24,357,364             766,104
TABLE II
THE MOST USED EMOJIS.
(Top 10 emojis for each post type: All, issue, issue comment, pull request, pull request comment, README.)
TABLE III
SIMILAR WORDS OF EMOJIS BASED ON WORD2VEC EMBEDDINGS.
(Columns: "Similar words on GitHub" vs. "Similar words on Twitter". Entries include word stems such as snuck, elus, insidi, uncov, nonbreak, bug, untitl, skater, crept, smell, worrisom, fix, report, gonzal, buggi, glare, floater, badminton, undetect, spectacular, miscellan.)
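Table III is produced by querying the nearest neighbours of an emoji token in per-corpus word2vec embeddings. The sketch below illustrates the cosine-similarity query with toy hand-made vectors standing in for trained embeddings; all token names and vectors are hypothetical:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Hypothetical 3-d embeddings; a trained model would supply real vectors.
vectors = {
    "BUG_EMOJI": (0.9, 0.1, 0.0),
    "bug":       (0.8, 0.2, 0.1),
    "fix":       (0.7, 0.3, 0.0),
    "skater":    (0.0, 0.9, 0.4),
}

def most_similar(token, k=2):
    """Return the k tokens whose vectors are closest to the query token."""
    others = [(w, cosine(vectors[token], v)) for w, v in vectors.items() if w != token]
    return [w for w, _ in sorted(others, key=lambda x: -x[1])[:k]]
```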
arXiv:1812.04863v1 [cs.CY] 12 Dec 2018

1 https://dictionary.cambridge.org/us/dictionary/english/emoji
2 https://www.tjvantoll.com/2016/06/10/emoji-and-coding/
3 http://www.emojicode.org/
4 https://github.com
5 In general, an issue opens up a discussion thread for bugs, enhancements, questions, etc., while a pull request is a request for the project owner to "pull" the source code change from contributors and merge it into the code repository. See https://help.github.com/categories/collaborating-with-issues-and-pull-requests/, retrieved in August 2018.
6 README helps developers to communicate expectations for their projects, see https://help.github.com/articles/about-readmes/, retrieved in August 2018.
7 https://help.github.com/articles/about-stars/, retrieved in August 2018
8 https://github.com/blog/2119-add-reactions-to-pull-requests-issues-and-comments, retrieved in August 2018.
9 http://emojitracker.com, retrieved on September 3, 2017.
10 http://blog.oxforddictionaries.com/2015/11/word-of-the-year-2015-emoji
11 Eating your own dog food, or dogfooding, is a slang term used to refer to a situation in which an organization uses its own product. This can be a way for an organization to test its products in real-world usage. Hence dogfooding can act as quality control, and eventually a kind of testimonial advertising. See https://en.wikipedia.org/wiki/Eating your own dog food, retrieved in August, 2018.
12 https://github.com/carloscuesta/gitmoji
13 http://unicode.org/emoji/charts/full-emoji-list.html
14 https://guides.github.com/features/mastering-markdown/
15 http://www.nltk.org
16 http://sempub.taln.upf.edu/tw/emojis/
17 http://liwc.wpengine.com
[1] L. Vidal, G. Ares, and S. R. Jaeger, "Use of emoticon and emoji in tweets for food-related emotional expression," Food Quality and Preference, vol. 49, pp. 119-128, 2016.
[2] Y. Chen, J. Yuan, Q. You, and J. Luo, "Twitter sentiment analysis via bi-sense emoji embedding and attention-based lstm," arXiv preprint arXiv:1807.07961, 2018.
[3] Z. Chen, X. Lu, W. Ai, H. Li, Q. Mei, and X. Liu, "Through a gender lens: Learning usage patterns of emojis from large-scale Android users," in Proceedings of the 27th Web Conference, 2018, pp. 763-772.
[4] D. Marengo, F. Giannotta, and M. Settanni, "Assessing personality using emoji: An exploratory study," Personality and Individual Differences, vol. 112, pp. 74-78, 2017.
[5] X. Lu, W. Ai, X. Liu, Q. Li, N. Wang, G. Huang, and Q. Mei, "Learning from the ubiquitous language: an empirical analysis of emoji usage of smartphone users," in Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2016, pp. 770-780.
[6] R. Siadati, P. Wernick, and V. Veneziano, "Modelling politics in requirements engineering: Adding emoji to existing notations," arXiv preprint arXiv:1703.06101, 2017.
[7] M. R. Islam and M. F. Zibran, "Leveraging automated sentiment analysis in software engineering," in Proceedings of the 14th International Conference on Mining Software Repositories, 2017, pp. 203-214.
[8] N. Novielli, F. Calefato, and F. Lanubile, "The challenges of sentiment detection in the social programmer ecosystem," in International Workshop on Social Software Engineering, 2015, pp. 33-40.
[9] B. Lin, F. Zampetti, G. Bavota, M. Di Penta, M. Lanza, and R. Oliveto, "Sentiment analysis for software engineering: How far can we go?" in Proceedings of the 40th International Conference on Software Engineering, 2018, pp. 94-104.
[10] N. Ljubesic and D. Fiser, "A global analysis of emoji usage," in Proceedings of the 10th Web as Corpus Workshop, 2016, pp. 82-89.
[11] W. Ai, X. Lu, X. Liu, N. Wang, G. Huang, and Q. Mei, "Untangling emoji popularity through semantic embeddings," in Proceedings of the 11th International AAAI Conference on Web and Social Media, 2017, pp. 2-11.
[12] F. Al Rashdi, "Forms and functions of emojis in WhatsApp interaction among Omanis," 2015.
[13] R. Zhou, J. Hentschel, and N. Kumar, "Goodbye text, hello emoji: Mobile communication on WeChat in China," in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 2017, pp. 748-759.
[14] F. Barbieri, F. Ronzano, and H. Saggion, "What does this emoji mean? A vector space skip-gram model for twitter emojis," in Language Resources and Evaluation Conference, 2016.
[15] P. K. Novak, J. Smailovic, B. Sluban, and I. Mozetic, "Sentiment of emojis," PloS One, vol. 10, no. 12, 2015.
[16] T. Hu, H. Guo, H. Sun, T. T. Nguyen, and J. Luo, "Spice up your chat: The intentions and sentiment effects of using emoji," in Proceedings of the 11th International AAAI Conference on Web and Social Media, 2017, pp. 101-111.
[17] H. Cramer, P. de Juan, and J. R. Tetreault, "Sender-intended functions of emojis in US messaging," in Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services, 2016, pp. 504-509.
[18] H. Pohl, C. Domin, and M. Rohs, "Beyond just text: semantic emoji similarity modeling to support expressive communication," ACM Transactions on Computer-Human Interaction, vol. 24, no. 1, pp. 6:1-6:42, 2017.
[19] V. Sinha, A. Lazar, and B. Sharif, "Analyzing developer sentiment in commit logs," in Proceedings of the 13th International Conference on Mining Software Repositories, 2016, pp. 520-523.
[20] W. N. Robinson, T. Deng, and Z. Qi, "Developer behavior and sentiment from data mining open source repositories," in 2016 49th Hawaii International Conference on System Sciences (HICSS), 2016, pp. 3729-3738.
[21] D. Pletea, B. Vasilescu, and A. Serebrenik, "Security and emotion: sentiment analysis of security discussions on GitHub," in Proceedings of the 11th Working Conference on Mining Software Repositories, 2014, pp. 348-351.
[22] R. Jongeling, S. Datta, and A. Serebrenik, "Choosing your weapons: On sentiment analysis tools for software engineering research," in 2015 IEEE International Conference on Software Maintenance and Evolution, 2015, pp. 531-535.
[23] T. Ahmed, A. Bosu, A. Iqbal, and S. Rahimi, "SentiCR: a customized sentiment analysis tool for code review interactions," in Proceedings of the 32nd IEEE/ACM International Conference on Automated Software Engineering, 2017, pp. 106-111.
[24] M. R. Islam and M. F. Zibran, "A comparison of dictionary building methods for sentiment analysis in software engineering text," in 2017 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, 2017, pp. 478-479.
[25] E. Guzman, D. Azócar, and Y. Li, "Sentiment analysis of commit comments in GitHub: an empirical study," in Proceedings of the 11th Working Conference on Mining Software Repositories, 2014, pp. 352-355.
[26] A.-I. Rousinopoulos, G. Robles, and J. M. González-Barahona, "Sentiment analysis of free/open source developers: preliminary findings from a case study," Revista Electronica de Sistemas de Informação, vol. 13, no. 2, p. 1, 2014.
[27] J. Zhao, L. Dong, J. Wu, and K. Xu, "MoodLens: an emoticon-based sentiment analysis system for Chinese Tweets," in Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2012, pp. 1528-1531.
[28] B. Felbo, A. Mislove, A. Søgaard, I. Rahwan, and S. Lehmann, "Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm," in Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2017, pp. 1615-1625.
[29] G. Gousios, "The GHTorrent dataset and tool suite," in Proceedings of the 10th Working Conference on Mining Software Repositories, 2013, pp. 233-236.
[30] "REST API v3." [Online]. Available: https://developer.github.com/v3/
[31] U. Pavalanathan and J. Eisenstein, "Emoticons vs. emojis on Twitter: A causal inference approach," arXiv preprint arXiv:1510.08480, 2015.
[32] F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bulletin, vol. 1, no. 6, pp. 80-83, 1945.
[33] K. Steinmetz, "TIME Exclusive: Here Are Rules of Using Emoji You Didn't Know You Were Following." [Online]. Available: http://time.com/2993508/emoji-rules-tweets/
[34] N. Na'aman, H. Provenza, and O. Montoya, "Varying Linguistic Purposes of Emoji in (Twitter) Context," in Proceedings of ACL 2017, Student Research Workshop, 2017, pp. 136-141.
[35] R. C. Jackson et al., "The pragmatics of repetition, emphasis and intensification," Ph.D. dissertation, University of Salford, 2016.
[36] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, "Distributed representations of words and phrases and their compositionality," in Advances in Neural Information Processing Systems, 2013, pp. 3111-3119.
[37] R. Jongeling, P. Sarkar, S. Datta, and A. Serebrenik, "On negative results when using sentiment analysis tools for software engineering research," Empirical Software Engineering, vol. 22, no. 5, pp. 2543-2584, 2017.
[38] M. Thelwall, K. Buckley, and G. Paltoglou, "Sentiment strength detection for the social web," Journal of the Association for Information Science & Technology, vol. 63, no. 1, pp. 163-173, 2011.
[39] D. Garcia, M. S. Zanetti, and F. Schweitzer, "The role of emotions in contributors activity: A case study on the Gentoo community," in Third International Conference on Cloud and Green Computing, 2013, pp. 410-417.
[40] E. Hatfield, J. T. Cacioppo, and R. L. Rapson, "Emotional contagion," Current Directions in Psychological Science, vol. 2, no. 3, pp. 96-100, 1993.
[41] A. Sutcliffe, "Designing for user engagement: Aesthetic and attractive user interfaces," Synthesis Lectures on Human-Centered Informatics, vol. 2, no. 1, pp. 1-55, 2009.
[42] J. J. Gumperz, Language and Social Identity. Cambridge University Press, 1982, vol. 2.
|
[
"https://github.com/blog/2119-add-reactions-to-pull-requests-issues-andcomments,",
"https://github.com/carloscuesta/gitmoji"
] |
[
"MODIFIED ELLIPTIC GAMMA FUNCTIONS AND 6d SUPERCONFORMAL INDICES",
"MODIFIED ELLIPTIC GAMMA FUNCTIONS AND 6d SUPERCONFORMAL INDICES"
] |
[
"Vyacheslav P Spiridonov "
] |
[] |
[] |
We construct a modified double elliptic gamma function which is well defined when one of the base parameters lies on the unit circle. A model consisting of 6d hypermultiplets coupled to a gauge field theory living on a 4d defect is proposed whose superconformal index uses the double elliptic gamma function and obeys W (E 7 )-group symmetry.
|
10.1007/s11005-013-0678-6
|
[
"https://arxiv.org/pdf/1211.2703v3.pdf"
] | 119,093,708 |
1211.2703
|
3835ceae2961989f0877910893da0a4b4bc690ed
|
MODIFIED ELLIPTIC GAMMA FUNCTIONS AND 6d SUPERCONFORMAL INDICES
29 Nov 2013
Vyacheslav P Spiridonov
MODIFIED ELLIPTIC GAMMA FUNCTIONS AND 6d SUPERCONFORMAL INDICES
29 Nov 2013
We construct a modified double elliptic gamma function which is well defined when one of the base parameters lies on the unit circle. A model consisting of 6d hypermultiplets coupled to a gauge field theory living on a 4d defect is proposed whose superconformal index uses the double elliptic gamma function and obeys W (E 7 )-group symmetry.
Introduction
Six dimensional superconformal field theories currently form an active research field (see, e.g., [1] and references therein). As claimed by Moore [1], these theories should form a gold mine for experts in special functions as a source of amazing identities, which is just one of many important potential mathematical outputs from them. This statement sounds curious and the author agrees with it. Indeed, a principally new class of special functions called elliptic hypergeometric integrals has been discovered in [2]. It came as a big surprise to mathematicians since it was tacitly assumed that q-hypergeometric functions form the top level special functions of hypergeometric type with nice exact formulas [3]. Some particular examples of such integrals were interpreted as wave functions or normalizations of wave functions in specific elliptic multiparticle quantum mechanical systems [2]. Recently it was shown by Dolan and Osborn [4] that certain elliptic hypergeometric integrals coincide with superconformal indices of four-dimensional gauge field theories and corresponding identities prove Seiberg dualities (electro-magnetic, strong-weak, or mirror symmetry dualities) in the topological sector. Further detailed investigation of this relationship was performed in many papers among which we mention only a small fraction [5,6,7,8].
The theory of elliptic hypergeometric functions is nowadays a rich mathematical subject with many beautiful new constructions [9]. One of its key ingredients is the elliptic gamma function related to the Barnes multiple gamma function of order three, Γ_3(u; ω_1, ω_2, ω_3) [10] (see the Appendix for a definition of Γ_m(u; ω)). The plain and q-hypergeometric functions are directly related to the Barnes gamma functions of order one, Γ_1(u; ω_1), which is proportional to the standard Euler gamma function Γ(u/ω_1), and of order two, Γ_2(u; ω_1, ω_2), respectively. Multiple infinite q-products with several bases naturally emerge in considerations of superconformal indices for higher dimensional theories [11,12]. In particular, the double elliptic gamma function related to Γ_4(u; ω) describes the topological string partition function [12,13,14] and 6d superconformal indices [15,16,17] (the latter was noticed also by F. Dolan, G. Vartanov and the author in an unpublished consideration). In [18] the triple elliptic gamma function (related to Γ_5(u; ω)) emerged in the description of partition functions of solvable 2d statistical mechanics models identical with superconformal indices of some 4d quiver gauge theories.
Shortly after the discovery of elliptic beta integrals the author posed a natural question: is there a higher order generalization of elliptic hypergeometric integrals to Barnes multiple gamma functions Γ m (u; ω) with m > 3, obeying exact formulas similar to the ones found in [2] ? Until now this question has not been resolved, though the author believes it has a positive answer. Perhaps 6d superconformal theories provide an appropriate framework for approaching this problem through the systematic investigation of identities for corresponding indices.
In this note we discuss double elliptic gamma functions and related 6d superconformal indices. A modified double elliptic gamma function is introduced for which one of the base parameters can lie on the unit circle. The W (E 7 )-symmetry transformation for an elliptic hypergeometric integral established in [2] and [19] is written in a novel form and its analogue involving the modified double elliptic gamma function is derived. We speculate also on the structure of a theory of N = (1, 0) 6d hypermultiplets coupled to an N = 1 4d defect (or a similar coupled 5d/3d theory) with the exact W (E 7 )-invariant superconformal index (or the partition function). This consideration is inspired by analogous 4d/2d coupled systems of [20] and a 5d/4d boundary theory with W (E 7 )-invariant index of [8].
Modified elliptic gamma functions
Let us take four incommensurate quasiperiods ω_k ∈ C (i.e., Σ_{k=1}^4 n_k ω_k ≠ 0 for any n_k ∈ Z not all equal to zero). Using their ratios we form six bases

q = e^{2πi ω_1/ω_2}, p = e^{2πi ω_3/ω_2}, r = e^{2πi ω_3/ω_1}, s = e^{2πi ω_4/ω_1}, t = e^{2πi ω_4/ω_2}, w = e^{2πi ω_3/ω_4}, (1)

and corresponding particular modular transforms

q̃ = e^{−2πi ω_2/ω_1}, p̃ = e^{−2πi ω_2/ω_3}, r̃ = e^{−2πi ω_1/ω_3}, s̃ = e^{−2πi ω_1/ω_4}, t̃ = e^{−2πi ω_2/ω_4}, w̃ = e^{−2πi ω_4/ω_3}. (2)
The bases p, q, r and p̃, q̃, r̃ coincide with those used in [2,9]. In increasing order of complexity we define the following infinite products, all of which are well defined only when the bases q, ..., w have modulus less than 1. Denote by

(z; q_1, ..., q_m) = ∏_{k_1,...,k_m=0}^∞ (1 − z q_1^{k_1} ··· q_m^{k_m}), z ∈ C,

the standard infinite q-product and by θ(z; p) = (z; p)(pz^{−1}; p) a theta function obeying the properties θ(pz; p) = θ(z^{−1}; p) = −z^{−1} θ(z; p). The standard (first order) elliptic gamma function has the form

Γ(z; p, q) = ∏_{i,j=0}^∞ (1 − z^{−1} p^{i+1} q^{j+1})/(1 − z p^i q^j), z ∈ C*,

and the double (i.e., second order) elliptic gamma function is

Γ(z; p, q, t) = ∏_{i,j,k=0}^∞ (1 − z^{−1} p^{i+1} q^{j+1} t^{k+1})(1 − z p^i q^j t^k), z ∈ C*.
Both Γ(z; p, q) and Γ(z; p, q, t) are symmetric in their bases. For the standard elliptic gamma function one has the difference equations

Γ(qz; p, q) = θ(z; p) Γ(z; p, q), Γ(pz; p, q) = θ(z; q) Γ(z; p, q).

For the second order function Γ(z; p, q, t) one has

Γ(qz; p, q, t)/Γ(z; p, q, t) = Γ(z; p, t), Γ(pz; p, q, t)/Γ(z; p, q, t) = Γ(z; q, t), Γ(tz; p, q, t)/Γ(z; p, q, t) = Γ(z; p, q).

The inversion relations have the form

Γ(z, pqz^{−1}; p, q) = 1, Γ(pqtz; p, q, t) = Γ(z^{−1}; p, q, t).
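As a quick numerical sanity check of these definitions (an illustration added here, not part of the original text), one can truncate the infinite products and verify the difference and inversion relations; the truncation order M and the sample values of p, q, t, z below are arbitrary choices.

```python
import itertools

M = 18  # truncation order of the infinite products (bases are small, so the tail is negligible)

def qprod(z, *bases):
    """Truncated (z; q_1, ..., q_m) = prod_{k_1,...,k_m >= 0} (1 - z q_1^{k_1} ... q_m^{k_m})."""
    val = 1.0 + 0j
    for ks in itertools.product(range(M), repeat=len(bases)):
        f = z
        for b, k in zip(bases, ks):
            f *= b ** k
        val *= 1 - f
    return val

def theta(z, p):
    """theta(z; p) = (z; p)(p z^{-1}; p)."""
    return qprod(z, p) * qprod(p / z, p)

def egamma(z, p, q):
    """Standard elliptic gamma function Gamma(z; p, q)."""
    return qprod(p * q / z, p, q) / qprod(z, p, q)

def egamma2(z, p, q, t):
    """Double elliptic gamma function Gamma(z; p, q, t)."""
    return qprod(p * q * t / z, p, q, t) * qprod(z, p, q, t)

p, q, t = 0.11 + 0.05j, 0.07 - 0.04j, 0.09 + 0.02j
z = 0.8 + 0.3j

# residuals of the difference and inversion relations (should all be ~ 0)
r_diff1 = abs(egamma(q * z, p, q) - theta(z, p) * egamma(z, p, q))
r_diff2 = abs(egamma2(q * z, p, q, t) / egamma2(z, p, q, t) - egamma(z, p, t))
r_inv1 = abs(egamma(z, p, q) * egamma(p * q / z, p, q) - 1)
r_inv2 = abs(egamma2(p * q * t * z, p, q, t) - egamma2(1 / z, p, q, t))
```

For such small bases the truncated products reproduce all four relations to essentially machine precision.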
In [2] the following modified elliptic gamma function was defined:

G(u; ω_1, ω_2, ω_3) := Γ(e^{2πi u/ω_2}; p, q) Γ(r e^{−2πi u/ω_1}; r, q̃) = Γ(e^{2πi u/ω_2}; p, q)/Γ(q̃ e^{2πi u/ω_1}; r, q̃). (3)

It satisfies three linear difference equations of the first order:

G(u + ω_1; ω) = θ(e^{2πi u/ω_2}; p) G(u; ω), (4)
G(u + ω_2; ω) = θ(e^{2πi u/ω_1}; r) G(u; ω), (5)
G(u + ω_3; ω) = e^{−πi B_{2,2}(u;ω)} G(u; ω), (6)

where B_{2,2}(u; ω) is the diagonal Bernoulli polynomial of order two,

B_{2,2}(u; ω) = u²/(ω_1 ω_2) − u/ω_1 − u/ω_2 + ω_1/(6ω_2) + ω_2/(6ω_1) + 1/2.

In (6) the exponential coefficient emerged through the following well-known SL(2, Z) modular transformation property of theta functions:

θ(e^{−2πi u/ω_1}; e^{−2πi ω_2/ω_1}) = e^{πi B_{2,2}(u;ω)} θ(e^{2πi u/ω_2}; e^{2πi ω_1/ω_2}). (7)
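The modular law (7) can be checked numerically with truncated theta products (an illustrative sketch, not part of the original text; the sample quasiperiods are arbitrary, chosen so that both bases have modulus below 1).

```python
import cmath

M = 40  # truncation order; both bases are very small for the chosen quasiperiods

def qprod1(z, q):
    val = 1.0 + 0j
    for k in range(M):
        val *= 1 - z * q ** k
    return val

def theta(z, p):
    """theta(z; p) = (z; p)(p z^{-1}; p)."""
    return qprod1(z, p) * qprod1(p / z, p)

def e(x):
    """e(x) = exp(2 pi i x)."""
    return cmath.exp(2j * cmath.pi * x)

w1, w2 = 0.31 + 0.72j, 1.0   # Im(w1/w2) > 0, so both bases below have modulus < 1
u = 0.23 + 0.11j

B22 = (u * u / (w1 * w2) - u / w1 - u / w2
       + w1 / (6 * w2) + w2 / (6 * w1) + 0.5)

lhs = theta(e(-u / w1), e(-w2 / w1))   # theta with the modular transformed base
rhs = cmath.exp(1j * cmath.pi * B22) * theta(e(u / w2), e(w1 / w2))
residual = abs(lhs - rhs) / abs(rhs)
```

The residual vanishes up to truncation error, in agreement with (7).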
One has the reflection equation G(a; ω) G(ω_1 + ω_2 + ω_3 − a; ω) = 1. We shall use below the following shorthand notation:

G(a ± b; ω) := G(a + b, a − b; ω) := G(a + b; ω) G(a − b; ω).
To prove that the function (3) is well defined for |q| = 1 we consider another function,

G̃(u; ω_1, ω_2, ω_3) = e^{−(πi/3) B_{3,3}(u;ω)} Γ(e^{−2πi u/ω_3}; r̃, p̃), (8)

where B_{3,3}(u; ω) is the diagonal Bernoulli polynomial of order three,

B_{3,3}(u + ½ Σ_{n=1}^3 ω_n; ω) = u(u² − ¼ Σ_{k=1}^3 ω_k²)/(ω_1 ω_2 ω_3).

Obviously, one has the symmetry G̃(u; ω_1, ω_2, ω_3) = G̃(u; ω_2, ω_1, ω_3). Using the relation

B_{3,3}(u + ω_3; ω_1, ω_2, ω_3) − B_{3,3}(u; ω_1, ω_2, ω_3) = 3 B_{2,2}(u; ω_1, ω_2),

it is not difficult to check that G̃(u; ω) satisfies the same three equations (4)-(6) and the normalization condition

G̃(½ Σ_{k=1}^3 ω_k; ω_1, ω_2, ω_3) = G(½ Σ_{k=1}^3 ω_k; ω_1, ω_2, ω_3) = 1.
Therefore, by the Jacobi theorem, one obtains the equality

G̃(u; ω_1, ω_2, ω_3) = G(u; ω_1, ω_2, ω_3), (9)

corresponding to one of the SL(3, Z) modular group transformation laws for the elliptic gamma function [21]. The crucial property of G(u; ω) is that, unlike Γ(z; p, q), it remains a well-defined meromorphic function of u even for ω_1/ω_2 > 0 (i.e., when |q| = 1, with the conditions |p|, |r| < 1 remaining obligatory). This fact is evident from the second representation (8) of G(u; ω).
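The equality (9) can be tested numerically from the two product representations (3) and (8) (an added illustration; the quasiperiods below are arbitrary sample values in the admissible domain, and the truncation order M is an assumption).

```python
import cmath, itertools

M = 20  # truncation order of the double products

def qprod(z, *bases):
    val = 1.0 + 0j
    for ks in itertools.product(range(M), repeat=len(bases)):
        f = z
        for b, k in zip(bases, ks):
            f *= b ** k
        val *= 1 - f
    return val

def egamma(z, p, q):
    """Standard elliptic gamma function Gamma(z; p, q)."""
    return qprod(p * q / z, p, q) / qprod(z, p, q)

def e(x):
    return cmath.exp(2j * cmath.pi * x)

w1, w2, w3 = 0.4 + 0.5j, 1.0, -0.3 + 0.9j
u = 0.27 + 0.13j

p, q, r = e(w3 / w2), e(w1 / w2), e(w3 / w1)
qt, pt, rt = e(-w2 / w1), e(-w2 / w3), e(-w1 / w3)   # the modular transformed bases of (2)

# G(u; w) from the first equality in (3)
G = egamma(e(u / w2), p, q) * egamma(r * e(-u / w1), r, qt)

# G~(u; w) from (8), with B_{3,3} in the closed form given above
S = w1 + w2 + w3
v = u - S / 2
B33 = v * (v * v - (w1**2 + w2**2 + w3**2) / 4) / (w1 * w2 * w3)
Gt = cmath.exp(-1j * cmath.pi / 3 * B33) * egamma(e(-u / w3), rt, pt)

residual = abs(G - Gt) / abs(G)
```

Both representations agree to truncation accuracy, illustrating the SL(3, Z) law (9).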
Take the limit ω_3 → ∞ in such a way that Im(ω_3/ω_1), Im(ω_3/ω_2) → +∞ (i.e., p, r → 0). Then

lim_{p,r→0} G(u; ω_1, ω_2, ω_3) = γ(u; ω_1, ω_2) = (e^{2πi u/ω_1} q̃; q̃)/(e^{2πi u/ω_2}; q). (10)

This is a modified q-gamma function known under many other names (double sine, hyperbolic gamma function, or quantum dilogarithm; see Appendix A in [18] for the interconnections between these functions). For Re(ω_1), Re(ω_2) > 0 and 0 < Re(u) < Re(ω_1 + ω_2) it has the integral representation

γ(u; ω_1, ω_2) = exp( −∫_{R+i0} e^{ux}/((1 − e^{ω_1 x})(1 − e^{ω_2 x})) dx/x ), (11)

which shows that γ(u; ω_1, ω_2) is meromorphic even for ω_1/ω_2 > 0, when |q| = 1 and the infinite product representation (10) is not applicable. Euler's gamma function Γ(u) can be defined as a special solution of the functional equation f(u + 1) = u f(u). q-Gamma functions with q = e^{2πi ω_1/ω_2} can be defined as special solutions of the equation f(u + ω_1) = (1 − e^{2πi u/ω_2}) f(u) (in particular, the functions (11) and 1/(e^{2πi u/ω_2}; q) satisfy this equation). Analogously, the elliptic gamma functions of order one are defined as special solutions of the key equation (4), which does not assume any restriction on the parameter q. Its particular solutions Γ(e^{2πi u/ω_2}; p, q) and 1/Γ(q^{−1} e^{2πi u/ω_2}; p, q^{−1}) exist only for |q| < 1 or |q| > 1, respectively, and G(u; ω) covers the remaining domain |q| = 1.
Define now the modified double elliptic gamma function

G(u; ω_1, ..., ω_4) := Γ(e^{2πi u/ω_2}; q, p, t)/Γ(q̃ e^{2πi u/ω_1}; q̃, r, s). (12)

This is a meromorphic function of u ∈ C satisfying the inversion relation

G(u + Σ_{k=1}^4 ω_k; ω_1, ..., ω_4) = G(−u; ω_1, ..., ω_4)

and four linear difference equations of the first order:

G(u + ω_1; ω) = Γ(e^{2πi u/ω_2}; p, t) G(u; ω), (13)
G(u + ω_2; ω) = Γ(e^{2πi u/ω_1}; r, s) G(u; ω), (14)
G(u + ω_3; ω) = [Γ(e^{2πi u/ω_2}; q, t)/Γ(q̃ e^{2πi u/ω_1}; q̃, s)] G(u; ω), (15)
G(u + ω_4; ω) = [Γ(e^{2πi u/ω_2}; p, q)/Γ(q̃ e^{2πi u/ω_1}; q̃, r)] G(u; ω). (16)

Note that the coefficient in the latter equation is simply G(u; ω_1, ω_2, ω_3). Note also that in the limit ω_4 → ∞, taken in such a way that s, t → 0, we have

lim_{s,t→0} G(u; ω_1, ..., ω_4) = ∏_{j,k=0}^∞ (1 − e^{2πi u/ω_2} p^j q^k)/(1 − e^{2πi u/ω_1} r^j q̃^{k+1}),

which is only "a half" of 1/G(u; ω_1, ω_2, ω_3).
Let us demonstrate that G(u; ω_1, ..., ω_4) remains a meromorphic function of u for ω_1/ω_2 > 0 (when |q| = 1). First, we find another solution of the above set of equations. Consider the following function:

G̃(u; ω_1, ..., ω_4) = e^{−(πi/12) B_{4,4}(u;ω)} Γ(e^{−2πi u/ω_3}; p̃, r̃, w̃)/Γ(w e^{−2πi u/ω_4}; s̃, t̃, w), (17)

where B_{4,4}(u; ω) is the diagonal multiple Bernoulli polynomial of order four, whose compact form we have found from (38) as

B_{4,4}(u; ω_1, ..., ω_4) = (1/(ω_1 ω_2 ω_3 ω_4)) { [ (u − ½ Σ_{k=1}^4 ω_k)² − ¼ Σ_{k=1}^4 ω_k² ]² − (1/30) Σ_{k=1}^4 ω_k⁴ − (1/12) Σ_{1≤j<k≤4} ω_j² ω_k² }. (18)
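The closed forms of B_{2,2}, B_{3,3} and B_{4,4} can be cross-checked numerically against the shift relations B_{3,3}(u + ω_3; ω_1, ω_2, ω_3) − B_{3,3}(u; ω_1, ω_2, ω_3) = 3 B_{2,2}(u; ω_1, ω_2) and B_{4,4}(u + ω_4; ω) − B_{4,4}(u; ω) = 4 B_{3,3}(u; ω_1, ω_2, ω_3), which are used below (an added illustration; the sample values of u and ω_k are arbitrary).

```python
import itertools

def B22(u, w1, w2):
    # diagonal Bernoulli polynomial of order two
    return (u * u / (w1 * w2) - u / w1 - u / w2
            + w1 / (6 * w2) + w2 / (6 * w1) + 0.5)

def B33(u, w1, w2, w3):
    # diagonal Bernoulli polynomial of order three, from its shifted closed form
    v = u - (w1 + w2 + w3) / 2
    return v * (v * v - (w1**2 + w2**2 + w3**2) / 4) / (w1 * w2 * w3)

def B44(u, w1, w2, w3, w4):
    # diagonal Bernoulli polynomial of order four, eq. (18)
    ws = (w1, w2, w3, w4)
    v = u - sum(ws) / 2
    S2 = sum(x * x for x in ws)
    S4 = sum(x**4 for x in ws)
    P = sum((a * b) ** 2 for a, b in itertools.combinations(ws, 2))
    return ((v * v - S2 / 4) ** 2 - S4 / 30 - P / 12) / (w1 * w2 * w3 * w4)

w1, w2, w3, w4 = 0.4 + 0.5j, 1.0, -0.3 + 0.9j, 0.1 + 1.3j
u = 0.21 + 0.17j

d3 = B33(u + w3, w1, w2, w3) - B33(u, w1, w2, w3) - 3 * B22(u, w1, w2)
d4 = B44(u + w4, w1, w2, w3, w4) - B44(u, w1, w2, w3, w4) - 4 * B33(u, w1, w2, w3)
```

Both differences d3 and d4 vanish identically (they are polynomial identities), confirming the normalization of the closed forms.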
The function G̃(u; ω) satisfies four linear difference equations of the first order:

G̃(u + ω_1; ω) = e^{−(πi/3) B_{3,3}(u;ω_2,ω_3,ω_4)} [Γ(e^{−2πi u/ω_3}; p̃, w̃)/Γ(w e^{−2πi u/ω_4}; t̃, w)] G̃(u; ω), (19)
G̃(u + ω_2; ω) = e^{−(πi/3) B_{3,3}(u;ω_1,ω_3,ω_4)} [Γ(e^{−2πi u/ω_3}; r̃, w̃)/Γ(w e^{−2πi u/ω_4}; s̃, w)] G̃(u; ω), (20)
G̃(u + ω_3; ω) = e^{−(πi/3) B_{3,3}(u;ω_1,ω_2,ω_4)} Γ(e^{−2πi u/ω_4}; s̃, t̃) G̃(u; ω), (21)
G̃(u + ω_4; ω) = e^{−(πi/3) B_{3,3}(u;ω_1,ω_2,ω_3)} Γ(e^{−2πi u/ω_3}; p̃, r̃) G̃(u; ω), (22)
following from the previously given formulas and the relation B_{4,4}(u + ω_4; ω) − B_{4,4}(u; ω) = 4 B_{3,3}(u; ω_1, ω_2, ω_3). But this is precisely the set of equations (13)-(16). Indeed, the equality of coefficients in (16) and (22) is nothing else than the relation (9). The equality of coefficients in (13) and (19), or in (15) and (21), follows from (9) after the replacement ω_1 → ω_4 or ω_3 → ω_4, respectively. The equality of coefficients in (14) and (20) follows after the replacement ω_1 → ω_4 in (9) and the subsequent substitution ω_2 → ω_1. Since the ω_j are incommensurate, we conclude that the ratio G̃(u; ω)/G(u; ω) is a constant independent of u. However, there is no distinguished value of u for which the equality of normalizations of G and G̃ becomes obvious. The fact that

G̃(u; ω_1, ω_2, ω_3, ω_4) = G(u; ω_1, ω_2, ω_3, ω_4) (23)
follows from an SL(4, Z) modular group transformation law for the double elliptic gamma function established as Corollary 9 in [22]. So, in the same way as in the lower order cases, special solutions of the key equation (13) define double elliptic gamma functions: the functions Γ(e^{2πi u/ω_2}; p, q, t) and 1/Γ(q^{−1} e^{2πi u/ω_2}; p, q^{−1}, t) satisfy it for |q| < 1 and |q| > 1, respectively, and G(u; ω_1, ..., ω_4) covers the domain |q| = 1. The latter function is defined for |p|, |r|, |s|, |t|, |w| < 1 and |q| ≤ 1 (more precisely, for the union of the upper half plane Im(ω_1/ω_2) > 0 and the half line ω_1/ω_2 > 0); for other admissible domains of values of the bases it will take a different form.
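As an added illustration, the modular equality (23) can be confronted numerically with truncated triple products, using the closed form (18) for B_{4,4}; the quasiperiods below are arbitrary sample values chosen so that all the bases needed in (12) and (17) have modulus less than 1, and the truncation order M is an assumption.

```python
import cmath, itertools

M = 22  # truncation order of the triple products

def qprod(z, *bases):
    val = 1.0 + 0j
    for ks in itertools.product(range(M), repeat=len(bases)):
        f = z
        for b, k in zip(bases, ks):
            f *= b ** k
        val *= 1 - f
    return val

def egamma2(z, a, b, c):
    """Double elliptic gamma function Gamma(z; a, b, c)."""
    return qprod(a * b * c / z, a, b, c) * qprod(z, a, b, c)

def e(x):
    return cmath.exp(2j * cmath.pi * x)

w1, w2, w3, w4 = 0.4 + 0.5j, 1.0, -0.3 + 0.9j, 0.1 + 1.3j
u = 0.21 + 0.17j

q, p, r = e(w1 / w2), e(w3 / w2), e(w3 / w1)
s, t, w = e(w4 / w1), e(w4 / w2), e(w3 / w4)
qt, pt, rt = e(-w2 / w1), e(-w2 / w3), e(-w1 / w3)
st, tt, wt = e(-w1 / w4), e(-w2 / w4), e(-w4 / w3)

# G(u; w_1,...,w_4) from (12)
G = egamma2(e(u / w2), q, p, t) / egamma2(qt * e(u / w1), qt, r, s)

# G~(u; w_1,...,w_4) from (17)-(18)
ws = (w1, w2, w3, w4)
v = u - sum(ws) / 2
S2 = sum(x * x for x in ws)
S4 = sum(x**4 for x in ws)
P = sum((a * b) ** 2 for a, b in itertools.combinations(ws, 2))
B44 = ((v * v - S2 / 4) ** 2 - S4 / 30 - P / 12) / (w1 * w2 * w3 * w4)
Gt = (cmath.exp(-1j * cmath.pi / 12 * B44)
      * egamma2(e(-u / w3), pt, rt, wt)
      / egamma2(w * e(-u / w4), st, tt, w))

residual = abs(G - Gt) / abs(G)
```

The two representations agree to truncation accuracy, in accordance with (23).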
3. A 6d/4d theory with W(E_7)-invariant superconformal index
Superconformal indices are defined as [23,24]

I(y) = Tr [ (−1)^F ∏_{k=1}^m y_k^{G_k} e^{−βH} ],

where F is the fermion number and the G_k form the maximal Cartan subalgebra preserving a distinguished supersymmetry relation involving one supercharge and its superconformal partner:

{Q, S} = 2H, Q² = S² = 0, [Q, G_k] = [S, G_k] = 0.
The trace is effectively taken over the space of BPS states formed by zero modes of the operator H, which eliminates the dependence on the chemical potential β. Computing supersymmetric indices of nonconformal theories on curved backgrounds that flow to certain superconformal field theories, one gets superconformal indices of the theories with the same superconformal fixed points [25]. Computation of such indices via localization techniques was initiated in [26]. We shall not discuss the general structure of these indices in 4d field theories since they were described in many previous papers; see, e.g., [6,7]. Take a particular N = 1 4d theory in the space-time S³ × S¹ with SP(2N) gauge group and the flavor group SU(8) × U(1). In addition to the vector superfield in the adjoint representation of SP(2N), take 8 chiral matter fields forming the fundamental representation of SP(2N) with R-charge 1/2 and U(1)-charge (1 − N)/4. Take also one antisymmetric SP(2N) tensor field of zero R- and SU(8)-charges and unit U(1)-charge. For N = 1, the global group U(1) decouples and the tensor field is absent.
The superconformal index of this theory is described by the following elliptic hypergeometric integral [5]:

I(y_1, ..., y_8; t; p, q) = [(p; p)^N (q; q)^N / (2^N N!)] Γ(t; p, q)^{N−1} ∫_{T^N} ∏_{1≤j<k≤N} [Γ(t z_j^{±1} z_k^{±1}; p, q)/Γ(z_j^{±1} z_k^{±1}; p, q)] × ∏_{j=1}^N [ ∏_{i=1}^8 Γ(t^{(1−N)/4} (pq)^{1/4} y_i z_j^{±1}; p, q) / Γ(z_j^{±2}; p, q) ] dz_j/(2πi z_j). (24)
Here the y_i are fugacities for the SU(8) group satisfying the constraint ∏_{i=1}^8 y_i = 1. In addition to the obvious S_8-symmetry in the variables y_i, the function (24) obeys the following hidden symmetry transformation extending the S_8-group to W(E_7), the Weyl group of the exceptional root system E_7:

I(y_1, ..., y_8; t; p, q) = ∏_{m=0}^{N−1} ∏_{1≤i<j≤4} Γ(t^{m+(1−N)/2} (pq)^{1/2} y_i y_j; p, q) ∏_{5≤i<j≤8} Γ(t^{m+(1−N)/2} (pq)^{1/2} y_i y_j; p, q) × I(ŷ_1, ..., ŷ_8; t; p, q),

where ŷ_k = y_k/√Y, ŷ_{k+4} = √Y y_{k+4}, k = 1, ..., 4, and Y = y_1 y_2 y_3 y_4.
Equivalently, one can write Y^{−1} = y_5 y_6 y_7 y_8. For N = 1 this relation was established by the author [2], and it was extended to arbitrary N by Rains [19]. Consider the following ratio involving double elliptic gamma functions:

I_{6d/4d}(y_1, ..., y_8; t; p, q) = I(y_1, ..., y_8; t; p, q) / ∏_{1≤j<k≤8} Γ(t^{(N+1)/2} (pq)^{1/2} y_j y_k; p, q, t). (25)

First, we show that this function is W(E_7)-group invariant. Indeed, explicit substitution yields

I_{6d/4d}(ŷ_1, ..., ŷ_8; t; p, q) = I_{6d/4d}(y_1, ..., y_8; t; p, q), (26)
which follows from the relation

∏_{1≤j<k≤8} Γ(t^{(N+1)/2} (pq)^{1/2} y_j y_k; p, q, t)/Γ(t^{(N+1)/2} (pq)^{1/2} ŷ_j ŷ_k; p, q, t)
= ∏_{1≤j<k≤4, 5≤j<k≤8} Γ(t^{(N+1)/2} (pq)^{1/2} y_j y_k; p, q, t)/Γ(t^{(−N+1)/2} (pq)^{1/2} y_j y_k; p, q, t)
= ∏_{m=0}^{N−1} ∏_{1≤j<k≤4, 5≤j<k≤8} Γ(t^{m+(1−N)/2} (pq)^{1/2} y_j y_k; p, q).
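The final equality above telescopes via the difference equation Γ(tz; p, q, t) = Γ(z; p, q) Γ(z; p, q, t) applied N times; a numerical illustration for N = 3 (an added sketch with arbitrary sample parameter values; odd N keeps the powers of t integer, avoiding branch choices).

```python
import itertools

M = 20  # truncation order of the products
N = 3

def qprod(z, *bases):
    val = 1.0 + 0j
    for ks in itertools.product(range(M), repeat=len(bases)):
        f = z
        for b, k in zip(bases, ks):
            f *= b ** k
        val *= 1 - f
    return val

def egamma(z, p, q):
    return qprod(p * q / z, p, q) / qprod(z, p, q)

def egamma2(z, p, q, t):
    return qprod(p * q * t / z, p, q, t) * qprod(z, p, q, t)

p, q, t = 0.12 + 0.04j, 0.08 - 0.05j, 0.2 + 0.1j
x = 0.7 + 0.2j   # stands for (pq)^{1/2} y_j y_k

lhs = egamma2(t ** ((N + 1) // 2) * x, p, q, t) / egamma2(t ** ((1 - N) // 2) * x, p, q, t)
rhs = 1.0 + 0j
for m in range(N):
    rhs *= egamma(t ** (m + (1 - N) // 2) * x, p, q)
residual = abs(lhs - rhs) / abs(rhs)
```

The ratio of the two double elliptic gamma functions indeed collapses to a finite product of ordinary elliptic gamma functions.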
In [19] the W(E_7)-transformation was also written in the form (26), but for a function different from (25). Now we would like to interpret the equality (26) as a symmetry of the superconformal index of some 6d field theory with a 4d defect (we use the terminology of [20], where similar mixed 4d/2d theories were constructed). The main inspiration for this comes from a beautiful 5d/4d field theory interpretation of the W(E_7)-symmetry of the elliptic analogue of the Euler-Gauss hypergeometric function given by Dimofte and Gaiotto in [8].
The 6d index for N = (1, 0) theories on the S⁵ × S¹ manifold is

I(y; p, q, t) = Tr [ (−1)^F p^{C_1} q^{C_2} t^{C_3} ∏_k y_k^{G_k} ],

where the G_k are the flavor group maximal torus generators and C_{1,2,3} are Cartan generators of the space-time symmetry group. In the notation of Imamura [16], p^{C_1} q^{C_2} t^{C_3} = x^{j_1 + 3R/2} y_3^{j_2} y_8^{j_3}, where R is the Cartan generator of the SU(2)_R subalgebra, j_1 is the generator of U(1)_V, and j_2, j_3 are Cartan generators of SU(3)_V, with U(1)_V × SU(3)_V being a subgroup of the SO(6) isometry group of S⁵. Perturbative contributions to the index are described by the double elliptic gamma functions [16,17] with bases

p = x y_3/y_8, q = x/(y_3 y_8), t = x y_8².

One can permute p, q, and t, but we stick to this choice, leading to

C_{1,2} = (1/3)(j_1 − j_3/2) ± j_2/2 + R/2, C_3 = (1/3)(j_1 + j_3) + R/2. (27)
E.g., for a U(1) flavor group hypermultiplet one has the index

I_hyp(y; p, q, t) = 1/Γ(√(pqt) y; p, q, t) = exp( Σ_{n=1}^∞ i_hyp(y^n; p^n, q^n, t^n)/n ), (28)

i_hyp(y; p, q, t) = √(pqt) (y + y^{−1}) / ((1 − p)(1 − q)(1 − t)).
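The plethystic form (28) can be checked directly: fixing the square root s_0 = (pqt)^{1/2} once, the substitution in the n-th term amounts to s_0 → s_0^n (a numerical sketch added here; the sample values of p, q, t, y and the series cutoff are arbitrary).

```python
import cmath, itertools

M = 20      # truncation order of the triple product
NMAX = 200  # cutoff of the plethystic sum

def qprod(z, *bases):
    val = 1.0 + 0j
    for ks in itertools.product(range(M), repeat=len(bases)):
        f = z
        for b, k in zip(bases, ks):
            f *= b ** k
        val *= 1 - f
    return val

def egamma2(z, p, q, t):
    return qprod(p * q * t / z, p, q, t) * qprod(z, p, q, t)

p, q, t = 0.15 + 0.03j, 0.12 - 0.04j, 0.18 + 0.05j
y = 0.9 + 0.2j
s0 = cmath.sqrt(p * q * t)   # fixed branch of (pqt)^{1/2}

total = 0j
for n in range(1, NMAX):
    # single-particle index with y -> y^n, p -> p^n, ..., (pqt)^{1/2} -> s0^n
    i_hyp = s0**n * (y**n + y**(-n)) / ((1 - p**n) * (1 - q**n) * (1 - t**n))
    total += i_hyp / n

lhs = cmath.exp(total)
rhs = 1 / egamma2(s0 * y, p, q, t)
residual = abs(lhs - rhs) / abs(rhs)
```

The exponentiated single-particle sum reproduces 1/Γ(√(pqt) y; p, q, t) to truncation accuracy.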
For the SU(2) gauge group vector superfield one obtains

I_vec(z; p, q, t) = κ Γ(z^{±2}; p, q, t)/((1 − z²)(1 − z^{−2})) = exp( Σ_{n=1}^∞ i_vec(z^n; p^n, q^n, t^n)/n ), (29)

i_vec(z; p, q, t) = [ 1 − (1 + pqt)/((1 − p)(1 − q)(1 − t)) ] χ_{adj, SU(2)}(z),

κ = lim_{x→1} Γ(x; p, q, t)/(1 − x) = (p; p)(q; q)(t; t)(pq; p, q)(pt; p, t)(qt; q, t)(pqt; p, q, t)².
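The two expressions for κ can be compared numerically: evaluating Γ(x; p, q, t)/(1 − x) at x = 1 amounts to dropping the single vanishing factor (1 − x p⁰ q⁰ t⁰) from the product (an added sketch; the sample bases are arbitrary).

```python
import itertools

M = 20  # truncation order of the infinite products

def qprod(z, *bases):
    val = 1.0 + 0j
    for ks in itertools.product(range(M), repeat=len(bases)):
        f = z
        for b, k in zip(bases, ks):
            f *= b ** k
        val *= 1 - f
    return val

p, q, t = 0.15 + 0.05j, 0.1 - 0.06j, 0.2 + 0.1j

# lim_{x->1} Gamma(x; p,q,t)/(1-x): the factor with (i,j,k) = (0,0,0) is removed
kappa = qprod(p * q * t, p, q, t)   # the (1 - x^{-1} p^{i+1} q^{j+1} t^{k+1}) half at x = 1
for i, j, k in itertools.product(range(M), repeat=3):
    if (i, j, k) != (0, 0, 0):
        kappa *= 1 - p**i * q**j * t**k

# the product formula for kappa stated in the text
rhs = (qprod(p, p) * qprod(q, q) * qprod(t, t)
       * qprod(p * q, p, q) * qprod(p * t, p, t) * qprod(q * t, q, t)
       * qprod(p * q * t, p, q, t) ** 2)
residual = abs(kappa - rhs) / abs(rhs)
```

The agreement follows from sorting the factors (1 − pᵃqᵇtᶜ) by which of the exponents a, b, c are nonzero.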
The multiplier κ appears naturally from the adjoint representation character χ_{adj, SU(2)}(z) = z² + z^{−2} + 1. One can incorporate into I_vec a piece of the Haar measure for SU(2) and thus cancel the terms (1 − z²)(1 − z^{−2}). Take now the 4d interacting gauge theory described above and assume that it lives on an S³ × S¹ manifold immersed into the chosen 6d space-time S⁵ × S¹ in the "corner" x_5 = x_6 = 0 of S⁵, the latter defined by the coordinate constraint Σ_{i=1}^6 x_i² = 1. This defect breaks half of the 6d supersymmetries, presumably preserving the supercharge used for defining the superconformal index. Associate the fugacities p and q with the isometries of the space S³ × S¹, which connects the corresponding 4d/6d Cartan generators as J_1 ∝ j_1 − j_3/2 and J_2 ∝ j_2. The abelian group associated with the generator C_3 in (27) is identified from the 4d theory point of view with the U(1) flavor group whose fugacity is t. Take now free 6d gauge invariant hypermultiplets forming the totally antisymmetric tensor of second rank T_A for the mentioned SU(8) flavor group and couple them to the chosen 4d defect.

The interaction superpotential that ties together the flavor symmetries and the rotation symmetry of the defect could be of the form W = Tr(qMq)|_{x_5=x_6=0}, where q are the 4d quark superfields and M is one of the two chiral fields in the 6d hypermultiplet. Here the trace contracts the gauge indices of the quarks with the symplectic form as well as the SU(8) flavor indices of the quarks and M. Since q has U(1)-charge (1 − N)/4, this results in an additional U(1)-charge of the hypermultiplets equal to N/2, after taking into account the rotation symmetries of M and the actual Lagrangian couplings obtained after integration of W over superspace (the author is indebted to D. Gaiotto for pointing out this possibility). This yields a 6d model with a 4d defect similar to the 4d/2d systems considered in [20] (in particular, see the toy model considered in Sect. 4.3 of [27]). As a result, the hypermultiplet index takes the form
I_{T_A}(y; p, q, t) = ∏_{1≤j<k≤8} 1/Γ(t^{N/2} √(pqt) y_j y_k; p, q, t), ∏_{k=1}^8 y_k = 1, (30)
which evidently coincides with the multiplier in (25). The "corner" defect 4d theory gives its own contribution to the superconformal index, described by the integral I(y_1, ..., y_8; t; p, q). The combined index has W(E_7)-symmetry, indicating that this theory may have an enhanced E_7 flavor group, provided there exists an appropriate point in the moduli space. This is a rough potential physical picture behind the relation (26), the detailed consideration of which lies beyond the scope of the present note. For N = 1 a simplification takes place, since the U(1) group decouples from the 4d defect. In this case I_{6d/4d} turns into the W(E_7)-invariant half-index of [8] in the limit t → 0, but this seems to be a formal coincidence, since the t-parameter should stay intact in the half-index for N > 1. As proposed in [2], one can build elliptic hypergeometric integrals using the modified elliptic gamma function. This is achieved by mere replacement of the Γ(e^{2πiu/ω_2}; p, q) functions by G(u; ω_1, ω_2, ω_3) and an appropriate change of the integration contour. Taking the limit ω_3 → ∞ such that p, r → 0, one obtains hyperbolic hypergeometric integrals expressed in terms of the hyperbolic gamma function γ^{(2)}(u; ω_1, ω_2) (see the Appendix). Using this procedure the integral (24) can be reduced [28,29] to the following expression:
I_h(x_1, ..., x_8; λ; ω_1, ω_2) = [γ^{(2)}(λ; ω_1, ω_2)^{N−1}/(2^N N!)] ∫_{−i∞}^{i∞} ∏_{1≤j<k≤N} [γ^{(2)}(λ ± u_j ± u_k; ω_1, ω_2)/γ^{(2)}(±u_j ± u_k; ω_1, ω_2)] ∏_{j=1}^N [ ∏_{k=1}^8 γ^{(2)}(μ_k ± u_j; ω_1, ω_2) / γ^{(2)}(±2u_j; ω_1, ω_2) ] ∏_{j=1}^N du_j/(i√(ω_1 ω_2)),

where the chemical potentials are related to the flavor fugacities as t = e^{2πiλ/ω_2} and y_k = e^{2πix_k/ω_2}, with

μ_k = x_k + (ω_1 + ω_2)/4 − (N − 1)λ/4, Σ_{k=1}^8 x_k = 0.

In terms of the μ_k the balancing condition reads

2(N − 1)λ + Σ_{k=1}^8 μ_k = 2(ω_1 + ω_2).
Define now a new function

I_{5d/3d}(x_1, ..., x_8; λ; ω_1, ω_2) = I_h(x_1, ..., x_8; λ; ω_1, ω_2) / ∏_{1≤j<k≤8} γ^{(3)}((N + 1)λ/2 + μ_j + μ_k; ω_1, ω_2, λ), (31)

where γ^{(3)}(u; ω_1, ω_2, λ) is the hyperbolic gamma function of third order (see the Appendix). Again, it is not difficult to see that this function is W(E_7)-invariant as a consequence of the known relations for the I_h integral:

I_{5d/3d}(x̂_1, ..., x̂_8; λ; ω_1, ω_2) = I_{5d/3d}(x_1, ..., x_8; λ; ω_1, ω_2), (32)

where

x̂_k = x_k − ½ Σ_{l=1}^4 x_l, x̂_{k+4} = x_{k+4} + ½ Σ_{l=1}^4 x_l, k = 1, ..., 4.
The integral (31) may be interpreted as the partition function of some 5d theory coupled to a 3d defect. Indeed, the contribution of 5d hypermultiplets to the partition function is determined by the 1/γ^{(3)} function [14,16], which indicates that our 5d theory has a field content similar to the one described earlier, with the defect S³ × S¹ replaced by the squashed three-sphere S³_b with b² = ω_1/ω_2. The transition from 4d indices to 3d partition functions of theories living on such manifolds was described in [29]. Note that the symmetry transformation (32) looks similar to the enhanced E_n global symmetry discussed in [30], asking for an investigation of potential relations between the corresponding theories.
Relevance of the modular group transformations
Let us discuss physical relevance of modular groups acting on the generalized gamma functions. Quasiperiods ω k are usually interpreted as squashing parameters and coupling constants. The generalized gamma functions are defined differently for different domains of these parameters related to each other by modular transformations usually playing the role of S-dualities.
The simplest example of the relevance of the SL(2, Z) modular group is given by the q-gamma function. It can be defined as a solution of the equation f(u + ω_1) = (1 − e^{2πiu/ω_2}) f(u). For |q| < 1 its solution 1/(e^{2πiu/ω_2}; q) defines the standard q-gamma function and serves as a building block of various partition functions. However, to cover the region |q| = 1, one needs an SL(2, Z) modular transformation [31] and has to define the modified q-gamma function (10), i.e., to use a ratio of modular transformed elementary partition functions.
Consider now the elliptic gamma function Γ(z; p, q) describing the superconformal index of a 4d chiral superfield. In order to define an analogue of this function for the region |q| = 1, the modified elliptic gamma function was proposed in [2] as the ratio of this index, with a U(1)-group fugacity parametrization z = e^{2πiu/ω_2} and superconformal group generator fugacities q = e^{2πiω_1/ω_2} and p = e^{2πiω_3/ω_2}, and the index with a different choice of squashing parameters, Γ(q̃ e^{2πiu/ω_1}; r, q̃). Surprisingly, this ratio again yields the chiral field index with yet another parametrization of fugacities, e^{−(πi/3) B_{3,3}(u;ω)} Γ(e^{−2πiu/ω_3}; r̃, p̃). The exponential cocycle factor spoils this interpretation and requires a physical explanation. As shown in [6], this SL(3, Z) group action on 4d superconformal indices describes the 't Hooft anomaly matching conditions as the conditions of cancellation of these cocycle contributions, described by a curious set of Diophantine equations. Therefore this modular group plays quite an important role in the formalism.
A similar picture at the level of the free 6d hypermultiplet index was described recently in [14] in relation to the topological string partition function. Namely, I_hyp(y; p, q, t) is proportional to the latter function and, as argued in [14], a particular combination of three SL(4, Z)-transformed versions of it should yield yet another similar partition function. This expectation is confirmed with the help of an SL(4, Z) modular group transformation for the double elliptic gamma function, which in our case is written as the equality (23).

However, unlike the G(u; ω_1, ω_2, ω_3) case, the elliptic hypergeometric integrals formed from G(u; ω_1, ..., ω_4) do not reduce to the integrals composed from Γ(z; p, q, t). Now the modular group simply maps them into similar integrals, up to the cocycle ∝ e^{−(πi/12) B_{4,4}} multiplying the integral kernels. Therefore one should not expect a cancellation of these factors from the integrals. Cancellation of even the gauge group chemical potentials is possible only under very strong restrictions; e.g., for the SU(2) gauge group it is possible only for N_f = 16, at the expense of an unusual quadratic restriction on the chemical potentials. Such exponentials have forms resembling the Casimir energy contributions to the indices [17]. Therefore it is necessary to better understand the general structure of full 6d superconformal indices before connecting SL(4, Z) modular group transformations to higher dimensional anomalies. Still, we can see an involvement of the B_{4,4} polynomial in the 4d anomaly matching conditions. Define a modified elliptic hypergeometric integral:
I_mod(x_1, ..., x_8; ω_1, ..., ω_4) = [(p; p)^N (r; r)^N / (2^N N!)] G(ω_4; ω_1, ω_2, ω_3)^{N−1} ∫_{[−ω_3/2, ω_3/2]^N} ∏_{1≤j<k≤N} [G(ω_4 ± u_j ± u_k; ω_1, ω_2, ω_3)/G(±u_j ± u_k; ω_1, ω_2, ω_3)] × ∏_{j=1}^N [ ∏_{i=1}^8 G(x_i − (N/4)ω_4 + ¼ Σ_{k=1}^4 ω_k ± u_j; ω_1, ω_2, ω_3) / G(±2u_j; ω_1, ω_2, ω_3) ] du_j/ω_3, (33)
which is obtained from (24) simply by the replacement of the Γ-functions by G-functions, using the exponential representation of the fugacities in terms of chemical potentials and passing to the integration over a cube. Note that this integral is well defined for |q| = 1. Introduce "the modified index"

I^mod_{6d/4d}(x_1, ..., x_8; ω_1, ..., ω_4) = I_mod(x_1, ..., x_8; ω_1, ..., ω_4) / ∏_{1≤j<k≤8} G((N/2)ω_4 + ½ Σ_{k=1}^4 ω_k + x_j + x_k; ω_1, ..., ω_4), (34)

containing the modified double elliptic gamma function. It is not difficult to check that this expression is also W(E_7)-invariant:

I^mod_{6d/4d}(x̂_1, ..., x̂_8; ω_1, ..., ω_4) = I^mod_{6d/4d}(x_1, ..., x_8; ω_1, ..., ω_4). (35)
Now one can replace the G-functions by their modular transformed expressions G̃ containing exponentials of Bernoulli polynomials and check that the relation (35) boils down to an SL(3, Z) modular transformation of the previous relation (26). At the level of the integral (33) with the constraint 2(x_7 + x_8) = … , the fourth order polynomial B_{4,4}(u; ω) is effectively involved in these anomaly matchings as well.
The residue calculus for elliptic hypergeometric integrals was developed long ago; see [2,9] and references therein. It shows that, by shrinking the integration contour to zero, one can formally represent the integrals as sums of bilinear combinations of elliptic hypergeometric series with permuted base variables, which describes the factorization of superconformal indices into more elementary building blocks which in general are not defined in the limit p → 0 or q → 0. This analysis has led to the discovery of the notion of two-index biorthogonality and the elliptic modular doubling principle [2,9]. In [7] this residue calculus, applied to 4d N = 2 superconformal indices, was physically interpreted as a result of insertions of surface defects into the bulk theory.

One can investigate the structure of residues for the modified elliptic hypergeometric integrals/indices and come to a similar factorization in terms of different elliptic hypergeometric series. The latter series are related by an SL(3, Z) transformation and remain well defined in the limit p → 0, which leads to hyperbolic integrals. As a result, hyperbolic integrals are represented as combinations of products of two q-hypergeometric series related by an SL(2, Z) modular transformation (their bases are q and q̃) [2,9]. This factorization was used in [32] for computing partition functions in some 3d N = 2 theories appearing from the reduction of 4d N = 4 SYM theories. The principal difference between the 4d (elliptic) and 3d (hyperbolic) cases consists in the fact that in 3d this factorization of sums of residues into modular blocks has a rigorous meaning because of the convergence of the corresponding infinite series for |q| < 1, whereas in 4d such series do not converge for generic values of the bases p and q, and the factorization of indices has, in general, a formal meaning. It is not difficult to develop the residue calculus for 6d indices and find triple sums of residues. However, the corresponding sums cannot factorize because there are no triply periodic functions. This makes the 4d (elliptic) case rather unique and raises interest in 6d indices as qualitatively different objects.
The author is indebted to T. Dimofte, D. Gaiotto, Y. Imamura, A. Klemm, J. Manschot, G. W. Moore, and G. S. Vartanov for valuable discussions and to the referee for constructive remarks. This work is partially supported by RFBR grant no. 11-01-00980 and NRU HSE scientific fund grant no. 12-09-0064.
where ω(j) = (ω_1, ..., ω_{j−1}, ω_{j+1}, ..., ω_m) and ζ_0(s, u; ω) = u^{−s}. The multiple gamma function is defined by Barnes as Γ_m(u; ω) = exp(∂ζ_m(s, u; ω)/∂s)|_{s=0}.
As a consequence of (36) it satisfies m finite-difference equations Γ_m(u + ω_j; ω) = Γ_m(u; ω) / Γ_{m−1}(u; ω(j)), j = 1, ..., m, where Γ_0(u; ω) := u^{−1}.
The multiple sine function is defined as S_m(u; ω) = Γ_m(Σ_{k=1}^m ω_k − u; ω)^{(−1)^m} / Γ_m(u; ω), and the hyperbolic gamma function is γ^{(m)}(u; ω) = S_m(u; ω)^{(−1)^{m−1}}.
One has the equations γ^{(m)}(u + ω_j; ω) = γ^{(m−1)}(u; ω(j)) γ^{(m)}(u; ω), j = 1, ..., m.
The standard elliptic gamma function can be written as a special ratio of four Γ_3(u; ω)-functions, and the double elliptic gamma function is given by a product of four Γ_4(u; ω)-functions [9]. One can derive the integral representation [22] γ^{(m)}(u; ω) = exp(−PV ∫_0^∞ (e^{ux} / ∏_{k=1}^m (e^{ω_k x} − 1)) dx/x),
In particular, one has the following relation with the modified q-gamma function γ(u; ω_1, ω_2): γ^{(2)}(u; ω_1, ω_2) = e^{−(πi/2) B_{2,2}(u; ω_1, ω_2)} γ(u; ω_1, ω_2).
Collapsing the integrals to sums of residues, one can derive infinite product representations for γ^{(m)}(u; ω) [22]. Particular inversion relations have the form γ^{(2)}(ω_1 + ω_2 + u; ω_1, ω_2) γ^{(2)}(−u; ω_1, ω_2) = 1 and γ^{(3)}(ω_1 + ω_2 + ω_3 + u; ω_1, ω_2, ω_3) = γ^{(3)}(−u; ω_1, ω_2, ω_3).
We use the conventions Γ(a, b; ...) := Γ(a; ...) Γ(b; ...), Γ(az^{±1}; ...) := Γ(az; ...) Γ(az^{−1}; ...), Γ(az^{±1}y^{±1}; ...) := Γ(azy; ...) Γ(az^{−1}y; ...) Γ(azy^{−1}; ...) Γ(az^{−1}y^{−1}; ...).
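These shorthand conventions compress products of standard elliptic gamma functions. As a quick numerical sanity check (a sketch, not from the paper: the truncation order N and the sample values of p, q, z are arbitrary), one can implement the standard double-product definition Γ(z; p, q) = ∏_{j,k≥0} (1 − p^{j+1}q^{k+1}/z)/(1 − z p^j q^k) with a finite cutoff and verify the reflection identity Γ(z; p, q) Γ(pq/z; p, q) = 1, the elliptic counterpart of the inversion relations quoted above.

```python
def elliptic_gamma(z, p, q, N=30):
    """Truncated double product for the standard elliptic gamma function."""
    val = 1.0 + 0j
    for j in range(N):
        for k in range(N):
            val *= (1 - p ** (j + 1) * q ** (k + 1) / z) / (1 - z * p ** j * q ** k)
    return val

p, q = 0.15, 0.1
z = 0.5 + 0.2j
# Reflection identity: the (j, k) factors of the two gammas cancel pairwise,
# so even the truncated product equals 1 up to rounding.
prod = elliptic_gamma(z, p, q) * elliptic_gamma(p * q / z, p, q)
print(abs(prod - 1))
```

Because the truncated factors cancel pairwise, the check is exact up to floating-point rounding, independently of N.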
∏_{i=1}^8 y_i = 1, t is the fugacity for the group U(1), and p and q are fugacities for the superconformal group generator combinations R/2 + J_1 ± J_2, where R is the R-charge and J_{1,2} are Cartan generators of SO(4) rotations. Nontrivial contributions to the index come only from the states with H = E − 2J_1 − 3R/2 = 0, where E is the energy. In terms of the variables t_i = t y_i we have the balancing condition t^{2N−2} ∏_{i=1}^8 t_i = (pq)^2. The constraints |t|, |t_i| < 1 are needed for the choice of the integration contours as positively oriented unit circles T. For N = 1 the integral I(y_1, ..., y_8; p, q) is nothing else than an elliptic analogue of the Euler-Gauss hypergeometric function introduced in [2].
I_{6d/4d}(y_1, ..., y_8; t; p, q) := I(y_1, ..., y_8; t; p, q) / ∏_{1≤j<k≤8} Γ(y_j y_k; p, q, t)
Appendix A. Barnes multiple gamma function

The Barnes multiple zeta function ζ_m(s, u; ω) [10] is originally defined by the m-fold series ζ_m(s, u; ω) = Σ_{n_1,...,n_m=0}^∞ (u + Ω)^{−s}, Ω = n_1 ω_1 + ... + n_m ω_m, where s, u ∈ C. It converges for Re(s) > m, provided all ω_j lie in one half-plane formed by a line passing through zero (then there are no accumulation points of the Ω-lattice in compact domains). This zeta function satisfies the equations ζ_m(s, u + ω_j; ω) − ζ_m(s, u; ω) = −ζ_{m−1}(s, u; ω(j)), j = 1, ..., m.
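The finite-difference equations above can be checked directly from the defining series by truncating the lattice sum (a rough numerical sketch; the cutoff, tolerance, and parameter values are illustrative, and s is taken real with Re(s) > m so the series converges):

```python
from itertools import product

def barnes_zeta(s, u, omega, N=200):
    """Truncated Barnes multiple zeta function: sum over n in {0, ..., N-1}^m."""
    m = len(omega)
    return sum((u + sum(ni * wi for ni, wi in zip(n, omega))) ** (-s)
               for n in product(range(N), repeat=m))

s, u, w = 4.0, 0.7, (1.0, 1.3)
# zeta_2(s, u + w_1; w) - zeta_2(s, u; w) = -zeta_1(s, u; (w_2,))
lhs = barnes_zeta(s, u + w[0], w) - barnes_zeta(s, u, w)
rhs = -barnes_zeta(s, u, (w[1],))
print(abs(lhs - rhs))  # limited only by the lattice truncation
```

The identity holds termwise (the shifted sum telescopes in n_1), so the residual is just the truncated tail of the lattice.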
where Re(ω_k) > 0 and 0 < Re(u) < Re(Σ_{k=1}^m ω_k), and the B_{m,n} are multiple Bernoulli polynomials defined by the generating function x^m e^{ux} / ∏_{k=1}^m (e^{ω_k x} − 1) = Σ_{n=0}^∞ B_{m,n}(u; ω_1, ..., ω_m) x^n/n!.
For the value Σ_{k=1}^3 ω_k + (N − 1)ω_4 this was done already in [28]. As mentioned above, the condition of cancellation of Bernoulli polynomial coefficients in the integration variables and external parameters describes the 't Hooft anomaly matching conditions. Therefore the fourth
G. W. Moore, Applications of the six-dimensional (2,0) theories to Physical Mathematics, Lecture Notes for Felix Klein Lectures (Bonn, October 1-11, 2012).
V. P. Spiridonov, On the elliptic beta function, Russian Math. Surveys 56 (1) (2001), 185-186; Theta hypergeometric integrals, Algebra i Analiz 15 (6) (2003), 161-215, math.CA/0303205; Elliptic hypergeometric functions, Habilitation Thesis (Dubna, 2004).
G. E. Andrews, R. Askey, and R. Roy, Special Functions, Encyclopedia of Math. Appl. 71, Cambridge Univ. Press, Cambridge, 1999.
F. A. Dolan and H. Osborn, Applications of the superconformal index for protected operators and q-hypergeometric identities to N = 1 dual theories, Nucl. Phys. B818 (2009), 137-178.
V. P. Spiridonov and G. S. Vartanov, Superconformal indices for N = 1 theories with multiple duals, Nucl. Phys. B824 (2010), 192-216.
V. P. Spiridonov and G. S. Vartanov, Elliptic hypergeometry of supersymmetric dualities, Commun. Math. Phys. 304 (2011), 797-874; Elliptic hypergeometric integrals and 't Hooft anomaly matching conditions, JHEP 06 (2012), 016.
D. Gaiotto, L. Rastelli, and S. S. Razamat, Bootstrapping the superconformal index with surface defects, arXiv:1207.3577 [hep-th].
T. Dimofte and D. Gaiotto, An E_7 surprise, JHEP 10 (2012), 129.
V. P. Spiridonov, Essays on the theory of elliptic hypergeometric functions, Russian Math. Surveys 63 (3) (2008), 405-472; arXiv:0805.3135 [math.CA].
E. W. Barnes, On the theory of the multiple gamma function, Trans. Cambridge Phil. Soc. 19 (1904), 374-425.
J. Bhattacharya, S. Bhattacharyya, S. Minwalla, and S. Raju, Indices for superconformal field theories in 3, 5 and 6 dimensions, JHEP 0802 (2008), 064.
N. A. Nekrasov, Instanton partition functions and M-theory, Japan. J. Math. 4 (2009), 63-93.
A. Iqbal, C. Kozcaz, and T. Sohail, Periodic Schur process, cylindric partitions and N = 2* theory, arXiv:0903.0961 [hep-th].
G. Lockhart and C. Vafa, Superconformal partition functions and non-perturbative topological strings, arXiv:1210.5909 [hep-th].
H.-C. Kim and S. Kim, M5-branes from gauge theories on the 5-sphere, JHEP 05 (2013), 144.
Y. Imamura, Perturbative partition function for squashed S^5, arXiv:1210.6308 [hep-th].
H.-C. Kim, J. Kim, and S. Kim, Instantons on the 5-sphere and M5-branes, arXiv:1211.0144 [hep-th].
V. P. Spiridonov, Elliptic beta integrals and solvable models of statistical mechanics, Contemp. Math. 563 (2012), 181-211, arXiv:1011.3798 [hep-th].
E. M. Rains, Transformations of elliptic hypergeometric integrals, Ann. Math. 171 (2010), 169-243.
D. Gaiotto, G. W. Moore, and A. Neitzke, Wall-crossing in coupled 2d-4d systems, arXiv:1103.2598 [hep-th].
G. Felder and A. Varchenko, The elliptic gamma function and SL(3, Z) × Z^3, Adv. Math. 156 (2000), 44-76.
A. Narukawa, The modular properties and the integral representations of the multiple elliptic gamma functions, Adv. Math. 189 (2005), 247-267.
J. Kinney, J. M. Maldacena, S. Minwalla, and S. Raju, An index for 4 dimensional super conformal theories, Commun. Math. Phys. 275 (2007), 209-254.
C. Römelsberger, Counting chiral primaries in N = 1, d = 4 superconformal field theories, Nucl. Phys. B747 (2006), 329-353.
G. Festuccia and N. Seiberg, Rigid supersymmetric theories in curved superspace, JHEP 1106 (2011), 114.
G. W. Moore, N. Nekrasov, and S. Shatashvili, D-particle bound states and generalized instantons, Commun. Math. Phys. 209 (2000), 77-95.
D. Gaiotto, S. Gukov, and N. Seiberg, Surface defects and resolvents, arXiv:1307.2578 [hep-th].
J. F. van Diejen and V. P. Spiridonov, Unit circle elliptic beta integrals, Ramanujan J. 10 (2005), 187-204.
F. A. H. Dolan, V. P. Spiridonov, and G. S. Vartanov, From 4d superconformal indices to 3d partition functions, Phys. Lett. B704 (2011), 234-241.
H.-C. Kim, S.-S. Kim, and K. Lee, 5-dim superconformal index with enhanced E_n global symmetry, JHEP 10 (2012), 142.
L. D. Faddeev, Discrete Heisenberg-Weyl group and modular group, Lett. Math. Phys. 34 (1995), 249-254.
V. P. Spiridonov and G. S. Vartanov, Superconformal indices of N = 4 SYM field theories, Lett. Math. Phys. 100 (2012), 97-118, arXiv:1005.4196 [hep-th].
Bogoliubov Laboratory of Theoretical Physics, JINR, Dubna, Moscow reg. 141980, Russia and Max-Planck-Institut für Mathematik, Vivatsgasse 7, 53111, Bonn, Germany
Microbubble formation and pinch-off scaling exponent in flow-focusing devices

Wim van Hoeve,¹ Benjamin Dollet,² Michel Versluis,¹ and Detlef Lohse¹

¹Physics of Fluids, Faculty of Science and Technology, MESA+ Institute for Nanotechnology, University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands
²Institut de Physique de Rennes, UMR UR1-CNRS 6251, Université de Rennes 1, Campus de Beaulieu, Bâtiment 11A, F-35042 Rennes Cedex, France

(Dated: 25 February 2011)
arXiv:1102.5627 [physics.flu-dyn] (28 Feb 2011); DOI: 10.1063/1.3631323

We investigate the gas jet breakup and the resulting microbubble formation in a microfluidic flow-focusing device using ultra high-speed imaging at 1 million frames/s. In recent experiments [Dollet et al., Phys. Rev. Lett. 100, 034504 (2008)] it was found that in the final stage of the collapse the radius of the neck scales with time with a 1/3 power-law exponent, which suggested that gas inertia and the Bernoulli suction effect become important. Here, ultra high-speed imaging was used to capture the complete bubble contour and quantify the gas flow through the neck. It revealed that the resulting decrease in pressure, due to Bernoulli suction, is too low to account for an accelerated pinch-off. The high temporal resolution images enable us to approach the final moment of pinch-off to within 1 µs. We observe that the final moment of bubble pinch-off is characterized by a scaling exponent of 0.41 ± 0.01. This exponent is approximately 2/5, which can be derived based on the observation that during the collapse the neck becomes less slender, due to the exclusive driving through liquid inertia.
I. INTRODUCTION
Liquid droplet pinch-off in ambient air or gas bubble pinch-off in ambient liquid can mathematically be seen as a singularity, both in space and time. 1,2 The process that leads to such a singularity has been widely studied in recent years [3][4][5][6][7][8][9][10][11][12][13][14][15] and is of major importance in an increasing number of medical and industrial applications. Examples are the precise formation and deposition of droplets on a substrate using inkjet technology, 16 or the production of medical microbubbles used in targeted drug delivery. 17,18 For the pinch-off of liquid in gas, the dynamics close to pinch-off exhibit self-similar behavior, which implies that the local shape of the neck is not influenced by its initial conditions. The radius of the neck goes to zero following a universal scaling behavior r_0 ∝ τ^α, where τ represents the time remaining until pinch-off and α the power-law scaling exponent. 1 The scaling exponent α is a signature of the physical mechanisms that drive the pinch-off. The formation and pinch-off of a low-viscosity liquid droplet in air is described by a balance between surface tension and inertia, resulting in a 2/3 scaling exponent. [2][3][4]6,19,20 The inverted problem of the collapse of a gaseous thread in a liquid is, however, completely different. Initially, a simple power law was predicted based on a purely liquid-inertia-driven collapse, giving rise to a 1/2 scaling exponent. 7,21,22 However, many groups report power-law scaling exponents that are slightly larger than 1/2. 8,[11][12][13][14]23,24 In recent work of Eggers et al. 25 and Gekle et al. 15 it was demonstrated that a coupling between the radial and axial length scales of the neck 10 can explain these small variations in the scaling exponent. Based on a slender-body calculation it is found that α(τ) = 1/2 + (−16 ln τ)^{−1/2}, where α slowly asymptotes to 1/2 when approaching pinch-off.
In the work of Gordillo et al. 8,14 it has been shown that gas inertia, i.e. Bernoulli suction, plays an important role in bubble pinch-off. The increasing gas flow through the neck results in an accelerated collapse with α = 1/3. 14,26 It should be noted that the smaller the scaling exponent α, the more rapidly the radius of the neck diminishes at the instant of pinch-off, since the speed of collapse ṙ_0 ∝ ατ^{α−1}, where the overdot denotes the time derivative.
In the work of Dollet et al. 27 microbubble formation in a microfluidic flow-focusing device was investigated. A flow-focusing device comprises two co-flowing fluids, an inner gas and an outer liquid phase, that are focused into a narrow channel where bubble pinch-off occurs.
It was found that bubble formation in a square cross-sectional channel (W × H = 20 µm × 20 µm) showed a similar collapse behavior giving a 1/3 scaling exponent. In that paper it was suggested that this exponent reflects the influence of gas inertia. However, this scaling exponent could not be conclusively ascribed to Bernoulli suction, due to a lack of spatial and temporal resolution at the neck in the final stages of pinch-off.
In this work we study the bubble formation for extremely fast bubble pinch-off in a microfluidic flow-focusing channel of square cross-section, using ultra high-speed imaging at 1 Mfps. The complete spatial structure of the bubble, including its neck, was captured. This allowed us to investigate not only the effect of Bernoulli suction, but also the influence of the constituent radial and axial length scales of the neck.
Here we find that the ultimate stage of microbubble pinch-off is purely liquid-inertia driven. In our system, the neck becomes less slender when approaching pinch-off, giving rise to an exponent α = 2/5 over almost two decades, which differs from the case of bubble pinch-off in the bulk reported by Bergmann et al., 11 Thoroddsen et al., 13 and Gekle et al., 26 among others.
II. EXPERIMENTAL SETUP
The experimental setup is shown in Fig. 1a. The flow-focusing device is fabricated with a square cross-section channel geometry, with channel width W = 60 µm and height H = 59 µm, as depicted schematically in Fig. 1, to ensure that the collapse occurs in the radial 3D collapse regime only. 27 The device was produced using rapid prototyping techniques. 28 A homogeneous layer of negative photoresist (SU-8) is spin-coated on a silicon wafer. The thickness of the layer defines the channel height. A chrome mask (MESA + Institute for Nanotechnology, University of Twente, The Netherlands) is used in contact photolithography to imprint features with sizes down to 2 µm. After ultraviolet exposure a cross-linking reaction starts which rigidifies the photoresist that is exposed to the light. The photoresist that is not exposed is removed during development with isopropanol. What is left is a positive relief structure which can be used as a mold to imprint micron-sized channels in polydimethylsiloxane (PDMS) (Sylgard 184, Dow Corning). PDMS is a transparent polymer which is obtained by mixing two components, base and curing agent, in a 10:1 ratio in weight.
The mixture is poured on the mold and cured in a 65 °C oven for 1 hour. The PDMS slab with imprinted microchannels is removed from the mold and then holes are punched in the PDMS. The PDMS slab is oxygen plasma-bonded (Harrick Plasma, Model PDC-002, Ithaca, NY, USA) to a glass cover plate of 1 mm thickness to close the channels. Plasma bonding creates a non-reversible bond which can withstand pressures up to a few bars. 29,30 The oxygen plasma turns the PDMS channel walls temporarily hydrophilic, which enhances fluid flow and wetting of the channel walls. After closing the device, 1/16 inch outer diameter Teflon tubing is connected to the inlet channels, through which gas and liquid are supplied. To resolve the growth of the bubble and the extremely fast bubble pinch-off at the same time requires a high-speed camera that is capable of recording images at a high frame rate and at full resolution, so that the field of view is sufficient to capture the entire bubble profile at sufficiently high spatial resolution. These two criteria, i.e. a short interframe time (of the order of 1 µs) and a sufficiently large field of view, mean that a specialized ultra high-speed camera is required for this task. Hence, we use the Shimadzu ultra high-speed camera
III. RESULTS
A. Extracting the collapse curves
In Fig. 2 a time series of the formation of a microbubble is shown, where all images are background subtracted to improve the contrast. The first image (frame 1) shows the bubble almost completely blocking the narrow channel (cf. Fig. 1c). This restricts the outer liquid flow and the liquid starts to squeeze the gas in the radial direction forming a neck. The neck becomes smaller and smaller until final pinch-off, resulting in bubble detachment (frame 93).
The complete contour of the bubble is extracted from the recordings using image analysis algorithms in MATLAB (Mathworks Inc., Natick, MA, USA). For precise detection of the contour, the images were resampled and bandpass filtered in the Fourier domain to achieve sub-pixel accuracy. The schematic of the axisymmetric shape of the bubble, with the axis of symmetry along the z-axis, is given in Fig. 3.
From the ultra high-speed imaging results it can be found that this moment occurs between the last frame before actual pinch-off (frame 93 in Fig. 2) and the first frame after pinch-off (frame 94). We estimate the time of collapse with sub-interframe accuracy by assuming that the collapse exhibits a power-law behavior r_0 ∝ (t_c − t)^α, where the exponent α and the collapse time t_c are a priori unknown, similarly as was done in Bergmann et al. 31 From a best fit to the data we obtain t_c = 93.3 ± 1 µs, where the maximum systematic error is equal to the time between two frames. Note that the error in estimating t_c results in a deflection of the data points away from a straight line, bounded between the curves log r_0(τ ± 1 × 10^{−6} s)/R indicated by the gray area in Fig. 4c. This figure also suggests that two different stages during bubble formation exist: in the first stage of the collapse, all data was found to be well approximated by a power law r_0/R ∝ (τ/τ_cap)^α, with α = 0.29 ± 0.02.
In the final stage, when τ ≤ τ cap , a scaling with exponent α = 0.41 ± 0.01 is observed, spanning almost two decades.
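The sub-interframe estimate of t_c can be reproduced with a simple procedure: scan candidate collapse times and, for each, perform a linear fit of log r_0 against log(t_c − t), keeping the candidate with the smallest residual. A sketch on synthetic data (numpy is assumed; the noise level and time grid are illustrative, with the experimental values t_c = 93.3 µs and α = 0.41 used to generate the data):

```python
import numpy as np

def fit_collapse(t, r0, tc_grid):
    """For each candidate collapse time, fit log r0 vs log(tc - t); keep the best."""
    best = None
    for tc in tc_grid:
        tau = tc - t
        if np.any(tau <= 0):
            continue
        alpha, logC = np.polyfit(np.log(tau), np.log(r0), 1)
        resid = np.sum((np.log(r0) - (alpha * np.log(tau) + logC)) ** 2)
        if best is None or resid < best[0]:
            best = (resid, tc, alpha)
    return best[1], best[2]

rng = np.random.default_rng(0)
t = np.arange(0.0, 93.0, 1.0)                       # frame times in microseconds
r0 = 2.0 * (93.3 - t) ** 0.41 * (1 + 1e-4 * rng.standard_normal(t.size))
tc_hat, alpha_hat = fit_collapse(t, r0, np.arange(93.05, 94.5, 0.01))
print(tc_hat, alpha_hat)  # close to 93.3 and 0.41
```

The residual is sharply minimized near the true collapse time, which is what allows t_c to be pinned down to better than the interframe time.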
B. Liquid inertia driven pinch-off
Approaching the singularity at pinch-off (τ → 0), the relative importance of viscous forces, surface tension forces, and inertial forces is given by the Reynolds number, Weber number, and capillary number. The Reynolds number, a measure of the ratio of inertial forces to viscous forces, is expressed as
Re = ρ r_0 ṙ_0 / η,  (1)
with characteristic length scale r_0 and characteristic velocity equivalent to the radial velocity of the interface, ṙ_0. The relative importance of inertial forces with respect to surface tension forces is given by the Weber number
We = ρ r_0 ṙ_0² / γ.  (2)
The capillary number represents the relative importance of viscous forces to surface tension forces as
Ca = η ṙ_0 / γ.  (3)
If we now assume that r_0 ∝ τ^α, where the experimentally determined scaling exponent is close to α = 2/5, it follows that Re ∝ τ^{−1/5}, We ∝ τ^{−4/5}, and Ca ∝ τ^{−3/5}. This implies that Re, We, and Ca all diverge approaching the singularity. Accordingly, inertial forces must dominate both surface tension and viscous forces; hence the final stage of the collapse is purely liquid-inertia dominated. In Eggers et al. 25 it was shown that for a liquid-inertia-driven collapse both the radial and axial length scales of the neck are important. Hence, the time evolution of the shape of the neck is investigated by measuring its slenderness. The slenderness ratio λ is defined as the ratio of the axial radius of curvature to the circumferential radius of curvature of the neck; the larger the slenderness ratio, the more slender the neck. The axial radius of curvature is measured by locally fitting a circle with radius r_c to the contour of the neck, whereas the circumferential radius of curvature r_0 is equivalent to the minimum radius of the neck, see Fig. 3. In Fig. 5 the time evolution of the principal radii of curvature is plotted on a logarithmic scale for the final stage of the collapse. It is found that the axial radius of curvature exhibits a power-law behavior r_c ∝ τ^β, with β = 0.53. The circumferential radius of curvature scales as r_0 ∝ τ^α, with α = 0.41, as was shown before (cf. Fig. 4c).
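The exponent bookkeeping above follows directly from ṙ_0 ∝ τ^{α−1}; a small mechanical check using Python's fractions (the value α = 2/5 is the experimental one, the rest is arithmetic):

```python
from fractions import Fraction

alpha = Fraction(2, 5)          # r0 ~ tau^alpha, hence dr0/dt ~ tau^(alpha - 1)
r0, rdot = alpha, alpha - 1     # exponents of tau carried by r0 and its rate

Re = r0 + rdot                  # Re = rho * r0 * rdot / eta
We = r0 + 2 * rdot              # We = rho * r0 * rdot^2 / gamma
Ca = rdot                       # Ca = eta * rdot / gamma
print(Re, We, Ca)               # -1/5, -4/5, -3/5: all diverge as tau -> 0
```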
The axial radius of curvature is found to have the more rapidly diminishing exponent (β > α), which implies that the slenderness λ = r_c/r_0 ∝ τ^{β−α} → 0 for τ → 0. In other words, the neck profile becomes less slender approaching pinch-off; thus both the radial and the axial length scales are still important. This 3D character implies that the liquid flows spherically inward towards the collapsing neck. Thus, it might be anticipated that this 3D collapse can be approximately described using the Rayleigh-Plesset equation for spherical bubble collapse 22
r_0 r̈_0 + (3/2) ṙ_0² = (1/ρ)(p − 2γ/r_0),  (4)
with capillary pressure p. It should be noted that a necessary condition for this is that the neck should be much smaller than the channel dimensions (r_0 ≪ W, H). By substituting r_0 ∝ τ^α in the above equation and keeping the right-hand side constant, which assumes an inertia-dominated flow, it is found that necessarily α = 2/5, which agrees surprisingly well with our experimental findings of 0.41 ± 0.01.
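The 2/5 exponent can also be recovered numerically. Keeping only the inertial terms of Eq. (4) with a constant driving pressure (surface tension dropped, and units chosen so that the initial radius and the driving Δp/ρ are both 1), the energy integral of the equation gives ṙ_0² = (2/3)(r_0⁻³ − 1) for a bubble starting at rest. Integrating this and fitting the late-time slope yields α ≈ 2/5. A sketch, assuming numpy; the step control and fit window are illustrative:

```python
import numpy as np

def rayleigh_collapse(r_end=1e-4, eps=1e-3):
    """Integrate dr/dt = -sqrt((2/3)(r**-3 - 1)); each step changes r by ~eps*r."""
    t, r = 0.0, 1.0 - 1e-9
    ts, rs = [], []
    while r > r_end:
        v = np.sqrt((2.0 / 3.0) * (r ** -3 - 1.0))
        ts.append(t)
        rs.append(r)
        dt = eps * r / v
        r -= v * dt
        t += dt
    return np.array(ts), np.array(rs), t   # final t is (very nearly) the collapse time

ts, rs, tc = rayleigh_collapse()
sel = (rs < 1e-2) & (rs > 1e-3)            # deep in the asymptotic regime
alpha = np.polyfit(np.log(tc - ts[sel]), np.log(rs[sel]), 1)[0]
print(alpha)  # close to 2/5
```

Near collapse ṙ_0 ∝ r_0^{−3/2}, which integrates to r_0 ∝ τ^{2/5}; the numerical slope reproduces this.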
C. "Filling effect"
How can one account for the scaling r_0 ∝ τ^{0.29±0.02} for τ > τ_cap, i.e. at early times? At this initial stage of the collapse a thin layer of liquid with a thickness of several micrometers separates the bubble from the hydrophilic channel wall. 32 The liquid flow in such a confined channel can be described using Darcy's law for pressure-driven flow through porous media.
The volumetric flow rate of liquid that permeates into the neck region is
Q_in = −(kA/η) ∂p/∂z,  (5)
with k the permeability, A = WH − πr_0² the cross-sectional area of the thin liquid layer surrounding the bubble, and ∂p/∂z the pressure gradient.
The pressure distribution in the liquid is inhomogeneous, thus the bubble's surface does not have a constant curvature even though the gas pressure is practically uniform. The pressure gradient that drives the liquid flow can be derived from the capillary pressure
p = γ (1/r_0 − 1/r_c).  (6)
In the initial stage of the collapse, i.e. at the onset of neck formation, r_c > r_0. As a gross simplification, we approximate the neck as a radially collapsing cylinder, of length r_c much larger than its radius r_0. The capillary pressure is then p ≈ γ/r_0, therefore ∂p/∂z ≈ −γ r_0^{−2} ∂r_0/∂z. The volumetric gas flow rate that is pushed out of the neck region is
Q_out = −V̇_g ≈ −r_0 r_c ṙ_0,  (7)
with V_g ≈ r_c r_0² the volume occupied by the gas. The gas in the neck is replaced by the liquid. This is referred to as the "filling effect"; thus, from a balance between Eq. (5) and Eq. (7), we now get
r_0 r_c ṙ_0 ≈ (1/r_0²) ∂r_0/∂z ≈ 1/(r_0 r_c),  (8)
hence, assuming that r_c varies little in this initial stage of the collapse, r_0² ṙ_0 is roughly constant. It follows that the radius of the neck must scale as r_0 ∝ τ^α, with α = 1/3. This is in good agreement with the experimentally measured scaling exponent α ≈ 0.29 ± 0.02 for τ > τ_cap.
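The balance (8) thus reduces to r_0² dr_0/dτ ≈ const = c, i.e. d(r_0³)/dτ ≈ 3c, whose solution through the origin is r_0 = (3cτ)^{1/3}. A quick numerical check (a sketch assuming numpy; the constant c and time grid are arbitrary):

```python
import numpy as np

c = 0.7                                    # arbitrary illustrative constant
tau = np.linspace(0.01, 1.0, 5000)
r0 = (3.0 * c * tau) ** (1.0 / 3.0)        # candidate solution of r0^2 dr0/dtau = c

# the profile satisfies the balance up to discretization error ...
dr0 = np.gradient(r0, tau)
print(np.max(np.abs(r0 ** 2 * dr0 - c)))   # small

# ... and its log-log slope is the 1/3 scaling exponent
alpha = np.polyfit(np.log(tau), np.log(r0), 1)[0]
print(alpha)  # 1/3
```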
IV. DISCUSSION
In Gekle et al. 26 a supersonic air flow through the neck is visualized using smoke particles and it is reported that Bernoulli suction accelerates the collapse. An accelerated collapse due to Bernoulli suction is also reported by Gordillo et al., 8 giving rise to a 1/3 scaling exponent. It is extremely difficult to measure the gas velocity in a microfluidic flow-focusing device in a direct way. However, the camera's wide field of view (200 µm × 175 µm) enabled us to capture the contour of the expanding bubble in great detail and allows for an estimate of the gas velocity.
The volume of the bubble V_b, i.e. the gas volume downstream of the neck that is enclosed by the bubble contour, is calculated as follows:
V_b = ∫_a^b dz π r²(z),  (9)
with r(z) the profile of the bubble, a the axial coordinate of the location of the neck, and b the tip of the bubble (cf. Fig. 3). We plot the bubble volume V_b as a function of time until pinch-off in Fig. 6. The bubble's contour is indicated for four characteristic moments during the bubble formation process in panels (i-iv) of Fig. 6. In the initial stage the gaseous thread in front of the flow-focusing channel is forced to enter the channel and completely
The gas velocity through the neck u g is calculated as the volume flow rate Q g divided by the cross-sectional area of the neck (πr 2 0 ). In Fig. 7 we plot the gas velocity as a function of the time until pinch-off. Note that the gas velocity is low during almost the entire collapse process, i.e. |u g | < 0.5 m/s (see the inset in Fig. 7 of flow reversal and ρ g = 1.2 kg/m 3 the gas density.
We now compare this pressure drop with the capillary pressure in the neck. Just before pinch-off, the capillary pressure, as a consequence of surface tension forces acting on the curved interface, should be at a much higher pressure than the surrounding liquid. For an axisymmetric surface profile, with r = r(z), the capillary pressure, as a function of the axial coordinate z, is given by the Laplace equation
p(z) = γ [ 1/(r (1 + r′²)^{1/2}) − r″/(1 + r′²)^{3/2} ],  (10)
where the prime denotes the derivative with respect to z. 33 Note that at the location where the neck is thinnest, the first term equals the circumferential curvature (r_0^{−1}), whereas the second term represents the axial curvature (r_c^{−1}). In Fig. 8 the different contributions to the capillary pressure are plotted. The figure demonstrates that the pressure drop due to Bernoulli suction is marginal in comparison with the increasing capillary pressure approaching pinch-off. Hence, it can be concluded that Bernoulli suction, i.e. gas inertia, is irrelevant during the entire bubble formation process. It is also shown that the concave axial curvature counteracts the circumferential curvature, leading to a significant decrease in capillary pressure. This confirms that the axial length scale of the neck is important and gives the collapse a three-dimensional character.
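Eq. (10) is straightforward to evaluate on a sampled contour with finite differences; on a sphere both curvature terms must combine to 2γ/R, which makes a convenient test. A sketch assuming numpy (γ, R, the parametrization and tolerances are illustrative):

```python
import numpy as np

def capillary_pressure(z, r, gamma):
    """Eq. (10): Laplace pressure along an axisymmetric profile r(z)."""
    rp = np.gradient(r, z)                 # r'
    rpp = np.gradient(rp, z)               # r''
    return gamma * (1.0 / (r * np.sqrt(1.0 + rp ** 2))
                    - rpp / (1.0 + rp ** 2) ** 1.5)

# test on a sphere of radius R: p = 2 * gamma / R everywhere on the surface
gamma, R = 72e-3, 10e-6
theta = np.linspace(0.2, np.pi - 0.2, 4001)   # keep away from the poles (r -> 0)
z, r = R * np.cos(theta), R * np.sin(theta)
p = capillary_pressure(z, r, gamma)
err = np.max(np.abs(p[5:-5] - 2 * gamma / R)) / (2 * gamma / R)
print(err)  # small relative error from the finite differences
```

On a measured neck profile the same function separates the stabilizing circumferential term from the counteracting axial term discussed above.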
V. CONCLUSION
In conclusion, we visualized the complete microbubble formation and the extremely fast bubble pinch-off in a microscopically narrow flow-focusing channel of square cross-section (W × H = 60 µm × 60 µm), using ultra high-speed imaging. The camera's wide field of view enabled visualization of all the features of bubble formation, including the two principal radii of curvature of the bubble's neck. Recording was performed at 1 Mfps, thereby approaching the moment of pinch-off to within 1 µs. It was found that the neck's axial length scale decreases faster than the radial one, ensuring that the neck becomes less and less slender, collapsing spherically towards a point sink. We describe this collapse using the Rayleigh-Plesset equation for spherical bubble collapse, 22 and recover a 2/5 power-law exponent which is consistent with our experimental findings. The gas velocity through the neck is calculated from the growth rate of the bubble. Just before pinch-off the gas velocity accelerates up to −23 m/s, reducing the bubble's volume; however, this velocity is too low for Bernoulli suction to be the dominant effect. Thus, the final moment of microbubble pinch-off in a flow-focusing system is purely liquid-inertia driven.
VI. ACKNOWLEDGEMENT
We kindly acknowledge J. M. Gordillo for insightful discussions. This work was financially supported by the MicroNed technology program of the Dutch Ministry of Economic Affairs through its agency SenterNovem under grant Bsik-03029.5.
FIG. 1. (a) Schematic overview of the setup for the study of microbubble formation in microfluidic flow-focusing devices. A high-speed camera mounted to an inverted microscope is used to capture the final moment of microbubble pinch-off. Gas pressure was controlled by a pressure regulator connected to a sensor. The liquid flow rate was controlled by a high-precision syringe pump. (b) Schematic representation of a planar flow-focusing device with uniform channel height H = 59 µm and channel width W = 60 µm. (c) Snapshot of a high-speed recording. The outer liquid flow Q forces the inner gas flow Q g to enter a narrow channel (encircled by the dashed line) in which a microbubble is formed.
Nitrogen gas is controlled by a regulator (Omega, PRG101-25) connected to a pressure sensor (Omega, DPG1000B-30G). The gas supply pressure was 12 kPa. A 10% (w/w) solution of dishwashing liquid (Dreft, Procter & Gamble) in deionized water is flow-ratecontrolled using a high precision syringe pump (Harvard Apparatus, PHD 2000, Holliston, MA, USA). The liquid, with density ρ = 1000 kg/m 3 , surface tension γ = 35 mN/m, and viscosity η = 1 mPa·s, wets the channel walls. The liquid surfactant solution was supplied at a flow rate Q = 185 µl/min. The Reynolds number Re = ρQ R/ηW H ≈ 26, with nozzle radius R = W H/ (W + H) ≈ 30 µm, is low enough to guarantee that the flow is laminar. The bubble formation process is imaged using an inverted microscope (Nikon Instruments, Eclipse TE2000-U, Melville, NY, USA) equipped with an extra long working distance objective with a cover glass correction collar (Nikon Instruments, 60× Plan Fluor ELWD N.A. 0.70 W.D. 2.1-1.5 mm, Melville, NY, USA) and an additional 1.5× magnification lens. The system is operated in bright-field mode using high-intensity fiber illumination (Olympus, ILP-1, Zoeterwoude, The Netherlands).
An ultra high-speed camera (Shimadzu Corp., Hypervision HPV-1, Kyoto, Japan) is used to capture 100 consecutive images at a high temporal resolution of 1 Mfps (equivalent to an interframe time of 1 µs), with an exposure time of 0.5 µs, a field of view of 200 µm × 175 µm, and a spatial resolution of 0.68 µm/pixel.
FIG. 2. Time series showing the formation of a microbubble in a microfluidic flow-focusing device recorded at 1 Mfps. The frame number is indicated at the left of each frame. For reasons of clarity, the background, including the channel structure, is subtracted. A detailed image that corresponds to frame 86 is represented in Fig. 3a. The camera's field of view is indicated by the dashed line in Fig. 1c. The exposure time is 0.5 µs. The scale bar in the lower right corner denotes 50 µm.

FIG. 3. (a) Detailed image of the high-speed recording showing the formation of a microbubble, corresponding to frame 86 in Fig. 2. The scale bar denotes 25 µm. (b) System of coordinates for an axisymmetric bubble. The shape of the gas-liquid interface r(z) is described as a function of the axial coordinate z. The bubble's volume is the volume enclosed between a and b indicated on the z-axis. The gaseous thread forms a neck that is concave in shape, with r 0 and r c the circumferential and axial radius of curvature respectively.
In Fig. 4a a surface contour plot of the radius of the bubble r as a function of the axial coordinate z and the time remaining until pinch-off τ is shown. The minimum radius of the neck r 0 is indicated by the dashed line. In Fig. 4b we plot r 0 as a function of the time remaining until pinch-off τ = t c − t, with t the time and t c the collapse time, on a linear scale, whereas the collapse curve is represented on a logarithmic scale in Fig. 4c. The collapse time is defined as the moment when the neck reaches its critical radius r 0 = 0 and breaks.
FIG. 4. (a) Surface contour plot (false color) of the formation of a microbubble. The axisymmetric radius of the bubble is plotted as a function of the axial coordinate z and the time until pinch-off τ = t c − t. The dashed line indicates the minimum radius of the neck r 0 until final collapse and pinch-off at the origin. (b) The time evolution of the minimum radius of the neck for three different experiments under the same initial conditions. (c) The logarithm of the minimum radius of the neck r 0 normalized by the nozzle radius R = 30 µm as a function of the logarithm of the time until final pinch-off τ , normalized by the capillary time τ cap = 28 µs. The solid line represents the best fit to the data showing a 0.41 ± 0.01 slope. The dashed lines with slope 2/5 and slope 1/3 serve as a guide to the eye. The vertical dotted line marks the time closest to pinch-off measured in the work of Dollet et al. 27 The error in determining the collapse time t c is visualized as the gray area.
FIG. 5. The axial radius of curvature r c (squares) decreases faster compared to the circumferential radius of curvature r 0 (bullets). Hence the neck becomes less slender approaching pinch-off. Thus the slenderness ratio λ = r c /r 0 becomes smaller when approaching the pinch-off. The radii of curvature are normalized by the nozzle radius R = 30 µm. The time until pinch-off is normalized by the capillary time τ cap ≈ 28 µs.
FIG. 6. Volume of the bubble V b as a function of the time until pinch-off τ . The volume of the bubble was calculated by integration over its contour along the z-axis between the neck and the tip of the bubble (indicated a and b in Fig. 3). The insets (i-iv) depict the contours that enclose the bubble's volume corresponding to the marked data points in the graph. A second order polynomial fit is used to calculate ∂V b /∂τ (dashed line). The bubble reaches its maximum volume at τ rev (dashed-dotted line) and the gas flow direction reverses; consequently, the bubble shrinks during the final moments before pinch-off. Different symbols represent different experiments, giving an indication of the reproducibility.
FIG. 7. Gas velocity u g in the neck as a function of the time until collapse τ . The initial positive gas velocity reverses its flow direction and accelerates when approaching the pinch-off. At the moment of pinch-off a maximum velocity of −23 m/s is reached. In the inset an enlarged section of the graph for the data points encircled by the dashed line is represented, demonstrating the gas flow reversal. Again, different symbols/colors represent different individual experiments.
FIG. 8. Evolution of the capillary pressure and the Bernoulli pressure drop during the final moments of bubble pinch-off. The dashed line indicates the capillary pressure contribution due to the circumferential curvature (p = γ/r 0 , with r 0 ∝ τ 0.41 ); the dashed-dotted line shows the pressure contribution from the axial curvature (p = −γ/r c , with r c ∝ τ 0.53 ); the solid line represents the sum of both contributions. The bullets represent the capillary pressure obtained from the local shape of the neck (r(z)) extracted from the wide-field-of-view images and using Eq. (10). The increasing gas velocity through the neck causes a local pressure reduction in the neck, referred to as Bernoulli suction (p = −ρu 2 g /2), as indicated by the crosses. It can be seen that the capillary pressure clearly dominates during the entire bubble formation process.
1. J. Eggers. Nonlinear dynamics and breakup of free-surface flows. Rev. Mod. Phys., 69(3):865-930, 1997.
2. J. Eggers and E. Villermaux. Physics of liquid jets. Rep. Prog. Phys., 71(3):036601, 2008.
3. Y.-J. Chen and P. H. Steen. Dynamics of inviscid capillary breakup: collapse and pinchoff of a film bridge. J. Fluid Mech., 341:245-267, 1997.
4. R. F. Day, E. J. Hinch, and J. R. Lister. Self-similar capillary pinchoff of an inviscid fluid. Phys. Rev. Lett., 80(4):704-707, 1998.
5. A. M. Gañán-Calvo and J. M. Gordillo. Perfectly monodisperse microbubbling by capillary flow focusing. Phys. Rev. Lett., 87(27):274501, 2001.
6. D. Leppinen and J. R. Lister. Capillary pinch-off in inviscid fluids. Phys. Fluids, 15(2):568-578, 2003.
7. J. C. Burton, R. Waldrep, and P. Taborek. Scaling and instabilities in bubble pinch-off. Phys. Rev. Lett., 94(18):184502, 2005.
8. J. M. Gordillo, A. Sevilla, J. Rodríguez-Rodríguez, and C. Martínez-Bazán. Axisymmetric bubble pinch-off at high Reynolds numbers. Phys. Rev. Lett., 95(19):194501, 2005.
9. A. M. Gañán-Calvo, M. A. Herrada, and P. Garstecki. Bubbling in unbounded coflowing liquids. Phys. Rev. Lett., 96(12):124504, 2006.
10. J. M. Gordillo and M. Pérez-Saborid. Axisymmetric breakup of bubbles at high Reynolds numbers. J. Fluid Mech., 562:303-312, 2006.
11. R. P. H. M. Bergmann, D. van der Meer, M. Stijnman, M. Sandtke, A. Prosperetti, and D. Lohse. Giant bubble pinch-off. Phys. Rev. Lett., 96(15):154505, 2006.
12. N. C. Keim, P. Møller, W. W. Zhang, and S. R. Nagel. Breakup of air bubbles in water: Memory and breakdown of cylindrical symmetry. Phys. Rev. Lett., 97:144503, 2006.
13. S. T. Thoroddsen, T. G. Etoh, and K. Takehara. Experiments on bubble pinch-off. Phys. Fluids, 19(4):042101, 2007.
14. J. M. Gordillo. Axisymmetric bubble collapse in a quiescent liquid pool. I. Theory and numerical simulations. Phys. Fluids, 20:112103, 2008.
15. S. Gekle, J. H. Snoeijer, D. Lohse, and D. van der Meer. Approach to universality in axisymmetric bubble pinch-off. Phys. Rev. E, 80:036305, 2009.
16. H. Wijshoff. The dynamics of the piezo inkjet printhead operation. Phys. Rep., 491:77-177, 2010.
17. K. Ferrara, R. Pollard, and M. Borden. Ultrasound microbubble contrast agents: Fundamentals and application to gene and drug delivery. Ann. Rev. Biomed. Eng., 9:415-447, 2007.
18. K. W. Ferrara, M. A. Borden, and H. Zhang. Lipid-shelled vehicles: Engineering for ultrasound molecular imaging and drug delivery. Acc. Chem. Res., 42:881-892, 2009.
19. J. C. Burton, J. E. Rutledge, and P. Taborek. Fluid pinch-off dynamics at nanometer length scales. Phys. Rev. Lett., 92(24):244505, 2004.
20. H. González and F. J. García. The measurement of growth rates in capillary jets. J. Fluid Mech., 619:179-212, 2009.
21. M. S. Longuet-Higgins, B. R. Kerman, and K. Lunde. The release of air bubbles from an underwater nozzle. J. Fluid Mech., 230:365-390, 1991.
22. H. N. Oguz and A. Prosperetti. Dynamics of bubble-growth and detachment from a needle. J. Fluid Mech., 257:111-145, 1993.
23. R. Bolaños-Jiménez, A. Sevilla, C. Martínez-Bazán, and J. M. Gordillo. Axisymmetric bubble collapse in a quiescent liquid pool. II. Experimental study. Phys. Fluids, 20(11):112104, 2008.
24. R. Bolaños-Jiménez, A. Sevilla, C. Martínez-Bazán, D. van der Meer, and J. M. Gordillo. The effect of liquid viscosity on bubble pinch-off. Phys. Fluids, 21:072103, 2009.
25. J. Eggers, M. A. Fontelos, D. Leppinen, and J. H. Snoeijer. Theory of the collapsing axisymmetric cavity. Phys. Rev. Lett., 98:094502, 2007.
26. S. Gekle, I. Peters, J. M. Gordillo, D. van der Meer, and D. Lohse. Supersonic air flow due to solid-liquid impact. Phys. Rev. Lett., 104:024501, 2010.
27. B. Dollet, W. van Hoeve, J.-P. Raven, P. Marmottant, and M. Versluis. Role of the channel geometry on the bubble pinch-off in flow-focusing devices. Phys. Rev. Lett., 100(3):034504, 2008.
28. D. C. Duffy, J. C. McDonald, O. J. A. Schueller, and G. M. Whitesides. Rapid prototyping of microfluidic systems in poly(dimethylsiloxane). Anal. Chem., 70:4974-4984, 1998.
29. S. Bhattacharya, A. Datta, J. M. Berg, and S. Gangopadhyay. Studies on surface wettability of poly(dimethyl) siloxane (PDMS) and glass under oxygen-plasma treatment and correlation with bond strength. J. Microelectromech. S., 14(3):590-597, 2005.
30. M. A. Eddings, M. A. Johnson, and B. K. Gale. Determining the optimal PDMS-PDMS bonding technique for microfluidic devices. J. Micromech. Microeng., 18(6):067001, 2008.
31. R. P. H. M. Bergmann, D. van der Meer, S. Gekle, J. A. van der Bos, and D. Lohse. Controlled impact of a disk on a water surface: cavity dynamics. J. Fluid Mech., 633:381-409, 2009.
32. P. Garstecki, I. Gitlin, W. DiLuzio, G. M. Whitesides, E. Kumacheva, and H. A. Stone. Formation of monodisperse bubbles in a microfluidic flow-focusing device. Appl. Phys. Lett., 85(13):2649-2651, 2004.
33. P.-G. de Gennes, F. Brochard-Wyart, and D. Quéré. Capillarity and Wetting Phenomena: Drops, Bubbles, Pearls, Waves. Springer, New York, US, 2004.
Medium corrections to the CP-violating parameter in leptogenesis

M. Garny (Technische Universität München, James-Franck-Straße, 85748 Garching, Germany); A. Hohenegger and A. Kartavtsev (Max-Planck-Institut für Kernphysik, Saupfercheckweg 1, 69117 Heidelberg, Germany)

3 May 2010. arXiv:1002.0331 (https://arxiv.org/pdf/1002.0331v2.pdf). DOI: 10.1103/physrevd.81.085028.
PACS numbers: 11.10.Wx, 98.80.Cq. Keywords: Kadanoff-Baym equations; Boltzmann equation; expanding universe; leptogenesis; thermal quantum field theory.

Abstract: In two recent papers, arXiv:0909.1559 and arXiv:0911.4122, it has been demonstrated that one can obtain quantum corrected Boltzmann kinetic equations for leptogenesis using a top-down approach based on the Schwinger-Keldysh/Kadanoff-Baym formalism. These "Boltzmann-like" equations are similar to the ones obtained in the conventional bottom-up approach but differ in important details. In particular there is a discrepancy between the CP-violating parameter obtained in the first-principle derivation and in the framework of thermal field theory. Here we demonstrate that the two approaches can be reconciled if causal n-point functions are used in the thermal field theory approach. The new result for the medium correction to the CP-violating parameter is qualitatively different from the conventional one. The analogy to a toy model considered earlier enables us to write down consistent quantum corrected Boltzmann equations for thermal leptogenesis in the SM+3νR which include quantum statistical terms and medium corrected expressions for the CP-violating parameter.
I. INTRODUCTION
To calculate the baryon asymmetry generated during the epoch of leptogenesis [1] in the standard model extended by three right-handed neutrinos (SM+3ν R ) and its extensions one usually uses standard Boltzmann kinetic equations. The collision terms (and in particular the CP-violating parameters) in these equations are computed in vacuum in the in-out formalism [2,3] and do not take into account effects induced by the hot medium of the early universe. Such effects can be consistently taken into account in a top-down approach based on the Schwinger-Keldysh/Kadanoff-Baym formalism. In [4,5] we have applied it to a simple toy model of leptogenesis and derived a new (quantum corrected) form of the Boltzmann equations, which includes quantum statistical factors and takes the medium effects into account. We have found that the medium corrections to the CP-violating parameter ǫ depend only linearly on the one-particle distribution functions (see also [6][7][8]). In the analysis based on finite temperature field theory for the phenomenological scenario of thermal leptogenesis [2,3,9] and for GUT baryogenesis [10] the medium corrections to the CP-violating parameter ǫ depend quadratically on the distribution functions. This discrepancy has been noted in the context of leptogenesis in [4] for the vertex contribution to the CP-violating parameter and later in [5] for the self-energy contribution. Here, we use a finite temperature equivalent of the Cutkosky cutting rules [11][12][13][14] to derive thermal corrections to the expression for the imaginary part of the three-point vertex function and the self-energy loop and to calculate the corresponding medium-corrected CP-violating parameters. We show that the discrepancy is due to an ambiguity in the real-time (RTF) formulation of thermal quantum field theory and disappears if one considers retarded or advanced n-point functions. In the framework of the toy model this has been demonstrated recently in [15].
Together with the new form of the Boltzmann equation derived in [4,5] this puts us in the position to write down quantum corrected Boltzmann equations for the phenomenological scenario of thermal leptogenesis which consistently include the medium corrected CP-violating parameter and quantum statistical terms. In section II, we introduce our notations for the CP-violating parameters and the thermal field theory formalism. Then, in section III we review the conventional calculation of the thermal corrections, and in section IV we demonstrate how to reconcile them with the recent results from nonequilibrium field theory.
Finally, in section V we present the quantum corrected Boltzmann equations taking medium corrections into account.
II. CP-VIOLATING PARAMETER AND THERMAL FIELD THEORY
In the phenomenological scenario of thermal leptogenesis as well as in the toy model, the matter-antimatter asymmetry is generated by the decay of a heavy species. In both cases the CP violation in this decay is caused by the interference between the tree-level and the one-loop diagrams, see fig. 1. In the phenomenological scenario the heavy Majorana neutrino N i decays into leptons α = ℓ and Higgs β = φ or their anti-particles. In the toy model ψ i is a heavy real scalar particle which decays via the Yukawa interaction $\mathcal{L} = -\frac{g_i}{2!}\,\psi_i bb + \mathrm{h.c.}$ into two light scalars α = β = b or the conjugate b̄. The CP-violating parameter ǫ i for the decay of ψ i is defined as
$$\epsilon_i = \epsilon_i^V + \epsilon_i^S = \frac{\Gamma_{\psi_i\to\alpha\beta} - \Gamma_{\psi_i\to\bar\alpha\bar\beta}}{\Gamma_{\psi_i\to\alpha\beta} + \Gamma_{\psi_i\to\bar\alpha\bar\beta}}\,,\tag{1}$$
where $\Gamma_{N_i\to\ell\phi}$ includes a sum over flavour indices and loop-internal Majorana neutrino generations in the case of the phenomenological scenario, $\Gamma_{N_i\to\ell\phi} = \sum_{\alpha,j}\Gamma_{N_i\to\ell_\alpha\phi}$ (we do not consider flavor effects here). If the tree-level and one-loop contributions are written as $\lambda_0 A_0$ and $\lambda_1 A_1$, respectively, where all coupling constants are absorbed in $\lambda_{0(1)}$, the CP-violating parameter becomes at lowest order:
$$\epsilon_i = \frac{|\lambda_0 A_0 + \lambda_1 A_1|^2 - |\lambda_0^* A_0 + \lambda_1^* A_1|^2}{|\lambda_0 A_0 + \lambda_1 A_1|^2 + |\lambda_0^* A_0 + \lambda_1^* A_1|^2} \simeq -2\,\frac{\mathrm{Im}\{\lambda_0^*\lambda_1\}\,\mathrm{Im}\{A_0^* A_1\}}{|\lambda_0|^2\,|A_0|^2}\,,\tag{2}$$
where the sum over lepton flavour and Majorana neutrino generation indices is again implicit. In the case of thermal leptogenesis this leads to
$$\epsilon_i = -2\sum_{j\neq i}\frac{\mathrm{Im}\left\{(h^\dagger h)^2_{ij}\right\}}{(h^\dagger h)_{ii}}\,\frac{\mathrm{Im}\{A_0^* A_1\}}{2\,q\cdot k}\,,\quad i = 1,2,3\,,\tag{3}$$
where q and k denote the four-momenta of the Majorana neutrino and the lepton respectively, see fig. 3, and for the toy model to
$$\epsilon_i = -2\,|g_j|^2\,\mathrm{Im}\!\left\{\frac{g_i\,g_j^*}{g_i^*\,g_j}\right\}\mathrm{Im}\{A_0^* A_1\}\,,\quad i \neq j\,,\ i = 1,2\,.\tag{4}$$
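The lowest-order step in eqn. (2), keeping only the tree-loop interference in the numerator and dropping O(λ1) terms in the denominator, is easy to verify numerically. The complex numbers below are arbitrary illustrative values (chosen with |λ1 A1| ≪ |λ0 A0|), not taken from either model:

```python
import numpy as np

l0, l1 = 1.0 + 0.3j, 0.005 - 0.002j      # tree and (small) loop couplings
A0, A1 = 0.8 + 0.1j, 0.4 - 0.6j          # tree and loop amplitudes

plus = abs(l0 * A0 + l1 * A1) ** 2                      # rate into particles
minus = abs(np.conj(l0) * A0 + np.conj(l1) * A1) ** 2   # rate into anti-particles
eps_exact = (plus - minus) / (plus + minus)

# lowest order in the loop contribution, as in eqn. (2)
eps_lo = -2 * (np.conj(l0) * l1).imag * (np.conj(A0) * A1).imag \
         / (abs(l0) ** 2 * abs(A0) ** 2)
```

The numerator of the exact ratio is already the pure interference term; the approximation only drops O(λ1) pieces in the denominator, so the two values agree to that accuracy.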
This means that one needs to compute the imaginary (absorptive) part of the vertex and the self-energy loop contributions, Im{A_0^* A_1}. In vacuum this can be done conveniently with the help of the Cutkosky cutting rules [16][17][18]. In thermal quantum field theory these can be generalized in order to take into account interactions of internal lines in the loops with the background medium [11,13,14]. In the real-time formalism of thermal quantum field theory two types of fields, termed type-1 and type-2 fields, are introduced in order to avoid pathological singularities [19]. Vertices can be of either type, differing only by a relative minus sign.
We denote them by g 1 = −ig and g 2 = +ig for a generic coupling g. The propagators connecting the different types of vertices can be considered as components of a 2 × 2 propagator matrix
$$G_a(p) = \begin{pmatrix} G_a^{11}(p) & G_a^{12}(p) \\ G_a^{21}(p) & G_a^{22}(p) \end{pmatrix} = \begin{pmatrix} \Delta_a(p) & e^{\beta p_0/2}\,\Delta_a^-(p) \\ e^{-\beta p_0/2}\,\Delta_a^+(p) & \Delta_a^*(p) \end{pmatrix}.$$
For a scalar particle b the components are
$$\Delta_b(p) = D_b(p)\,,\qquad \Delta_b^{\pm}(p) = D_b^{\pm}(p)\,.\tag{5}$$
For a fermion f the components are
$$\Delta_f(p) = (\gamma\cdot p + m_f)\,D_f(p)\,,\qquad \Delta_f^{\pm}(p) = (\gamma\cdot p + m_f)\,D_f^{\pm}(p)\,.\tag{6}$$
For brevity we have defined
$$D_a(p) = \frac{i}{p^2 - m_a^2 + i\epsilon} - 2\pi\,\xi_a\,f_{a,\mathrm{eq}}(p)\,\delta(p^2 - m_a^2)\,,\qquad D_a^{\pm}(p) = 2\pi\left[\Theta(\pm p_0) - \xi_a\,f_{a,\mathrm{eq}}(p)\right]\delta(p^2 - m_a^2)\,,\tag{7}$$
where ξ a = +1 for fermions and ξ a = −1 for bosons. Here, we denote by f b,eq (p) and f f,eq (p) the equilibrium distribution function for bosons and fermions, respectively, given by
$$f_{a,\mathrm{eq}}(p) = \left[\exp\left(\beta\,|p_\mu U^\mu|\right) + \xi_a\right]^{-1}\,.\tag{8}$$
They are functions of the Lorentz invariant product p µ U µ of the particles' four-momentum and the fourvelocity U of the plasma in a general frame. In the rest-frame of the plasma, U = (1, 0, 0, 0), we obtain the standard form which depends on p 0 . In the following we assume that it is sufficient to replace the different propagators in our toy model and the phenomenological theory by their thermal field theory equivalents given in eqn. (5). This approach has been followed in previous works for the baryogenesis and leptogenesis scenarios [2,3,9,10]. We ignore further thermal effects, such as thermal corrections to the masses and wave function renormalization here.
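For concreteness, eqn. (8) is the covariant form of the Fermi-Dirac (ξ_a = +1) and Bose-Einstein (ξ_a = −1) distributions. A minimal sketch, assuming the metric signature (+,−,−,−) for the product p_µU^µ:

```python
import numpy as np

def f_eq(p, U, beta, xi):
    """Equilibrium distribution of Eq. (8): xi = +1 for fermions
    (Fermi-Dirac), xi = -1 for bosons (Bose-Einstein).  p and U are
    four-vectors; in the plasma rest frame U = (1, 0, 0, 0) the
    argument |p.U| reduces to the particle energy p0."""
    p, U = np.asarray(p, float), np.asarray(U, float)
    pU = p[0] * U[0] - p[1:] @ U[1:]    # Lorentz-invariant product
    return 1.0 / (np.exp(beta * abs(pU)) + xi)
```

Because only the invariant p·U enters, evaluating the distribution with boosted p and U returns the same occupation number as in the rest frame.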
Denoting vertices attached to external lines by x i and those attached to internal lines only by z j we can formally denote an amputated n-point graph by F (x 1 , . . . , x n ; z j ). Here we assume that F is given in momentum space, writing the position space coordinates in order to identify the individual vertices. The contribution of this graph to the amplitude is −iF (x 1 , . . . , x n ; z j ).
Physical amplitudes involve a sum over possible combinations of types of internal vertices:
$$F(x_1,\dots,x_n;z_j) = \sum_{\mathrm{type}\ z_j} F(x_1,\dots,x_n;z_j)\,.$$
For external vertices of fixed type it has been shown [11,12] that this sum is equivalent to a sum over all possible "circlings" of the internal vertices:
$$F(x_1,\dots,x_n;z_j) = \sum_{\mathrm{circlings}\ z_j} F_{\gtrless}(x_1,\dots,x_n;z_j)\,.\tag{9}$$
F > and F < with "circled" vertices represent graphs computed using the set of rules, given in fig. 2. These differ for the computation of F > and F < by interchange of the ∆ + and ∆ − propagators. In F ≷ (x 1 , . . . , x n ; z j ) we explicitly denote circling of a vertex α as F ≷ (x 1 , . . . , x α , . . . , x n ; z j ). Note that the two ways of defining F in terms of F > and F < in eqn. (9) are in agreement only if the Kubo-Martin-Schwinger (KMS) boundary condition,
$$\Delta_a^-(p) = -\xi_a\,e^{-\beta\,p\cdot U}\,\Delta_a^+(p)\,,\tag{10}$$
is satisfied. This is the case in thermal equilibrium.
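The KMS condition (10) can be checked on-shell for the components of eqn. (7): stripping the common factor 2πδ(p² − m_a²), the ratio D⁻_a/D⁺_a must equal −ξ_a exp(−βp₀) in the plasma rest frame when f_a,eq is the equilibrium distribution of eqn. (8). A small numerical sketch (my illustration, not from the paper):

```python
import numpy as np

def kms_residual(p0, beta, xi):
    """On-shell check of the KMS relation (10): returns
    D^-(p0) - (-xi * exp(-beta*p0) * D^+(p0)), with the common factor
    2*pi*delta(p^2 - m^2) stripped off.  xi = +1 fermions, -1 bosons."""
    f = 1.0 / (np.exp(beta * abs(p0)) + xi)   # equilibrium occupation
    d_plus = (p0 > 0) - xi * f                # Theta(+p0) - xi f
    d_minus = (p0 < 0) - xi * f               # Theta(-p0) - xi f
    return d_minus - (-xi * np.exp(-beta * p0) * d_plus)
```

The residual vanishes for both signs of p₀ and for both statistics, and would not vanish for any non-equilibrium occupation, which is exactly the statement that KMS holds only in thermal equilibrium.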
[Figure content: an uncircled vertex carries the factor −ig and a circled vertex +ig; a line between two uncircled vertices carries ∆(p), a line between two circled vertices ∆*(p), and lines connecting an uncircled and a circled vertex carry ∆+(p) or ∆−(p).]
FIG. 2: Circling rules for a generic theory used for the computation of F> in momentum space. The rules for the computation of F< differ by interchange of ∆ + (p) and ∆ − (p). The ∆ ± propagators connecting circled and uncircled vertices may be interpreted as cut propagators. In vacuum they correspond to the cut propagators in the Cutkosky rules.
From F we can then compute Im{A_0^* A_1} as
$$\mathrm{Im}\{A_0^* A_1\} = -\,\mathrm{Im}\left\{\frac{i^{-1}F}{g_1\,g_2\,g_3}\right\}\,,\tag{11}$$
where g 1 , g 2 and g 3 stand for the generic couplings associated with the three vertices in the one-loop diagrams at Fig. 2.
III. PHYSICAL AND GHOST FIELDS
In this section we briefly review the conventional calculation of the CP-violating parameter in real-time thermal field theory. However, we use a notation that is helpful for understanding the ambiguities emerging there, and that can be more easily compared to the results from non-equilibrium field theory. An obvious problem with the real-time formulation for the computation of n-point functions is that there are in general $2^n$ such functions which differ in the types of the external vertices. Historically, the correct function was considered to be the one with all external vertices of type-1 (physical). In this case eqn. (9) leads to the following formula for the imaginary part of a graph's contribution to the amplitude:
$$\mathrm{Im}\left\{i^{-1} F(1,\dots,1;z_j)\right\} = \frac{1}{2}\sum_{\mathrm{circlings}\ (x_i),\,z_j} F_{\gtrless}(x_1,\dots,x_n;z_j)\,,\tag{12}$$
where the sum includes all possible circlings of the internal vertices z j but only those circlings of the external vertices x i which include both circled and uncircled vertices (indicated by the brackets around x i ). The six diagrams contributing to the imaginary part of the three-point vertex function according to eqn. (12) are shown in fig. 4. The circlings contributing to the self-energy part are shown in fig. 5.

[Figure: six circling diagrams (a)-(f) with external vertices x 1 , x 2 , x 3 .] FIG. 4: Circlings contributing to Im{i −1 F(1, 1, 1)} for the vertex loop. At one-loop level the circlings can be interpreted as cuts, as indicated by the lines separating circled from uncircled regions [11]. The contributions from diagrams involving cuts through the x 2 -x 3 line are suppressed relative to the others in the hierarchical limit.
FIG. 5: Circlings contributing to Im[i⁻¹F(1,1;z)] for the self-energy loop. The graphs (b) and (c) vanish since ψ_i and ψ_j cannot be on-shell simultaneously. Note that we consider only the diagrams with ψ_i in the external and ψ_j in the internal line (i ≠ j), because these are the only ones which contribute to ε_i.
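The diagram counts in Figs. 4 and 5 follow from the combinatorics of eqn. (12): all 2^{n_z} circlings of the internal vertices combine with the 2^{n_x} - 2 "mixed" circlings of the external vertices, giving (2³ - 2)·2⁰ = 6 vertex diagrams and (2² - 2)·2¹ = 4 self-energy circlings. A small enumeration sketch (the helper name is illustrative):

```python
from itertools import product

def mixed_external_circlings(n_external, n_internal):
    """Enumerate the circlings contributing to eqn. (12): internal vertices are
    circled freely, external circlings must be 'mixed' (neither all circled
    nor all uncircled)."""
    diagrams = []
    for ext in product([0, 1], repeat=n_external):       # 1 = circled vertex
        if all(ext) or not any(ext):
            continue                                     # drop the two uniform circlings
        for internal in product([0, 1], repeat=n_internal):
            diagrams.append((ext, internal))
    return diagrams

n_vertex = len(mixed_external_circlings(3, 0))       # three-point vertex, no internal vertex
n_self_energy = len(mixed_external_circlings(2, 1))  # self-energy: x1, x2 external, z internal
```

The counts reproduce the six diagrams of Fig. 4 and the four circlings of Fig. 5.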
The contributions which correspond to cuts through the ψ_j line are suppressed in the hierarchical limit. If they are neglected, the application of this circling formula leads, for the toy model, to the result

\epsilon_i^{V,th} = -\frac{1}{8\pi}\frac{|g_j|^2}{M_i^2}\,\mathrm{Im}\!\left[\frac{g_i g_j^*}{g_i^* g_j}\right]\int\frac{d\Omega_l}{4\pi}\,\frac{1+f^{eq}_{\bar b,E_1}+f^{eq}_{\bar b,E_2}+2f^{eq}_{\bar b,E_1}f^{eq}_{\bar b,E_2}}{M_j^2/M_i^2+\frac{1}{2}(1+\cos\theta_l)}+\ldots\,, \qquad (13)
for the vertex contribution and

\epsilon_i^{S,th} = -\frac{|g_j|^2}{16\pi}\,\mathrm{Im}\!\left[\frac{g_i g_j^*}{g_i^* g_j}\right]\frac{1}{M_j^2-M_i^2}\int\frac{d\Omega_l}{4\pi}\left[1+f^{eq}_{\bar b,E_1}+f^{eq}_{\bar b,E_2}+2f^{eq}_{\bar b,E_1}f^{eq}_{\bar b,E_2}\right] \qquad (14)
for the self-energy contribution. The distribution functions are to be evaluated at the energies E_1 and E_2 given by⁵

E_{1,2} = \frac{1}{2}\left[E^{\psi_1}_q \mp |q|\left(\sin\theta_l\cos\varphi_l\cos\delta' + \cos\theta_l\sin\delta'\right)\right], \qquad (15)
where θ_l and φ_l are elements of the solid angle Ω_l, and the angle δ′ is given in the limit of massless decay products by sin δ′ = (|p| - |k|)/|q|. The dots in eqn. (13) represent further terms in f^{eq}_{ψ_j} which are neglected. Equivalently, these results can be derived directly using only the 11-components of the propagators, because the vertex and the self-energy loop do not include internal vertices. Very similar results are known for the phenomenological scenario; they can be obtained using the fermionic propagators of eqn. (6) for the Majorana neutrinos and leptons in the loops and eqn. (5) for the Higgs bosons [2, 9]. For the dependence on the distribution functions one then obtains the quadratic form
1 - f^{eq}_{\ell,E_1} + f^{eq}_{\phi,E_2} - 2\, f^{eq}_{\ell,E_1}\, f^{eq}_{\phi,E_2}\,. \qquad (16)
The results obtained from non-equilibrium field theory in [4, 5] differ from eqns. (13) and (14). The non-equilibrium results feature a different dependence on the distribution functions,

1 + f_{\bar b,E_1} + f_{\bar b,E_2} + 2\, f_{\bar b,E_1}\, f_{\bar b,E_2} \;\to\; 1 + f_{\bar b,E_1} + f_{\bar b,E_2}\,. \qquad (17)
Note that the top-down results are valid even if f_{\bar b} is not an equilibrium distribution (f_{\bar b} ≃ f_b must hold, however), and that the dependence is linear in the distribution function, in contrast to eqns. (13) and (14). The latter property contradicts the result derived from thermal quantum field theory. In the phenomenological model, an analogous replacement leads to a particularly important discrepancy. Indeed, eqn. (16) would imply a cancellation of the leading effects, since f^{eq}_{φ,p} - f^{eq}_{ℓ,p} = 2 f^{eq}_{φ,p} f^{eq}_{ℓ,p}. The remaining effect is, in this case, entirely due to the fact that different energies enter the distribution functions of leptons and Higgs particles in eqn. (16). Since E_1 - E_2 ∼ |q|, this effect vanishes when the velocity of the Majorana neutrino in the medium rest-frame, |q|/E^{N_1}_q, becomes small. Therefore it is important to check whether a replacement of the form of eqn. (17) also occurs in the phenomenological scenario. This will be investigated in the next section.
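Two elementary properties used above can be checked numerically: the identity f_B(E) - f_F(E) = 2 f_B(E) f_F(E) behind the cancellation in eqn. (16), and the fact that the energies of eqn. (15) satisfy E_1 + E_2 = E^{ψ_1}_q with |E_1 - E_2| ≤ |q|, so that the residual effect indeed disappears for a Majorana neutrino at rest. A sketch with illustrative kinematics:

```python
import math

def f_bose(E, beta):
    return 1.0 / (math.exp(beta * E) - 1.0)

def f_fermi(E, beta):
    return 1.0 / (math.exp(beta * E) + 1.0)

# identity behind the cancellation in eqn. (16): f_B - f_F = 2 f_B f_F
E, beta = 2.0, 0.9
identity_residual = f_bose(E, beta) - f_fermi(E, beta) \
    - 2.0 * f_bose(E, beta) * f_fermi(E, beta)

def energies(E_psi, q, theta_l, phi_l, delta_p):
    """E_1, E_2 of eqn. (15); they differ only by the sign of the angular term."""
    ang = math.sin(theta_l) * math.cos(phi_l) * math.cos(delta_p) \
        + math.cos(theta_l) * math.sin(delta_p)
    return 0.5 * (E_psi - q * ang), 0.5 * (E_psi + q * ang)

E1, E2 = energies(E_psi=5.0, q=3.0, theta_l=0.8, phi_l=1.1, delta_p=0.4)
```

The identity holds to machine precision, and the energy splitting is bounded by the Majorana neutrino momentum |q|.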
IV. CAUSAL N-POINT FUNCTIONS
We will now see how the finite-temperature field theory approach can be reconciled with the results derived from non-equilibrium quantum field theory. In [21-23] it was shown that the combination

\mathcal{F}^{(\alpha)}_{R/A}(x_1,\ldots,x_n;z_j) = \sum_{\substack{\text{circlings}\\ x_i\,(i\neq\alpha),\,z_j}} \mathcal{F}^{\gtrless}(x_1,\ldots,x_\alpha,\ldots,x_n;z_j)\,, \qquad (18)
referred to as the retarded (advanced) product, has the distinguishing property that the time component (x_α)⁰ is singled out as being the largest (smallest). This becomes clear when we consider the so-called largest (smallest) time equation

\mathcal{F}^{\gtrless}(x_1,\ldots,x_\alpha,\ldots,x_n) + \mathcal{F}^{\gtrless}(x_1,\ldots,\underline{x}_\alpha,\ldots,x_n) = 0\,, \quad \text{if } (x_\alpha)^0 \text{ is the largest/smallest}, \qquad (19)

where \underline{x}_\alpha denotes the circled vertex,
which implies pairwise cancellation of the terms in eqn. (18) if any external vertex x_i with i ≠ α has the largest (smallest) time component. It has been realized that such causal products appear in Boltzmann equations in different cases, see for example [22, 24]. Furthermore, it has been shown that the causal products agree with the results of the calculation in the imaginary-time formalism analytically continued to real energies, at least in a few examples including the self-energy loop and the three-point vertex. The imaginary part of the causal sum was shown in [22] to obey

⁵ Note that if in eqn. (14) the term quadratic in the distribution functions were absent, then by redefining the integration variable φ_l we could write the energies E_1 and E_2 in the form E_{1,2} = \frac{1}{2}\left[E^{\psi_1}_q + |q|\left(\sin\theta_l\cos\varphi_l\cos\delta' \mp \cos\theta_l\sin\delta'\right)\right], which was used in [20].
\mathrm{Im}\left[i^{-1}\mathcal{F}^{(\alpha)}_{R/A}(x_1,\ldots,x_\alpha,\ldots,x_n;z_j)\right] = \mp\frac{1}{2}\sum_{\substack{\text{not all }x_i\text{ circled}\\ \text{circlings }z_j}}\mathrm{Im}\left[i^{-1}\mathcal{F}^{>}(x_1,\ldots,x_\alpha,\ldots,x_n;z_j) - i^{-1}\mathcal{F}^{<}(x_1,\ldots,x_\alpha,\ldots,x_n;z_j)\right], \qquad (20)
where "not all" means that not all x i should be circled at the same time and the imaginary part is taken of the causal product in momentum space. Here, the vertex x α with largest or smallest time is always circled.
We can now compute the imaginary part of the advanced product Im[i⁻¹F^{(1)}_A(x_1, x_2, x_3)] for the three-point vertex with smallest time component (x_1)⁰ of the decaying particle. The relevant circlings are shown in Fig. 6. As before, the contributions of Fig. 6(b) and (c) are suppressed due to the cut through the ψ_j propagator line.

FIG. 6: Circlings contributing to Im[i⁻¹F^{(1)}_A(x_1, x_2, x_3)] for the vertex loop. The advanced three-point function involves a difference of F^> and F^< contributions, which differ by the replacement Δ^± ↔ Δ^∓ in the circling rules. Since the finite-temperature contributions (terms proportional to f^{eq}) to the latter are the same, all contributions quadratic in the distribution functions cancel.
We compute the remaining contribution from Fig. 6(a):

\mathrm{Im}\left[i^{-1}\mathcal{F}^{(1)}_A(x_1,x_2,x_3)\right] = \frac{1}{2}\,\mathrm{Im}\left[i^{-1}\mathcal{F}^{>}(x_1,x_2,x_3) - i^{-1}\mathcal{F}^{<}(x_1,x_2,x_3)\right]
= \frac{1}{2}\,\mathrm{Im}\,i^{-1}\!\int\!\frac{d^4l}{(2\pi)^4}\Big[(+ig_1)(-ig_2)(-ig_3)\,D^-_\beta(q+l)\,D_{\psi_j}(q+l-k)\,D^+_\alpha(l)
- (+ig_1)(-ig_2)(-ig_3)\,D^+_\beta(q+l)\,D_{\psi_j}(q+l-k)\,D^-_\alpha(l)\Big]\,S\,, \qquad (21)
where we take F^≷ to include the spinors and charge-conjugation operators C, as well as the projection operators P_R, P_L associated with the vertices (for the Majorana neutrino interactions). This leads to the trace part denoted by S [2, 25, 26]. In the massless lepton and Higgs limit:
S = \sum_{\text{spins}} \left[\bar u_\ell(k)\, P_L\, u_{N_i}(q)\right]^* \bar u_\ell(k)\, P_L \left(\gamma\cdot(q+l-k) + M_j\right) C^{-1} P_L\, (\gamma\cdot l)\, P_R\, C\, u_{N_i}(q)
= \mathrm{Tr}\left[(\gamma\cdot q - M_i)(\gamma\cdot k)\, M_j\, (\gamma\cdot l)\, P_R\right] = -2\, M_i M_j\; k\cdot l \qquad (22)
and S = 1 for the toy model. It turns out that the pole of the ψ_j propagator does not lie in the loop integration region, so we can drop the iε prescription. We then obtain (the upper and lower signs correspond to the toy model and the phenomenological scenario, respectively)
\mathrm{Im}\left[i^{-1}\mathcal{F}^{(1)}_A(x_1,x_2,x_3)\right] = -\frac{1}{2}\,\mathrm{Im}\int\frac{d^4l}{(2\pi)^2}\,\delta\!\left((q+l)^2 - m_\beta^2\right)\delta\!\left(l^2 - m_\alpha^2\right)\frac{i}{(q+l-k)^2 - M_j^2}
\times\Big[\Theta(-(q^0+l^0))\,\Theta(l^0) - \Theta(q^0+l^0)\,\Theta(-l^0) \pm \big(\Theta(-(q^0+l^0)) - \Theta(q^0+l^0)\big)\, f^{eq}_{\alpha,l} + \big(\Theta(l^0) - \Theta(-l^0)\big)\, f^{eq}_{\beta,q+l}\Big]\,S\,, \qquad (23)
which becomes

\mathrm{Im}\left[i^{-1}\mathcal{F}^{(1)}_A(x_1,x_2,x_3)\right] = \frac{1}{2}\int\frac{d^4l}{(2\pi)^2}\,\delta\!\left((q+l)^2 - m_\beta^2\right)\delta\!\left(l^2 - m_\alpha^2\right)\frac{1}{(q+l-k)^2 - M_j^2}\left[1 \pm f^{eq}_{\alpha,l} + f^{eq}_{\beta,q+l}\right] S\,. \qquad (24)
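As a cross-check of the Dirac algebra entering eqns. (21)-(24), the trace S of eqn. (22) can be evaluated with an explicit representation of the γ-matrices; only the M_iM_j term survives, since traces of an odd number of γ-matrices (with or without γ₅) vanish. The masses and momenta below are illustrative:

```python
import numpy as np

# Dirac representation of the gamma matrices, metric (+,-,-,-)
sig = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]])]
g0 = np.block([[np.eye(2), np.zeros((2, 2))], [np.zeros((2, 2)), -np.eye(2)]])
gs = [np.block([[np.zeros((2, 2)), s], [-s, np.zeros((2, 2))]]) for s in sig]
g5 = 1j * g0 @ gs[0] @ gs[1] @ gs[2]
PR = 0.5 * (np.eye(4) + g5)             # right-handed projector

def slash(p):
    # gamma^mu p_mu for a contravariant 4-vector p = (p0, p1, p2, p3)
    return p[0] * g0 - sum(p[i + 1] * gs[i] for i in range(3))

def dot(p, q):
    return p[0] * q[0] - np.dot(p[1:], q[1:])

Mi, Mj = 3.0, 1.5
q = np.array([5.0, 0.3, -1.2, 2.0])
k = np.array([2.0, 1.0, 0.5, -0.7])
l = np.array([1.5, -0.4, 0.9, 0.2])

trace = np.trace((slash(q) - Mi * np.eye(4)) @ slash(k) @ (Mj * slash(l)) @ PR)
expected = -2.0 * Mi * Mj * dot(k, l)   # right-hand side of eqn. (22)
```

The numerically evaluated trace reproduces -2 M_i M_j k·l for arbitrary 4-momenta.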
Performing the integration over d|l| dl⁰, this indeed leads to the result for the CP-violating parameter obtained in the top-down approach, with the correct dependence on the distribution functions, eqn. (17). For the phenomenological scenario we obtain, in the limit of massless lepton and Higgs:
\epsilon_i^{V,th} = \frac{1}{16\pi}\sum_{j\neq i}\frac{\mathrm{Im}\left[(h^\dagger h)^2_{ij}\right]}{(h^\dagger h)_{ii}}\,\frac{M_j}{M_i}\int\frac{d\Omega_l}{4\pi}\,\frac{1-\cos\theta_l}{M_j^2/M_i^2+\frac{1}{2}(1+\cos\theta_l)}\left[1 - f^{eq}_{\ell,E_1} + f^{eq}_{\phi,E_2}\right], \qquad (25)
where E_{1,2} are given by eqn. (15). In the zero-temperature limit this reduces to the well-known result

\epsilon_i^{V,vac} = -\frac{1}{8\pi}\sum_{j\neq i}\frac{\mathrm{Im}\left[(h^\dagger h)^2_{ij}\right]}{(h^\dagger h)_{ii}}\, f\!\left(\frac{M_j^2}{M_i^2}\right), \quad\text{with}\quad f(x) = \sqrt{x}\left[1-(1+x)\ln\frac{1+x}{x}\right]. \qquad (26)
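As a consistency check, the zero-temperature limit of the angular integral in eqn. (25) can be evaluated numerically and compared with the loop function f(x) of eqn. (26); the prefactors +1/16π · M_j/M_i and -1/8π then match. A sketch with an illustrative mass ratio:

```python
import math

def f_loop(x):
    # loop function of eqn. (26)
    return math.sqrt(x) * (1.0 - (1.0 + x) * math.log((1.0 + x) / x))

def angular_integral(x, n=20000):
    """T -> 0 limit of the angular integral in eqn. (25):
    (1/2) * int_{-1}^{1} dc (1 - c) / (x + (1 + c)/2), via the midpoint rule."""
    h = 2.0 / n
    total = 0.0
    for i in range(n):
        c = -1.0 + (i + 0.5) * h
        total += (1.0 - c) / (x + 0.5 * (1.0 + c))
    return 0.5 * total * h

x = 7.0                                          # x = M_j^2 / M_i^2, illustrative
lhs = math.sqrt(x) * angular_integral(x) / 16.0  # eqn. (25): (1/16) * (M_j/M_i) * integral
rhs = -f_loop(x) / 8.0                           # eqn. (26): -(1/8) * f(x)
```

Both expressions agree to the accuracy of the quadrature, confirming that (25) reduces to (26) in vacuum.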
The same computation can be performed for the self-energy loop. The possible circlings are shown in Fig. 7.

FIG. 7: Circlings contributing to Im[i⁻¹F^{(1)}_A(x_1, x_2; z)] for the self-energy loop. Graph (b) vanishes since ψ_i and ψ_j cannot be on-shell simultaneously.
\mathrm{Im}\left[i^{-1}\mathcal{F}^{(1)}_A(x_1,x_2;z)\right] = \frac{1}{2}\,\mathrm{Im}\left[i^{-1}\mathcal{F}^{>}(x_1,x_2;z) - i^{-1}\mathcal{F}^{<}(x_1,x_2;z)\right]
= -\frac{1}{2}\,\mathrm{Im}\int\frac{d^4l}{(2\pi)^4}\Big[(+ig_1)(-ig_z)(-ig_3)\,D^-_\beta(q+l)\,D_{\psi_j}(q)\,D^+_\alpha(l)
- (+ig_1)(-ig_z)(-ig_3)\,D^+_\beta(q+l)\,D_{\psi_j}(q)\,D^-_\alpha(l)\Big]\,S\,, \qquad (27)
where the result for S coincides with eqn. (22) in the phenomenological scenario, while in the toy model S = 1/2! includes an additional symmetrization factor. This becomes
\mathrm{Im}\left[i^{-1}\mathcal{F}^{(1)}_A(x_1,x_2;z)\right] = \frac{1}{2}\int\frac{d^4l}{(2\pi)^2}\,\delta\!\left((q+l)^2 - m_\beta^2\right)\delta\!\left(l^2 - m_\alpha^2\right)\frac{1}{q^2 - M_j^2}\left[1 \pm f^{eq}_{\alpha,l} + f^{eq}_{\beta,q+l}\right] S\,. \qquad (28)
This corresponds to the result for the self-energy contribution in the hierarchical limit in the top-down approach [5], if the equilibrium distribution functions are replaced with non-equilibrium ones. In the zero-temperature limit this leads to the correct vacuum result. Thus, we have shown that (within the toy model) the CP-violating parameter ε^th obtained with the help of thermal quantum field theory coincides with the one obtained in the top-down approach (in the approximately symmetric case) when one uses causal products instead of the conventional ones, which assume type-1 external vertices. Furthermore, by comparing with the top-down result, we find that the thermal field theory result can be generalized to a (symmetric) non-equilibrium configuration for the toy model by the canonical replacement of the equilibrium distribution functions with the non-equilibrium ones: f^eq → f. For the phenomenological scenario we obtain, in the limit of massless lepton and Higgs, for the self-energy contribution (including a factor of 2, because the two components of the lepton doublet can propagate in the self-energy loop for a given transition)
\epsilon_i^{S,th} = -\frac{1}{8\pi}\sum_{j\neq i}\frac{\mathrm{Im}\left[(h^\dagger h)^2_{ij}\right]}{(h^\dagger h)_{ii}}\,\frac{M_i M_j}{M_i^2 - M_j^2}\int\frac{d\Omega_l}{4\pi}\,(1-\cos\theta_l)\left[1 - f^{eq}_{\ell,E_1} + f^{eq}_{\phi,E_2}\right], \qquad (29)
where E_{1,2} are again given by eqn. (15). In the zero-temperature limit this reduces to the standard result

\epsilon_i^{S,vac} = -\frac{1}{8\pi}\sum_{j\neq i}\frac{\mathrm{Im}\left[(h^\dagger h)^2_{ij}\right]}{(h^\dagger h)_{ii}}\,\frac{M_i M_j}{M_i^2 - M_j^2}\,.
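Combining the vacuum vertex and self-energy results, the loop factor multiplying -(1/8π) Im[(h†h)²_ij]/(h†h)_ii is f(x) + √x/(1-x) with x = M_j²/M_i², and it approaches -3/(2√x) in the hierarchical limit x ≫ 1. A quick numerical check of this standard limit:

```python
import math

def loop_factor(x):
    """Vacuum vertex + self-energy factor: f(x) + sqrt(x)/(1-x), x = M_j^2/M_i^2."""
    f = math.sqrt(x) * (1.0 - (1.0 + x) * math.log((1.0 + x) / x))
    return f + math.sqrt(x) / (1.0 - x)

x = 1.0e6  # hierarchical limit M_j >> M_i
ratio = loop_factor(x) / (-1.5 / math.sqrt(x))
```

The ratio tends to 1, recovering the familiar hierarchical-limit asymptotics of the vacuum CP-violating parameter.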
The complete CP-violating parameter is given by

\epsilon_i^{th} = \epsilon_i^{V,th} + \epsilon_i^{S,th}\,, \qquad (30)
where the vertex and self-energy contributions are given by eqns. (25) and (29), respectively. Therefore the overall dependence on the distribution functions (vertex and self-energy contribution) is given by

1 - f^{eq}_{\ell,E_1} + f^{eq}_{\phi,E_2}\,.
In contrast to the previous finding, eqn. (16), this does not vanish in the limit where the Majorana neutrino decays at rest (assuming massless ℓ and φ). It is therefore qualitatively different from the conventional result. The new expression can lead to a significant enhancement of the CP-violating parameter, see Fig. 8. Similar formulas can be derived for processes such as φ → N_1ℓ in the Standard Model (which can become relevant at higher temperatures) or for analogous MSSM processes involving sneutrinos and sleptons. The size of the medium corrections depends primarily on the statistics of the particles in the loop, see Fig. 9.
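The enhancement can be made plausible directly from the statistical factor: since f_B(E) > f_F(E) at any common energy and temperature, the combination 1 - f_ℓ + f_φ always exceeds its vacuum value of 1 when both energies coincide. An illustrative evaluation at E = T:

```python
import math

def stat_factor(E1, E2, beta):
    """Medium factor 1 - f_F(E1) + f_B(E2) multiplying the CP-violating parameter."""
    f_fermi = 1.0 / (math.exp(beta * E1) + 1.0)
    f_bose = 1.0 / (math.exp(beta * E2) - 1.0)
    return 1.0 - f_fermi + f_bose

factor = stat_factor(E1=1.0, E2=1.0, beta=1.0)  # E = T, illustrative
```

At E = T the factor is about 1.31, i.e. a roughly 30% thermal enhancement for this mode; the enhancement grows further at higher temperatures.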
V. BOLTZMANN EQUATIONS
We can assume in addition that the structure of the Boltzmann equations for the phenomenological scenario is analogous to the one given in [4, 5], with appropriate quantum-statistical factors for bosons and fermions and appropriate symmetrization factors. This defines the full set of Boltzmann equations, including medium corrections to the CP-violating parameter, for the phenomenological scenario as derived above. With these modifications, the minimal network of quantum-corrected Boltzmann equations for thermal leptogenesis with hierarchical Majorana neutrino masses M_1 ≪ M_2, M_3 takes the form (in homogeneous and isotropic Friedmann-Robertson-Walker space-time, and not writing equations for the Higgs fields φ, φ̄, which are considered to be in thermal equilibrium):

\mathcal{L}[f_{N_1}](|k|) = C_{N_1\leftrightarrow\ell\phi}[f_{N_1}, f_\ell, f_\phi](|k|) + C_{N_1\leftrightarrow\bar\ell\bar\phi}[f_{N_1}, f_{\bar\ell}, f_{\bar\phi}](|k|)\,, \qquad (31a)
\mathcal{L}[f_\ell](|k|) = C_{\ell\phi\leftrightarrow N_1}[f_\ell, f_\phi, f_{N_1}](|k|)\,, \qquad (31b)
\mathcal{L}[f_{\bar\ell}](|k|) = C_{\bar\ell\bar\phi\leftrightarrow N_1}[f_{\bar\ell}, f_{\bar\phi}, f_{N_1}](|k|)\,, \qquad (31c)
where the Liouville operator is given by

\mathcal{L}[f_a](x,k) = k^0\left[\frac{\partial}{\partial t} - |k|\, H\, \frac{\partial}{\partial |k|}\right] f_a(|k|)\,. \qquad (32)
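With the grouping k⁰(∂/∂t - H|k| ∂/∂|k|), the operator annihilates any distribution that depends only on the comoving momentum a(t)|k|: free streaming merely redshifts the modes. A finite-difference sketch (the scale factor a ∝ √t and all numerical values are illustrative assumptions):

```python
import math

def a(t):
    # assumed radiation-dominated scale factor, a proportional to sqrt(t)
    return math.sqrt(t)

def H(t):
    # Hubble rate adot/a for a proportional to sqrt(t)
    return 0.5 / t

def f(t, k):
    # any smooth function of the comoving momentum a(t)*|k| free-streams
    return math.exp(-a(t) * k)

def liouville(t, k, k0, h=1e-6):
    """Finite-difference evaluation of k0 * (df/dt - H |k| df/d|k|)."""
    dfdt = (f(t + h, k) - f(t - h, k)) / (2.0 * h)
    dfdk = (f(t, k + h) - f(t, k - h)) / (2.0 * h)
    return k0 * (dfdt - H(t) * k * dfdk)

m, k = 0.5, 1.3
residual = liouville(t=2.0, k=k, k0=math.sqrt(k * k + m * m))
```

The residual vanishes up to discretization error, confirming that redshifting distributions lie in the kernel of the transport operator.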
If the generated asymmetry is small, as we assume here, then f_ℓ ≈ f_ℓ̄ and f_φ ≈ f_φ̄. In this case the CP-violating contributions to the right-hand side of eqn. (31a) cancel out and we obtain

C_{N_1\leftrightarrow\ell\phi}[f_{N_1}, f_\ell, f_\phi](|k|) + C_{N_1\leftrightarrow\bar\ell\bar\phi}[f_{N_1}, f_{\bar\ell}, f_{\bar\phi}](|k|) \simeq \frac{1}{2}\int d\Pi^\ell_p\, d\Pi^\phi_q\,(2\pi)^4\,\delta(k-p-q)\,|\mathcal{M}_0|^2_{N_1\to\ell\phi}(p,q)
\times \Big\{[1-f_{N_1,|k|}]\,f_{\ell,|p|}\,f_{\phi,|q|} - f_{N_1,|k|}\,[1-f_{\ell,|p|}]\,[1+f_{\phi,|q|}] + [1-f_{N_1,|k|}]\,f_{\bar\ell,|p|}\,f_{\bar\phi,|q|} - f_{N_1,|k|}\,[1-f_{\bar\ell,|p|}]\,[1+f_{\bar\phi,|q|}]\Big\}\,, \qquad (33)

where, as usual, the tree-level amplitude for the Majorana neutrino decay is given by

|\mathcal{M}_0|^2_{N_1\to\ell\phi}(p,q) = |\mathcal{M}_0|^2_{N_1\to\bar\ell\bar\phi}(p,q) = 2\,(h^\dagger h)_{11}\; p\cdot q\,.
The collision terms for the (inverse) decay of the heavy particle into an ℓφ or ℓ̄φ̄ pair explicitly contain the CP-violating parameter ε_1 given in eqn. (30), but with the equilibrium distributions replaced by non-equilibrium ones, f^eq → f:
C_{\ell\phi\leftrightarrow N_1}[f_\ell, f_\phi, f_{N_1}](|k|) = \frac{1}{2}\int d\Pi^\ell_p\, d\Pi^{N_1}_q\,(2\pi)^4\,\delta(k+p-q)\,|\mathcal{M}_0|^2_{N_1\to\ell\phi}(k,p)\,[1+\epsilon_1(|q|)]
\times \Big\{[1-f_{\ell,|k|}]\,[1+f_{\phi,|p|}]\,f_{N_1,|q|} - f_{\ell,|k|}\,f_{\phi,|p|}\,[1-f_{N_1,|q|}]\Big\}\,, \qquad (34a)

C_{\bar\ell\bar\phi\leftrightarrow N_1}[f_{\bar\ell}, f_{\bar\phi}, f_{N_1}](|k|) = \frac{1}{2}\int d\Pi^{\bar\ell}_p\, d\Pi^{N_1}_q\,(2\pi)^4\,\delta(k+p-q)\,|\mathcal{M}_0|^2_{N_1\to\ell\phi}(k,p)\,[1-\epsilon_1(|q|)]
\times \Big\{[1-f_{\bar\ell,|k|}]\,[1+f_{\bar\phi,|p|}]\,f_{N_1,|q|} - f_{\bar\ell,|k|}\,f_{\bar\phi,|p|}\,[1-f_{N_1,|q|}]\Big\}\,. \qquad (34b)
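The statistical factors in these collision integrals guarantee detailed balance: with equilibrium distributions at vanishing chemical potential and energy conservation E_{N_1} = E_ℓ + E_φ, the gain and loss terms cancel identically, so no asymmetry is generated in equilibrium. A numerical sketch (illustrative energies, Fermi-Dirac form for the Majorana neutrino):

```python
import math

def f_fermi(E, beta):
    return 1.0 / (math.exp(beta * E) + 1.0)

def f_bose(E, beta):
    return 1.0 / (math.exp(beta * E) - 1.0)

def gain_minus_loss(E_l, E_phi, beta):
    """Statistical factor of eqn. (34a) with equilibrium distributions and
    energy conservation E_N1 = E_l + E_phi."""
    fl, fphi = f_fermi(E_l, beta), f_bose(E_phi, beta)
    fN = f_fermi(E_l + E_phi, beta)
    return (1.0 - fl) * (1.0 + fphi) * fN - fl * fphi * (1.0 - fN)

residual = gain_minus_loss(E_l=1.2, E_phi=0.8, beta=1.1)
```

The gain and loss contributions cancel to machine precision, as required by the equilibrium condition.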
Note that the network of Boltzmann equations (31) should be understood in the generalized sense: the transition amplitudes differ from the usual perturbative matrix elements and do not have their symmetry properties, as was noted in [4, 5]. The structure of the collision terms (34) differs from the conventional one. In particular, we did not include the processes ℓφ ↔ ℓ̄φ̄ explicitly, because the collision terms for the processes ℓφ ↔ N_1 and ℓ̄φ̄ ↔ N_1 do not suffer from the generation of an asymmetry in equilibrium.
To obtain a consistent set of equations in the canonical bottom-up approach, we would need to subtract the RIS (real intermediate state) part of the S-matrix element for the processes ℓφ ↔ ℓ̄φ̄. Note, however, that it may be necessary to include the collision terms for ℓφ ↔ ℓ̄φ̄ (derived in the top-down approach) in quantitative studies, because these can violate CP in general. Further scattering processes with top quarks and gauge bosons can also give relevant contributions. We note here that this result should be treated with care, because additional new effects could arise when the phenomenological scenario is investigated in the top-down approach. In addition, the applicability of the quasi-particle picture cannot be tested in the framework of thermal field theory. In particular, the results presented above only apply in the hierarchical case [5]. The analysis of the resonant case requires the use of the Kadanoff-Baym formalism, which allows us to take into account the in-medium spectral properties of the mixing fields.
VI. CONCLUSIONS
Inspired by a discrepancy between conventional results for the thermal corrections to the CP asymmetries in thermal leptogenesis and recent new results from non-equilibrium quantum field theory, we have reconsidered the calculation of the CP-violating parameters based on thermal quantum field theory. We find that, if causal products are used in the computation of the n-point functions, the results of both approaches can be brought into agreement in the framework of a toy model. We conclude that causal n-point functions must be used in the derivation of the CP-violating parameter in the phenomenological scenario as well. This leads to new expressions for the thermal corrections to the vertex and self-energy CP-violating parameters.

In contrast to the conventional results, the thermal corrections do not vanish in the limit when the Majorana neutrino decays at rest, assuming massless decay products. The result is therefore qualitatively different from the conventional one and might give significant contributions to the generated baryon asymmetry. In the range from 0.1 to 10 of the dimensionless inverse temperature, thermal effects can enhance the CP-violating parameter by up to an order of magnitude.

The asymmetry can be computed using the minimal set of Boltzmann equations for leptogenesis in SM+3ν_R presented here, which are analogous to the equations derived earlier in the framework of the toy model. These take into account decays and inverse decays and include all quantum-statistical factors in a way which guarantees that no asymmetry is generated in equilibrium. They can be applied in the case of non-degenerate Majorana neutrino masses. For a detailed phenomenological analysis it will be necessary to take into account further thermal effects, such as thermal masses and resummed thermal propagators, as well as additional CP-violating processes which exist in phenomenological scenarios.
FIG. 3: Momentum flow in the vertex and the self-energy loop.
FIG. 8: Temperature dependence of the CP-violating parameter in the Majorana neutrino decay relative to its vacuum value. Shown are the thermal average ε₁^th/ε₁^vac (solid red line) and the values for various momentum modes ε₁^th/ε₁^vac (dotted red lines) corresponding to |q| = T, -1 ≤ sin(δ′) ≤ +1. For comparison we also show the conventional results ε₁^{th,conv}/ε₁^vac (dashed black line), where the leading effects cancel as described in the text. Equilibrium distribution functions for bosons and fermions with negligible chemical potentials are assumed. Note that the shown behavior can be modified if thermal masses are included, since the decay N₁ → ℓφ (and the conjugate process) becomes kinematically forbidden if the thermal Higgs mass becomes too large. At even higher temperatures the process φ → N₁ℓ becomes relevant instead [2].
FIG. 9: Medium correction to the CP-violating parameters in the MSSM. The lines correspond to the thermal averages ε₁^th/ε₁^vac, and the shaded regions illustrate the momentum dependence of ε₁^th/ε₁^vac for 0.5 ≤ |q|/T ≤ 4 and δ′ = 0. Note also that in the weighted sum of Ñ₁ → φl and Ñ₁ → φ̃ℓ processes the cancellation of the medium contributions, observed in earlier publications, does not occur anymore.
FIG. 1: Tree level and one-loop contributions to the heavy Majorana neutrino decay ψ_i → αβ. The asymmetry, at lowest order, is due to the interference of these contributions.

In the phenomenological scenario ψ_i = N_i are heavy Majorana neutrinos which decay via Yukawa interactions L = h_{αi} ℓ_α N_i φ + h.c.
¹ At the end of the calculation of Im[A_0^* A_1] we set g = 1, since the physical coupling constants have been factored out into λ_0 and λ_1.
² In [2] and elsewhere resummed propagators have been used in this place to prevent the appearance of singularities. Since we are mainly interested in the structure of the thermal corrections, we stick to the free thermal propagators here.
³ The historic origin of this formula was that the external fields were considered to be all of type 1 (physical).
⁴ In the phenomenological scenario the Feynman rules for Majorana neutrinos include spinors, charge conjugation and projection operators, which we assume to be included in F^≷.
Acknowledgements

This work was supported by the "Sonderforschungsbereich" TR27 and by the "cluster of excellence Origin and Structure of the Universe". We would like to thank J.-S. Gagnon for useful discussions related to thermal quantum field theory.
[1] M. Fukugita and T. Yanagida, Phys. Lett. B 174, 45 (1986).
[2] G. F. Giudice, A. Notari, M. Raidal, A. Riotto, and A. Strumia, Nucl. Phys. B 685, 89 (2004), hep-ph/0310123.
[3] S. Davidson, E. Nardi, and Y. Nir, Phys. Rept. 466, 105 (2008), 0802.2962.
[4] M. Garny, A. Hohenegger, A. Kartavtsev, and M. Lindner, Phys. Rev. D 80, 125027 (2009), 0909.1559.
[5] M. Garny, A. Hohenegger, A. Kartavtsev, and M. Lindner (2009), 0911.4122.
[6] C. P. Kiessig and M. Plümacher (2009), 0910.4872.
[7] W. Buchmüller and S. Fredenhagen, Phys. Lett. B 483, 217 (2000), hep-ph/0004145.
[8] A. Anisimov, W. Buchmüller, M. Drewes, and S. Mendizabal (2010), 1001.3856.
[9] L. Covi, N. Rius, E. Roulet, and F. Vissani, Phys. Rev. D 57, 93 (1998), hep-ph/9704366.
[10] K. Takahashi, Phys. Rev. D 29, 632 (1984).
[11] R. L. Kobes and G. W. Semenoff, Nucl. Phys. B 260, 714 (1985).
[12] R. L. Kobes and G. W. Semenoff, Nucl. Phys. B 272, 329 (1986).
[13] P. F. Bedaque, A. K. Das, and S. Naik, Mod. Phys. Lett. A 12, 2481 (1997), hep-ph/9603325.
[14] F. Gelis, Nucl. Phys. B 508, 483 (1997), hep-ph/9701410.
[15] A. Hohenegger, Ph.D. thesis, University of Heidelberg (2009).
[16] R. E. Cutkosky, J. Math. Phys. 1, 429 (1960).
[17] R. J. Eden, P. V. Landshoff, D. I. Olive, and J. C. Polkinghorne, The Analytic S-Matrix (Cambridge University Press, 2002).
[18] M. Le Bellac, Quantum and Statistical Field Theory (Oxford University Press, 1992).
[19] R. J. Rivers, Path Integral Methods in Quantum Field Theory (Cambridge University Press, 1988).
[20] M. Garny, A. Hohenegger, A. Kartavtsev, and M. Lindner, Phys. Rev. D 80, 125027 (2009).
[21] R. L. Kobes, Phys. Rev. D 42, 562 (1990).
[22] R. L. Kobes, Phys. Rev. D 43, 1269 (1991).
[23] M. A. van Eijck and C. G. van Weert, Phys. Lett. B 278, 305 (1992).
[24] H. A. Weldon, Phys. Rev. D 28, 2007 (1983).
[25] W. Buchmüller and M. Plümacher, Phys. Lett. B 431, 354 (1998), hep-ph/9710460.
[26] L. Covi, E. Roulet, and F. Vissani, Phys. Lett. B 384, 169 (1996), hep-ph/9605319.
A Resilient and Energy-Aware Task Allocation Framework for Heterogeneous Multi-Robot Systems

Gennaro Notomista, Siddharth Mayya, Yousef Emam, Christopher Kroninger, Addison Bohannon, Seth Hutchinson, Magnus Egerstedt

arXiv:2105.05586, DOI: 10.1109/tro.2021.3102379

Abstract: In the context of heterogeneous multi-robot teams deployed for executing multiple tasks, this paper develops an energy-aware framework for allocating tasks to robots in an online fashion. With a primary focus on long-duration autonomy applications, we opt for a survivability-focused approach. Towards this end, the task prioritization and execution (through which the allocation of tasks to robots is effectively realized) are encoded as constraints within an optimization problem aimed at minimizing the energy consumed by the robots at each point in time. In this context, an allocation is interpreted as a prioritization of a task over all others by each of the robots. Furthermore, we present a novel framework to represent the heterogeneous capabilities of the robots, by distinguishing between the features available on the robots, and the capabilities enabled by these features. By embedding these descriptions within the optimization problem, we make the framework resilient to situations where environmental conditions make certain features unsuitable to support a capability and when component failures on the robots occur. We demonstrate the efficacy and resilience of the proposed approach in a variety of use-case scenarios, consisting of simulations and real robot experiments.

Index Terms: Task Planning, Path Planning for Multiple Mobile Robots or Agents, Failure Detection and Recovery, Energy and Environment-Aware Automation, Multi-Robot Systems, Robust/Adaptive Control of Robotic Systems
I. INTRODUCTION
Multi-robot task allocation (MRTA) is an active research topic given the increasing deployment of multi-robot systems in dynamic and partially unknown out-of-laboratory environments (see, e.g., [1], [2] and references therein). Often, the design of MRTA algorithms is tailored around particular challenges that the multi-robot team is expected to face in the environment. For instance, many envisioned applications require robots with limited energy resources to operate effectively for long periods of time, necessitating the development of survivability-focused energy-aware algorithms for task execution as well as allocation [3], [4]. Similarly, robot heterogeneity has received explicit focus within the MRTA literature, as teams equipped with different types of sensors, actuators, and communication devices can enable the execution of a wider range of tasks [5], [6], [7].

©2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. {christopher.m.kroninger.civ, addison.w.bohannon.civ}@mail.mil
Heterogeneity can also contribute favorably to another desirable property of a MRTA framework: resilience, typically interpreted as the ability of the allocation algorithm to react to component failures on the robots, varying environmental conditions, and other non-idealities in the operating conditions [8]. Consider an example scenario where a heterogeneous multi-robot system consisting of ground and aerial mobile robots, is tasked with carrying objects to specified locations in the environment. Unexpected weather conditions might lead to high speed winds in the environment, which might prevent aerial robots from making further progress towards the accomplishment of their goal. In such a case, the heterogeneity in the capabilities of the robots could be leveraged through a dynamic re-allocation of the transport task to ground robots.
It should be noted that, in light of such a scenario, the multi-robot task allocation problem can be considered as being inextricably linked to the execution of the tasks by the robots. This is especially true when considering the deployment of survivability-focused multi-robot systems over long time horizons, where evolving or newly detected environmental phenomena can affect the task allocation.
This paper presents a dynamic task allocation and execution framework for multi-robot systems which explicitly accounts for the aforementioned survivability and heterogeneity considerations while being demonstrably resilient to robot failures and changes in environmental conditions. To encode heterogeneity, we propose a novel framework for representing the suitability that robots have for different tasks. This is done by explicitly considering the capabilities required to perform the tasks (e.g., flight or high speed) as well as the features available on the robots (e.g., a specific type of sensors, actuators, or communication equipment) which support these capabilities. We leverage this representation in a constraintbased optimization framework whose solution at each point in time yields (i) a dynamic allocation of tasks to robots through a prioritization scheme, and (ii) control inputs to each robot which ensure the execution of the tasks in accordance with the optimized priorities [4].
Existing task allocation techniques typically define both robots and tasks in terms of the capabilities available on the robots and required to perform the tasks [7], [9]. In contrast, our approach distinguishes between the features available on the robots and the capabilities that these features enable. We demonstrate that this explicit representation contributes to the resilience of the proposed dynamic task allocation method, leveraging the fact that multiple bundles of robot features can satisfy the same capability. Consequently, dynamic readjustments of Robot-to-Feature and Feature-to-Capability mappings can enhance the resilience of the system by capturing scenarios in which (i) environmental conditions make a certain feature more suitable to support a capability, and (ii) component failures on robots occur, affecting the available features. The pertinent question then becomes: how can we design a survivability-focused dynamic allocation paradigm based on these descriptions of heterogeneity (at the feature level as well as the capability level) with demonstrably resilient operations?
Following preliminary work in [10], [4], [11] on adaptive and minimum-energy task execution and allocation, in this paper we opt for a constraint-based approach, where the execution and prioritization of tasks are encoded as constraints within an optimization problem. Such a formulation has demonstrated both the ability to account for the energy limitations that robots have while executing tasks [12], [13], [14], and a higher flexibility and robustness in scenarios where the operating conditions of the robots are only partially known or may change [11], events which are especially likely when considering long-duration autonomy applications. Since energy considerations are of paramount importance in our framework, the execution of n_t different tasks by a robot can be posed as the following energy-minimization problem:

minimize_u ‖u‖²
subject to c_task_j(x, u) ≥ 0, ∀j ∈ {1, . . . , n_t},

where x is the current state of the robot, u is the control effort expended by it, and c_task_j(x, u) ≥ 0 denotes a constraint function which enforces the execution of task j. The feasibility of such an optimization problem is ensured by the introduction of a slack variable δ ∈ R^{n_t}:

minimize_{u,δ} ‖u‖² + ‖δ‖²   (1)
subject to c_task_j(x, u) ≥ −δ_j, ∀j ∈ {1, . . . , n_t},
where δ = [δ_1, δ_2, . . . , δ_{n_t}]^T is a vector with positive components representing the extent to which the robot can violate the constraints corresponding to each of the tasks. The applicability of this framework for dynamic task allocation via prioritization is enabled by the observation that relative constraints among the components of δ can allow a robot to perform one task more effectively than others. For instance, if δ_m ≪ δ_n, ∀n ≠ m, then the robot will execute task m with priority higher than all the other tasks: this represents an allocation via prioritization. Such a prioritization can then be encoded via an additional constraint Kδ ≤ 0 in (1), where the matrix K encodes the relative inequalities among the slack variables.
Within the above described formulation, the allocation problem then consists of designing the matrix K for each robot such that the heterogeneity of the robots-intended as their different ability of performing different tasks-are appropriately accounted for. To this end, we propose a modification of the minimum energy optimization problem presented in (1) where priority matrices K are automatically generated. Moreover, by means of an additional constraint, the optimization problem ensures that the minimum amount of capabilities required for the successful execution of each task is met by the allocation.
This formulation yields a mixed-integer quadratic program (MIQP) which not only generates the task allocation of the team (encoded via the prioritization matrices K) but also the control inputs u which the robots can use to execute the tasks. Since MIQPs can be computationally intensive to solve, we further present a mixed centralized/decentralized computational architecture which allows a central coordinator to transmit the task priorities to each robot. The robots can then solve the simpler convex quadratic program described in (1) (with the additional constraint Kδ ≤ 0) to generate their control inputs in real time.
We demonstrate that non-idealities such as environmental disturbances and/or component failures on the robots can be effectively accounted for in our framework, enabling resilient task allocation behaviors. It is informative to note how such behaviors also emerge in nature from the concepts of survivability and heterogeneity. In fact, these concepts play a central role in ecological studies as well, as highlighted by Bridle and van Rensburg in [15]:
For some groups of organisms, we can now integrate genomic data with environmental and demographic data to test the extent to which ecological resilience depends on evolutionary adaptation. Such data will allow researchers to estimate when and where biodiversity within a species has the power to rescue ecological communities from collapse due to climate change and habitat loss.
Drawing an analogy with the task allocation framework we present in this paper, the features of the robots (genomic data) and the resulting heterogeneity (biodiversity) are leveraged to introduce a degree of resilience (ecological resilience) into the framework, which results in a natural adaptation of the multi-robot system to failures (collapse) due to the dynamic environments in which it operates (climate change and habitat loss).
The remainder of the paper is organized as follows. Section II introduces the problem formulation and places it within the context of existing literature. Section III develops a novel framework for encoding robot heterogeneity. In Section IV, we present the main constraint-based minimum energy task allocation paradigm, and demonstrate its resilient capabilities in two distinct failure scenarios. Section V touches upon the performance guarantees of the developed task allocation paradigm and highlights a mixed centralized/decentralized framework to enable the task allocation and execution. In Section VI, we present example use-case scenarios highlighting the resilient allocation and execution of multiple tasks. Section VII concludes the paper.
II. PROBLEM FORMULATION AND RELATED WORK A. Problem Formulation
Consider a team of n_r heterogeneous robots which are deployed in an environment and required to execute n_t tasks. Each robot is endowed with a subset of n_f available features (such as camera, LIDAR, and wheels). These features allow the robots to exhibit a subset of n_c capabilities (such as flight and navigation). Certain capabilities can be achieved by multiple combinations of feature bundles, whereas tasks require a given set of capabilities in order to be executed. The successful execution of each task is conditioned upon a minimum number of robots with specific capabilities being allocated to it. In this paper, we consider extended set-based tasks [16], which include tasks whose execution can be encoded as the minimization of a cost function.
Given the above problem setup, the paper then concerns itself with (i) allocating tasks among the robots such that the minimum requirements for each task are met, and (ii) executing the tasks by synthesizing an appropriate control input for the robots. Both these objectives are met while minimizing the control effort expended by the robots. Additionally, we show how the resulting task allocation and execution framework exhibits resilience properties against varying environment conditions and failures on the robots.
B. Related Work
In this section, we briefly review the relevant literature on MRTA, focusing specifically on the scenarios where tasks might require coordination among multiple robots. For a comprehensive survey on a broader class of task allocation problems, see [1], [2], [17] and references within.
In [5], the authors developed a framework for assigning heterogeneous robots to a set of tasks by switching between different predefined behavioral routines. To tackle the challenges of computational complexity associated with such discrete assignment-based approaches, market-based methods [18], [19], [20], [21] have been proposed, where robots can split tasks among them via bidding and auctioning protocols. In scenarios where a large number of robots with limited capabilities are present, decentralized stochastic approaches have been developed where allocations are described in terms of population distributions and are achieved by modifying transition rates among tasks [22], [23], [24].
As multi-robot tasks get more complex and diverse, robots have been envisioned to take up specialized roles within the team, necessitating the characterization of resource diversity and access within the multi-robot system [25], [26]. In [7], [27], the authors define a community of robot species (robot types), each endowed with specific capabilities, and develop an optimization-based framework to allocate sufficient capabilities to each task. This is realized using transition rates, which the robots use to switch between the different tasks. In comparison, our approach explicitly models how different feature bundles available to the robots might endow them with the capabilities required to execute a task. This has the benefit of endowing the allocation framework with a degree of resilience, as will be demonstrated in Section IV-C.
Indeed, adaptivity and resilience are commonly studied aspects of task allocation in multi-robot systems (see, e.g., [6], [28], [29], [30]). Typically, adaptivity is incorporated by defining a time-varying propensity of robots to participate in different tasks. These measures of utility are based on predefined objective functions and aim to capture the effectiveness of the robots at performing tasks in real-time [31]. Such frameworks, however, do not account for drastic unexpected failures in the capabilities of the robots, adversarial attacks, or varying environmental conditions that might affect the operations of the robots. Such considerations point to the question of resilience in multi-robot systems, which has been explored in the context of coordinated control tasks [32], as well as resource-availability in heterogeneous systems [8].
Building on our previous work, presented in [10], [4], [11], in this paper we present three distinct novel developments towards the achievement of a resilient task allocation and execution framework. These are: (i) the explicit feature- and capability-based models of robot heterogeneity, which allow for greater flexibility in allocating tasks; (ii) an optimization-based task execution framework which allows robots to execute prioritized tasks while accounting for the different features and capabilities they possess; and (iii) a minimum-energy task allocation framework, geared towards long-duration autonomy applications, which leverages the real-time performance of robots in executing the tasks to effectively realize a resilient task allocation framework.
III. ENCODING ROBOT HETEROGENEITY
The objective of this section is to develop a framework which generates a feasible mapping between robots and their assigned tasks, while explicitly accounting for the heterogeneity of the robots and the different capability requirements of the tasks. We define a novel notion of feasibility based on a newly added feature layer in the description of the robots. Intuitively, a feasible assignment needs to take into account the capabilities needed for the tasks along with the features possessed by the robots. For example, assigning a ground vehicle to an aerial-surveillance task would be considered an infeasible assignment. Shown in Fig. 1 is an example of the three mappings to be introduced in the next subsections. Starting from the left, we begin by introducing the Capability-to-Task mapping (T), which contains the task specifications. In turn, each of those capabilities requires any one of various feature bundles to be exhibited. This is captured in the Feature-to-Capability mapping (H_k) through the use of hypergraphs. In Subsection III-B, we define the Robot-to-Feature mapping (A), which maps each robot to the set of features it possesses. Finally, we introduce a way to obtain the Robot-to-Capability mapping (F), which is directly used in the task allocation and execution framework. In Table I, we summarize this notation together with the notation used throughout the paper.
TABLE I: Notation used throughout the paper (excerpt).

    Symbol                    Meaning                                 Section
    F ∈ R^{n_c × n_r}         Robot-to-Capability mapping             III-D
    S_i ∈ R^{n_t × n_t}       Specialization matrix of robot i        III-F
    α ∈ {0,1}^{n_t × n_r}     Matrix of task priorities               IV-B
    δ_i ∈ R^{n_t}             Task relaxation for robot i             IV-A
    δ ∈ R^{n_t n_r}           Vector of task relaxation parameters    IV-B
    x_i ∈ R^{n_x}             State of robot i                        IV
    u_i ∈ R^{n_u}             Control input of robot i                IV
    x ∈ R^{n_x n_r}           Ensemble state of n_r robots            IV-A
    u ∈ R^{n_u n_r}           Ensemble control input of n_r robots    IV

In the context of the presented model of robot heterogeneity based on capabilities and features, we will consider tasks as uniquely defined by the single set of capabilities required to execute them. Moreover, we will assume that capabilities (such as flying) are determined by features that the robots possess (such as fixed wings or propellers). The following example is aimed at stressing the difference between tasks, capabilities, and features, which will be used throughout this paper.
Example 1 (Tasks, capabilities, features). Consider aerial and amphibious robots deployed in an environment containing a river. In this scenario, crossing the river is not a task, as it is not determined by a single set of capabilities. On the other hand, examples of tasks are flying over the river (Task 1) and swimming from one side of the river to the opposite one (Task 2). Aerial robots in the form of fixed-wing aircraft and quadrotors possess the features (fixed wings and propellers) which endow them with the capabilities to perform Task 1; amphibious vehicles, instead, are able to execute Task 2 thanks to the features (waterproof body and water propellers) which determine the capability of swimming.
A. Capability-to-Task Mapping
In many applications such as search and rescue and flexible manufacturing, it is necessary for a heterogeneous team of robots to simultaneously accomplish various tasks each requiring a different set of capabilities. For example, a task requiring the delivery of a product from one point to another may require two capabilities: packaging and transportation. We therefore define the mapping from the set of tasks at hand to their respective capabilities as
$$T \in \{0, 1\}^{n_t \times n_c},$$
where T_{tk} = 1 if and only if task t requires capability k, and n_t and n_c denote the numbers of tasks and capabilities, respectively. Note that the values of T need not necessarily be binary: in Section III-E we present an extension including weights, i.e. with T_{tk} ∈ R_{≥0}. In Fig. 1 we present an example setup of an assignment problem consisting of two tasks and three capabilities, denoted by t_t and c_k, respectively. The mapping from tasks to capabilities is a graphical representation of the matrix T in the form of a bipartite graph: a graph whose nodes are split into two disjoint sets and whose edges contain a single node from each of those two sets. On the left-hand side of Fig. 1, those two sets are the tasks and the capabilities, and the information contained in the graph yields the following mapping T:
$$T = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
Therefore, by looking at the edges incident to t 1 in Fig. 1 or at the first row of T , one can deduce that capabilities c 1 and c 2 are required by task t 1 .
B. Robot-to-Feature Mapping
As mentioned in Section I, each robot available for assignment possesses a variety of features. For example, an e-puck's features include an IMU and a CMOS camera [33]. Therefore, we define the following binary mapping from robots to their respective features:
$$A \in \{0, 1\}^{n_f \times n_r},$$
where A_{ij} = 1 if and only if robot j possesses feature i, and n_r and n_f denote the number of robots and features, respectively. Continuing with the example from Fig. 1, the right-most bipartite graph yields the following Robot-to-Feature mapping:
$$A = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{pmatrix}. \qquad (2)$$
By looking at the first column of matrix A above, one can deduce that robot r_1 possesses features f_1, f_2 and f_3. Now that we have defined both the Capability-to-Task and Robot-to-Feature mappings, we are ready to introduce the Feature-to-Capability mapping in the following subsection.
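As an illustrative sketch (ours, not part of the original paper), the mapping A in (2) can be stored as a NumPy array and queried column-wise, reproducing the observation that r_1 possesses f_1, f_2 and f_3:

```python
import numpy as np

# Robot-to-Feature mapping A from Eq. (2): rows are features f1..f6,
# columns are robots r1..r4 (values taken from the running example)
A = np.array([
    [1, 0, 0, 0],
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
])

def features_of(robot):
    """1-indexed feature labels possessed by a robot (0-indexed column)."""
    return [f + 1 for f in np.flatnonzero(A[:, robot])]

print(features_of(0))  # robot r1 -> [1, 2, 3]
```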
C. Feature-to-Capability Mapping
When considering heterogeneous multi-robot systems, it is important to note that two non-identical robots may be able to support the same capability. In other words, certain robots possessing different sensors and actuators can be interchangeable when it comes to supporting a specific capability. This gives rise to the need to associate multiple bundles of features with the same capability in a distinguishable manner. To meet this need, we use the notion of a bipartite hypergraph to define the Feature-to-Capability mapping.
A hypergraph is a graph whose edges are not restricted to a cardinality of two. Hence, we can use one hyperedge to associate a capability with one of the feature bundles that can support it. The mapping between capabilities and features in the middle of Fig. 1 is an example of a bipartite hypergraph. The top edge (colored golden) mapping c 1 to f 1 and f 2 indicates that, together, these two features can support capability c 1 . Note that feature bundles vary in size and consequently, so does the cardinality of the different hyperedges. Therefore, one cannot define a single matrix carrying the information of the entire Feature-to-Capability mapping. This is due to the fact that there is no consistency in terms of edge cardinality nor in the number of edges incident to each capability. Each capability requires its own mapping from its respective hyperedges to the feature space.
Therefore, we define the following row-stochastic matrix, i.e. a matrix where each row sums to 1:
$$H_k \in [0, 1]^{n_{c_k} \times n_f},$$
where k denotes the capability index, and H_{k,ij} ≠ 0 if and only if feature j belongs to the feature bundle denoted by hyperedge i. n_{c_k} and n_f denote the number of hyperedges incident to capability k and the number of features, respectively. Revisiting the example setup in Fig. 1, the mappings from capabilities c_1, c_2 and c_3 to the feature space respectively yield:
$$H_1 = \begin{pmatrix} 1/2 & 1/2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \end{pmatrix}, \quad H_2 = \begin{pmatrix} 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \end{pmatrix}, \quad H_3 = \begin{pmatrix} 0 & 0 & 0 & 1/3 & 1/3 & 1/3 \end{pmatrix}.$$
As explained in the following subsection, normalizing the rows of H k allows us to verify if the requirements for each capability are met, regardless of the varying feature-capability edge cardinalities. In the next subsection, we utilize the above developed framework to create a mapping from robots to capabilities, which enables task allocation in Section IV.
D. Mapping Robots to Capabilities Directly
The MRTA algorithm presented in this paper can be referred to as ST-MR-IA (Single-Task robots, Multi-Robot tasks, Instantaneous Assignment) [1]: in fact, (i) through prioritization, each robot is assigned to a single task, (ii) the tasks can be executed by multiple robots, in a coordinated or independent fashion, and (iii) the allocation of tasks to robots is carried out at each time instant, without planning for future allocations. As discussed in Section II-B, previous approaches to solving ST-MR-IA MRTA problems assume knowledge of the direct mapping from capabilities to robots encoded by a matrix
$$F \in \mathbb{R}^{n_c \times n_r},$$
where F_{kj} ≠ 0 if and only if robot j can support capability k. Therefore, in this subsection, we state the condition under which robot j can indeed support capability k, and derive the matrix F required by such algorithms based on this condition. Notice that the framework only accounts for a finite number of capabilities and features relevant to the required tasks, so the computation of F remains tractable.
As mentioned above, a capability k can be supported by a number of feature bundles. Consequently, a robot must possess all the features in at least one of the bundles associated with a capability in order to support it. Hence, we say robot i supports capability k if and only if it possesses all the features within a hyperedge associated with capability k. For example, in Fig. 1, robot r 1 can support capability c 1 since it possesses features f 1 and f 2 included in the top hyperedge. On the other hand, robot r 3 cannot support capability c 3 since it only possesses features f 4 and f 5 but not f 6 . We define the feasibility vector F k capturing which robots can satisfy capability k as:
$$F_k = \max\left(\mathrm{kron}_1(H_k A)\right), \qquad (3)$$
where kron_n denotes the shifted Kronecker delta function

$$\mathrm{kron}_n(x) = \begin{cases} 1 & \text{if } x = n \\ 0 & \text{otherwise,} \end{cases} \qquad (4)$$
applied element-wise. The function kron_1 is introduced to eliminate cases where robots possess only an incomplete portion of the features in a hyperedge. Moreover, the max operator is applied column-wise, and serves to check whether a robot possesses all the features from at least one bundle. Note that using the max operator in the case of a robot satisfying a capability through multiple hyperedges selects only one of those edges, which will become relevant when we introduce weights in the next section. Returning to the example from Fig. 1, we can compute the feasibility vector F_3 corresponding to c_3:
$$H_3 A = \begin{pmatrix} 0 & 1/3 & 2/3 & 1 \end{pmatrix},$$
whose ij-th component is the proportion of features that robot j possesses belonging to hyperedge i incident to capability k. Therefore, in this case, robot r 3 possesses only 2/3 of the features in the only bundle associated with c 3 , and therefore cannot support that capability. We thus obtain the following feasibility vector for c 3 :
$$F_3 = \max\left(\mathrm{kron}_1(H_3 A)\right) = \begin{pmatrix} 0 & 0 & 0 & 1 \end{pmatrix}.$$
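A minimal NumPy sketch (ours, not the authors' code) of Eq. (3) for this example, with kron_1 implemented element-wise as in (4); np.isclose guards against floating-point round-off in the 1/3 entries:

```python
import numpy as np

A = np.array([[1, 0, 0, 0], [1, 1, 0, 0], [1, 1, 0, 0],
              [0, 1, 1, 1], [0, 0, 1, 1], [0, 0, 0, 1]])
H3 = np.array([[0, 0, 0, 1/3, 1/3, 1/3]])  # single hyperedge {f4, f5, f6}

def kron_n(x, n):
    # Shifted Kronecker delta of Eq. (4), applied element-wise
    return np.isclose(x, n).astype(float)

# Eq. (3): column-wise max over the hyperedges incident to c3
F3 = np.max(kron_n(H3 @ A, 1), axis=0)
print(H3 @ A)  # proportion of each bundle owned by each robot
print(F3)      # only r4 owns the complete bundle {f4, f5, f6}
```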
As illustrated above, if F k,j = 1, then robot j can support capability k. Therefore, by concatenating all the vectors F k , we obtain the desired linear mapping from capabilities to robots:
$$F = \begin{pmatrix} F_1^T & F_2^T & \cdots & F_{n_c}^T \end{pmatrix}^T.$$
As such, we can define a feasible assignment as one where all the capabilities required by each task t are supported by at least a given number of robots assigned to task t, i.e.
$$\sum_{i \in R_t} F_{-,i} \geq T_{t,-}, \qquad (5)$$
where the notation T_{t,−} and F_{−,i} is used to denote the t-th row of T and the i-th column of F. The inequality in (5) holds element-wise, and R_t denotes the set of robots assigned to task t. T_{t,k} = n indicates that at least n of the robots assigned to task t need to exhibit capability k. Notice that, in order to ensure the satisfaction of inequality (5) for some set of robots R_t, it is necessary that enough features are available among the robots as are required for the execution of each task.
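Putting the pieces of the running example together, the following sketch (ours; robot and task indices 0-based) assembles F from the H_k and A given above and checks the feasibility condition (5) for a candidate set of robots R_t:

```python
import numpy as np

T = np.array([[1, 1, 0],
              [0, 0, 1]])                       # Capability-to-Task mapping
A = np.array([[1, 0, 0, 0], [1, 1, 0, 0], [1, 1, 0, 0],
              [0, 1, 1, 1], [0, 0, 1, 1], [0, 0, 0, 1]])
H = [np.array([[1/2, 1/2, 0, 0, 0, 0],
               [0,   0,   1, 0, 0, 0]]),        # c1: bundle {f1,f2} or {f3}
     np.array([[0, 0, 1, 0, 0, 0],
               [0, 0, 0, 1, 0, 0]]),            # c2: bundle {f3} or {f4}
     np.array([[0, 0, 0, 1/3, 1/3, 1/3]])]      # c3: bundle {f4,f5,f6}

def kron_n(x, n):
    return np.isclose(x, n).astype(float)

# Stack Eq. (3) for every capability into the Robot-to-Capability mapping F
F = np.vstack([np.max(kron_n(Hk @ A, 1), axis=0) for Hk in H])

def feasible(task, robots):
    # Eq. (5): capabilities supplied by robots in R_t must cover row T[task]
    return bool(np.all(F[:, robots].sum(axis=1) >= T[task, :]))

print(F)                   # rows c1..c3, columns r1..r4
print(feasible(1, [3]))    # t2 needs c3, which r4 supports -> True
print(feasible(0, [2]))    # r3 alone cannot supply c1 for t1 -> False
```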
E. Weights Extension
In Subsection III-C, a binary mapping from features to capabilities leveraging the notion of hyper-edges was presented. In other words, depending on its features, a robot either fully exhibited a capability or did not exhibit it at all. However, in many real-world applications this mapping is not necessarily binary. For example, continuous tracks perform better than ordinary wheels in navigating uneven terrain, and therefore the hyper-edge containing the continuous-track feature should be assigned a higher weight. This notion can be captured by introducing a weight associated with each hyper-edge, leading to the following more general form of (3):
$$F_k = \max\left(W_k\, \mathrm{kron}_1(H_k A)\right), \qquad (6)$$
where W_k is a diagonal matrix whose diagonal elements w_{k,1}, . . . , w_{k,n_{c_k}} specify the quality of each hyper-edge at exhibiting capability k, as depicted in Fig. 1. Given this refined definition of F_k, the inequality in (5) ensures that each capability required by task t is exhibited by at least one robot assigned to task t. In the next subsection, we introduce a specialization matrix which encodes which tasks a given robot is a valid candidate for.
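A small sketch of the weighted variant (6); the weight values in W_1 below are invented purely for illustration, assuming the bundle {f1, f2} supports c1 better than {f3}:

```python
import numpy as np

A = np.array([[1, 0, 0, 0], [1, 1, 0, 0], [1, 1, 0, 0],
              [0, 1, 1, 1], [0, 0, 1, 1], [0, 0, 0, 1]])
H1 = np.array([[1/2, 1/2, 0, 0, 0, 0],   # bundle {f1, f2}
               [0,   0,   1, 0, 0, 0]])  # bundle {f3}
W1 = np.diag([1.0, 0.6])                 # hypothetical hyper-edge weights

def kron_n(x, n):
    return np.isclose(x, n).astype(float)

# Eq. (6): weight each hyperedge row before the column-wise max
F1 = np.max(W1 @ kron_n(H1 @ A, 1), axis=0)
print(F1)  # r1 supports c1 via the better bundle, r2 only via {f3}
```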
F. Specialization Matrix
To conclude the model of robot heterogeneity used within the task allocation framework proposed in this paper, we now define the requirements for a robot to be considered as a potential candidate for a task. As opposed to our previous work [4], where the specialization matrices were assumed to be given, we leverage the above developed feature and capability models to compute the specialization matrix of robot i as follows:
$$S_i = \mathrm{diag}\left(\mathbf{1}_{n_t} - \mathrm{kron}_0(T F_{-,i})\right) \in \mathbb{R}^{n_t \times n_t}, \qquad (7)$$
where, for m ∈ R^n, diag(m) = M ∈ R^{n×n} such that

$$M_{ij} = \begin{cases} m_i & \text{if } i = j \\ 0 & \text{otherwise,} \end{cases}$$
1_{n_t} is a vector of dimension n_t whose entries are all equal to 1, and kron_0 denotes the Kronecker delta function defined in (4) applied element-wise. As a result, the specialization of robot i towards task j, s_{ij}, is given by
$$s_{ij} = \begin{cases} 1 & \text{if } T_{j,-} F_{-,i} > 0 \\ 0 & \text{otherwise,} \end{cases}$$
i.e. s_{ij} = 1 if robot i exhibits at least one capability required by task j. The motivation behind this choice is two-fold: robots are allowed to combine their capabilities to satisfy a task, and there is no notion of priority between capabilities (i.e., exhibiting capability 1 is not considered more or less crucial than exhibiting capabilities 2 and 3). The former indicates that if a robot exhibits even a single capability relevant to the task, it may still be able to contribute, whereas the latter indicates that there is no possible ordering of the candidates in terms of specialization. Finally, as will be shown in Section VI, the specialization matrix can be adapted on the fly. For example, in case a robot loses a feature (e.g. its camera malfunctions), by removing the edges between the robot and the feature, we can re-compute which capabilities the malfunctioning robot can still exhibit.
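For the running example, the specialization matrices (7) can be sketched as follows (code ours; F is the Robot-to-Capability mapping computed earlier):

```python
import numpy as np

T = np.array([[1, 1, 0],
              [0, 0, 1]])          # tasks x capabilities
F = np.array([[1, 1, 0, 0],
              [1, 1, 1, 1],
              [0, 0, 0, 1]])       # capabilities x robots (running example)

def kron_n(x, n):
    return np.isclose(x, n).astype(float)

def specialization(i):
    # Eq. (7): S_i has a 1 on diagonal entry j iff robot i exhibits
    # at least one capability required by task j
    return np.diag(1.0 - kron_n(T @ F[:, i], 0))

print(specialization(0))  # r1 (supports c1, c2): candidate for t1 only
print(specialization(3))  # r4 (supports c2, c3): candidate for both tasks
```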
IV. TASK EXECUTION AND PRIORITIZATION
This section develops a minimum energy task allocation framework, through prioritization and execution, that explicitly accounts for the heterogeneity of the robots expressed in terms of their capabilities, as well as specifications on the capabilities required to execute each task. Moreover, we demonstrate how the proposed task allocation framework introduces a degree of resilience, allowing the robots to react, for instance, to component failures and, more generally, to unmodeled or unexpected environmental conditions.
As stated in Section II-A, we consider a team of n r robots tasked with executing n t different tasks in the environment. We model the dynamics of each robot i ∈ {1, . . . , n r } with a control-affine dynamical system:
$$\dot{x}_i = f(x_i) + g(x_i) u_i, \qquad (8)$$
where f and g are locally Lipschitz continuous vector fields, x i ∈ X ⊆ R nx is the state of the robot, and u i ∈ U ⊆ R nu is the input. Note that, in this paper, we assume that all robots obey the same dynamics given in (8), however, the entire formulation can be extended in a straightforward fashion to the case where individual robots have different dynamics.
As done in [16], we use Control Barrier Functions (CBFs) (see [34] for a review on the subject) to encode the set-based tasks that the robots are required to execute. To this end, in the following we briefly recall the definition and the main properties of CBFs, which will be used in the rest of the paper to formulate the task prioritization and execution framework.
Definition 1 ([34]). Let C ⊂ D ⊂ R^n be the zero superlevel set of a continuously differentiable function h : D → R. Then h is a control barrier function (CBF) if there exists an extended class K_∞ function γ such that, for the control affine system

$$\dot{x} = f(x) + g(x)u, \quad x \in \mathbb{R}^{n_x}, \; u \in \mathbb{R}^{n_u},$$

one has

$$\sup_{u \in U} \left\{ L_f h(x) + L_g h(x) u \right\} \geq -\gamma(h(x))$$

for all x ∈ D.
The notation L_f h(x) and L_g h(x) is used to denote the Lie derivative of h along the vector fields f and g, respectively. Given this definition of CBFs, the following theorem highlights how they can be used to ensure both set forward invariance and stability [35].
Theorem 1 ([34]). Let C ⊂ R^n be a set defined as the zero superlevel set of a continuously differentiable function h : D ⊂ R^n → R. If h is a control barrier function on D and ∂h/∂x (x) ≠ 0 for all x ∈ ∂C, then any Lipschitz continuous controller

$$u(x) \in \left\{ u \in U : L_f h(x) + L_g h(x) u + \gamma(h(x)) \geq 0 \right\}$$

for the system ẋ = f(x) + g(x)u, x ∈ R^{n_x}, u ∈ R^{n_u}, renders the set C forward invariant. Additionally, the set C is asymptotically stable in D.
The results of this theorem will be used in the remainder of this section to design a control framework that allows a heterogeneous multi-robot system to prioritize and perform a set of tasks that need to be executed.
A. Constraint-Driven Task Execution
The formulation adopted in this paper in terms of extended set-based tasks [16] allows us to encode a large variety of tasks: these are tasks characterized by a set, which is to be rendered either forward invariant (usually referred to as safety in dynamical system theory [34]), or asymptotically stable, or both. The results recalled above suggest the use of CBFs to encode these kinds of tasks. Indeed, CBFs have been successfully used to encode a variety of such tasks for different robotic platforms, ranging from coordinated control of multi-robot systems [4] to multi-task prioritization for robotic manipulators [16]. In particular, in [16] the definition of extended set-based tasks, i.e. tasks which consist in the state x approaching a set (stability) or remaining within a set (safety), is formalized.
As shown in [10], among the extended set-based tasks there is a class of coordinated multi-robot tasks which are executed through the minimization of a cost function, realized, for instance, by gradient-flow-like control laws [36]. These types of tasks can be recognized as extended set-based tasks in which the set of stationary points of the cost function has to be rendered asymptotically stable. In [10], it is shown how the execution of these tasks can be turned into a constrained optimization problem, a formulation amenable to long-duration robot autonomy [3].
To make matters more concrete, consider the continuously differentiable positive definite (energy-like) cost J : R^{n_x} → R, which is a function of the robot state x_i, whose dynamics are assumed to be control affine, as in (8). Then, it is shown in [10] how the execution of the task characterized by the minimization of the cost function J can be realized by solving the following constrained optimization problem:

$$\begin{aligned}
&\underset{u_i, \delta_i}{\text{minimize}} \;\; \|u_i\|^2 + \delta_i^2 &&(9)\\
&\text{subject to} \;\; L_f h(x_i) + L_g h(x_i) u_i \geq -\gamma(h(x_i)) - \delta_i,
\end{aligned}$$

where the task is encoded by the constraint, in which h(x_i) = −J(x_i) is a CBF that renders the safe set

$$C = \{x_i \in \mathbb{R}^{n_x} : h(x_i) \geq 0\} = \{x_i \in \mathbb{R}^{n_x} : J(x_i) \leq 0\} = \{x_i \in \mathbb{R}^{n_x} : J(x_i) = 0\}$$
asymptotically stable. In (9), γ is an extended class K_∞ function and δ_i is a slack variable which quantifies the extent to which the constraint can be relaxed. In cases where the completion of a task (a stationary point of J) is characterized by a strictly positive value of the cost J, δ_i ensures that the optimization program (9) remains feasible (see [10]).
In multi-task multi-robot settings, this framework naturally allows robots to combine multiple constraints, each representing a task, into a single framework. For tasks encoded via CBFs h_m, m ∈ {1, . . . , n_t}, the constraint-based optimization problem for robot i can be written as

$$\begin{aligned}
&\underset{u_i, \delta_i}{\text{minimize}} \;\; \|u_i\|^2 + \|\delta_i\|^2 &&(10)\\
&\text{subject to} \;\; L_f h_m(x) + L_g h_m(x) u_i \geq -\gamma(h_m(x)) - \delta_{im}, \quad \forall m \in \{1, \ldots, n_t\},
\end{aligned}$$

where δ_i = [δ_{i1}, . . . , δ_{in_t}]^T collects the slack variables corresponding to each task being executed by robot i. The tasks encoded by the CBFs h_m are not restricted to depend only on the state of robot i: they may depend on the ensemble state of the robots x = [x_1^T, . . . , x_{n_r}^T]^T, thus allowing the framework to encompass coordinated multi-robot tasks.
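To make the structure of (9) and (10) concrete, here is a minimal single-robot, single-task sketch (ours, with illustrative values): single-integrator dynamics ẋ = u (so L_f h = 0 and L_g h = ∇h^T), cost J(x) = ½‖x − x_goal‖², CBF h = −J, and γ(h) = h. With a single linear constraint a^T u + δ ≥ b, minimizing ‖u‖² + δ² is a projection of the origin onto a half-space and has the closed-form solution z* = max(b, 0)·c/‖c‖² with z = [u; δ], c = [a; 1], so no QP solver is needed:

```python
import numpy as np

def cbf_qp_step(x, x_goal):
    """One step of the relaxed CBF QP (9) for x_dot = u,
    J = 0.5*||x - x_goal||^2, h = -J, gamma(h) = h.
    Single constraint -> closed-form minimum-norm solution."""
    e = x - x_goal
    h = -0.5 * e @ e               # CBF value (<= 0 away from the goal)
    a = -e                         # L_g h(x); L_f h(x) = 0 for x_dot = u
    b = -h                         # constraint: a @ u + delta >= -gamma(h)
    c = np.append(a, 1.0)          # stacked decision variable z = [u; delta]
    z = max(b, 0.0) * c / (c @ c)  # projection of 0 onto {z : c @ z >= b}
    return z[:-1], z[-1]           # control input u and slack delta

x, x_goal = np.array([2.0, 0.0]), np.zeros(2)
u, delta = cbf_qp_step(x, x_goal)
print(u, delta)  # u points from x toward the goal; delta >= 0
```

Iterating x ← x + Δt·u drives the robot toward x_goal with minimum control effort; the multi-task problem (10) simply stacks one such constraint per CBF h_m, at which point a generic QP solver is used instead of the closed form.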
With this framework in place, the slack variables δ i present a natural way of encoding task priorities for the individual robots. This will be the subject of the next section, where the main task allocation framework is presented.
B. Task Prioritization and Execution Algorithm
In Section III, we presented a framework to model robot heterogeneity, exhibited in the different suitability of each robot for different tasks, starting from the lower-level concepts of robot features and capabilities. In this section, we leverage the expressiveness of this model in order to render the task prioritization framework, presented in [4] and improved in [11], resilient.
The optimization-based formulation extends the one in (10) as follows:
Task allocation optimization problem (MIQP):

$$\begin{aligned}
&\underset{u, \delta, \alpha}{\text{minimize}} \;\; \sum_{i=1}^{n_r} \left( C \left\| \Pi_i \alpha_{-,i} \right\|^2 + \|u_i\|^2 + l \left\| \delta_i \right\|^2_{S_i} \right) &&(11\text{a})\\
&\text{subject to} \;\; L_f h_m(x) + L_g h_m(x) u_i \geq -\gamma(h_m(x)) - \delta_{im} &&(11\text{b})\\
&\qquad\qquad \Theta \delta_i + \Phi \alpha_{-,i} \leq \Psi &&(11\text{c})\\
&\qquad\qquad \mathbf{1}_{n_t}^T \alpha_{-,i} \leq 1 &&(11\text{d})\\
&\qquad\qquad F \alpha_{m,-}^T \geq T_{m,-}^T &&(11\text{e})\\
&\qquad\qquad n_{r,m,\min} \leq \mathbf{1}^T \alpha_{m,-}^T \leq n_{r,m,\max} &&(11\text{f})\\
&\qquad\qquad \|\delta_i\|_\infty \leq \delta_{\max} &&(11\text{g})\\
&\qquad\qquad \alpha \in \{0,1\}^{n_t \times n_r} &&(11\text{h})\\
&\qquad\qquad \forall i \in \{1, \ldots, n_r\}, \; \forall m \in \{1, \ldots, n_t\},
\end{aligned}$$
where C, l ∈ R_{≥0} are parameters of the optimization, δ_max signifies the maximum extent to which each task constraint can be relaxed, and γ is a continuously differentiable class K_∞ function. The matrix Π_i is a projection matrix, defined in (13), which accounts for the heterogeneous capabilities of the multi-robot system, as explained in detail later.
First of all, as done in Section III, the symbols X_{i,−} and X_{−,j} denote the i-th row and the j-th column of the matrix X, respectively. The introduction of the matrix of task priorities α ∈ {0,1}^{n_t×n_r} in the optimization problem is what determines the prioritization (and, therefore, the allocation) of the tasks for each robot. This is realized through the constraint (11c), where the matrices Θ, Φ ∈ R^{n_t(n_t−1)/2 × n_t} and the vector Ψ ∈ R^{n_t(n_t−1)/2} enforce constraints among different components of the vectors of task relaxation parameters δ_i. As extensively discussed in [4], the constraint
$$\delta_{in} \geq \kappa\, \delta_{im} - \delta_{\max}(1 - \alpha_{mi}), \quad n \neq m, \qquad (12)$$

which can be written in the form (11c), realizes the following two implications:
$$\alpha_{mi} = 1 \;\Longrightarrow\; \delta_{im} \leq \tfrac{1}{\kappa}\, \delta_{in} \quad \forall n \in \{1, \ldots, n_t\} \setminus \{m\},$$
which implies that task m has highest priority for robot i, and
$$\alpha_{mi} = 0 \;\Longrightarrow\; \delta_{im} \leq \delta_{\max} + \tfrac{1}{\kappa}\, \delta_{in} \quad \forall n \in \{1, \ldots, n_t\} \setminus \{m\},$$
which implies that task m does not have the highest priority for robot i. In fact, in light of constraint (11g), no further constraints are enforced on δ_im, since δ_max is the maximum value |δ_im| is allowed to achieve. Notice further that, for the way it is used in (1), the optimal value of δ will always be non-negative (see also the analyses in [10], [4]). The constraint (11d) ensures that each robot has at most one task to execute with highest priority, which turns the task prioritization formulation into an effective task allocation. Notice that, compared to our previous work [4], (11d) is here turned from an equality into an inequality constraint. In [4], this constraint was used to ensure that no feasible solution allowed robots to trade off task execution for energy saving. In the enhanced formulation presented here, this is no longer necessary thanks to constraint (11e), whose meaning will be described in the following. Consequently, we can now account for situations where no tasks are allocated to some of the robots, effectively implementing the concept of autonomy-on-demand in the context of task allocation.
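The big-M mechanism behind (12) can be checked numerically. The sketch below is illustrative only: the values κ = 10 and δ_max = 5 are assumptions, the helper name is hypothetical, and the big-M term is scaled by κ so that the two implications above hold exactly.

```python
def priority_constraint_ok(delta_i, alpha_i, m, kappa=10.0, delta_max=5.0):
    """Check, for robot i and candidate top-priority task m, the big-M
    constraint of (12):
        delta_in >= kappa * (delta_im - delta_max * (1 - alpha_im)),  n != m."""
    return all(delta_i[n] >= kappa * (delta_i[m] - delta_max * (1 - alpha_i[m]))
               for n in range(len(delta_i)) if n != m)

# alpha_im = 1: task m = 0 has highest priority, so its relaxation must be
# at most 1/kappa of every other relaxation.
print(priority_constraint_ok([0.1, 2.0], [1, 0], m=0))   # True:  0.1 <= 2.0 / 10
print(priority_constraint_ok([1.0, 2.0], [1, 0], m=0))   # False: 1.0 >  2.0 / 10
# alpha_im = 0: the big-M term delta_max deactivates the constraint.
print(priority_constraint_ok([1.0, 2.0], [0, 1], m=0))   # True
```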
The constraint (11e) is what allows us to specify the minimum capabilities required for each task, expressed by the matrices T and F defined in Sections III-A and III-D, respectively. Moreover, the constraint (11f) allows us to enforce the minimum and maximum number of robots required for each task, thus giving a lot of flexibility and versatility to be utilized in many different application scenarios. In Section VI, experiments performed on a real multi-robot system will showcase the use of these constraints.
As in our previous works [4] and [11], the cost of the optimization problem (11) is composed of 3 terms. The last two terms in (11a) correspond to the control effort spent by the robots and the magnitude of the relaxation parameters, respectively. The former enables our framework to be compatible with long-duration autonomy applications. More specifically, robots can remain operational over sustained periods of time by minimizing the energy spent to perform a task-which is proportional to control effort-together with enforcing energy constraints as, e.g., in [13], [14]. The latter, instead, ensures that the tasks to which the robots have been assigned get indeed executed, thanks to constraint (11b). The norm of δ i corresponding to robot i is weighted by the specialization matrix of robot i, S i . This way, the relaxation variables corresponding to tasks that robot i is not capable of performing (i.e. with a low value of the entry of the specialization matrix) are weighted accordingly less.
Finally, the first term in (11a) is introduced to penalize bad allocations of tasks to robots, as explained in the following. The matrix Π i is defined as follows:
$$\Pi_i = I_{n_t} - S_i S_i^{\dagger}, \tag{13}$$
where $I_{n_t}$ is the $n_t \times n_t$ identity matrix, and $S_i^{\dagger}$ is the right Moore–Penrose inverse [37] of the specialization matrix $S_i$ of robot i. It is easy to see that Π_i is the projector onto the orthogonal complement of the subspace of specializations possessed by robot i. Assume, for example, that robot i has no specialization at all at performing task k (i.e. s_ik = 0) and has a non-zero specialization s_ij at performing task j, j ≠ k. Then, its specialization matrix S_i is given by

$$S_i = \mathrm{diag}\big([s_{i1}, \ldots, s_{in_t}]\big), \quad s_{ik} = 0.$$

Then, the projector Π_i in the cost (11a) contributes a non-zero cost whenever the components of α_{-,i} corresponding to tasks that robot i has no specialization to perform are not zero, i.e. when robot i has been assigned a task that it is not able to perform, referred to above as a bad allocation.
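The projector (13) can be computed directly with NumPy's pseudoinverse. A minimal sketch, with illustrative specialization values (the robot can perform task 1 but not task 2):

```python
import numpy as np

n_t = 2
# Hypothetical specialization matrix: s_i1 = 0.8, s_i2 = 0.
S_i = np.diag([0.8, 0.0])

# Projector onto the orthogonal complement of the specialization subspace (13).
Pi_i = np.eye(n_t) - S_i @ np.linalg.pinv(S_i)

good_alloc = np.array([1.0, 0.0])  # task 1 assigned: within the specialization subspace
bad_alloc  = np.array([0.0, 1.0])  # task 2 assigned: robot has no specialization for it

print(np.linalg.norm(Pi_i @ good_alloc) ** 2)  # 0.0 -> no penalty in (11a)
print(np.linalg.norm(Pi_i @ bad_alloc) ** 2)   # 1.0 -> penalized
```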
Remark 1 (Centralized Mixed-Integer Quadratic Program).
Notice that in (11) there is a coupling between the robots through the cost as well as the constraints. This means that the task allocation framework has to be solved in a centralized fashion. Moreover, the matrix of task priorities α is integer. This renders (11) a mixed-integer quadratic program (MIQP). A QP-relaxation approach, as well as ways of solving this framework in a decentralized way, are discussed in [4]. In Section V of this paper, we show how the proposed MIQP can be solved in a mixed centralized/decentralized fashion, and we analyze its performance compared to the centralized approach.
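For small instances, the integer part of such an MIQP can be handled by exhaustive enumeration of the binary allocations. The sketch below is a hypothetical stand-in, not the solver used in the paper: it enumerates candidate columns of α satisfying (11d), scores each with the projector penalty of (11a) plus a surrogate task-coverage cost in place of the continuous inner QP in u and δ, and keeps the minimizer.

```python
import itertools
import numpy as np

def brute_force_allocation(Pi, inner_cost, n_t, n_r, C=1.0):
    """Enumerate binary allocation matrices alpha (each robot gets at most one
    leading task, per (11d)) and keep the one minimizing
    C * sum_i ||Pi_i alpha_{-,i}||^2 + inner_cost(alpha)."""
    # candidate columns: the zero vector or a single one-hot vector
    cols = [np.zeros(n_t)] + [np.eye(n_t)[m] for m in range(n_t)]
    best, best_cost = None, np.inf
    for combo in itertools.product(cols, repeat=n_r):
        a = np.stack(combo, axis=1)                  # n_t x n_r
        cost = C * sum(np.linalg.norm(Pi[i] @ a[:, i]) ** 2 for i in range(n_r)) \
               + inner_cost(a)
        if cost < best_cost:
            best, best_cost = a, cost
    return best, best_cost

# Two robots, two tasks; robot 0 specialized only in task 0, robot 1 in both.
S = [np.diag([0.9, 0.0]), np.diag([0.7, 0.8])]
Pi = [np.eye(2) - s @ np.linalg.pinv(s) for s in S]
# surrogate inner cost penalizing every task left unassigned
inner = lambda a: 10.0 * np.sum(1 - np.minimum(a.sum(axis=1), 1))
alpha, _ = brute_force_allocation(Pi, inner, n_t=2, n_r=2)
print(alpha)  # robot 0 -> task 0, robot 1 -> task 1
```

This brute force is exponential in the number of robots, which is precisely why Section V introduces a mixed centralized/decentralized alternative.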
Remark 2 (Time-varying and sequential tasks). Expressing tasks by means of control barrier functions, besides rendering the task execution and allocation particularly amenable for online-optimization-based controllers, allows us to account for time-varying and sequential tasks, comprised by a sequence of sub-tasks, as well. In fact, the time-varying extension of control barrier functions (see, e.g., [13]) can be leveraged to consider tasks which have an explicit dependence on time. In the experimental section, we show how this extension of the proposed task allocation and execution framework can be used to implement state-trajectory-tracking tasks. Moreover, thanks to the pointwise-in-time nature of the developed optimization program, tasks can be removed and inserted in a continuous fashion, as demonstrated in [16]. This allows for a flexible implementation of sequential tasks which require the completion of a sub-task before another one can be started, as done in [38]. In the same way, the features and the specialization of the robots towards different tasks can be modified during the execution of the task. In the next subsection, we present an approach to leverage time-varying specialization in order to adapt to disturbances or modeled phenomena in the environment.
The following algorithm summarizes the application of the optimization-based allocation and execution framework to a multi-robot system with heterogeneous capabilities.
Algorithm 1 Task allocation and execution
Require:
  Tasks h_m, m ∈ {1, . . . , n_t}
  Mappings F, T
  Parameters n_{r,m,min}, n_{r,m,max}, δ_max, C, l
1: Evaluate S_i, ∀i ∈ {1, . . . , n_r} (7)
2: while true do
3:   Solve the MIQP (11)
4:   Compute robot input u_i, ∀i ∈ {1, . . . , n_r}
5:   Send input u_i, ∀i ∈ {1, . . . , n_r} to robots and execute
6: end while

We conclude this subsection by showcasing the execution of Algorithm 1 in an explanatory example featuring the use of the allocation constraints described so far.

Figure 2. In Fig. 2a, 4 robots (gray triangles) have to be allocated to 2 tasks and, as a result of the execution of (11), robots r_2 and r_4 are assigned to tasks t_1 and t_2, respectively, based on their specialization introduced in Fig. 1. In Fig. 2b, the additional constraint that at least 2 robots are required to execute task t_1 is introduced (n_{r,1,min} = 2), and robot r_3 is picked together with r_2 to perform t_1. The resulting trajectories of the robots are depicted as dashed lines.
Example 2. Consider 4 mobile robots moving in a 2-dimensional space, tasked with performing 2 tasks. For clarity of exposition, in this example, we model each robot i as a single integrator ẋ_i = u_i, so f and g in (8) are the zero and identity map, respectively, and each task consists in going to a point of the state space. In Fig. 2, the robots are depicted as gray triangles and labeled r_1 to r_4, whereas the locations corresponding to the tasks are labeled t_1 and t_2. The features, capabilities, and task mappings have been set as in Fig. 1, where the numerical quantities are given in Section III. So, per (7), robots r_1, r_2 and r_3 are only specialized to perform task t_1, while robot r_4 is specialized to perform both tasks.
For the scenario depicted in Fig. 2a, tasks t_1 and t_2 need to be executed and there is no further constraint on the amount of capability or the number of robots required for a task. As a result of the execution of Algorithm 1, the trajectories (red and yellow) show two of the robots performing the two tasks. In particular, robot r_4 is assigned to task t_2, while robot r_2 has been allocated to task t_1 (the only one it can perform). Robots r_1 and r_3 have remained at their initial positions with no task assigned to them, as r_2 and r_4 already satisfied the task constraints in (11).
In the scenario depicted in Fig. 2b, instead, n r,1,min = 2, i.e. at least 2 robots are required for the execution of task t 1 . Driven by the control inputs u 2 and u 3 calculated according to Algorithm 1, robots r 2 and r 3 are assigned to task t 1 , while r 4 is assigned to t 2 (as it is the only robot possessing the specialization for it), as can be seen in Fig. 2b.
C. Resilience of the Task Allocation Algorithm
In this subsection, we introduce two distinct methods that render the task allocation and execution framework presented above resilient to environmental disturbances and robot feature failures. To achieve this, we leverage the fact that the optimization problem presented in (11) is solved point-wise in time, and thus can be integrated along with online updates to the specializations and capabilities of the robots to construct a feedback loop as depicted in Fig. 3.
We begin by introducing an update law that changes the specialization values of the robots based on their measured versus expected progress at completing the tasks they are allocated to. This allows the framework to account for exogenous disturbances, i.e., disturbances that are not detectable, not explicitly modeled, or simply unknown [39]. When the disturbances are endogenous (e.g., a detectable sensor malfunction), we can directly account for them by modifying the corresponding values in the mappings introduced in Section III.
1) Exogenous disturbances: In certain deployment scenarios, the specializations of robots towards the tasks might be unknown prior to the deployment of the team, or might vary due to changes in the environmental conditions. For the remainder of this paper, we refer to all such disturbances that cannot be modeled (i.e. that cannot be accounted for in F) as exogenous disturbances. These also include undetectable failures of robot components, which are only reflected, and can therefore only be detected, in the way the robot executes the assigned task. In these cases, we would like to update the specialization parameters s_ij on the fly to account for such changes. As described in [11], this is achieved by updating the parameters s_ij at each time step k based on the difference between the expected and actual effectiveness of the task allocation and execution framework, where we assume that this difference manifests itself in terms of variations in the dynamical model of the robot. At discretized time intervals kΔt, k ∈ N, let $x^{(k)}_{\text{act}}$ denote the actual ensemble state of the multi-robot system and $^{(i)}x^{(k)}_{\text{sim}}$ the ensemble state simulated by robot i assuming that it itself obeyed its nominal dynamics with all the other robots being stationary. The simulated states can then be evaluated as follows:
$$^{(i)}x^{(k)}_{\text{sim},j} = \begin{cases} x^{(k-1)}_{\text{act},i} + \Delta x^{(k-1)}_i \Delta t & \text{if } j = i\\[2pt] x^{(k-1)}_{\text{act},j} & \text{if } j \neq i, \end{cases} \tag{14}$$

where $^{(i)}x^{(k)}_{\text{sim},j}$ denotes the j-th component of $^{(i)}x^{(k)}_{\text{sim}}$, and $\Delta x^{(k-1)}_i$ is defined as

$$\Delta x^{(k-1)}_i = f\big(x^{(k-1)}_{\text{act},i}\big) + g\big(x^{(k-1)}_{\text{act},i}\big)\, u^{(k-1)}_i,$$

$u^{(k-1)}_i$ being the input evaluated by solving (11) at time $(k-1)\Delta t$. Using $^{(i)}x^{(k)}_{\text{sim}}$, robot i can measure its contribution towards the difference between the simulated and the actual progress in the completion of task m at time step k as follows:

$$\Delta h^{(k)}_{im} = \min\Big\{0,\; h_{im}\big(x^{(k)}_{\text{act}}\big) - h_{im}\big({}^{(i)}x^{(k)}_{\text{sim}}\big)\Big\}, \tag{15}$$
where $h_{im}\big({}^{(i)}x^{(k)}_{\text{sim}}\big)$ and $h_{im}\big(x^{(k)}_{\text{act}}\big)$ are the simulated and actual values of the CBF corresponding to robot i and task m at time step k, respectively. Note that the min operator in (15) is used to prevent $\Delta h^{(k)}_{im}$ from being positive. This situation may occur due to the coordinated nature of multi-robot tasks, where robot i need not know the actions of its neighbors, which could result in unpredictable positive variations of $h_{im}$.
We assume that the CBF corresponding to each task, $h_m$, is decomposable into the respective contributions of each robot i, $h_{im}$. This assumption holds for a large number of coordinated control tasks, such as multi-robot coverage control and formation control [36], and allows each robot to assess its own effectiveness at executing a task by measuring $\Delta h^{(k)}_{im}$. In fact, if $\Delta h^{(k)}_{im} < 0$, robot i's actual effectiveness at accomplishing task m is lower than anticipated. Consequently, one can model the evolution of the specialization of robot i at task m according to the following update law:
$$s^{(k+1)}_{im} = s^{(k)}_{im} + \beta\, \alpha^{(k)}_{im}\, \Delta h^{(k)}_{im}, \tag{16}$$
where $\beta \in \mathbb{R}_{>0}$ is a constant controlling the update rate. Note that the update only occurs for tasks to which the robots are assigned, since $\alpha^{(k)}_{im} = 1$ if and only if robot i is assigned to task m at time step k. This update law renders the framework resilient to unknown environmental disturbances, since the updates of the specialization matrix track the measured performance of the robots under the varying environmental conditions. Although it is beyond the scope of this paper, in [11] we also show conditions under which the robot specialization lost because of the update law in (16) can be recovered over time. Algorithm 2 extends Algorithm 1, developed in the previous section, to account for exogenous disturbances.
Algorithm 2 Task allocation and execution resilient to exogenous disturbance
Require:
  Tasks h_m, m ∈ {1, . . . , n_t}
  Mappings F, T
  Parameters n_{r,m,min}, n_{r,m,max}, δ_max, C, l
1: Evaluate S_i, ∀i ∈ {1, . . . , n_r} (7)
2: while true do
3:   Solve the MIQP (11)
4:   Compute robot input u_i, ∀i ∈ {1, . . . , n_r}
5:   Send input u_i, ∀i ∈ {1, . . . , n_r} to robots to execute
6:   for all i ∈ {1, . . . , n_r} do
7:     Evaluate simulated robot state $^{(i)}x^{(k)}_{\text{sim}}$ (14)
8:     for all m ∈ {1, . . . , n_t} do
9:       Evaluate $\Delta h^{(k)}_{im}$ (15)
10:      Update $s_{im}$ (16)
11:    end for
12:  end for
13: end while
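The shortfall measurement (15) and update (16) can be sketched as follows; the value of β and the CBF values are illustrative assumptions, the simulated value h_sim is assumed to come from the nominal-dynamics prediction (14), and clipping s_im at zero is an added assumption to keep specializations non-negative.

```python
def update_specialization(s_im, alpha_im, h_act, h_sim, beta=0.5):
    """Specialization update (15)-(16): decrease s_im when the measured
    progress on task m falls short of the progress predicted by simulating
    the robot's nominal dynamics, cf. (14)."""
    dh = min(0.0, h_act - h_sim)                   # (15): only shortfalls count
    return max(0.0, s_im + beta * alpha_im * dh)   # (16), clipped at zero

# Robot assigned to task m (alpha_im = 1) progresses less than simulated:
print(update_specialization(0.8, 1, h_act=-1.0, h_sim=-0.7))  # ~0.65

# Unassigned robots (alpha_im = 0) keep their specialization:
print(update_specialization(0.8, 0, h_act=-1.0, h_sim=-0.7))  # 0.8
```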
The following example showcases the use of Algorithm 2 in a simplified scenario with 2 robots, 2 tasks, and an unmodeled, exogenous environmental disturbance.
Example 3. Consider the example depicted in Fig. 4. The 2 robots, r_1 and r_2 (shown as gray triangles, and modeled as 2-dimensional single integrators as in Example 2), are asked to execute 2 tasks t_1 and t_2. Their features and capabilities are depicted in Fig. 4a: both robots are capable of performing both tasks. Nevertheless, robot r_1 cannot traverse the region of the state space shaded in cyan (unmodeled disturbance), making the execution of task t_2 impossible for it. By implementing the control obtained by solving (11), robot r_1 is initially assigned to task t_2, while r_2 is assigned to t_1, as confirmed by the initial vertical segments of the dashed green and red trajectories of the robots. As r_1 reaches the circular cyan region, it is not able to advance anymore and the execution of Algorithm 2 makes its specialization towards task t_2 (represented by s_12) decrease according to (16), as depicted in Fig. 4b. When s_12 = 0, the allocation algorithm (11) swaps the allocation of tasks, as robot r_1 is no longer able to execute task t_2 to any extent. The final allocation satisfies the requirement that both tasks are executed.

Figure 4. Resilience of the task allocation algorithm to exogenous disturbances (Example 3). Robots r_1 and r_2 (gray triangles) have to execute tasks t_1 and t_2. They both possess the capabilities to perform both tasks (as pictorially shown in Fig. 4a), but r_1 is not capable of traversing the circular cyan-shaded region (representing the exogenous disturbance), rendering it practically impossible for it to execute task t_2. Based on (11), r_1 is initially assigned to t_2 and r_2 to t_1. As r_1 reaches the cyan zone, it is not able to proceed forward. According to Algorithm 2, per (16), s_12 (the specialization of r_1 to execute t_2, depicted in Fig. 4b as a function of time t) starts decreasing until it reaches 0. At this point, the allocation evaluated by (11) automatically changes and robots r_1 and r_2 are assigned to tasks t_1 and t_2, respectively, fulfilling, this way, the requirement that both tasks need to be executed. The trajectories of the robots resulting from the allocation algorithm are depicted as dashed lines in Fig. 4b.
2) Endogenous disturbances: We now shift our focus to cases where the disturbances to the model are known to the robots, a condition occurring in case of, e.g., detectable sensor malfunctions. We refer to this class of disturbances as endogenous disturbances, and we account for them by directly altering the mappings introduced in Section III. Specifically, by leveraging the feature representation, we directly alter the intermediate mappings (i.e., the Robot-to-Feature and Feature-to-Capability mappings) on the fly to reflect such changes. This is achieved by modifying the corresponding values in the mappings defined in Section III and re-computing the Robot-to-Capability matrix F and the specialization matrices S_i. Note that F is incorporated in the task allocation framework through the constraint (11e), which ensures that the task allocation among the robots reflects the change in F. For example, in case of a feature failure, the Robot-to-Feature mapping matrix A is altered to account for the failure. Moreover, in case of known environmental disturbances, the feature bundle weight vectors w_k are altered for each capability. Following the example from Fig. 1, if feature f_4 of robot r_2 malfunctions, we reflect that by altering the original A matrix from (2) to
$$A = \begin{bmatrix}
1 & 0 & 0 & 0\\
1 & 0 & 0 & 0\\
1 & 1 & 0 & 0\\
0 & 1 & 1 & 1\\
0 & 0 & 1 & 1\\
0 & 0 & 0 & 1
\end{bmatrix},$$
which is equivalent to removing the edge from robot r 2 to feature f 4 in the hypergraph from Fig. 1. Similarly, known external environmental disturbances such as weather or terrain conditions are modeled by altering the weight vectors w k introduced in subsection III-E.
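The effect of such an edit on the capabilities can be sketched numerically. The composition rule below (a capability is available iff all of its features are present) is an assumed stand-in for the actual mapping (3), which also involves the feature bundle weights w_k not reproduced in this excerpt; the matrices and feature labels are illustrative, loosely following Example 4.

```python
import numpy as np

# Hypothetical robot-to-feature matrix A (rows: features, columns: robots) and
# feature-to-capability matrix Bmap (rows: features, columns: capabilities).
A = np.array([[1, 1],    # f1: wheels
              [1, 1],    # f2: camera
              [1, 1]])   # f3: communication module
Bmap = np.array([[1, 0], # c1 (mobility) needs f1
                 [0, 1], # c2 (live streaming) needs f2 ...
                 [0, 1]])# ... and f3

def capability_matrix(A, Bmap):
    """Assumed stand-in for (3): capability k of robot i is available
    when robot i possesses every feature in the bundle of k."""
    need = Bmap.sum(axis=0)                              # features required per capability
    return (Bmap.T @ A == need[:, None]).astype(int)     # n_c x n_r

print(capability_matrix(A, Bmap))   # both robots provide c1 and c2

A[2, 0] = 0                         # endogenous disturbance: r1 loses f3
print(capability_matrix(A, Bmap))   # r1 loses capability c2
```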
The approach described in this section to cope with endogenous disturbances is summarized in Algorithm 3. To conclude the section, we present a final example to showcase the behavior resulting from the application of Algorithm 3.
Algorithm 3 Task allocation and execution resilient to endogenous disturbance
Require:
  Tasks h_m, m ∈ {1, . . . , n_t}
  Mappings F, T
  Parameters n_{r,m,min}, n_{r,m,max}, δ_max, C, l, β
1: Evaluate S_i, ∀i ∈ {1, . . . , n_r} (7)
2: while true do
3:   Solve the MIQP (11)
4:   Compute robot input u_i, ∀i ∈ {1, . . . , n_r}
5:   Send input u_i, ∀i ∈ {1, . . . , n_r} to robots to execute
6:   Update matrix A
7:   Re-evaluate matrices F and S_i (6), (7)
8: end while

Figure 5. Resilience of the task allocation algorithm to endogenous disturbances (Example 4). The 2 robots r_1 and r_2 (gray triangles) are asked to perform tasks t_1 and t_2. Initially, both robots are able to perform both tasks based on their possessed features shown in Fig. 5a. Solving (11) initially assigns robot r_1 to task t_2 and r_2 to t_1. At a certain time instant, A_31 transitions from 1 to 0, corresponding to the condition that robot r_1 loses feature f_3 (the dashed edge in Fig. 5a is lost). At this point, the constraint (11e) forces the task allocation to swap so that r_2, the only robot capable of providing capability c_2 for executing t_2, is assigned to it. This way, the requirement that both tasks are executed by at least one robot is satisfied. The trajectories of the robots are depicted as dashed lines in Fig. 5b.

Example 4. In this last example, 2 robots, r_1 and r_2, are considered, which possess the features depicted in Fig. 5a to execute 2 tasks, t_1 and t_2, thanks to the capabilities c_1 and c_2. The mappings from features to capabilities to tasks may represent the following scenario. Two robots are endowed with wheels (feature f_1) for mobility (capability c_1), as well as a camera (feature f_2) and a communication module (feature f_3) serving the ability of live streaming (capability c_2). Task t_1 consists in visiting a location of the state space of the robots, while task t_2 consists in visiting a location and live streaming a video feed. The endogenous disturbance consists in robot r_1 losing the communication functionality at a certain time instant, compromising its ability of performing task t_2, as it cannot live stream a video feed anymore. The dashed edge in Fig. 5a is lost, and the robot-to-feature mapping matrix A is modified by setting A_31 = 0. Fig. 5b depicts the trajectories of the robots under the initial allocation of robot r_1 to task t_2 and r_2 to t_1.
As A_31 = 0, the matrix F changes according to (3). Consequently, the constraint (11e) in the optimization problem (11) prevents robot r_1 from being allocated to task t_2. Thus, the task allocation swaps so that both tasks can be performed, as required.
V. ANALYSIS AND IMPLEMENTATION OF THE TASK PRIORITIZATION AND EXECUTION ALGORITHM
The definition of the optimization problem as in (11) gives rise to two main questions: (i) whether, despite its pointwise-in-time nature, the allocation algorithm generates a stable allocation of tasks among robots, and (ii) whether it can be solved in real time to allocate tasks to robots and synthesize control inputs which allow the robots to execute them. As far as (i) is concerned, a stable allocation is the desirable condition in which, with time-invariant parameters of the problem and no exogenous or endogenous disturbances, each robot does not continuously switch between the tasks it executes, but rather is able to accomplish one of them. Regarding (ii), (11) is a mixed-integer quadratic program and, as such, solving it in real time might be too computationally intensive.
To address these two issues, in Section V-A we present results on the convergence of the task prioritization and execution algorithm introduced in the previous section. These results guarantee that the allocation of tasks to a heterogeneous multi-robot system obtained by executing Algorithm 1 will converge, allowing the robots to complete the tasks that have been assigned to them. Moreover, in Section V-B, we present a mixed centralized/decentralized implementation of the developed task allocation algorithm which enables its application in online settings.
A. Analysis of Convergence of the Task Prioritization and Execution Algorithm
In cases where the tasks that the robots are asked to execute are neither coordinated nor time-varying (namely, the CBFs associated with them do not explicitly depend on the time variable), the following proposition shows that the application of the task allocation algorithm (11) leads to a convergent behavior of the robots, whose states converge to a stable equilibrium point.

Proposition 1. Consider $n_r$ robots modeled by the driftless dynamical system

$$\dot{x}_i = g(x_i)\, u_i, \tag{17}$$

executing the control input $u^{(k)}_i$ at time k, where $u^{(k)}$ is obtained by solving the task allocation algorithm (11) at time k in order to perform $n_t$ tasks. Assume that the tasks are characterized by the functions $h_1, \ldots, h_{n_t}$, which do not have an explicit dependence on time. Assume further that the tasks are not coordinated, i.e.

$$h_m(x) = \sum_{i=1}^{n_r} h_{m,i}(x_i) \quad \forall m \in \{1, \ldots, n_t\},$$

where

$$\frac{\partial h_{m,i}(x_i)}{\partial x_i}^{T}(x_i) = \lambda\big(h_{m,i}(x_i)\big), \tag{18}$$

λ being a class K function, and there exists a unique $x_{m,i}$ (corresponding to the state at which the task characterized by the function $h_{m,i}$ is completed) such that $h_{m,i}(x_{m,i}) = 0$. If all robots are capable of performing all tasks to a certain extent, i.e. the matrices $S_i$, $i \in \{1, \ldots, n_r\}$, are positive definite, then the sequences $\{u^{(k)}\}_{k\in\mathbb{N}}$, $\{\delta^{(k)}\}_{k\in\mathbb{N}}$, and $\{\alpha^{(k)}\}_{k\in\mathbb{N}}$, solutions of (11), converge as $k \to \infty$. In particular, $u^{(k)} \to 0$ and $\delta^{(k)} \to 0$.
Proof. Solving (11) at time k yields $u^{(k)}$, $\delta^{(k)}$, and $\alpha^{(k)}$. At time k+1, by Proposition 3 in [4], with $\alpha = \alpha^{(k)}$ and $J_m(x) = -h_m(x)$, if $\alpha^{(k+1)} = \alpha^{(k)}$ and $\delta^{(k+1)} = \delta^{(k)}$, then $\|u^{(k+1)}\| < \|u^{(k)}\|$ is obtained using (18). Let

$$V(\alpha, u, \delta) = \sum_{i=1}^{n_r} \Big( C\,\|\Pi_i \alpha_{-,i}\|^2 + \|u_i\|^2 + l\,\|\delta_i\|^2_{S_i} \Big)$$

be a candidate Lyapunov function for the multi-robot system controlled via the solutions of the optimization problem (11) (see the scheme in Fig. 3). Notice that V is equal to the cost (11a) and is positive definite, since $S_i$ is positive definite for all i by assumption. Then, one has:

$$V^{(k+1)} = V\big(\alpha^{(k+1)}, u^{(k+1)}, \delta^{(k+1)}\big) \le V\big(\alpha^{(k)}, u^{(k+1)}, \delta^{(k)}\big) < V\big(\alpha^{(k)}, u^{(k)}, \delta^{(k)}\big) = V^{(k)}.$$

Therefore, $V^{(k)} \to 0$ as $k \to \infty$. Thus, $u^{(k)} \to 0$ as $k \to \infty$, and $x^{(k)}_i \to x_{m,i}$ for some m, by the driftless assumption on the robot model (17).
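The convergence mechanism can be illustrated on a single-integrator go-to-goal task, a special case loosely matching the assumptions of Proposition 1. Choosing the go-to-goal CBF h = −‖x − goal‖² and γ the identity (both illustrative assumptions), the per-robot instance of the optimization reduces to one linear constraint, min ‖u‖² + l δ² s.t. aᵀu + δ ≥ b with a = −2(x − goal) and b = −h, whose KKT conditions give a closed form. All names below are hypothetical.

```python
import numpy as np

def cbf_qp_closed_form(x, goal, l=1.0):
    """Closed-form solution of  min ||u||^2 + l*delta^2  s.t.  a^T u + delta >= b,
    for h = -||x - goal||^2 and gamma the identity."""
    a = -2.0 * (x - goal)
    b = float(np.dot(x - goal, x - goal))        # b = -h >= 0, constraint active
    mu = 2.0 * b / (np.dot(a, a) + 1.0 / l)      # KKT multiplier
    return mu * a / 2.0, mu / (2.0 * l)          # u, delta

x, goal, dt = np.array([2.0, 1.0]), np.zeros(2), 0.1
u0, _ = cbf_qp_closed_form(x, goal)
for _ in range(500):
    u, delta = cbf_qp_closed_form(x, goal)
    x = x + dt * u                               # driftless dynamics (17)

print(np.linalg.norm(x))                         # small: x has approached the goal
print(np.linalg.norm(u) < np.linalg.norm(u0))    # True: u^(k) -> 0
```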
The application of the previous result is, however, restricted to a specific, yet quite rich, class of tasks. In situations where the assumptions of Proposition 1 are not satisfied, the following proposition provides sufficient conditions to ensure the convergence of the flow of the dynamical system comprised of the multi-robot system, characterized by its nonlinear dynamics, in feedback with the optimization problem embodying the task allocation algorithm (depicted in Fig. 3). The similarity between the system in Fig. 3 and the Lur'e problem [40] suggests resorting to techniques that have been widely adopted in the stability analysis of such systems, with the aim of studying the convergence of the task allocation algorithm we propose in this paper. Indeed, in the following proposition a quadratic Lyapunov function is proposed, and conditions to establish the convergence of the task allocation algorithm are given in the form of a linear matrix inequality (LMI) using the S-procedure [41], [42].
Proposition 2. If, for all integers k, there exist positive scalars $\tau_1, \tau_2, \tau_3$ such that

$$B^{(k)}_0 \le \tau_1 B^{(k)}_1 + \tau_2 B^{(k)}_2 + \tau_3 B^{(k)}_3, \tag{19}$$

where $B^{(k)}_0$, $B^{(k)}_1$, $B^{(k)}_2$, $B^{(k)}_3$ are given by (27) in Appendix A, in which $c \in \mathbb{R}_{>0}$, then the sequences $\{u^{(k)}\}_{k\in\mathbb{N}}$, $\{\delta^{(k)}\}_{k\in\mathbb{N}}$, and $\{\alpha^{(k)}\}_{k\in\mathbb{N}}$, solutions of (11) at time step k, converge as $k \to \infty$.
Proof. For notational convenience, let $\bar\alpha = [\alpha_{-,1}^T \ \ldots \ \alpha_{-,n_r}^T]^T \in \{0,1\}^{n_t n_r}$ be the vector composed of the stacked columns of α, and

$$\bar\Phi = \mathbf{1}_{n_r} \otimes \Phi, \quad \bar\Theta = \mathbf{1}_{n_r} \otimes \Theta, \quad \bar\Psi = \mathbf{1}_{n_r} \otimes \Psi,$$

⊗ denoting the Kronecker product. From (12) and with the notation introduced above, one has that

$$\bar\Phi \bar\alpha \ge 0,$$

where the symbol ≥ is always intended component-wise. Then, as $\delta \in \mathbb{R}_{\ge 0}$ (see the discussions in [10] and [4]), the constraints (11b) and (11c) in (11) can be re-written as follows:

$$\delta^{(k)T} L_f h(x^{(k)}) + \delta^{(k)T} L_g h(x^{(k)})\, u^{(k)} \ge -\delta^{(k)T} \gamma(h(x^{(k)})) - \delta^{(k)T} \delta^{(k)} \tag{20}$$

$$\bar\alpha^{(k)T} \bar\Phi^T \bar\Theta \delta^{(k)} + \bar\alpha^{(k)T} \bar\Phi^T \bar\Phi \bar\alpha^{(k)} \le \bar\alpha^{(k)T} \bar\Phi^T \bar\Psi. \tag{21}$$
Similarly, the constraints (11d) to (11g) can be re-written as

$$\bar\alpha^{(k)T} A_\alpha^T A_\alpha \bar\alpha^{(k)} \le \bar\alpha^{(k)T} A_\alpha^T b_\alpha \tag{22}$$
$$\delta^{(k)T} A_\delta^T A_\delta \delta^{(k)} \le \delta^{(k)T} A_\delta^T b_\delta. \tag{23}$$
Then, define the following candidate Lyapunov function:

$$V(x) = \gamma(h(x))^T \gamma(h(x)), \tag{24}$$

where $h(x) = [h_1(x), \ldots, h_{n_t}(x)]^T$ and γ(h(x)) is intended as a component-wise application of the extended class $K_\infty$ function to the vector h(x). We want the following condition on its time derivative to be satisfied at every time step k:

$$\dot V(x^{(k)}, u^{(k)}) = 2\gamma(h(x^{(k)}))^T \frac{d\gamma}{dh} \frac{dh}{dx} f(x^{(k)}) + 2\gamma(h(x^{(k)}))^T \frac{d\gamma}{dh} \frac{dh}{dx} g(x^{(k)})\, u^{(k)} \le -c\, V(x^{(k)}), \tag{25}$$

with $c \in \mathbb{R}_{>0}$. Defining $\varphi^{(k)} = [\gamma(h(x^{(k)})),\ u^{(k)},\ \delta^{(k)},\ \bar\alpha^{(k)},\ 1]^T$, the inequalities (25), (20), (21), (22), (23) can be compactly written as follows:

$$\varphi^{(k)T} B^{(k)}_0 \varphi^{(k)} \le 0, \quad \varphi^{(k)T} B^{(k)}_1 \varphi^{(k)} \le 0, \quad \varphi^{(k)T} B^{(k)}_2 \varphi^{(k)} \le 0, \quad \varphi^{(k)T} B^{(k)}_3 \varphi^{(k)} \le 0,$$
where B 0 , B 1 , B 2 , and B 3 are defined in (27).
Thus, applying the S-procedure [41], the linear matrix inequality (19) in the variables $\tau_1, \tau_2, \tau_3$ is obtained. If a solution to (19) exists for all k, then (25) is satisfied for all k, and therefore $V(x^{(k)}) \to 0$. Consequently, $x^{(k)}$ converges as $k \to \infty$, and so do the sequences $\{u^{(k)}\}_{k\in\mathbb{N}}$, $\{\delta^{(k)}\}_{k\in\mathbb{N}}$, and $\{\alpha^{(k)}\}_{k\in\mathbb{N}}$, solutions of (11), parameterized by $x^{(k)}$.

Remark 3 (Certificate of feasibility of (19)). The convergence of the multi-robot system executing the allocated tasks as shown in Proposition 2 hinges on the existence of a solution of (19). In [41], necessary and sufficient conditions for the existence of solutions are provided. For instance, for robots modeled with linear systems and tasks modeled with quadratic functions $h_m$, stability of the task allocation can be certified by solving an algebraic Riccati equation.
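For fixed multipliers τ, condition (19) is a check that the slack matrix is positive semidefinite, which can be done numerically via its eigenvalues. The symmetric matrices below are toy stand-ins for the $B^{(k)}_j$ of (27), used only to illustrate the check.

```python
import numpy as np

def lmi_holds(B0, Bs, taus, tol=1e-9):
    """Check B0 <= tau1*B1 + tau2*B2 + tau3*B3 in the PSD ordering, i.e.
    that the slack sum_j tau_j*B_j - B0 has no negative eigenvalues."""
    slack = sum(t * B for t, B in zip(taus, Bs)) - B0
    return bool(np.linalg.eigvalsh(slack).min() >= -tol)

# Toy symmetric matrices standing in for B0^(k), ..., B3^(k) of (27):
B0 = np.array([[1.0, 0.2], [0.2, 1.0]])
Bs = [np.eye(2), np.diag([2.0, 0.5]), np.array([[0.5, -0.2], [-0.2, 0.5]])]

print(lmi_holds(B0, Bs, taus=(0.5, 0.5, 1.0)))   # True
print(lmi_holds(B0, Bs, taus=(0.1, 0.1, 0.1)))   # False
```

Searching over the τ's themselves would instead require a semidefinite-programming solver.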
Despite the flexibility afforded by the variety of scenarios encompassed by the optimization-based task allocation formulation presented in this section, its mixed-integer nature does not, in most cases, allow its applicability to scale to a large number of robots [43]. Therefore, it is not always possible to solve the proposed task allocation optimization program (11) in an online fashion under real-time constraints. Thus, in the following section, we propose a mixed centralized/decentralized execution strategy which allows the computation of the task prioritization, as well as of the control inputs required by the robots to execute the tasks, to take place in online settings.
B. Mixed Centralized/Decentralized Implementation of the Task Prioritization and Execution Algorithm
In order to allow the applicability of the proposed task prioritization and execution algorithm to scenarios where a large number of robots have to execute a large number of tasks, in the following we propose an alternative mixed centralized/decentralized formulation. We then analyze the performance in terms of task allocation and execution compared to the MIQP developed in the previous section.
Figure 6. A mixed centralized/decentralized architecture to implement the task allocation and execution algorithm. Unlike the MIQP centralized formulation in (11), the allocation is solved separately from the execution. The former is evaluated in a centralized fashion based on the states collected from all the robots, and it typically happens at a slower rate due to the computational complexity of mixed-integer programs. The latter is solved by each robot in a decentralized way once the allocation (in terms of α_{-,i}) is received by the robots from the central computational unit. At the interface between slow and fast rates, a zero-order hold block signifies that each robot i receives a new allocation vector α_{-,i} each time this is obtained by solving the MIQP.

To this end, the optimization problem (11) is solved by a central computational unit for u, δ, and α. The central computational unit then communicates to each robot i only its allocation vector α_{-,i}. At this point, each robot can solve the following convex quadratic program (QP) in order to compute the control input it requires to execute the task prioritized based on its prioritization vector α_{-,i} received from the central computational unit:
Task execution optimization problem (QP)
$$
\begin{aligned}
\underset{u_i,\,\delta_i}{\text{minimize}} \quad & \|u_i\|^2 + l\,\|\delta_i\|^2_{S_i} && \text{(26a)}\\
\text{subject to} \quad & L_f h_m(x) + L_g h_m(x)\,u_i \ge -\gamma(h_m(x)) - \delta_{im} && \text{(26b)}\\
& \Theta \delta_i + \Phi \alpha_{-,i} \le \Psi && \text{(26c)}\\
& \|\delta_i\|_\infty \le \delta_{\max} && \text{(26d)}\\
& \forall m \in \{1, \ldots, n_t\}.
\end{aligned} \tag{26}
$$
Depending on the coordinated nature of the tasks, the solution of (26) can be obtained with or without communication between the robots. See [10] for a detailed discussion on how to achieve a coordinated control of multi-robot systems using this formulation. Figure 6 summarizes the described mixed centralized/decentralized architecture.
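The slow/fast rate split of Figure 6 can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: a greedy allocator stands in for the MIQP (11), a proportional go-to-goal controller stands in for the per-robot QP (26), and all names, gains, and rates are assumptions.

```python
import numpy as np

def central_allocation(goals, S):
    """Slow loop: stand-in for the MIQP (11); greedily assign each robot
    its best-specialized task not yet taken."""
    alloc, taken = {}, set()
    for i in range(len(S)):
        m = max((m for m in range(len(goals)) if m not in taken),
                key=lambda m: S[i][m])
        alloc[i] = m
        taken.add(m)
    return alloc

def local_control(x, goal, gain=0.5):
    """Fast loop: decentralized stand-in for the per-robot QP (26)."""
    return -gain * (x - goal)

X = [np.array([2.0, 0.0]), np.array([0.0, 2.0])]
goals = [np.zeros(2), np.array([3.0, 3.0])]
S = [np.array([0.9, 0.1]), np.array([0.2, 0.8])]   # specializations
dt, n = 0.1, 20                                    # allocation recomputed every n steps

for k in range(200):
    if k % n == 0:               # slow, centralized rate (zero-order hold in between)
        alloc = central_allocation(goals, S)
    for i in range(len(X)):      # fast, decentralized rate
        X[i] = X[i] + dt * local_control(X[i], goals[alloc[i]])

print(alloc)   # {0: 0, 1: 1}: each robot keeps its best-specialized task
```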
Notice that, if the centralized MIQP cannot be solved at each time step, then, following the mixed centralized/decentralized approach, each robot solves for its control input u_i using an outdated value of its prioritization vector α_{-,i}, which is calculated by the central unit using old values of the states x_i of the robots. Depending on the time that the central computational unit takes to solve the MIQP, the input u_i solution of (26) might differ from the one that would have been obtained by solving (11). In the following, we quantify the error that is introduced in the control input u_i by adopting the mixed centralized/decentralized approach rather than solving the centralized MIQP at each time step.
For notational convenience, we introduce the following mappings. We denote by $\Gamma_{\text{MIQP}}: \mathbb{R}^{n_x n_r} \to \{0,1\}^{n_t \times n_r}: x \mapsto \alpha$ the natural projection of the solution map of (11), and by

$$\Gamma_{\text{QP}}: \mathbb{R}^{n_x n_r} \times \{0,1\}^{n_t \times n_r} \to \mathbb{R}^{n_u n_r}: (x, \alpha) \mapsto u$$

the natural projection of the solution map of (26) for all the robots. Moreover, we let $\Gamma(\cdot,\cdot) = \Gamma_{\text{QP}}(\cdot, \Gamma_{\text{MIQP}}(\cdot))$, and denote by $\bar\Gamma_{\text{MIQP}}$ the solution map of the QP relaxation of (11) projected onto the subspace of allocation vectors α, where $\alpha \in [0,1]^{n_t \times n_r} \subset \mathbb{R}^{n_t \times n_r}$.
Assume that, at time k∆t, the central unit receives x^{(k)} from the n_r robots and solves the MIQP (11), obtaining α^{(k)} = Γ_MIQP(x^{(k)}). This computation is assumed to take n steps, i.e. n∆t seconds. At time (k+n)∆t, the central unit transmits the computed allocation values α^{(k)} to the robots, each of which solves (26); the input to the robots can then be expressed as u^{(k+n)} = Γ_QP(x^{(k+n)}, α^{(k)}). This is assumed to take 1 step, i.e. ∆t seconds. We are interested in quantifying the difference between the control inputs u^{(k+n)} evaluated by the robots with the old value α^{(k)} and the control inputs û^{(k+n)} that would be evaluated with the current value α^{(k+n)}. This difference is given by (28), and the different contributions are explicitly broken down in (29) in Appendix B, using the sensitivity results in [44]. The notation ∆(A) in (29) denotes the maximum of the absolute values of the determinants of the square submatrices of the matrix A. Moreover, A and B denote the matrix and the vector such that the inequality constraints in (11) can be written as
$$A \begin{bmatrix} u \\ \delta \\ \alpha \end{bmatrix} \le B,$$
and, provided that the conditions of Theorem 5.2 in [45] hold, L MIQP , L QP , and Lẋ are the Lipschitz constants of the mappings Γ MIQP , Γ QP , and the robot dynamics (8), respectively.
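For small problems, the quantity ∆(·) used in the bound can be computed directly by enumerating square submatrices. A brute-force sketch (exponential in the matrix size, so for illustration only):

```python
import numpy as np
from itertools import combinations

def delta_max_subdet(A):
    """Delta(A): the maximum absolute value of the determinant over all
    square submatrices of A, computed by brute-force enumeration."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    best = 0.0
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                # Determinant of the k x k submatrix selected by (rows, cols).
                d = abs(np.linalg.det(A[np.ix_(rows, cols)]))
                best = max(best, d)
    return best
```

For instance, for A = [[1, 2], [3, 4]], the 1×1 submatrices give absolute determinants 1, 2, 3, 4 and the full matrix gives |1·4 − 2·3| = 2, so ∆(A) = 4.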
As expected, the bound on ‖u^{(k+n)} − û^{(k+n)}‖_∞ in (29) is a monotonically increasing function of the number of optimization variables, of the values L_QP and L_MIQP (which, in turn, depend on the parameters of the optimization problem [45]), of L_ẋ, and of n∆t, i.e. the time required by the central computational unit to solve the MIQP. In particular, the bound in (29) comprises two terms: the first depends on the mixed-integer nature of the allocation algorithm (11), while the second is due to the computation time that the central unit takes to solve the allocation optimization. The effect of the mixed-integer programming is the most critical one, as it is proportional to n_t² n_r³, and vanishes only when the solution of the MIQP (11) is equal to that of its QP relaxation. The term depending on the computation time, instead, vanishes if the MIQP can be solved at each time step.
Remark 4 (Communication delays). Notice that the time to communicate the allocation solution to all the robots, if not negligible, can be added to the quantity n∆t to account for the effects of communication delays in the execution of the allocated tasks.
We conclude this section by summarizing the mixed centralized/decentralized implementation of the proposed task allocation optimization problem in Algorithm 4, which is combined with Algorithms 2 and 3 to obtain an efficient implementation of the optimal allocation and execution algorithm resilient to endogenous and exogenous disturbances. This combination will be showcased in the next section, where the implementation of the developed allocation and execution algorithm on a real multi-robot platform is presented.
Algorithm 4 Mixed centralized/decentralized implementation of task allocation and execution
Require: Tasks h_m, m ∈ {1, . . . , n_t}; mappings F, T; parameters n_{r,m,min}, n_{r,m,max}, δ_max, C, l
 1: Evaluate S_i, ∀i ∈ {1, . . . , n_r}
 2: procedure CENTRAL COMPUTATIONAL UNIT
 3:     while true do
 4:         Get robots' state x_i, ∀i ∈ {1, . . . , n_r}
 5:         Calculate allocation α using (11)
 6:         Send allocation α_{-,i}, ∀i ∈ {1, . . . , n_r} to the robots
 7:         Update matrices S and F if required (Algorithms 2, 3)
 8:     end while
 9: end procedure
10: procedure ROBOT i
11:     while true do
12:         Receive allocation α_{-,i} if ready
13:         Calculate input u_i and execute (26)
14:     end while
15: end procedure
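The two-rate structure of Algorithm 4 can be sketched as a minimal simulation loop. The stand-ins solve_miqp and solve_qp below are hypothetical placeholders for (11) and (26), and all names and parameters are illustrative:

```python
import numpy as np

def run_two_rate_loop(x0, solve_miqp, solve_qp, dynamics,
                      n_slow=10, steps=50, dt=0.02):
    """Skeleton of the mixed centralized/decentralized loop: the central unit
    recomputes the allocation alpha every n_slow steps (zero-order hold in
    between), while each robot recomputes its input u at every step."""
    x = np.array(x0, dtype=float)
    alpha = solve_miqp(x)              # initial (slow-rate) allocation
    for k in range(steps):
        if k % n_slow == 0:
            alpha = solve_miqp(x)      # central unit delivers a fresh allocation
        u = solve_qp(x, alpha)         # fast, per-robot QP with the held alpha
        x = x + dt * dynamics(x, u)    # Euler step of the robot dynamics
    return x, alpha
```

With stub solvers, e.g. a fixed allocation and a QP stub u = −αx driving a single integrator ẋ = u towards the origin, the state decays geometrically over the run.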
VI. EXPERIMENTS
In order to illustrate the properties of the resilient task prioritization and execution framework developed and demonstrated in this paper, in this section we present the results of its implementation on a team of mobile robots in the Robotarium [46], a remotely accessible swarm robotics testbed. The scenario of the experiment is depicted in Fig. 7. A team of 5 mobile robots, each endowed with a simulated camera system, are deployed in a 3.6×2.4 m rectangular domain and have to perform 2 tasks: task t_1 consists of 1 robot moving along a desired trajectory navigating the environment from a starting point (red circle in Fig. 7) to a goal point (red cross in Fig. 7); to perform task t_2, 3 robots need to escort the robot executing task t_1 by arranging themselves into a ring around it while simultaneously monitoring a point of interest with their cameras (red star in Fig. 7). The physical robots are differential drive robots. In the experiment, we model their motion, as well as that of their cameras, using the following single-integrator dynamics:

$$\dot{x}_{i,1} = u_1, \quad \dot{x}_{i,2} = u_2, \quad \dot{x}_{i,3} = u_3,$$

where p_i = [x_{i,1}, x_{i,2}]^T ∈ R² represents the position of robot i, and x_{i,3} ∈ [0, 2π] is the orientation of its camera. u_1, u_2, u_3 ∈ R are the velocity inputs to the robot and to the camera.
Task t_1 is realized by tracking a predefined trajectory, while task t_2 is achieved by implementing weighted coverage control [47], [48] in order to arrange the robots on the green ring in Fig. 7. The two tasks are encoded by the following two CBFs, respectively:

$$h_{1,i}(x,t) = -\|p_i - \bar{p}(t)\|^2,$$
$$h_{2,i}(x,t) = -\|p_i - G_i(x)\|^2 - \big(x_{i,3} - \angle(p^* - p_i)\big)^2,$$

where p̄ : R_{≥0} → R² is the desired trajectory (dashed line in Fig. 7), p* ∈ R² is the position of the point of interest to monitor (red star in Fig. 7), ∠(p* − p_i) denotes the angle formed by the vector p* − p_i with the horizontal coordinate axis, and G_i(x) is the centroid of the Voronoi cell corresponding to robot i. In order to achieve the desired arrangement of the robots performing task t_2 around the robot performing task t_1, the centroids G_i have been evaluated as follows:

$$G_i(x) = \frac{\int_{V_i} p_i\, \varphi(p_i)\, dp_i}{\int_{V_i} \varphi(p_i)\, dp_i} \in \mathbb{R}^2,$$

where V_i is the Voronoi cell of robot i and φ is the density function

$$\varphi(p_i) = e^{-k\left(\|p_i - \bar{p}(t)\|^2 - r^2\right)^2},$$
with k ∈ R_{>0} and r being the radius of the green circle in Fig. 7 (see [47] for details on coverage control). These two parameters have been set to k = 100 and r = 0.4. Moreover, in order to be able to perform the prescribed tasks, the robots need certain features which allow them to exhibit the capabilities required by the two tasks. The mappings employed for the experiments are depicted in the bipartite graph in Fig. 8. The available features are wheels to locomote on the ground (f_1), a set of propellers to fly (f_2), and a camera (f_3). The capabilities required to perform the given tasks are mobility (c_1) and monitoring (c_2). The former is supported by features f_1 or f_2, while the latter by f_3. To perform task t_1, only c_1 is required, while both capabilities are required for t_2. Finally, robots r_1 to r_4 are each endowed with wheels and a camera, while r_5 is able to fly (depicted in the Robotarium experiment by projecting down the shape of a quadcopter at its location) and possesses a camera.
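The weighted centroid G_i with the ring-shaped density φ above can be approximated numerically. The sketch below integrates over a full square grid rather than a Voronoi cell, using the experiment's values k = 100 and r = 0.4 as defaults; the function name and grid parameters are illustrative:

```python
import numpy as np

def ring_centroid(p_bar, k=100.0, r=0.4, half_width=1.0, n=81):
    """Grid approximation of G = (integral of p*phi(p)) / (integral of phi(p))
    with the ring density phi(p) = exp(-k*(||p - p_bar||^2 - r^2)^2),
    integrated over a square centered at p_bar (not a Voronoi cell)."""
    xs = np.linspace(p_bar[0] - half_width, p_bar[0] + half_width, n)
    ys = np.linspace(p_bar[1] - half_width, p_bar[1] + half_width, n)
    X, Y = np.meshgrid(xs, ys)
    d2 = (X - p_bar[0]) ** 2 + (Y - p_bar[1]) ** 2
    phi = np.exp(-k * (d2 - r ** 2) ** 2)       # density peaks on the ring
    mass = phi.sum()
    return np.array([(X * phi).sum() / mass, (Y * phi).sum() / mass])
```

Since the density is symmetric about p̄, the centroid over the full square recovers p̄; the Voronoi partition in the experiment is what breaks this symmetry and spreads the robots around the ring.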
Moreover, since 1 robot is required to be assigned to task t_1 and 3 robots to t_2 at all times, the following parameters have been set for the experiment:

$$T = \begin{bmatrix} 1 & 0 \\ 3 & 3 \end{bmatrix}, \quad n_{r,1,\min} = n_{r,1,\max} = 1, \quad n_{r,2,\min} = n_{r,2,\max} = 3.$$
Furthermore, the remaining parameters of (11) have been set to: C = 10^6, l = 10^{-6}, γ : s ↦ 5s, κ = 10^6, δ_max = 10^3. This choice was made considering the facts that (i) high values of C result in the robot specialization being respected as accurately as possible, and (ii) low values of l result in the robots executing the assigned tasks as well as possible.
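As an illustration of how a task CBF turns into a constraint row of the form aᵀu ≥ b for the execution QP, consider h_{1,i} = −‖p_i − p̄(t)‖² under single-integrator dynamics ṗ_i = u_i and the experiment's γ(s) = 5s: differentiating gives ḣ_1 = −2(p_i − p̄)ᵀ(u_i − ṗ̄) ≥ −γ(h_1). The helper below is a hypothetical sketch (its name and arguments are not from the paper):

```python
import numpy as np

def track_cbf_constraint(p, p_des, p_des_dot, gamma_gain=5.0):
    """Constraint row (a, b), with a^T u >= b, for the tracking CBF
    h1 = -||p - p_des||^2 under p_dot = u:
    h1_dot = -2 e^T (u - p_des_dot) >= -gamma_gain * h1, with e = p - p_des."""
    e = p - p_des
    h1 = -e @ e
    a = -2.0 * e                                   # coefficient of u in h1_dot
    b = -gamma_gain * h1 - 2.0 * (e @ p_des_dot)   # known terms moved to the rhs
    return a, b
```

For a static reference, p = [1, 0] and p̄ = [0, 0] give a = [−2, 0] and b = 5, i.e. the input must push the robot towards the reference at least as fast as the class-K margin allows.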
The total duration of the experiment is 80 seconds. During this time span, the resilience of the allocation algorithm to both endogenous and exogenous disturbances is tested. At time t = 15 s, the feature f 3 of robot r 3 is lost (endogenous disturbance), depicted by the dashed red edge on the hypergraph in Fig. 8. Moreover, in the middle of the environment, a region of low friction is present (brown blob in Fig. 7). This prevents the robots endowed with wheels from moving (exogenous disturbance).
In Fig. 9, snapshots recorded during the course of the experiment are shown. The robots start on the right of the rectangular environment (Fig. 9a). The task prioritization and execution framework results in the following allocation: r_4 is allocated to t_1 and therefore has to navigate the environment to reach the red cross, while r_1, r_2 and r_3 are assigned to t_2 and thus need to escort r_4 during its mission. Using coverage control, they arrange themselves around r_4 and point their cameras (whose fields of view are depicted as yellow beams projected down onto the Robotarium testbed) at the red star (Fig. 9b). At t = 15 s, the camera of r_3 breaks (Fig. 9c). Therefore, it cannot keep executing t_2. The constraints (11e) and (11f) result in r_3 swapping its allocation with r_4 (Fig. 9d). Around t = 50 s, one of the robots, specifically r_4, encounters the low-friction zone and, as a result, its motion is impeded (Fig. 9e). The update law (16) makes the specialization of r_4 towards task t_2 drop (depicted as progress bars next to r_4 in Fig. 9f). When the specialization of r_4 towards task t_2 reaches 0, the task allocation driven by the cost in (11) changes once again to adapt to the unexpected environmental conditions: r_5 is recruited to perform t_2 while r_4 is relieved of its duty (Fig. 9g). The last snapshot (Fig. 9h) shows the robot team successfully accomplishing both tasks as desired: 1 robot has reached the goal point (red cross) while being escorted at all times by 3 more robots.
The result of Proposition 2 gives us another way of highlighting the resilience of the task allocation algorithm, namely by observing the trajectory of the Lyapunov function (24). Its value recorded over the course of the experiment is depicted in Fig. 10. At the beginning of the experiment, the value of the Lyapunov function V decreases as the robots perform the assigned tasks. The endogenous disturbance at t = 15 s makes the allocation swap: by means of the stability properties highlighted in Proposition 2, the allocation algorithm drives the robots to make forward progress towards the accomplishment of the tasks, which results in a decrease of the Lyapunov function for t > 15 s. Towards the end of the experiment, a similar situation is observed, where the exogenous disturbance of one of the robots becoming incapable of moving results in a change of the task allocation. Again, owing to the aforementioned stability properties, the execution of the tasks makes the Lyapunov function decrease towards 0 after the jump due to the allocation swap.
To conclude, as observed in Section V, the developed task prioritization and execution framework would not be realizable in realistic scenarios unless a mixed centralized/decentralized strategy is implemented. In the Robotarium experiment, two communicating processes run in parallel: one responsible for solving the task allocation optimization problem (11), and one with the objective of synthesizing the controllers for the robots given the task allocation, using (26). To show the difference between the implementation of a purely centralized allocation strategy versus a mixed centralized/decentralized one, both have been simulated, and the results in terms of the difference between robot inputs are reported in Fig. 11. From the graph, it is clear that, without the effect of disturbances, the difference between the inputs û (obtained by solving the QP (26) with α_{-,i} recomputed via the MIQP (11) at each time step) and u (synthesized using the QP (26) with α_{-,i} obtained from the MIQP (11) whenever it is available) is close to 0. The peaks around the times of the endogenous and exogenous disturbances are due to the fact that, in the mixed centralized/decentralized case, there is a delay of n time steps (the effect of the computation time in (29)) in recomputing the task allocation. In fact, solving the MIQP (11) takes on average 100 times the steps required to solve the QP (26), using the MATLAB CVX library [49] and the Gurobi solver [50].
VII. CONCLUSION
In this paper, we have presented an optimization-based task prioritization and execution framework that achieves a resilient and energy-aware task allocation strategy for heterogeneous multi-robot systems. The approach builds on a proposed decomposition of the ability of the robots to perform tasks into features, capabilities and specialization of the robots. Moreover, the approach builds on the notion of set-based tasks, where each task executed by the robots is characterized by a set encoded using a control barrier function. These modeling choices allow us to prioritize tasks by considering the different specialization that different robots have at performing different tasks, effectively realizing a heterogeneous task allocation. Furthermore, the optimization-based and pointwise-in-time nature of the task allocation algorithm contributes to foster its resilience properties.
Figure 9. Snapshots recorded during the course of the experiment on the Robotarium [46]. The scenario is the one depicted in Fig. 7. Five robots are initially arranged along the right side of the rectangular environment (Fig. 9a). The robot ID is projected down onto the Robotarium testbed at the top right corner of each robot. The capabilities of each robot to perform tasks t_1 and t_2 are indicated by vertical progress bars at the top left corner of each robot, using the same color code as in Fig. 8, i.e. orange for t_1 and blue for t_2. The height of the progress bars indicates the capability of the robots during the course of the experiment. Solving the task allocation optimization problem (11) results in the following initial allocation: r_4 is allocated to t_1, which entails navigating the environment to reach the red cross, and r_1, r_2 and r_3 are allocated to t_2, for which they need to escort r_4 during its mission. In Fig. 9b, r_1, r_2 and r_3 are arranged around r_4 and have pointed their cameras at the red star; the fields of view of the cameras are depicted as thin yellow triangles. In Fig. 9c, the endogenous disturbance takes place: the camera of r_3 breaks (this event is represented by the field of view of its camera becoming red). As r_3 is not able to perform the monitoring required for t_2, the constraints (11e) and (11f) result in r_3 swapping its allocation with r_4 (Fig. 9d). In Fig. 9e, the exogenous disturbance is encountered: r_4 is not able to move away from the simulated low-friction zone (brown area in the middle of the environment). Thanks to the update law (16), the specialization of r_4 to perform t_2 drops to 0 (in Fig. 9f, the blue progress bar corresponding to task t_2 next to r_4 is emptying). The task allocation recruits r_5 to perform t_2, while r_4 is not assigned to any task (Fig. 9g). In Fig. 9h, the robot team has successfully completed both tasks as desired: 1 robot has reached the goal point (red cross) while being escorted at all times by 3 more robots. The full video of the experiment is available online at https://youtu.be/fdfYID7u72o, where it is also possible to see, at the bottom left of the frames, a table containing the currently allocated task and the values of the components of δ_i for each robot over the course of the experiment.

We showed ways to achieve resilience with respect to endogenous disturbances (failure of a robot caused by loss of features) as well as exogenous disturbances (caused by unmodeled phenomena in the environment), which leverage the reactive nature of the formulation. Moreover, we demonstrated how the formulation allows us to specify both the number of robots and the amount of capabilities required to perform a certain task. This way, thanks to the energy-awareness of the algorithm, robots which are not required to perform tasks are not utilized. Nevertheless, they can potentially be recruited at any point in time, achieving, this way, autonomy-on-demand in the context of task allocation.

Figure 11. Comparison, in terms of robot input difference, between simulations of the mixed centralized/decentralized task allocation (26) with up-to-date and outdated α, respectively. The former has been obtained by solving the MIQP (11) at each time step and providing each robot with its allocation vector α_{-,i} in order to solve the QP (26). Without endogenous or exogenous disturbances, the difference between the inputs û (obtained by solving the QP (26) with α_{-,i} obtained from the MIQP (11) at each time step) and u (synthesized using the QP (26) with α_{-,i} obtained from the MIQP (11) whenever it is available) is close to 0. The difference peaks around the times of the endogenous and exogenous disturbances: this phenomenon is due to the delay introduced by the time required to solve the MIQP (11). The allocation changes 34 and 11 iterations later in the case of endogenous and exogenous disturbances, respectively. This effect due to the computation time is highlighted in (29).
The effectiveness of the proposed approach is showcased through a mixed centralized/decentralized implementation of the developed task allocation strategy on a team of 5 mobile robots, possessing 3 features and 2 capabilities to perform 2 tasks, under both endogenous and exogenous disturbances.
$$
B_1^{(k)} = \begin{bmatrix}
0 & 0 & -\tfrac{1}{2} I & 0 & 0 \\
0 & 0 & -\tfrac{1}{2} L_g h(x^{(k)})^T & 0 & 0 \\
-\tfrac{1}{2} I & -\tfrac{1}{2} L_g h(x^{(k)}) & -I & 0 & -\tfrac{1}{2} L_f h(x^{(k)}) \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & -\tfrac{1}{2} L_f h(x^{(k)})^T & 0 & 0
\end{bmatrix},
$$

$$
B_2^{(k)} = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & \tfrac{1}{2}\Theta\Phi & 0 \\
0 & 0 & \tfrac{1}{2}\Phi^T\Theta^T & \Phi^T\Phi & \tfrac{1}{2}\Phi\Psi \\
0 & 0 & 0 & \tfrac{1}{2}\Psi^T\Phi^T & 0
\end{bmatrix},
$$

$$
B_3^{(k)} = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & A_\delta^T A_\delta & 0 & \tfrac{1}{2} A_\delta^T b_\delta \\
0 & 0 & 0 & A_\alpha^T A_\alpha & \tfrac{1}{2} A_\alpha^T b_\alpha \\
0 & 0 & \tfrac{1}{2} b_\delta^T A_\delta & \tfrac{1}{2} b_\alpha^T A_\alpha & 0
\end{bmatrix} \tag{27}
$$

APPENDIX B
BOUNDS EMPLOYED IN SUBSECTION V-B

$$
\|u^{(k+n)} - \hat{u}^{(k+n)}\|_\infty
= \big\|\Gamma\big(x^{(k+n)}, x^{(k)}\big) - \Gamma\big(x^{(k+n)}, x^{(k+n)}\big)\big\|_\infty
= \big\|\Gamma_{QP}\big(x^{(k+n)}, \Gamma_{MIQP}(x^{(k)})\big) - \Gamma_{QP}\big(x^{(k+n)}, \Gamma_{MIQP}(x^{(k+n)})\big)\big\|_\infty \tag{28}
$$

$$
\begin{aligned}
\|u^{(k+n)} - \hat{u}^{(k+n)}\|_\infty
&= \big\|\Gamma_{QP}\big(x^{(k+n)}, \Gamma_{MIQP}(x^{(k)})\big) - \Gamma_{QP}\big(x^{(k+n)}, \Gamma_{MIQP}(x^{(k+n)})\big)\big\|_\infty \\
&\le \big\|\Gamma_{QP}\big(x^{(k+n)}, \Gamma_{MIQP}(x^{(k)})\big) - \Gamma_{QP}\big(x^{(k+n)}, \tilde{\Gamma}_{MIQP}(x^{(k)})\big)\big\|_\infty \\
&\quad + \big\|\Gamma_{QP}\big(x^{(k+n)}, \tilde{\Gamma}_{MIQP}(x^{(k)})\big) - \Gamma_{QP}\big(x^{(k+n)}, \tilde{\Gamma}_{MIQP}(x^{(k+n)})\big)\big\|_\infty \\
&\quad + \big\|\Gamma_{QP}\big(x^{(k+n)}, \Gamma_{MIQP}(x^{(k+n)})\big) - \Gamma_{QP}\big(x^{(k+n)}, \tilde{\Gamma}_{MIQP}(x^{(k+n)})\big)\big\|_\infty \\
&\le L_{QP} \Big( \big\|\big(x^{(k+n)}, \Gamma_{MIQP}(x^{(k)})\big) - \big(x^{(k+n)}, \tilde{\Gamma}_{MIQP}(x^{(k)})\big)\big\|_\infty \\
&\quad + \big\|\big(x^{(k+n)}, \tilde{\Gamma}_{MIQP}(x^{(k)})\big) - \big(x^{(k+n)}, \tilde{\Gamma}_{MIQP}(x^{(k+n)})\big)\big\|_\infty \\
&\quad + \big\|\big(x^{(k+n)}, \Gamma_{MIQP}(x^{(k+n)})\big) - \big(x^{(k+n)}, \tilde{\Gamma}_{MIQP}(x^{(k+n)})\big)\big\|_\infty \Big) \\
&\le L_{QP} \Big( n_t^2 n_r^3 m\, \Delta\big(\big[A(x^{(k)}) \;\; B(x^{(k)})\big]\big) + L_{MIQP} \big\|\big(x^{(k+n)}, x^{(k)}\big) - \big(x^{(k+n)}, x^{(k+n)}\big)\big\|_\infty \\
&\quad + n_t^2 n_r^3 m\, \Delta\big(\big[A(x^{(k+n)}) \;\; B(x^{(k+n)})\big]\big) \Big) \\
&\le L_{QP} \Big( n_t^2 n_r^3 m\, \Delta\big(\big[A(x^{(k)}) \;\; B(x^{(k)})\big]\big) + L_{MIQP} \big( \|x^{(k)} - x^{(k+1)}\|_\infty + \|x^{(k+1)} - x^{(k+2)}\|_\infty + \cdots
\end{aligned}
$$
This research was sponsored by the Army Research Lab through ARL DCIST CRA W911NF-17-2-0181. 1 G. Notomista, Y. Emam, S. Hutchinson, and M. Egerstedt are with the Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA 30332, USA, {g.notomista, emamy, seth, magnus}@gatech.edu. 2 S. Mayya is with the GRASP Laboratory, University of Pennsylvania, Philadelphia, PA, USA [email protected] 3 C. Kroninger and A. Bohannon are with the Combat Capabilities Development Command, Army Research Laboratory (CCDC ARL)
$$c_{\mathrm{task}_j}(x, u) \ge 0, \quad \forall j \in \{1, \dots, n_t\},$$
Figure 1. Example of a scenario including 2 tasks, 3 capabilities, 6 features and 4 robots, shown from left to right. The capabilities-to-features mapping is shown through the gold and silver hyperedges. Note that not all of the hyperedges need to have the same cardinality.
. . . , s_{i,k−1}, 0, s_{i,k+1}, . . . , s_{i,n_t}]), and Π_i = diag([0, . . . , 0_{k−1}, 1, 0, . . . , 0_{n_t−k}]).
Figure 2. Task allocation and execution (Example 2).
Figure 3. The multi-robot system interacting with the environment, controlled in feedback by the task allocation and execution optimization program (11).
Figure 4. Resilience of the task allocation algorithm to exogenous disturbances (Example 3).
Figure 7. Experimental scenario. The robots need to perform 2 tasks: 1 robot has to navigate the environment to reach a goal point (red cross) following the dashed trajectory, while 3 robots have to escort it by arranging themselves around it (on the green ring) while simultaneously monitoring a point of interest (red star). The brown blob in the middle of the rectangular environment represents a low-friction zone where the motion of ground robots is impeded.
Figure 8. Robots, features, capabilities and tasks mappings used for the experiment on the Robotarium. The features are wheels to locomote on the ground (f_1), propellers to locomote in the air (f_2), and a camera (f_3). The resulting capabilities are locomotion (c_1) and monitoring of a point of interest (c_2). The tasks consist of navigating the environment to reach a goal point (t_1) and escorting the robot navigating the environment by arranging around it and monitoring a point of interest (t_2).
Figure 10. Trajectory of the value of the Lyapunov function (24) recorded over the course of the Robotarium experiment. At the beginning of the experiment, it decreases as the robots perform the assigned tasks. The endogenous and exogenous disturbances at t = 15 s and t = 50 s, respectively, make the value of the Lyapunov function jump to higher values, which are promptly decreased by the execution of the tasks by the robots, owing to the stability guarantees given in Proposition 2.
$$
\begin{aligned}
\cdots &+ \|x^{(k+n-1)} - x^{(k+n)}\|_\infty \big) + n_t^2 n_r^3 m\, \Delta\big(\big[A(x^{(k+n)}) \;\; B(x^{(k+n)})\big]\big) \Big) \\
&\le \underbrace{L_{QP}\, n_t^2 n_r^3 m \Big( \Delta\big(\big[A(x^{(k)}) \;\; B(x^{(k)})\big]\big) + \Delta\big(\big[A(x^{(k+n)}) \;\; B(x^{(k+n)})\big]\big) \Big)}_{\text{Effect of mixed-integer programming}}
+ \underbrace{L_{QP}\, L_{MIQP}\, L_{\dot{x}}\, n \Delta t}_{\text{Effect of computation time}}. \tag{29}
\end{aligned}
$$
Table I
NOTATION

Symbol                          Description                              Section
n_r                             Number of robots                         II-A
n_t                             Number of tasks                          II-A
n_c                             Number of capabilities of all robots     II-A
n_f                             Number of features of all robots         II-A
T ∈ {0,1}^{n_t×n_c}             Capability-to-Task mapping               III-A
A ∈ {0,1}^{n_f×n_r}             Robot-to-Feature mapping                 III-B
H_k ∈ [0,1]^{n_{c_k}×n_f}       Feature-to-Capability mapping            III-C
F ∈ R^{n_c×n_r}                 Robot-to-Capability mapping              III-C
Algorithm 4, central computational unit procedure (steps 2-8):
 2: procedure CENTRAL COMPUTATIONAL UNIT
 3:     while true do
 4:         Get robots' state x_i, ∀i ∈ {1, . . . , n_r}
 5:         Calculate allocation α using (11)
 6:         Send allocation α_{-,i}, ∀i ∈ {1, . . . , n_r} to the robots
 7:         Update matrices S and F if required (Algorithms 2, 3)
 8:     end while
An extended class K_∞ function is a continuous function γ : R → R that is strictly increasing and satisfies γ(0) = 0.
Note that constraint (11g) might cause (11) to become infeasible. However, when the state of the robots evolves in a compact set X, since the functions encoding the tasks are continuously differentiable, choosing max_{m∈{1,...,n_t}} max_{x∈X} h_m(x) ≤ δ_max < ∞ guarantees that the task allocation optimization problem (11) is always feasible.
If stopping criteria are not met, the algorithm times out after n∆t seconds.
[1] B. P. Gerkey and M. J. Matarić, "A formal analysis and taxonomy of task allocation in multi-robot systems," The International Journal of Robotics Research, vol. 23, no. 9, pp. 939-954, 2004.
[2] G. A. Korsah, A. Stentz, and M. B. Dias, "A comprehensive taxonomy for multi-robot task allocation," The International Journal of Robotics Research, vol. 32, no. 12, pp. 1495-1512, 2013.
[3] M. Egerstedt, J. N. Pauli, G. Notomista, and S. Hutchinson, "Robot ecology: Constraint-based control design for long duration autonomy," Annual Reviews in Control, vol. 46, pp. 1-7, 2018.
[4] G. Notomista, S. Mayya, S. Hutchinson, and M. Egerstedt, "An optimal task allocation strategy for heterogeneous multi-robot systems," in 2019 18th European Control Conference (ECC), June 2019, pp. 2071-2076.
[5] L. E. Parker, "Heterogeneous multi-robot cooperation," Massachusetts Inst of Tech Cambridge Artificial Intelligence Lab, Tech. Rep., 1994.
[6] L. Iocchi, D. Nardi, M. Piaggio, and A. Sgorbissa, "Distributed coordination in heterogeneous multi-robot systems," Autonomous Robots, vol. 15, no. 2, pp. 155-168, 2003.
[7] A. Prorok, M. A. Hsieh, and V. Kumar, "The impact of diversity on optimal control policies for heterogeneous robot swarms," IEEE Transactions on Robotics, vol. 33, no. 2, pp. 346-358, 2017.
[8] R. K. Ramachandran, J. A. Preiss, and G. S. Sukhatme, "Resilience by reconfiguration: Exploiting heterogeneity in robot teams," in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2019, pp. 6518-6525.
[9] B. P. Gerkey and M. J. Mataric, "Multi-robot task allocation: Analyzing the complexity and optimality of key architectures," in 2003 IEEE International Conference on Robotics and Automation (Cat. No. 03CH37422), vol. 3. IEEE, 2003, pp. 3862-3868.
[10] G. Notomista and M. Egerstedt, "Constraint-driven coordinated control of multi-robot systems," in 2019 American Control Conference (ACC). IEEE, 2019, pp. 1990-1996.
[11] Y. Emam, S. Mayya, G. Notomista, A. Bohannon, and M. Egerstedt, in 2020 International Conference on Robotics and Automation.
[12] G. Notomista, S. F. Ruf, and M. Egerstedt, "Persistification of robotic tasks using control barrier functions," IEEE Robotics and Automation Letters, vol. 3, no. 2, pp. 758-763, 2018.
[13] G. Notomista and M. Egerstedt, "Persistification of robotic tasks," IEEE Transactions on Control Systems Technology, 2020.
[14] H. Fouad and G. Beltrame, "Energy autonomy for resource-constrained multi robot missions," in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2020, pp. 7006-7013.
[15] J. Bridle and A. van Rensburg, "Discovering the limits of ecological resilience," Science, vol. 367, no. 6478, pp. 626-627, 2020.
[16] G. Notomista, S. Mayya, M. Selvaggio, M. Santos, and C. Secchi, in 2020 International Conference on Robotics and Automation.
[17] E. Nunes, M. Manner, H. Mitiche, and M. Gini, "A taxonomy for task allocation problems with temporal and ordering constraints," Robotics and Autonomous Systems, vol. 90, pp. 55-70, 2017.
[18] L. Lin and Z. Zheng, "Combinatorial bids based multi-robot task allocation method," in Proceedings of the 2005 IEEE International Conference on Robotics and Automation. IEEE, 2005, pp. 1145-1150.
[19] F. Tang and L. E. Parker, "A complete methodology for generating multi-robot task solutions using asymtre-d and market-based task allocation," in Proceedings 2007 IEEE International Conference on Robotics and Automation, 2007, pp. 3351-3358.
[20] M. Otte, M. J. Kuhlman, and D. Sofge, "Auctions for multi-robot task allocation in communication limited environments," Autonomous Robots, vol. 44, no. 3, pp. 547-584, 2020.
[21] M. Irfan and A. Farooq, "Auction-based task allocation scheme for dynamic coalition formations in limited robotic swarms with heterogeneous capabilities," in 2016 International Conference on Intelligent Systems Engineering (ICISE). IEEE, 2016, pp. 210-215.
[22] T. W. Mather and M. A. Hsieh, "Macroscopic modeling of stochastic deployment policies with time delays for robot ensembles," International Journal of Robotics Research, vol. 30, no. 5, pp. 590-600, 2011.
[23] S. Berman, Á. Halász, M. A. Hsieh, and V. Kumar, "Optimized stochastic policies for task allocation in swarms of robots," IEEE Transactions on Robotics, vol. 25, no. 4, pp. 927-937, 2009.
[24] S. Mayya, S. Wilson, and M. Egerstedt, "Closed-loop task allocation in robot swarms using inter-robot encounters," Swarm Intelligence, vol. 13, no. 2, pp. 115-143, 2019.
[25] W. Abbas and M. Egerstedt, "Characterizing heterogeneity in cooperative networks from a resource distribution view-point," Communications in Information and Systems, vol. 14, 2014.
[26] T. Balch, "Hierarchic social entropy: An information theoretic measure of robot group diversity," Autonomous Robots, vol. 8, no. 3, pp. 209-238, 2000.
[27] H. Ravichandar, K. Shaw, and S. Chernova, "STRATA: unified framework for task assignments in large teams of heterogeneous agents," Autonomous Agents and Multi-Agent Systems, vol. 34, no. 38, p. 38, 2020.
[28] K. Lerman, C. Jones, A. Galstyan, and M. J. Matarić, "Analysis of dynamic task allocation in multi-robot systems," The International Journal of Robotics Research, vol. 25, no. 3, pp. 225-241, 2006.
[29] N. Palmieri, X.-S. Yang, F. De Rango, and A. F. Santamaria, "Self-adaptive decision-making mechanisms to balance the execution of multiple tasks for a multi-robots team," Neurocomputing, vol. 306, pp. 17-36, 2018.
[30] S. Fatima and M. Wooldridge, "Adaptive task resources allocation in multi-agent systems," in Proceedings of the Fifth International Conference on Autonomous Agents, ser. AGENTS '01. ACM, 2001, pp. 537-544.
[31] N. Iijima, A. Sugiyama, M. Hayano, and T. Sugawara, "Adaptive task allocation based on social utility and individual preference in distributed environments," Procedia Computer Science, vol. 112, pp. 91-98, 2017.
[32] K. Saulnier, D. Saldana, A. Prorok, G. J. Pappas, and V. Kumar, "Resilient flocking for mobile robot teams," IEEE Robotics and Automation Letters, vol. 2, no. 2, pp. 1039-1046, 2017.
[33] P. S. Gonçalves, P. D. Torres, C. O. Alves, F. Mondada, M. Bonani, X. Raemy, J. Pugh, C. Cianci, A. Klaptocz, S. Magnenat, J. C. Zufferey, D. Floreano, and A. Martinoli, "The e-puck, a robot designed for education in engineering," 2009.
[34] A. D. Ames, S. Coogan, M. Egerstedt, G. Notomista, K. Sreenath, and P. Tabuada, "Control barrier functions: Theory and applications," in 2019 18th European Control Conference (ECC), June 2019, pp. 3420-3431.
[35] X. Xu, P. Tabuada, J. W. Grizzle, and A. D. Ames, "Robustness of control barrier functions for safety critical control," IFAC-PapersOnLine, vol. 48, no. 27, pp. 54-61, 2015.
Coordinated control of multi-robot systems: A survey. J Cortés, M Egerstedt, SICE Journal of Control, Measurement, and System Integration. 106J. Cortés and M. Egerstedt, "Coordinated control of multi-robot sys- tems: A survey," SICE Journal of Control, Measurement, and System Integration, vol. 10, no. 6, pp. 495-503, 2017.
A generalized inverse for matrices. R Penrose, Mathematical proceedings of the Cambridge philosophical society. Cambridge University Press51R. Penrose, "A generalized inverse for matrices," in Mathematical proceedings of the Cambridge philosophical society, vol. 51, no. 3. Cambridge University Press, 1955, pp. 406-413.
Formally correct composition of coordinated behaviors using control barrier certificates. A Li, L Wang, P Pierpaoli, M Egerstedt, RSJ International Conference on Intelligent Robots and Systems. IEEE. IEEEA. Li, L. Wang, P. Pierpaoli, and M. Egerstedt, "Formally correct composition of coordinated behaviors using control barrier certificates," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2018, pp. 3723-3729.
Active disturbance rejection control of dynamic systems: a flatness based approach. H Sira-Ramírez, A Luviano-Juárez, M Ramírez-Neria, E W Zurita-Bustamante, Butterworth-HeinemannH. Sira-Ramírez, A. Luviano-Juárez, M. Ramírez-Neria, and E. W. Zurita-Bustamante, Active disturbance rejection control of dynamic systems: a flatness based approach. Butterworth-Heinemann, 2018.
Nonlinear systems. H K Khalil, Prentice HallH. K. Khalil, Nonlinear systems. Prentice Hall, 2002.
S Boyd, L El Ghaoui, E Feron, V Balakrishnan, Linear matrix inequalities in system and control theory. Siam15S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear matrix inequalities in system and control theory. Siam, 1994, vol. 15.
S-procedure in nonlinear control theory. V Yakubovich, Vestnick Leningrad Univ. Math. 4V. Yakubovich, "S-procedure in nonlinear control theory," Vestnick Leningrad Univ. Math., vol. 4, pp. 73-93, 1997.
Mixed integer nonlinear programming. J Lee, S Leyffer, Springer Science & Business Media154J. Lee and S. Leyffer, Mixed integer nonlinear programming. Springer Science & Business Media, 2011, vol. 154.
Some proximity and sensitivity results in quadratic integer programming. F Granot, J Skorin-Kapov, Mathematical Programming. 471-3F. Granot and J. Skorin-Kapov, "Some proximity and sensitivity results in quadratic integer programming," Mathematical Programming, vol. 47, no. 1-3, pp. 259-268, 1990.
Sensitivity and stability analysis for nonlinear programming. A V Fiacco, Y Ishizuka, Annals of Operations Research. 271A. V. Fiacco and Y. Ishizuka, "Sensitivity and stability analysis for nonlinear programming," Annals of Operations Research, vol. 27, no. 1, pp. 215-235, 1990.
The robotarium: Globally impactful opportunities, challenges, and lessons learned in remote-access, distributed control of multirobot systems. S Wilson, P Glotfelter, L Wang, S Mayya, G Notomista, M Mote, M Egerstedt, IEEE Control Systems Magazine. 401S. Wilson, P. Glotfelter, L. Wang, S. Mayya, G. Notomista, M. Mote, and M. Egerstedt, "The robotarium: Globally impactful opportunities, challenges, and lessons learned in remote-access, distributed control of multirobot systems," IEEE Control Systems Magazine, vol. 40, no. 1, pp. 26-44, 2020.
Coverage control for mobile sensing networks. J Cortes, S Martinez, T Karatas, F Bullo, IEEE Transactions on robotics and Automation. 202J. Cortes, S. Martinez, T. Karatas, and F. Bullo, "Coverage control for mobile sensing networks," IEEE Transactions on robotics and Automation, vol. 20, no. 2, pp. 243-255, 2004.
Decentralized minimum-energy coverage control for time-varying density functions. M Santos, S Mayya, G Notomista, M Egerstedt, 2019 International Symposium on Multi-Robot and Multi-Agent Systems (MRS). IEEEM. Santos, S. Mayya, G. Notomista, and M. Egerstedt, "Decentralized minimum-energy coverage control for time-varying density functions," in 2019 International Symposium on Multi-Robot and Multi-Agent Systems (MRS). IEEE, 2019, pp. 155-161.
CVX: Matlab software for disciplined convex programming. M Grant, S Boyd, Y Ye, M. Grant, S. Boyd, and Y. Ye, "CVX: Matlab software for disciplined convex programming," 2009.
Gurobi optimizer reference manual. G Optimization, G. Optimization, "Gurobi optimizer reference manual," 2015.
arXiv:2205.13414 (https://arxiv.org/pdf/2205.13414v1.pdf)
Ultrahigh ion diffusion in oxide crystal by engineering the interfacial transporter channels
Li Liang+, Min Hu+, Changlong Hu, Bowen Li, Shanguang Zhao, Guobin Zhang, Liangbin Li, Jun Jiang, Chongwen Zou*

School of Nuclear Science and Technology, National Synchrotron Radiation Laboratory, University of Science and Technology of China, 230029 Hefei, Anhui, P. R. China

Collaborative Innovation Center of Chemistry for Energy Materials, School of Chemistry and Materials Science, Hefei National Laboratory for Physical Sciences at the Microscale, CAS Center for Excellence in Nanoscience, University of Science and Technology of China, 230026 Hefei, Anhui, P. R. China
+These two authors contributed equally to this paper. *Corresponding author: [email protected]
Abstract

The storage and removal of mass in solid conductors play a vital role in technological applications such as modern batteries, permeation membranes and neuronal computation, all of which rely on ion diffusion and kinetics in the bulk lattice. However, ion transport is kinetically limited by the slow diffusional process, which makes it challenging to fabricate practical conductors with high electronic and ionic conductivities at room temperature. It is known that the space charge layers present at essentially all interfaces can modify charge transport, storage and transfer properties. In this study, we therefore propose an acid solution/WO3/ITO structure and achieve ultrafast hydrogen transport in the WO3 layer through interfacial job-sharing diffusion. In this sandwich structure, the transport pathways of protons and electrons are spatially separated into the acid solution and the ITO layer, respectively, increasing the effective hydrogen diffusion coefficient (Deff) by up to 10^6 times. Experiments and theoretical simulations also reveal that this accelerated hydrogen transport based on interfacial job-sharing diffusion is universal and can be extended to other ions and oxide materials, which should stimulate systematic studies of ultrafast mixed conductors and faster solid-state electrochemical switching devices.
Introduction
Element doping and ion migration in solid oxide compounds can regulate the properties of functional materials (1-3) and play a vital role in applications such as electrochromic windows (4-7), electrically driven tri-state phase transitions (8,9), selective electrocatalysis (10), surface self-assembly (11), ion storage (12) and synaptic transistors (13,14). Naturally, the migration or diffusion of atoms in solids, a key process in materials and physical science (15-17), also deserves careful investigation for more tunable and high-performance devices (18-23).
Fast transport of doping atoms in films can shorten the dynamic process and effectively improve device performance (22,24). This kinetic process is characterized by a key parameter, the chemical diffusion coefficient (Deff), which is of central importance in the field of diffusion in solids (25,26).
Normally, ion transport is kinetically limited by the extremely slow diffusional process driven by the concentration gradient, or even by an external electric field, which makes it a challenge to fabricate practical conductors with high electronic and ionic conductivities at room temperature. It has been suggested that the space charge layers present at essentially all interfaces can modify charge transport, storage and transfer properties (25,26). Recent reports show that fast diffusion of lithium (27) and other alkali elements (16) has been realized in multilayer graphene, owing to the fast track between layers and to collective effects (16). Another inspiring advance is the realization of ultrafast Ag storage and removal at the RbAg4I5-graphite interface, where chemical diffusion along the interface raises the diffusion coefficient of Ag atoms to Deff ~ 10^-4 cm^2/s (26). However, it remains a challenge to realize ultrafast atomic migration in bulk solids such as oxides. It has been reported that external energy input, such as an applied electrostatic (21) or static magnetic (23) field, is a useful way to realize tunable mass transport in oxide films; this route changes the initial and final energy states through the induced external field, as shown in Supplementary Fig. S1A. To accelerate the atomic diffusion process without external energy input, reducing the diffusion activation energy Ea is the natural route, according to Fick's laws of diffusion and the Arrhenius form of the diffusion coefficient

D ∝ exp(-Ea / kB T),

where kB is the Boltzmann constant and T is the surrounding temperature (16,28). The schematic evolution of the energy barrier is shown in Supplementary Fig. S1B.
Following this strategy, many attempts have been made to improve the Deff value of oxide materials by lowering Ea through lattice-defect engineering (18-20, 22, 24, 29), but the resulting increase has not been very pronounced. For example, increasing the density of domain boundaries in a VO2 crystal to act as a diffusion "highway" improved the diffusion coefficient of H atoms by only about twenty times (22).
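The exponential sensitivity of D to the activation energy can be made concrete with a short numerical sketch. The Boltzmann constant and the Arrhenius form are taken from the text; the specific Ea reductions below are illustrative values only, not measurements from this work:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_ratio(delta_ea_ev, temperature=300.0):
    """Speed-up factor D'/D when the activation energy drops by delta_ea_ev."""
    return math.exp(delta_ea_ev / (K_B * temperature))

# Illustrative: at room temperature, lowering Ea by 0.1 eV speeds up
# diffusion by roughly 50x, while a 0.36 eV drop already gives ~10^6.
print(f"{arrhenius_ratio(0.10):.0f}")   # -> 48
print(f"{arrhenius_ratio(0.36):.2e}")   # -> 1.12e+06
```

This is why modest barrier reductions from defect engineering translate into only tens of times faster diffusion, whereas the interface-mediated route discussed below reaches much larger factors.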
It is known that atomic diffusion consists of coupled cationic and electronic migration, which can be described by the concept of ambipolar diffusion (30), applied successfully in plasma physics and astrophysics (31). From this viewpoint, it should be possible to achieve ultrafast diffusion of doped atoms along artificial heterojunctions or interfaces in multi-phase systems. Such a fabricated interface combines an ion-conducting phase with an electron-conducting phase, so that the transport pathways of ions and electrons are spatially separated and the transport of the neutral component is driven via the space charge effect.
Based on the above strategy, in the current study we propose an acid solution/WO3/ITO structure and achieve ultrafast hydrogen transport in the WO3 layer with Deff ≈ 10^-1 cm^2/s, almost 10^6 times higher than in the bulk WO3 lattice. Interestingly, owing to the distinct electrochromic property of WO3, this ultrafast hydrogen transport can be observed directly by eye. Experiments and theoretical simulations also reveal that the accelerated transport relies mainly on interfacial job-sharing diffusion, with protons carried by the acid solution and electrons by the ITO layer. Furthermore, this proton-electron synergistic diffusion mechanism is universal and can be extended to other ions and oxide materials, which should stimulate systematic studies of ultrafast mixed conductors and other emerging ionic devices. Supplementary Fig. S4A shows that the borderline stays at almost the same position even after one week, confirming the negligible H atom diffusion in the bare WO3 film (34). This is reasonable considering that such ultra-slow H diffusion is driven only by the H concentration gradient at the HxWO3-WO3 lattice interface, as shown in Fig. 1D. In fact, previous studies also demonstrated that H-doping-induced micro/nano patterns in WO3 films show no obvious diffusion at ambient conditions, which is suitable for functional device applications (35).
Accelerated diffusion of H atoms with proton and electron bridges
It is known that neutral H atom diffusion can be separated into proton and electron migrations. Thus, on top of the HxWO3/WO3 interface we establish a protonic bridge by covering it with a sulfuric acid layer (Fig. 1E), which provides high proton conductivity. The optical image in Fig. 1B shows considerable borderline migration within several minutes, indicating that this protonic bridge greatly accelerates the in-plane transport of H atoms. The ultrafast migration of H atoms in the WO3 layer is also examined by XPS (Supplementary Fig. S4), which suggests that an intriguing mechanism lies behind this accelerated diffusion phenomenon.
The polarized proton-electron synergistic diffusion
To understand H atom diffusion in a WO3 crystal film driven by a concentration gradient on the micro/nano scale, we fabricated an HxWO3-WO3-HxWO3 hetero-junction on the deposited WO3/sapphire film, as shown in the optical image of Fig. 2A.
In addition, owing to the different work functions at the HxWO3-WO3 junction, pronounced electron transfer from the HxWO3 to the WO3 side occurs. This electron transfer and the interfacial polarization are also confirmed by first-principles calculations, as shown in Fig. 2E. Although electrons transfer easily from the HxWO3 to the WO3 side, as shown in the scheme of Fig. 2F, the H+ ions (protons) can hardly move simultaneously, even when driven by the interfacial polarization, mainly because of the high diffusion barrier in the WO3 lattice. As a result, H atoms show extremely slow diffusion in WO3 crystals, as observed in Fig. 1A and in previous reports (13-17).
However, when a liquid acid layer covers the HxWO3-WO3 junction, an effective H+ bridge is quickly established via the solid-liquid interfaces.
Accordingly, a polarized H+-e- synergistic diffusion forms in this system (9). In the first stage, electrons transfer from the HxWO3 side to the pure WO3 area, driven by the work function difference (Fig. 2F). Then, owing to the charge redistribution, a localized interfacial polarization drives H+ ions to intercalate into the lower-concentration area and de-intercalate from the higher-concentration area via the H+ bridge (the acid solution), as shown in the scheme of Fig. 2G. Simultaneously, H+ and e- recombine in the WO3 part, resulting in fast in-plane hydrogen migration.
Furthermore, adding an electronic bridge (here, an ITO layer) to improve the interfacial electron transfer enhances the polarized H+-e- synergistic diffusion even further. Note that the H diffusion in the WO3 film is now clearly split between the protonic and electronic bridges at the solid-liquid interfaces, resulting in an effective "job-sharing" diffusion. This job-sharing diffusion greatly accelerates the H diffusion, consistent with the experimental observation of the fast color change in the WO3 layer within several minutes or even seconds, as shown in Fig. 1B and Fig. 1C.
Controllable ultrafast atomic diffusion in oxides
From the experimental observations, it is revealed that job-sharing diffusion greatly enhances H atom transport in the WO3 layer (see also Supplementary Table S3).
Furthermore, the ambipolar diffusion model (30) shows that Deff ∝ σ+σ- / (σ+ + σ-), where σ+ and σ- are the protonic and electronic conductivities. Obviously, σ+ increases greatly owing to the collective effects in the protonic bridge (16), and the additional high conductivity of the e- bridge further improves the effective diffusion coefficient under job-sharing diffusion. Thus, the accelerated diffusion process is simulated with the combination of proton and electron bridges. Five different e- conductivity values are considered in the diffusion coefficient calculation, which shows a positive relationship (Fig. 4C).
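The bottleneck character of this relation can be illustrated with a toy calculation. The conductivity values below are arbitrary illustrative numbers, not measurements from this work:

```python
def ambipolar_coupling(sigma_ion, sigma_e):
    """Harmonic coupling sigma+ * sigma- / (sigma+ + sigma-): the effective
    diffusivity scales with this factor, so it is capped by the smaller
    of the two conductivities."""
    return sigma_ion * sigma_e / (sigma_ion + sigma_e)

# Arbitrary units: a poor ionic pathway caps the coupling...
bulk = ambipolar_coupling(1e-6, 1e2)
# ...while a protonic bridge (much higher sigma_ion) lifts the cap, and an
# electronic bridge keeps sigma_e from becoming the new bottleneck.
bridged = ambipolar_coupling(1e-2, 1e2)
print(bridged / bulk)  # ~1e4 speed-up in this toy example
```

The harmonic form makes clear why improving only one carrier pathway eventually saturates: once one conductivity dominates, Deff is set almost entirely by the other, motivating the separate proton and electron bridges.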
The effective diffusion coefficient (Deff) values for the current study and previous reports are listed in Table 1.
The universality of the job-sharing diffusion behavior
Given the ultrafast H atom diffusion achieved via the job-sharing mechanism, we believe this strategy is element-independent and can be extended to other ions and materials (Fig. 5A). It is known that Li atom intercalation and diffusion in electrode materials are very important for the performance of Li-ion batteries. Furthermore, it is also verified that this job-sharing diffusion strategy is suitable for other oxides. As a typical correlated oxide, vanadium dioxide (VO2) is insulating at room temperature and exhibits a metal-insulator transition at a critical temperature of about 340 K, while doping it with H atoms stabilizes the VO2 film in the metallic state at room temperature. Accordingly, in the experiment we simply measure the film resistance distribution to monitor H diffusion in the VO2 crystal lattice, which demonstrates that H diffusion in the VO2 film is also greatly accelerated via the job-sharing route. The surface resistance variation reveals that the hydrogenation-induced metallization of the whole VO2 film is accomplished within several minutes (Supplementary Fig. S9).
For many practical device applications, an all-solid structure is highly desirable to improve mechanical stability and safety. We therefore propose a structure designed for fast element storage and removal in the bulk, as shown in Fig. 5B. In this structure, a catalyst such as palladium (Pd) is used to split H2 and inject H into the oxide. In this way, it is possible not only to reduce the usage of expensive catalysts but also to make the structure more compact compared with classic film-based devices (12). To verify its advantages, prototype WO3 film devices were fabricated, as shown in Fig. 5C.
Conclusion and outlook
Achieving ultrafast atom diffusion in functional solids has been a challenge for many years. In the current study, we achieve accelerated diffusion of doping atoms in oxide films via a synergistic job-sharing strategy, which separates the electronic and cationic transport pathways by configuring artificial electron/cation bridges.

(Figure 4 caption, continued) (C) The normalized diffusion coefficient as a function of the electron conductivity of the substrates, obtained from the simulation results in Supplementary Fig. S7. (D) Comparison of the effective diffusion coefficient Deff values listed in Table 1: the H+ bridge alone promotes Deff by ~10^4 times, and combining it with the e- bridge contributes another ~10^2 times, raising Deff up to ~10^-1 cm^2/s, exceeding all previous reports.

(Figure 5 caption) (A) Schematic diagram of the cation-electron synergistic accelerated diffusion, which is also suitable for other elements (such as Li) and other oxides (such as VO2). (B) The proposed bulk structure designed for ultrafast storage and removal of hydrogen (or other eligible elements). (C) Top row: the prototype WO3 film device for hydrogen storage. Nano-sized palladium (Pd) is deposited on a selected area, and an H2SO4 (0.1 M)-treated polyacrylamide (PAAM) film serves as the protonic bridge; the whole WO3 film then turns blue, as visualized in Movie 4. Bottom row: the comparative experiment without a protonic bridge, in which only the exposed WO3 area turns blue. The scale bar of the optical images is 5 mm.
The XPS results in Supplementary Fig. S2C~S2F confirm the H atom insertion into the WO3 layer, which results in the blue color change. In addition, when a similar WO3 film is deposited on a conductive ITO-glass substrate, an electronic bridge is established owing to the excellent conductivity of the ITO layer. Covering the top surface of the WO3/ITO-glass with a sulfuric acid layer also establishes a protonic bridge, as shown in the schematic diagram of Fig. 1F. With this configuration, the H-migration-induced blue-color spread occurs within several seconds (Fig. 1C). This fast color change of the WO3 layer is also reflected in the detailed optical measurements shown in Supplementary Fig. S4C. The observation clearly demonstrates that the added electronic bridge further accelerates the atomic migration when combined with the existing protonic bridge. The dynamic diffusion processes are visualized in Supplementary Movie 1, together with the different diffusional behaviors of H atom transport in WO3 with and without cationic/electronic bridges (blue arrows in Supplementary Fig. S4).
The prepared WO3 nanogap (red circle in Fig. 2A) is about 5 μm wide and is fabricated by selected-area hydrogenation combining UV lithography with a metal-acid H-doping treatment (see Methods). The topography image (Fig. 2B) mapped by atomic force microscopy (AFM) shows that the WO3 nanogap has clear interfaces with the HxWO3 parts. The step-analysis curve implies that the hydrogenation treatment slightly corrodes the WO3 layer, making the HxWO3 sections ~4.3 nm thinner than the intrinsic WO3 area. The potential distribution of this HxWO3-WO3-HxWO3 hetero-junction is also probed by Kelvin probe force microscopy (KPFM) in Fig. 2C. A clear potential difference exists between the HxWO3 and WO3 sections, with the potential of WO3 markedly lower. Since this potential is closely associated with the work function of the tested material, it is inferred that HxWO3 has a lower work function than WO3 crystal. This potential distribution is reasonable considering the hydrogenation-induced electron doping in WO3, which raises the Fermi level (EF) of WO3. The theoretical calculation results (Fig. 2D and Supplementary Fig. S5) are also consistent with these observations. In fact, the projected densities of states (PDOS) of H-doped WO3 clearly show a transition from semiconducting to metallic character. This H-doping-induced continuous phase transition is also verified by resistance measurements during dynamic hydrogenation under external voltage gating, which show that the WO3 resistance decreases and the film gradually transforms to the metallic state as a function of gating time (Supplementary Fig. S6).
The job-sharing behavior greatly enhances H atom diffusion in the WO3 layer owing to the protonic/electronic bridges at the solid-liquid interfaces, and the diffusion process should be closely associated with the conductivities of the protonic and electronic bridges, respectively. To gain more insight into the relationship between the diffusion rate and the proton/electron conductivities, detailed finite element analysis (FEA) simulations are conducted to evaluate the diffusion behavior as a function of time and position (i.e., diffusion distance) in Fig. 3. According to the scheme of the HxWO3-WO3 junction in Fig. 3A, the H atom diffusion is separated into directional movements of protons and electrons via the polarized H+ and e- bridges. This diffusion behavior can be described by the schematic circuit in Fig. 3B, considering the interfacial potential and the movement of electrons and protons through the H+ and e- bridges based on Kirchhoff's laws. In the simulation, a linear relationship between potential and charge doping concentration is assumed (see Methods). To examine the effect of the e- bridge conductivity (σe) on the job-sharing diffusion, the conductivity of the H+ bridge is kept constant. The simulated time-dependent distributions of H atoms in the HxWO3-WO3 junction (Fig. 3A) are then plotted for e- bridge conductivities varied from σe = 10^3 to 10^5 in Fig. 3C~3E, respectively. The simulation results show that H atoms gradually diffuse from the HxWO3 side to the undoped WO3 side via the interface (position = 0), and the H concentration decreases gradually from position = -1 to 1. Moreover, a higher e- bridge conductivity leads to faster H atom diffusion.
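As a rough companion to the FEA, the qualitative effect of a bridge-limited effective diffusivity can be sketched with a toy 1D finite-difference model. This is not the paper's circuit-based FEA; the grid size, time step, boundary treatment and diffusivity values are illustrative assumptions:

```python
import numpy as np

def diffuse_junction(d_eff, n=100, steps=2000, dt=1e-4):
    """Toy 1D explicit finite-difference model of H spreading across an
    HxWO3 (c = 1, position < 0) / WO3 (c = 0, position > 0) junction;
    d_eff stands in for the bridge-limited effective diffusivity."""
    dx = 2.0 / n                       # domain spans position -1 .. 1
    c = np.where(np.linspace(-1, 1, n) < 0, 1.0, 0.0)
    for _ in range(steps):
        lap = np.zeros_like(c)
        lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        c = c + d_eff * dt * lap       # endpoint cells held fixed (reservoirs)
    return c

# A larger effective diffusivity (stronger bridges) spreads the H front
# across the junction much faster within the same elapsed time.
slow = diffuse_junction(1e-2)
fast = diffuse_junction(1e-1)
print(fast.std() < slow.std())  # True: the fast profile is flatter
```

The explicit scheme is stable here because d_eff * dt / dx^2 stays well below 0.5 for both runs; the same qualitative trend (higher effective diffusivity, flatter profile at a given time) is what Fig. 3C~3E show for increasing σe.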
When the conductivity was increased to σe = 10^5, the H atoms in the HxWO3-WO3 junction quickly reached a uniform distribution, as shown in Fig. 3E, highlighting the important role of the e-bridge conductivity. Fig. 3F shows the H-atom diffusion behavior as a function of diffusion time and length (from position = 0 to position = 1) across the HxWO3-WO3 junction when the H concentration is set to a constant value (0.1Cmax). It was revealed that the diffusion time showed a quadratic-like relationship with position when the e-bridge conductivity was low (less than 10^3), while when the conductivity was further increased to σe = 10^4 or 10^5, the H-atom distribution reached a balanced state in a much shorter time. These simulations are quite consistent with the experimental observations in Fig. 1C, which showed the quick color change of the WO3 layer on the ITO substrate. In fact, the conductivity of the H+ bridge (i.e., the H+ concentration in the sulfuric acid) also plays an important role in the "job-sharing" diffusion strategy. Since the color of the WO3 layer changes upon H-atom doping and the related transmission depends directly on the H-atom concentration, the visible transmission of the WO3 layer can be used as an indicator to evaluate the H diffusion behavior. In the experiment, we used sulfuric acid layers with different H+ concentrations (from 10^-4 to 10^-1 M) as the proton bridge and examined the transmission at different positions as a function of diffusion time. The positional transmission evolution mapping of the HxWO3-WO3 junction is shown in Supplementary Fig. S7. Setting the transmission value to T = 75%, the diffusion time as a function of diffusion distance (position from 0.0 to 0.4) was obtained in Fig. 4A. The experimental results revealed that the conductivity of the proton bridge has an effect on H-atom diffusion in the WO3 layer similar to that of the electron bridge.
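The qualitative trend the FEA reproduces — faster equilibration of the H profile as the e-bridge conductivity rises — can be sketched with a much cruder, self-contained model: a 1-D explicit finite-difference diffusion calculation whose effective diffusivity scales with the series combination of the two bridge conductances. Everything here (the function name, the series-conductance coupling constant k, and all grid/step sizes) is an illustrative assumption, not the paper's actual FEA setup.

```python
import numpy as np

def simulate_junction(sigma_H, sigma_e, n=100, steps=2000, dt=1e-4, k=1e-5):
    """Toy 1-D HxWO3-WO3 junction on x in [-1, 1] (doped side at x < 0).

    Assumption: the 'job-sharing' route is limited by the series combination
    of the proton-bridge and electron-bridge conductances, so the effective
    diffusivity is taken as D = k * sigma_H * sigma_e / (sigma_H + sigma_e).
    """
    x = np.linspace(-1.0, 1.0, n)
    c = np.where(x < 0.0, 1.0, 0.0)          # initial step profile
    D = k * sigma_H * sigma_e / (sigma_H + sigma_e)
    dx = x[1] - x[0]
    r = D * dt / dx**2
    assert r < 0.5, "explicit scheme stability limit"
    for _ in range(steps):
        c[1:-1] = c[1:-1] + r * (c[2:] - 2.0 * c[1:-1] + c[:-2])
        c[0], c[-1] = c[1], c[-2]            # zero-flux (reflecting) ends
    return x, c
```

With sigma_H held fixed, raising sigma_e from 10^3 to 10^5 flattens the concentration profile far faster, mirroring the trend of Fig. 3C-3E.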
Based on these experimental results, the effective diffusion coefficients were obtained and showed a positive correlation with the proton concentration (Fig. 4B and Supplementary
for comparison. About four orders of magnitude of improvement in the Deff value could be achieved by applying the acid solution as the protonic bridge. When the ITO was further combined as the e-bridge to form the "job-sharing" diffusion route, another two orders of magnitude of improvement in Deff was obtained, as shown in Fig. 4D. This result shows that the Deff value can be controlled by adjusting the conductivities of the fabricated bridges for proton or electron transport. More importantly, the diffusion coefficients Deff obtained through the "job-sharing" strategy were greatly increased, up to ~10^-1 cm^2/s, clearly exceeding all previous reports.
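A common way to turn borderline positions into an effective diffusion coefficient is the parabolic front law x^2 = 4·Deff·t. The helper below is a hypothetical sketch of such an extraction under that textbook simplification — the authors' exact fitting procedure may differ.

```python
import numpy as np

def deff_from_front(times_s, positions_cm):
    """Fit the front-position law x^2 = 4*Deff*t by least squares through
    the origin (slope of x^2 versus t) and return Deff in cm^2/s."""
    t = np.asarray(times_s, dtype=float)
    x = np.asarray(positions_cm, dtype=float)
    slope = np.sum(t * x**2) / np.sum(t**2)
    return slope / 4.0

# Synthetic front obeying the parabolic law with Deff = 1.6e-1 cm^2/s,
# the order of magnitude reported for the full "job-sharing" geometry.
t = np.linspace(1.0, 10.0, 10)
x = np.sqrt(4.0 * 1.6e-1 * t)
```

Doubling the times while keeping the same positions (a slower front) halves the recovered coefficient, as expected from the parabolic law.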
battery system. In the experiment, the accelerated diffusion of Li was also verified by the experimental results for a LixWO3-WO3 junction, utilizing LiClO4 dissolved in propylene carbonate (PC) as the Li+ bridge (Supplementary Fig. S8 and Supplementary Movie 2).
Here the WO3 films were deposited on conductive ITO substrates, and nanosized Pd islands were deposited to cover part of the area and act as catalysts. Typically, on exposing this system to H2 gas, the Pd-covered area turns blue due to H atoms doping into the WO3 layer, while the uncovered area remains transparent. To fabricate an effective proton conductor, we simply soaked a solid polymer, polyacrylamide (PAAM), in sulfuric acid (1 M) for 2 h, which yielded a conductive proton conductor. When this acid-treated PAAM layer was placed on the HxWO3-WO3 junction as a solid protonic bridge, the uncovered WO3 area turned blue quickly, confirming the important role of the solid protonic bridge in accelerating H diffusion (Supplementary Movie 4). This solid protonic bridge would greatly benefit practical device fabrication and integration. For comparison, a normal polyimide layer was also used as the protonic bridge; however, due to its poor conductivity, no H-diffusion-induced color change was observed for that system, as shown in Fig. 5C, further confirming the importance of the conductive solid protonic bridge.
Figures

Fig. 1. Accelerated diffusion of hydrogen atoms in WO3 films. (A to C) Optical images of H diffusion in WO3 films under different conditions, which could be directly visualized due to the electrochromic feature of H-doped WO3. Three different sandwich structures were used in the experiments, showing different diffusion times. (D to F) Schemes of the diffusion behavior of hydrogen atoms in WO3 films under the different conditions.
Fig. 2. The mechanism of proton-electron synergistic diffusion. (A) Optical micro-image of the planar hydrogenated-intrinsic-hydrogenated WO3 structure. (B) Atomic force microscope (AFM) image of the selected nanogap area marked by the red circle in (A). The yellow solid line shows the line scan across the gap, indicating that the intrinsic area is ~4.3 nm higher than the hydrogenated area. (C) Kelvin probe force microscope (KPFM) measurement of the same nanogap as in (B), showing the boundaries of the contact potential. (D) Calculated projected density of states (PDOS) of WO3 with different H concentrations. (E) Charge difference at the interface between HxWO3(001) and WO3(001). The value of the iso-surface was 0.0026 e/Å^3. Yellow and blue iso-surfaces indicate the accumulation and depletion of charge density. (F) Scheme of the interfacial electron transfer from the HxWO3 to the WO3 side due to the different work functions. (G) Schematic process of H+ directional movement through the proton bridge driven by the solid-liquid interfacial potential.
Fig. 3. Simulations of accelerated diffusion with the finite element analysis (FEA) method. (A) The corresponding model and initial conditions of the simulation. (B) Schematic of the one-dimensional resistor network used to simulate the diffusion process. (C to E) Results of the positional concentration evolution for different substrate conductivities (σe), from 10^3 to 10^5 a.u. (F) The relationship between diffusion time and diffusion position (from 0 to 1) under different electron-bridge conductivities, with the H concentration set to a constant value (0.1Cmax).

Fig. 4. The dependence of accelerated diffusion on the protonic and electronic bridges. (A) For the HxWO3-WO3 junction covered with sulfuric acid as the protonic bridge, the relationship between the diffusion length (borderline position) and time under different concentrations of sulfuric acid, from 10^-4 M to 10^-1 M. The borderline position was determined from a visible transmittance of T ~ 75%, extracted from the transmission-position curves shown in Supplementary Fig. S4B. (B) The increasing trend of the normalized diffusion coefficient, Dn = Deff/DMin, as a function of acid concentration (Supplementary
Fig. 5. Feasibility in other systems with ultrafast hydrogen storage/removal. (A)
As a member of the transition metal oxides, WO3 is an ideal prototype for investigating the diffusion of doping atoms in oxide films, owing to its electrochromic response to M (M = H, Li, Al, etc.) intercalation. For example, H atoms can easily be driven into a WO3 film by external voltage gating, which is applicable to WO3-based smart-window devices (Supplementary Fig. S2A). It has been suggested that the reduction of W6+ is responsible for the H- (or other M-atom) doping-induced color change of WO3, which originates from the optical absorption of small polarons (32, 33). Detailed XRD and SEM/TEM investigations of the ~580 nm amorphous WO3 film before and after H-atom insertion showed no obvious thickness or crystal-structure changes (Supplementary Fig. S3). Owing to the color contrast between the initial transparent state and the H-doped blue state (7), the transport of H atoms in WO3 films can be directly visualized, even by eye, provided the atomic diffusion is fast enough to be traced through the film color change. However, the concentration-gradient-driven dopant diffusion in the solid WO3 lattice is usually negligible, since the effective diffusion coefficients are extremely small. For example, according to the references (13-17), the effective diffusion coefficient of H atoms in bulk WO3 at room temperature (RT) is only ~10^-10 cm^2/s in the crystalline phase and ~10^-8 cm^2/s in the amorphous phase.
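The practical meaning of such coefficients is easiest to see through the standard order-of-magnitude estimate t ~ L^2/(4D) for the time a diffusion front needs to cover a distance L (the factor of 4 is a convention, not a fit to the data here):

```python
def diffusion_time_years(length_cm, D_cm2_s):
    """Characteristic diffusion time t ~ L^2 / (4 D), converted to years.
    Order-of-magnitude estimate only."""
    seconds = length_cm**2 / (4.0 * D_cm2_s)
    return seconds / (3600.0 * 24.0 * 365.0)
```

With D ~ 10^-10 cm^2/s, moving a front just 1 mm takes on the order of a year — consistent with a borderline that barely moves over a week — while D ~ 10^-1 cm^2/s shortens the same trip to a fraction of a second.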
This extremely slow diffusion of H atoms in the WO3/sapphire film is directly reflected in the optical image of Fig. 1A. The blue region of the film corresponds to the heavily H-doped WO3 film (HxWO3), while the transparent region is the pure WO3 film. A clear borderline was observed between the deep-blue HxWO3 and the transparent WO3 area. Although a large H concentration gradient existed at the interface, almost no migration of the borderline was observed in the optical image after one day or even one week at ambient conditions. Since H-atom diffusion is directly associated with the color change of the WO3 film, the detailed transmittance-position curve is plotted in
Table 1: The effective diffusion coefficient of H atoms in WO3.

  Samples                      Deff (cm^2 s^-1)             Ref
  H2SO4(1E-1 M)/WO3/ITO        ~1.63×10^-1                  This Work
  H2SO4(1E-1 M)/WO3/Al2O3      ~1.01×10^-4                  This Work
  a-WO3                        2.90×10^-10 ~ 1.33×10^-8     Ref (36)
  a-WO3                        5.10×10^-10 ~ 3.45×10^-9     Ref (34, 37)
  c-WO3                        2.70×10^-9 ~ 3.28×10^-8      Ref (36)
  WO3                          4×10^-10                     Ref (38)
  c-WO3                        2.9×10^-11 ~ 7.1×10^-11      Ref (39)
  WO3 (calculated)             <10^-9
Acknowledgements

This work was partially supported by the National Natural Science Foundation of China. L. wrote the manuscript. All authors discussed the results and commented on the manuscript.

Supplementary Materials
Materials and Methods
Figures S1 to S10
H.-T. Zhang et al., Beyond electrostatic modification: design and discovery of functional oxide phases via ionic-electronic doping. Advances in Physics: X 4 (2018).
L. Xie et al., Tunable Hydrogen Doping of Metal Oxide Semiconductors with Acid-Metal Treatment at Ambient Conditions. Journal of the American Chemical Society 142, 4136-4140 (2020).
S. Shen et al., Emergent Ferromagnetism with Fermi-Liquid Behavior in Proton Intercalated CaRuO3. Physical Review X 11 (2021).
S. Chen et al., Gate-controlled VO2 phase transition for high-performance smart windows. Science Advances 5 (2019).
Y. Yao et al., WO3 quantum-dots electrochromism. Nano Energy 68 (2020).
A. Llordes, G. Garcia, J. Gazquez, D. J. Milliron, Tunable near-infrared and visible-light transmittance in nanocrystal-in-glass composites. Nature 500, 323-326 (2013).
W. Cheng et al., Photodeposited Amorphous Oxide Films for Electrochromic Windows. Chem 4, 821-832 (2018).
N. Lu et al., Electric-field control of tri-state phase transformation with a selective dual-ion switch. Nature 546, 124-128 (2017).
Y. Chen et al., Non-catalytic hydrogenation of VO2 in acid solution. Nature Communications 9, 818 (2018).
S.-M. Jung et al., Selective electrocatalysis imparted by metal-insulator transition for durability enhancement of automotive fuel cells. Nature Catalysis 3, 639-648 (2020).
B. Li et al., Electron-Proton Co-doping-Induced Metal-Insulator Transition in VO2 Film via Surface Self-Assembled l-Ascorbic Acid Molecules. Angewandte Chemie-International Edition 58, 13711-13716 (2019).
H. Yoon et al., Reversible phase modulation and hydrogen storage in multivalent VO2 epitaxial thin films. Nature Materials 15, 1113 (2016).
C. Oh et al., Deep Proton Insertion Assisted by Oxygen Vacancies for Long-Term Memory in VO2 Synaptic Transistor. Advanced Electronic Materials (2020).
C. Ge et al., Gating-induced reversible HxVO2 phase transformations for neuromorphic computing. Nano Energy 67 (2020).
J. Wei et al., Direct imaging of atomistic grain boundary migration. Nature Materials (2021).
Y. Xue et al., Atomic-scale ion transistor with ultrahigh diffusivity. Science 372, 501-503 (2021).
L. Bocquet, Nanofluidics coming of age. Nature Materials 19, 254-256 (2020).
J. V. Handy, Y. Luo, J. L. Andrews, N. Bhuvanesh, S. Banerjee, An Atomic View of Cation Diffusion Pathways from Single-Crystal Topochemical Transformations. Angew. Chem. Int. Ed. Engl. 59, 16385-16392 (2020).
S. Kim et al., Thin Film RuO2 Lithiation: Fast Lithium-Ion Diffusion along the Interface. Advanced Functional Materials 28 (2018).
L. R. De Jesus, J. L. Andrews, A. Parija, S. Banerjee, Defining Diffusion Pathways in Intercalation Cathode Materials: Some Lessons from V2O5 on Directing Cation Traffic. ACS Energy Letters 3, 915-931 (2018).
U. Sidik, A. N. Hattori, R. Rakshit, S. Ramanathan, H. Tanaka, Catalytic Hydrogen Doping of NdNiO3 Thin Films under Electric Fields. ACS Appl. Mater. Interfaces 12, 54955-54962 (2020).
J. Park, H. Yoon, H. Sim, S. Y. Choi, J. Son, Accelerated Hydrogen Diffusion and Surface Exchange by Domain Boundaries in Epitaxial VO2 Thin Films. ACS Nano 14, 2533-2541 (2020).
X. Tang et al., Controllable two-dimensional movement and redistribution of lithium ions in metal oxides. Nature Communications 10 (2019).
J. Shen, G. Liu, Y. Han, W. Jin, Artificial channels for confined mass transport at the sub-nanometre scale. Nature Reviews Materials 6, 294-312 (2021).
H. Mehrer, Diffusion in Solids (2007).
C.-C. Chen, L. Fu, J. Maier, Synergistic, ultrafast mass storage and removal in artificial mixed conductors. Nature 536, 159-164 (2016).
M. Kühne et al., Ultrafast lithium diffusion in bilayer graphene. Nature Nanotechnology 12, 895-900 (2017).
J. Crank, The Mathematics of Diffusion (1956).
M. P. Mueller et al., Cation diffusion in polycrystalline thin films of monoclinic HfO2 deposited by atomic layer deposition. APL Materials 8 (2020).
J. Maier, Mass Transport in the Presence of Internal Defect Reactions-Concept of Conservative Ensembles: I, Chemical Diffusion in Pure Compounds. Journal of the American Ceramic Society 76, 1212-1217 (1993).
A. P. Boss, F. J. Ciesla, in Treatise on Geochemistry (Second Edition), H. D. Holland, K. K. Turekian, Eds. (Elsevier, Oxford, 2014), pp. 37-53.
Z. Shao et al., All-solid-state proton-based tandem structures for fast-switching electrochromic devices. Nat. Electron. 5, 45-52 (2022).
S.-H. Lee et al., Electrochromic mechanism in a-WO3-y thin films. Applied Physics Letters 74, 242-244 (1999).
X. Yao et al., Protonic solid-state electrochemical synapse for physical neural networks. Nature Communications 11, 3134 (2020).
Y. Chen et al., Spatially-resolved insulator-metal transition for rewritable optical gratings. Commun. Mater. 2 (2021).
S. Burkhardt, M. T. Elm, B. Lani-Wayda, P. J. Klar, In Situ Monitoring of Lateral Hydrogen Diffusion in Amorphous and Polycrystalline WO3 Thin Films. Advanced Materials Interfaces 5 (2018).
K. Muthu Karuppasamy, A. Subrahmanyam, Studies on the correlation between electrochromic colouration and the relative density of tungsten trioxide (WO3−x) thin films prepared by electron beam evaporation. Journal of Physics D: Applied Physics 42 (2009).
O. Bohnke et al., "In situ" optical and electrochemical characterization of electrochromic phenomena into tungsten trioxide thin films. Solar Energy Materials and Solar Cells 25, 361-374 (1992).
L. Bóbics, L. Sziráki, G. G. Láng, The impedance related to the electrochemical hydrogen insertion into WO3 films - On the applicability of the diffusion-trapping model. Electrochemistry Communications 10, 283-287 (2008).
H. Lin, F. Zhou, C.-P. Liu, V. Ozoliņš, Non-Grotthuss proton diffusion mechanism in tungsten oxide dihydrate from first-principles calculations. Journal of Materials Chemistry A 2 (2014).
Y. Xi, Q. Zhang, H. Cheng, Mechanism of Hydrogen Spillover on WO3(001) and Formation of HxWO3 (x = 0.125, 0.25, 0.375, and 0.5). The Journal of Physical Chemistry C 118, 494-501 (2014).
Space number density of bright quasars in the halo model of galaxy formation

Yu. Kulinich, B. Novosyadlyj
Astronomical Observatory of Ivan Franko National University of Lviv

July 13, 2010 · arXiv:1006.4466v2 [astro-ph.CO] (11 Jul 2010) · doi: 10.30970/jps.14.2901 · https://arxiv.org/pdf/1006.4466v2.pdf

Keywords: galaxies, quasars, cosmological models, dark matter
PACS number(s): 98.54.Aj, 98.54.Kt, 98.54.-h, 98.65.Fz, 98.35.Jk

Abstract. We analyse the redshift dependence of the space number density of quasars, assuming that they are short-lived active stages of massive galaxies and arise immediately after the collapse of the homogeneous central part of protogalaxy clouds. The obtained dependence fits the ChaMP+CDF+ROSAT observational data (Silverman et al. 2005) very well for protogalaxy clouds of mass M ≈ 8·10^11 h^-1 M_⊙ and ellipticity e < 0.4. The lifetime of bright X-ray AGNs or QSOs with L_X > 10^44.5 erg s^-1 in the energy range 0.3-8 keV is τ_QSO ~ 6·10^6 years when the mass of the supermassive black hole is M_SMBH ~ 10^9 M_⊙ and the values of the other quasar parameters are reasonable. The analysis and all calculations were carried out in the framework of the ΛCDM model with parameters determined from the 5-year WMAP, SNIa and large-scale structure data (Komatsu et al. 2009). It is concluded that the halo model of galaxy formation in the ΛCDM cosmological model matches well the observational data on AGN and QSO number densities coming from current optical and X-ray surveys.
Introduction
The radio, optical and X-ray observational data indicate that the co-moving space number density of bright AGNs and QSOs is a non-monotonic function of redshift: it increases up to z ~ 2.5, then declines and goes to zero [6,14,17,33,34,36]. The small angular diameters and huge luminosities of QSOs signify that their central power engines are supermassive black holes (SMBHs), which efficiently transform the mass of accreting matter into radiation. About ten percent of the mass can be converted into high-energy radiation by this mechanism, instead of the tenths of a percent characteristic of the nucleosynthesis reactions providing the luminosity of stars. Observations with the Hubble Space Telescope and the largest ground-based telescopes have revealed the host galaxies of QSOs, in which the QSOs occupy the central parts. The high emissivity of quasars, their abundance in comparison with galaxies, and their genetic closeness to active galactic nuclei support the idea that QSOs are short-lived active phases of galaxies with SMBHs in their nuclei. So, the space number density of quasars can be described within a scenario of galaxy formation, taking into account the peculiarities which make them QSOs. Indeed, in such an approach the redshift distribution of the QSO space number density becomes qualitatively understandable: the growth of the QSO number density in the redshift range z = (∞, ~2.5] is set by the rate of formation of galactic nuclei, while the subsequent decay in the range z = [~2.5, 0] is due to the short lifetime of the quasar engine (the finite fuel resource in the vicinity of the galaxy centre and the large rate of its burning) as well as to the decreasing galaxy birth-rate. In this paper we develop the semi-analytical approach proposed in previous papers [9,30,16,31,10] in order to adjust quantitatively the predictions of the concordance cosmological ΛCDM model [19] to the observational data on the bright QSO number density given by [36].
Formation of bright QSOs in the halo model
The dependence of the QSO number density on redshift is determined by several factors: the birth-rate of galaxies of relevant mass, the probability of SMBH formation in their centres, and the rate of accretion of matter onto them. The birth-rate of galaxies of a given mass is completely determined by the cosmological model and the initial power spectrum of density perturbations. For the estimation of the other factors additional assumptions are needed. The high abundance of quasars suggests that the formation of SMBHs is quite a frequent phenomenon. The physical conditions in the centre of a forming galaxy are favourable for the formation of a central SMBH through the collapse of dark matter and hydrogen-helium gas [16,39,25,12,8,22,3] as well as through the merging of remnants of supermassive stars [1,18,7,41]. In both cases the time-lag between halo virialization and the birth of the bright quasar is expected to be short compared to the cosmological time-scale, even at high redshifts. For a luminous quasar to appear, about one solar mass of gas per year must fall into the SMBH. A few physical mechanisms of gas supply to the quasar engine are analysed in the literature. For example, [28] suppose that the accretion of gas onto the SMBH is caused by tidal collisions of close galaxies in the process of their merging. The semi-analytical models of the growth of SMBHs in the central parts of their host galaxies, developed by [26,27,37,5], fit well the observational data on the quasar luminosity function and the two-point spatial correlation function at selected redshifts.
In this paper we assume that bright QSOs appear in the process of galaxy formation and that their luminosity is provided by gas collapsing from the nearest vicinity of the SMBH. The durations of both processes are much shorter than the cosmological time. So, quasars are supposed to be short-lived active phases of galaxy formation, which is well described by the halo model. The central density peak of a protogalaxy cloud, corresponding to the bottom of the gravitational potential well, is practically homogeneous and collapses first, forming the SMBH. The outer shells collapse later, and the relation between their co-moving radius R and the collapse time t_col is given by a simple formula following from the model of spherical collapse [13,20]:
\frac{M(R) - M_R}{M_R}\, D(1) = \delta_c(t_{col}),
where M(R) = 4π ∫_0^R ρ(r) r^2 dr is the mass of matter inside a sphere of co-moving radius R, ρ(r) = ρ(1 + δ(r)), ρ is the mean matter density at the moment t_col, M_R ≡ (4/3)πR^3 ρ, δ is the initial amplitude of the density perturbation, δ_c(t_col) is the critical overdensity as a function of collapse time, D(a) is the growth factor to the present from high redshift relative to the growth factor from the same initial amplitude and initial redshift in the Einstein-de Sitter model, and a is the scale factor, equal to 1 at the current epoch. This formula is correct if the mass of matter inside each shell is fixed. For shells close to the centre this condition is satisfied at the initial stage of SMBH formation, because the particles belonging to them have small impact parameters. Outer shells with lower averaged density contrast collapse later, when hydrodynamical and dark matter halo virialization processes take place. Some part of the matter will be captured by the SMBH, providing the quasar luminosity. The quasar will be active for some time after the formation of the galaxy and the virialization of the dark matter halo, until the matter which serves as fuel for the quasar engine is depleted from the range of unstable orbits. Tidal collisions of merging galaxies can renew the quasar activity, and this can be repeated many times during the galaxy's life. We suppose that the most luminous quasars are short-lived active stages of the early evolution of galaxies.
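The inside-out ordering of the collapse can be illustrated with a short numerical check: for any density profile peaked at the centre, the mean overdensity entering the left-hand side of the collapse condition decreases with R, so inner shells satisfy the condition for earlier collapse times. The Gaussian profile and all numbers below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def mean_overdensity(R, delta0=50.0, r_s=0.2, n=2000):
    """(M(R) - M_R)/M_R = (3/R^3) * integral_0^R delta(r) r^2 dr for the
    toy profile delta(r) = delta0 * exp(-(r/r_s)^2), via a midpoint sum."""
    dr = R / n
    r = (np.arange(n) + 0.5) * dr
    delta = delta0 * np.exp(-(r / r_s) ** 2)
    return 3.0 / R**3 * float(np.sum(delta * r**2)) * dr
```

Evaluating this at increasing R gives a monotonically decreasing sequence, which is precisely the statement that the outer shells collapse later.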
For the calculation of the bright QSO number density at different redshifts we have used the analytical approach proposed by [32] and improved by [23]. Let us suppose that the masses of the protogalactic clouds (protohaloes) which will host the brightest QSOs are in the range M, M + ∆M, and that the masses of the homogeneous central parts of the perturbations (top-hat) in which the SMBHs will be formed are in the range M_th, M_th + ∆M_th. The mass of the SMBH can be lower than M_th.
The halo number density with mass in the range M th , M th + ∆M th at some time t is determined by the mass function [32]
n(M_{th}, t) = \frac{\rho}{M_{th}}\, \frac{\delta_c}{(2\pi)^{1/2}\sigma_{th}^{3}}\, \exp\left(-\frac{\delta_c^2}{2\sigma_{th}^2}\right) \frac{d\sigma_{th}^2}{dM_{th}}\, \Delta M_{th},

where σ_th^2 ≡ σ^2(M_th) is the r.m.s. of density perturbations in the top-hat sphere with radius R_th = (3M_th/4πρ)^{1/3} and δ_c(t) is the critical magnitude of the linear density perturbation which collapses at t. We take into account only those haloes of mass M_th which are the central peaks of protogalaxy clouds of mass M. They can be extracted from the total number density by estimating the probability that their mass, after the collapse and virialization of the protogalaxy cloud, will be in the range M, M + ∆M during the short time ∆t [23]
\frac{d^2 p}{dM\,dt}(M_{th} \to M|t)\,\Delta M\,\Delta t = \frac{1}{(2\pi)^{1/2}}\, \frac{\sigma_{th}^2}{\sigma^2\,(\sigma_{th}^2 - \sigma^2)^{3/2}}\, \exp\left(-\frac{\delta_c^2\,(\sigma_{th}^2 - \sigma^2)}{2\sigma_{th}^2\sigma^2}\right) \frac{d\sigma^2}{dM}\, \frac{d\delta_c}{dt}\, \Delta M\,\Delta t,
where σ^2 ≡ σ^2(M) is the r.m.s. of density perturbations in the sphere which contains the mass M of the whole halo. An important condition for SMBH formation is the spherical symmetry and top-hat density profile of the central region of the protogalaxy cloud in which the appearance of a bright quasar is expected. All shells in such a region collapse practically simultaneously, which should result in the formation of an SMBH. As the main fraction of the mass consists of collisionless cold dark matter particles, the pressure of the hot baryonic gas cannot prevent the collapse. So, from the estimated number density of haloes with a given mass we must exclude the essentially non-spherical ones. The distribution of the ellipticity e and prolateness p of ellipsoidal clouds with a given initial amplitude of density perturbation δ has been obtained by [4,35]:
$$g(e, p\,|\,\delta) = \frac{1125}{\sqrt{10\pi}}\, e\,(e^2 - p^2)\left(\frac{\delta}{\sigma}\right)^{5} e^{-2.5\,(\delta/\sigma)^2\,(3e^2 + p^2)}, \tag{1}$$
where 0 ≤ e < ∞ and −e ≤ p ≤ e. We suppose that the ellipticity of the homogeneous regions of protogalactic clouds whose collapse leads to SMBH formation is constrained from above: e ≤ e′_m. Their fraction is

$$p(e < e'_m) = \int_0^{e'_m} de \int_{-e}^{e} dp\; g(e, p\,|\,\delta). \tag{2}$$

Finally, we present the QSO number density in comoving space as the product of the number density of haloes with mass in the range M_th, M_th + ∆M_th, the fraction of those which belong to larger haloes with mass in the range M, M + ∆M, and the fraction of such haloes with ellipticity lower than e_m:

$$n_{\rm QSO}(t) \simeq \frac{\rho}{M_{\rm th}}\,\frac{f(e_m\,\delta_c/\sigma)}{4\pi\left[\sigma^2\left(\sigma_{\rm th}^2 - \sigma^2\right)\right]^{3/2}}\,\left|\frac{d\delta_c^2}{dt}\right| \times \exp\!\left(-\frac{\delta_c^2}{2\sigma^2}\right)\left|\frac{d\sigma^2}{dM}\right|\left|\frac{d\sigma_{\rm th}^2}{dM_{\rm th}}\right|\Delta M_{\rm th}\,\Delta M\,\Delta t. \tag{3}$$
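As a numerical illustration of the halo-abundance factor entering the expressions above, the sketch below evaluates the Press–Schechter mass function with a toy power-law σ(M). The pivot mass, slope and mean-density value are illustrative assumptions, not values from the paper, which derives σ(M) from the CDM power spectrum.

```python
import math

def sigma(M, M8=6e14, sigma8=0.796, gamma=0.6):
    # toy power-law sigma(M); slope and pivot mass are illustrative assumptions
    return sigma8 * (M / M8) ** (-gamma / 2.0)

def dsigma2_dM(M, eps=1e-5):
    # numerical derivative of sigma^2 with respect to mass
    return (sigma(M * (1 + eps))**2 - sigma(M * (1 - eps))**2) / (2 * eps * M)

def n_halo(M_th, delta_c, rho_mean=7.5e10):
    """Press-Schechter abundance of haloes in [M_th, 1.1*M_th] per Mpc^3,
    following the mass-function factor in the text (rho_mean in M_sun/Mpc^3)."""
    s = sigma(M_th)
    return (rho_mean / M_th) * delta_c / (math.sqrt(2 * math.pi) * s**3) \
        * math.exp(-delta_c**2 / (2 * s**2)) * abs(dsigma2_dM(M_th)) * (0.1 * M_th)

low, high = n_halo(1e13, 1.673), n_halo(1e15, 1.673)
```

The exponential suppression of high-mass haloes at fixed δ_c is what drives the steep decline of the predicted QSO abundance at high redshift, where δ_c(z) is large.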
We present the dependence of QSOs number density on redshift in the form:
$$n_{\rm QSO}(z) = A \cdot f\!\left(\frac{e_m\,\delta_c(z)}{\sigma(M)}\right)\left|\frac{d\delta_c^2(z)}{dz}\right|\,\frac{1}{H_0}\left|\frac{dz}{dt}\right| \times \exp\!\left[-\frac{1}{2}\left(\frac{\delta_c(z)}{\sigma(M)}\right)^2\right], \tag{4}$$
where
$$\frac{1}{H_0}\frac{dz}{dt} = -(1+z)^{5/2}\sqrt{\Omega_m + \frac{\Omega_K}{1+z} + \frac{\Omega_\Lambda}{(1+z)^3}}. \tag{5}$$
All redshift-independent multipliers in (3) are collected in A ([A] = Mpc⁻³). It is the product of the ratio ρ/M_th, some constants, spectrum-dependent values (σ, σ_th) and their derivatives with respect to mass, as well as unknown values such as the QSO lifetime (τ_QSO = ∆t) and the mass range ∆M of haloes which host the bright QSOs or X-ray AGNs. Therefore, we treat it as a free parameter of the model. The critical amplitude of density perturbations δ_c(t_col) depends on the matter density and the value of the cosmological constant [21,20]. For the ΛCDM model with the parameters used here it equals 1.673 at z = 0. For other redshifts we have approximated the numerical results for δ_c(z) from [21] by the simple analytic formula

$$\delta_c(z) = \frac{0.564}{0.39 + (1+z)^{2.86}} + 1.267\,(1+z),$$

which works well in the redshift range 0 ≤ z ≤ 5. So, the dependence (4) of the QSO number density on redshift has 3 free parameters: the normalization constant A, the mass M and the maximal ellipticity e_m of the protogalactic clouds. They are used to fit n_QSO(z) to the observational data on the QSO number density.
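The two closed-form ingredients of Eq. (4), the δ_c(z) fit and the expansion factor of Eq. (5), can be checked with a few lines. Note the grouping of the three numerical constants in δ_c(z) is reconstructed from the garbled source so that δ_c(0) = 1.673, the value quoted in the text.

```python
import math

def delta_c(z):
    # analytic fit to delta_c(z); grouping of constants reconstructed from
    # the text so that delta_c(0) = 1.673 and delta_c grows as (1+z) at high z
    return 0.564 / (0.39 + (1 + z)**2.86) + 1.267 * (1 + z)

def dzdt_over_H0(z, om=0.258, ok=0.0, ol=0.742):
    # Eq. (5): (1/H0) dz/dt = -(1+z)^{5/2} sqrt(Om + Ok/(1+z) + OL/(1+z)^3)
    return -(1 + z)**2.5 * math.sqrt(om + ok / (1 + z) + ol / (1 + z)**3)
```

At z = 0 the expansion factor reduces to −√(Ω_m + Ω_K + Ω_Λ) = −1, i.e. dz/dt = −H₀, and δ_c(0) reproduces the quoted 1.673.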
Results and discussions
We have used the observational data on the number density of bright QSOs at different redshifts presented by [36], a compilation of optical (SDSS, COMBO-17) and X-ray (Chandra, ROSAT) QSO and AGN redshift surveys (see Fig. 1). The ChaMP+CDF+ROSAT data, which include X-ray AGN with L_X > 10^{44.5} [19], are shown there by lines. We calculated the r.m.s. of density perturbations as follows:
$$\sigma^2(M) = A\,(2\pi)^{-3/2}\int_0^\infty k^{2+n_s}\,T^2(k)\,W^2(kR)\,dk,$$
where R is the radius of the top-hat sphere containing the mass M, W(x) = 3(sin x − x cos x)/x³ is the window function and T(k) is the transfer function. The latter has been calculated using the publicly available code CAMB [24]. The normalization constant A of the power spectrum was determined from the equality

$$\sigma_8^2 = A\,(2\pi)^{-3/2}\int_0^\infty k^{2+n_s}\,T^2(k)\,W^2(8k)\,dk.$$

The solid line in Fig. 1 shows n_QSO(z) calculated for M = 8·10^11 h^−1 M_⊙ and e_m = 0.38. The parameter A has been used to fit n_QSO(z) to the observational data at the maximum (z ≃ 2.5), and for this line it equals 9.75·10^−7 Mpc^−3. It is the product of the lifetime of QSOs, the spectrum-dependent values σ(M), σ(M_th) and the mass range ∆M of haloes which host bright QSOs. So, in the ΛCDM model a halo of mass M = 8·10^11 h^−1 M_⊙ with a top-hat central part of M_th = 10^9 h^−1 M_⊙ may host a SMBH with M_SMBH ∼ 0.1−0.001 M_th, which should shine like a bright QSO during τ_QSO ∼ 5.6·10^6 years (∆log M = ∆log M_th = 1). As we can see, the solid line agrees very well with the ChaMP+CDF+ROSAT observational data [36], covering the wide range of redshifts from z ≃ 0 to z ≃ 5.
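The σ₈ normalization can be sketched with a simple quadrature. The BBKS-style transfer function below is only an illustrative stand-in for the CAMB-computed T(k) the paper actually uses, and the integration settings are arbitrary.

```python
import math

def transfer(k, h=0.719, omega_m=0.258):
    # BBKS-like fitting formula: a stand-in for CAMB output (assumption)
    q = k / (omega_m * h)
    return (math.log(1 + 2.34 * q) / (2.34 * q)) * \
        (1 + 3.89 * q + (16.1 * q)**2 + (5.46 * q)**3 + (6.71 * q)**4) ** -0.25

def window(x):
    # Fourier transform of the top-hat sphere, W(x) = 3(sin x - x cos x)/x^3
    return 3.0 * (math.sin(x) - x * math.cos(x)) / x**3 if x > 1e-8 else 1.0

def sigma2_over_A(R, ns=0.963, kmax=10.0, nk=4000):
    # simple Riemann sum for (2*pi)^(-3/2) * Int k^(2+ns) T^2(k) W^2(kR) dk
    dk = kmax / nk
    s = sum((i * dk)**(2 + ns) * transfer(i * dk)**2 * window(i * dk * R)**2
            for i in range(1, nk + 1)) * dk
    return s / (2 * math.pi)**1.5

A_norm = 0.796**2 / sigma2_over_A(8.0)   # fix A from sigma_8 = 0.796
```

Once A is fixed this way, σ(M) follows from the same integral with the window evaluated at the radius R enclosing the mass M.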
Without the constraint on the ellipticity of protogalactic clouds hosting QSOs, the solid line in Fig. 1 would lie essentially higher than the corresponding observational data at z < 1.2. As follows from (1), the mean ellipticity of a cloud which collapses at a given z is e = σ(M)/(2δ_c(z)). For high z the ellipticity is small, as δ_c(z) is large. Clouds which collapse at lower z are more elliptical and more rarely produce QSOs.
The mass M of the protogalactic clouds which can host bright QSOs depends on σ_8 and δ_c(z) and should be different in cosmological models with different sets of parameters. For higher σ_8 the protogalaxy mass will be higher. Increasing the mass leads to a steeper curve of the QSO number density at z > 2.5 and to a displacement of its maximum towards lower z. On the other hand, decreasing the upper limit of the ellipticity of the protogalactic clouds displaces that maximum towards higher z. The dashed and dash-dotted lines in Fig. 1 show the QSO number density for ellipticities e ≤ e_m = 0.2 (A = 7·10^−7 Mpc^−3) and e ≤ e_m = 0.17 (A = 1.1·10^−6 Mpc^−3), respectively. The protogalactic cloud mass is M = 4·10^12 h^−1 M_⊙ in both cases.
The minimal mass of a halo in which bright QSOs may arise can be estimated by comparing the model curve for the quasar number density with the corresponding observational data. Indeed, the number density of lower-mass haloes is higher and they collapse earlier; so, at large z the predicted number-density curve would be less steep and its maximum would be shifted to z > 2.5. The dotted line shows the redshift dependence of the number density of bright quasars arising in haloes of mass M = 3·10^11 h^−1 M_⊙ with initial ellipticity lower than e_m = 0.5 (A = 5.35·10^−8 Mpc^−3). It lies above all the observational data (excluding one ROSAT point [29]), and this mass can serve as a rough estimate of the lower limit on the mass of haloes which produce bright QSOs and/or X-ray AGN.
The dependence of the bright QSO number density on redshift obtained here matches the observational data essentially better than those deduced within semi-analytical merging models of SMBH formation (compare Fig. 9 of [18], Fig. 3 of [26], Fig. 7 of [27] and Fig. 3 of [5]), which underestimate the number density of luminous AGNs/QSOs at high redshifts.
We must note that the number density of quasars with lower luminosity, L_{2−8 keV} < 10^{44} erg·s^−1, has a maximum at z ∼ 1 [2,11,15,38,40]. This cannot be explained by the simple model for bright quasars used here, and several effects should be taken into account in this case. 1) The assumption that the lifetime of bright QSOs is small in comparison with the cosmological time can be incorrect for fainter quasars; it can depend on the density profile of the protogalactic cloud, its angular momentum, etc. 2) SMBHs can be formed in protogalaxies of smaller mass essentially later than the collapse has occurred; this time lag can depend on the physical parameters of the protogalactic cloud too. 3) Fainter quasars can result from the reactivation of the SMBH in the centres of some galaxies by tidal collisions with satellite galaxies or by their merging. Some galactic nuclei can experience such reactivations several times during the galaxy's life.
Conclusions
We have assumed that bright QSOs are short-lived active phases of the early evolution of massive galaxies. Their large X-ray and optical luminosities are provided by the accretion of baryonic matter onto SMBHs formed as a result of the collapse of the homogeneous central parts of protogalactic clouds (haloes). Using the expression for the comoving number density of haloes of mass M_th present at time t [23] and constraining the ellipticity (e ≤ e_m) of those which belong to larger haloes of mass M, we have obtained the formula for the dependence of the QSO number density on redshift. It has 3 free parameters (M, e_m and the normalization constant A), which allow us to fit the model dependence to observational data. As the background cosmological model we used the ΛCDM model with the best-fitting parameters determined by [19]: Ω_m = 0.258, Ω_Λ = 0.742, h = 0.719, σ_8 = 0.796 and n_s = 0.963. The obtained dependence n_QSO(z) (4) best fits the observational data on the number densities of bright QSOs and X-ray AGN, ChaMP+CDF+ROSAT [36], for M = 8·10^11 h^−1 M_⊙ and e_m = 0.38. The best-fitting value of the normalization constant is A = 9.75·10^−7 Mpc^−3. The lifetime of such QSOs is τ_QSO ≈ 6·10^6 years. The lower limit on the halo mass estimated in this approach is M = 3·10^11 h^−1 M_⊙.
Therefore, the halo model of galaxy formation, together with the assumption that bright QSOs are short-lived active phases of the early evolution of massive galaxies, agrees well quantitatively with the corresponding observational data.
This integral has a functional presentation in the form p(e < e′_m) ≡ f(x), where x = e′_m δ/σ(M_th). It means that constraining the ellipticity of the central homogeneous regions, e ≤ e′_m, constrains the ellipticity of the whole protogalactic cloud, e ≤ e_m, where e′_m/σ(M_th) = e_m/σ(M).
Figure 1: The dependence of the QSO comoving number density on redshift: theoretical curves versus observational data. The solid line shows the best-fitting model with M = 8·10^11 h^−1 M_⊙ and e_m = 0.38. Dashed and dash-dotted lines present models with the same protogalactic cloud mass, M = 4·10^12 h^−1 M_⊙, but different ellipticities, e_m = 0.2 and e_m = 0.17 respectively. The dotted line shows the comoving number density of QSOs for the model with M = 3·10^11 h^−1 M_⊙ and e_m = 0.5. Data for sources with luminosities above … erg·s^−1 in the energy range 0.3−8 keV are shown in the figure by squares. The results of the calculations of n_QSO(z) are for the ΛCDM model with parameters Ω_b = 0.044, Ω_m = 0.258, Ω_Λ = 0.742, h = 0.719, σ_8 = 0.796 and n_s = 0.963.
Acknowledgments

This work was supported by the projects of the Ministry of Education and Science of Ukraine "Investigation of variable stars, supernova remnants and stellar clusters using observational data obtained by ground-based and space telescopes" and "Formation of large-scale structure in the Universe with dark energy". The authors appreciate partial support from the National Academy of Sciences of Ukraine under the research program "Cosmomicrophysics".
References

[1] Abel T., Bryan G. & Norman M., 2000, ApJ, 540, 39.
[2] Barger A.J., Cowie L.L., Capak P., Alexander D.M., Bauer F.E., Brandt W.N. et al., 2003, ApJ, 584, L61.
[3] Begelman M.C., Volonteri M., Rees M.J., 2006, MNRAS, 370, 289.
[4] Bond J.R., Myers S., 1996, ApJS, 103, 1.
[5] Bonoli S., Marulli F., Springel V. et al., 2009, MNRAS, 396, 423.
[6] Boyle B.J., Shanks T., Peterson B.A., 1988, MNRAS, 235, 935.
[7] Bromm V., Coppi P.S., Larson R.B., 2002, ApJ, 564, 23.
[8] Bromm V., Loeb A., 2003, ApJ, 596, 34.
[9] Cen R., Gnedin N.Yu., Kofman L.A., Ostriker J.P., 1992, ApJ, 399, L11.
[10] Chornij Yu., Kulinich Yu., Novosyadlyj B., 2004, Kinematics and Physics of Celestial Bodies, 20, 359.
[11] Cowie L.L., Barger A.J., Bautz M.W. et al., 2003, ApJ, 584, L57.
[12] Eisenstein D.J., Loeb A., 1995, ApJ, 443, 11.
[13] Eke V.R., Cole S., Frenk C.S., 1996, MNRAS, 282, 266.
[14] Fan X., Strauss M.A., Schneider D.P., Gunn J.E., Lupton R.H. et al., 2001, AJ, 121, 54.
[15] Fiore F., Brusa M., Cocchia F., Baldi A. et al., 2003, A&A, 409, 79.
[16] Haehnelt M.G. & Rees M.J., 1993, MNRAS, 263, 168.
[17] Hawkins M.R.S. & Veron P., 1996, MNRAS, 281, 348.
[18] Kauffmann G. & Haehnelt M.G., 2000, MNRAS, 311, 576.
[19] Komatsu E., Dunkley J., Nolta M.R. et al., 2009, ApJS, 180, 330.
[20] Kulinich Yu., 2008, Kinematics and Physics of Celestial Bodies, 24, 169.
[21] Kulinich Yu., Novosyadlyj B., 2003, Journal of Physical Studies, 7, 234.
[22] Koushiappas S.M., Bullock J.S. & Dekel A., 2004, MNRAS, 354, 292.
[23] Lacey C., Cole S., 1993, MNRAS, 262, 627.
[24] Lewis A., Challinor A. & Lasenby A., 2000, ApJ, 538, 473 (http://camb.info).
[25] Loeb A. & Rasio F.A., 1994, ApJ, 432, 52.
[26] Malbon R.K., Baugh C.M., Frenk C.S. & Lacey C.G., 2007, MNRAS, 382, 1394.
[27] Marulli F., Bonoli S., Branchini E., Moscardini L. & Springel V., 2008, MNRAS, 385, 1846.
[28] Menci N., Cavaliere A., Fontana A. et al., 2003, ApJ, 587, L63.
[29] Miyaji T., Hasinger G. & Schmidt M., 2000, A&A, 353, 25.
[30] Nuser A. & Silk J., 1993, ApJ, 411, L1.
[31] Novosyadlyj B. & Chornij Yu., 1997, Journal of Physical Studies, 1, 287.
[32] Press W.H. & Schechter P., 1974, ApJ, 187, 425.
[33] Shaver P.A., Hook I.M., Jackson C.A. et al., 1999, ASP Conf. Ser., ed. by Carilli C.L. et al., 156, 163.
[34] Schmidt M., Schneider D.P. & Gunn J.E., 1995, AJ, 110, 68.
[35] Sheth R.K., Mo H.J. & Tormen G., 2001, MNRAS, 323, 1.
[36] Silverman J.D. et al., 2005, ApJ, 624, 630.
[37] Somerville R.S., Hopkins P.F., Cox T.J., Robertson B.E., Hernquist L., 2008, MNRAS, 391, 481.
[38] Steffen A.T., Barger A.J., Cowie L.L. et al., 2003, ApJ, 596, L23.
[39] Umemura M., Loeb A. & Turner E.I., 1993, ApJ, 419, 459.
[40] Ueda Y., Akiyama M., Ohta K. & Miyaji T., 2003, ApJ, 598, 886.
[41] Volonteri M., Haardt F. & Madau P., 2003, ApJ, 582, 559.
[42] Wolf C., Wisotzki L., Borch A., Dye S., Kleinheinrich M. & Meisenheimer K., 2003, A&A, 408, 499.
Multi-Phase Locking Value: A Generalized Method for Determining Instantaneous Multi-frequency Phase Coupling

Bhavya Vasudeva (Indian Statistical Institute, Kolkata 700108, West Bengal, India); Runfeng Tian (Stephenson School of Biomedical Engineering, The University of Oklahoma, Tulsa, Oklahoma 74135, USA); Dee H. Wu (Department of Radiological Sciences, The University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma 73104, USA); Shirley A. James (Department of Rehabilitation Sciences, College of Allied Health, The University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma 73117, USA); Hazem H. Refai (Department of Electrical and Computer Engineering, The University of Oklahoma, Tulsa, Oklahoma 74135, USA); Fei He (Centre for Computational Science and Mathematical Modelling, Coventry University, Coventry CV1 2JH, UK); Yuan Yang (Stephenson School of Biomedical Engineering, The University of Oklahoma, Tulsa, Oklahoma 74135, USA; Department of Physical Therapy and Human Movement Sciences, Feinberg School of Medicine, Northwestern University, Chicago, Illinois 60611, USA)

Abstract — Background: Many physical, biological and neural systems behave as coupled oscillators, with characteristic phase coupling across different frequencies. Methods such as n : m phase locking value (where two coupling frequencies are linked as mf1 = nf2) and bi-phase locking value have previously been proposed to quantify phase coupling between two resonant frequencies (e.g. f, 2f/3) and across three frequencies (e.g. f1, f2, f1 + f2), respectively. However, the existing phase coupling metrics have their limitations and limited applications. They cannot be used to detect or quantify phase coupling across multiple frequencies (e.g. f1, f2, f3, f4, f1 + f2 + f3 − f4), or coupling that involves non-integer multiples of the frequencies (e.g. f1, f2, 2f1/3 + f2/3). New methods: To address the gap, this paper proposes a generalized approach, named multi-phase locking value (M-PLV), for the quantification of various types of instantaneous multi-frequency phase coupling. Different from most instantaneous phase coupling metrics that measure the simultaneous phase coupling, the proposed M-PLV method also allows the detection of delayed phase coupling and the associated time lag between coupled oscillators. Results: The M-PLV has been tested on cases where synthetic coupled signals are generated using white Gaussian signals, and a system comprised of multiple coupled Rössler oscillators, as well as a human subject dataset. Results indicate that the M-PLV can provide a reliable estimation of the time window and frequency combination where the phase coupling is significant, as well as a precise determination of time lag in the case of delayed coupling. This method has the potential to become a powerful new tool for exploring phase coupling in complex nonlinear dynamic systems.

DOI: 10.1016/j.bspc.2022.103492 · arXiv: 2102.10471 (https://arxiv.org/pdf/2102.10471v2.pdf)
I. INTRODUCTION
Complex systems such as the human brain behave as a series of oscillators with their instantaneous phases dynamically coupled over multiple frequency bands [1][2][3][4][5]. Sheremet et al. [6] use quadratic nonlinearity to detect cross-frequency coupling between theta and gamma waves in the hippocampus. Recent works focus on reconstructing coupling functions [7] and estimating the phase oscillator model [8] using real data. The phase and amplitude dynamics of large nonlinear systems of heterogeneous, globally coupled oscillators [9] and non-identical damped harmonic oscillators [10] have also been studied.
Methods such as the n : m phase locking value (PLV) [11], the bi-phase locking value (bPLV) [12] and their variants [13][14][15] have previously been proposed to detect and quantify different types of phase coupling. The n : m PLV measures phase coupling between two resonant frequencies when n cycles of one oscillatory signal are phase locked to m cycles of another oscillatory signal, i.e., |mφ(f_n, t) − nφ(f_m, t)| ≤ ε [11,16], where φ(f, t) is the instantaneous phase at frequency f and time point t, the two resonant frequencies f_n and f_m are linked as f_n : f_m = n : m, and ε denotes a small constant. The bPLV quantifies quadratic phase coupling among three frequencies, where a pair of frequencies f_1 and f_2 are coupled to a third frequency f_3 = f_1 + f_2 or f_1 − f_2, i.e., |φ(f_1, t) ± φ(f_2, t) − φ(f_3, t)| ≤ ε [12].

* [email protected]
However, phase coupling can appear in more complicated patterns involving more than three frequencies (e.g. f_1, f_2, f_3, f_4, f_1 + f_2 + f_3 − f_4) as well as non-integer multiples of the frequencies (e.g. f_1, f_2, 2f_1/3 + f_2/3), which cannot be detected or quantified using conventional phase coupling metrics such as the n : m PLV [11] and bPLV [12]. A novel measure called multi-spectral phase coherence (MSPC) has recently been developed by Yang and colleagues to provide a generalized approach for quantifying integer multi-frequency phase coupling [17]. This method has been applied to the human nervous system to advance our understanding of nonlinear neuronal processes and their functions in movement control [18,19] and sensory perception [20]. The MSPC is a straightforward extension of bPLV based on higher-order spectra [21]; however, it covers neither the non-integer multi-frequency phase coupling problem (e.g. f_1, f_2, 2f_1/3 + f_2/3) nor the non-integer resonant coupling problem (e.g. the 2:3 coupling [22] revealed by n : m PLV).
Thus, this paper aims to introduce a more generalized approach, namely the multi-phase locking value (M-PLV), which integrates the concepts of MSPC and n : m PLV to allow the detection and quantification of various types of phase coupling, including integer and non-integer, multi-frequency and resonant phase coupling. The proposed M-PLV provides a tool to explore previously unreported non-integer multi-frequency phase coupling that has never been captured by existing phase coupling methods. Furthermore, different from commonly used instantaneous phase coupling metrics, the proposed method also allows the detection of delayed phase coupling and the associated time lag between coupled oscillators. We tested the M-PLV in two scenarios where synthetic coupled signals are generated using white Gaussian signals and a system comprised of multiple coupled Rössler oscillators. A real application of the M-PLV is demonstrated on an EEG-EMG dataset recorded from human subjects during a motor task [23].
The rest of this paper is organized as follows: Section 2 describes M-PLV, Section 3 summarizes the experiments used to validate the method, Section 4 presents the results and discussion, and Section 5 concludes the paper.
II. MULTI-PHASE LOCKING VALUE (M-PLV): THEORY AND CALCULATION
The proposed M-PLV is a generalized approach that integrates the concepts of MSPC [17] and n : m PLV [11]. It not only provides a unified mathematical description of the phase coupling problems separately described by MSPC and n : m PLV, but also permits the detection and quantification of non-integer multi-frequency phase coupling that cannot be assessed using existing phase coupling methods.
A. M-PLV
The MSPC considers the case where multiple input frequencies f_1, f_2, ..., f_L are coupled to an output frequency f_Σ through an integer combination, f_Σ = Σ_{l=1}^{L} m_l f_l with m_l ∈ ℕ, such that

$$\left|\sum_{l=1}^{L} m_l\,\varphi(f_l, t) - \varphi(f_\Sigma, t)\right| \le \epsilon \tag{1}$$
The formula for MSPC is given by:
$$MSPC(f_1, f_2, \ldots, f_L;\, m_1, m_2, \ldots, m_L;\, t) = \left|\frac{1}{K}\sum_{k=1}^{K}\exp\!\left[\,j\left(\sum_{l=1}^{L} m_l\,\varphi_k(f_l, t) - \varphi_k(f_\Sigma, t)\right)\right]\right| \tag{2}$$
The MSPC does not cover the case where non-integer multiples of the input frequencies are coupled to the output frequency. To address this gap, the proposed M-PLV generalizes the relation between frequencies as n f_Σ = Σ_{l=1}^{L} m_l f_l, or equivalently f_Σ = Σ_{l=1}^{L} (m_l/n) f_l (L is a finite integer). Although m_l and n are integers, their ratios can be rational numbers. This idea is in line with the concept of n : m PLV [11], but allows the assessment of phase coupling between multiple input frequencies and one targeted output frequency.
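As a concrete arithmetic illustration (the frequency values are example choices, not from the paper), the non-integer combination 2f_1/3 + f_2/3 is expressed with integer m_l and n as follows:

```python
from fractions import Fraction

# f1 = 10 Hz, f2 = 25 Hz with (m1, m2, n) = (2, 1, 3):
# f_sigma = (2*f1 + 1*f2)/3 = 15 Hz, a non-integer multiple of f1 and f2
f = [Fraction(10), Fraction(25)]
m, n = [2, 1], 3
f_sigma = sum(mi * fi for mi, fi in zip(m, f)) / n   # -> 15
```

Keeping m_l and n as integers while allowing rational ratios m_l/n is what lets the same formula cover both the MSPC-style integer combinations and n : m resonant coupling.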
Moreover, there may exist a delay τ in the system between the input and the output, such that the coupling can be detected only after this delay has been compensated by aligning the indices of all the instantaneous phases. Incorporating these factors, the proposed M-PLV aims to detect and quantify a more generalized phase coupling phenomenon that can be described as:
$$\left|\sum_{l=1}^{L} m_l\,\varphi(f_l, t-\tau) - n\,\varphi(f_\Sigma, t)\right| \le \epsilon \tag{3}$$
Based on this theoretical definition and the formulae used by other methods to quantify phase coupling, the formula of M-PLV (Ψ) is given as follows for the calculation:
$$\Psi(f_1, f_2, \ldots, f_L;\, m_1, m_2, \ldots, m_L, n;\, t, \tau) = \left|\frac{1}{K}\sum_{k=1}^{K}\exp\!\left[\,j\left(\sum_{l=1}^{L} m_l\,\varphi_k(f_l, t-\tau) - n\,\varphi_k(f_\Sigma, t)\right)\right]\right| \tag{4}$$
where K is a finite number of observations and φ_k(f_l, t) is the instantaneous input phase at the k-th observation, which can be obtained from the Hilbert transform of the narrowband-filtered time series with spectrum centered at frequency f_l [24].
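Given phase estimates, Eq. (4) reduces to a sum over observations. The pure-Python sketch below assumes the instantaneous phases have already been extracted (e.g. via the Hilbert transform) and that the delay τ has already been compensated; the surrogate phases used to exercise it are illustrative.

```python
import cmath, math, random

def m_plv(phases_in, phase_out, m, n):
    """Eq. (4) at a single time point: phases_in[l][k] is phi_k(f_l, t)
    (delay already compensated) and phase_out[k] is phi_k(f_sigma, t),
    for k = 1..K observations."""
    K = len(phase_out)
    acc = sum(cmath.exp(1j * (sum(m[l] * phases_in[l][k] for l in range(len(m)))
                              - n * phase_out[k])) for k in range(K))
    return abs(acc) / K

# perfectly coupled surrogate phases: 2*phi1 + phi2 - 3*phi_sigma = 0
rng = random.Random(0)
phi1 = [rng.uniform(0, 2 * math.pi) for _ in range(200)]
phi2 = [rng.uniform(0, 2 * math.pi) for _ in range(200)]
phi_s = [(2 * a + b) / 3 for a, b in zip(phi1, phi2)]
coupled = m_plv([phi1, phi2], phi_s, [2, 1], 3)         # -> 1.0
random_out = [rng.uniform(0, 2 * math.pi) for _ in range(200)]
uncoupled = m_plv([phi1, phi2], random_out, [2, 1], 3)  # small value
```

Perfect phase locking yields Ψ = 1, while independent phases yield a value near 1/√K.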
B. Detecting significant M-PLV
In order to detect the time window and frequency at which phase coupling is significant, a reference threshold value of M-PLV is required. For this purpose, the 95% significance threshold is obtained by a Monte Carlo simulation [17]; this is a generally accepted confidence level for determining statistical significance [25], and Monte Carlo simulation is a typical way to establish the significance of cross-spectral measures such as coherence and phase coupling [26]. The null hypothesis is that the phase difference ∆φ(t; k) is completely random, so that the cyclic phase difference ∆φ(t; k) mod 2π will be uniformly and randomly distributed in the interval [0, 2π]. The cyclic phase difference is used here because the phase returned by inverting a sinusoid is periodic with period 2π. The M-PLV values corresponding to other frequency combinations at all instants t, as well as those corresponding to the combination of interest at instants outside the estimated coupling window t_c, are taken as surrogate data with uniformly and randomly distributed phase differences ∆φ(t; k). This procedure is repeated N times (typically N = 1000 is sufficient for a reliable Monte Carlo simulation of phase coupling measures [17]) to obtain the statistical distribution of M-PLV values for a given number of observations, which is determined by the experimental design or the available real data. The threshold is then determined as the minimum value greater than 95% of all the values in the distribution (i.e., the 95th percentile).
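A minimal version of this surrogate procedure can be written directly from the null hypothesis (the parameters K, N and the seed below are illustrative):

```python
import cmath, math, random

def random_plv(K, rng):
    # PLV of K fully random phase differences: one surrogate realization
    acc = sum(cmath.exp(1j * rng.uniform(0, 2 * math.pi)) for _ in range(K))
    return abs(acc) / K

def significance_threshold(K, N=1000, level=0.95, seed=1):
    """95% threshold under the null of uniformly random phase differences."""
    rng = random.Random(seed)
    surrogate = sorted(random_plv(K, rng) for _ in range(N))
    return surrogate[min(int(level * N), N - 1)]
```

The threshold shrinks roughly as 1/√K, so a larger number of observations makes weaker coupling detectable.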
C. Delay Estimation
In order to estimate the delay τ, the M-PLV is calculated for different values τ_i within a given range. The value of τ_i corresponding to the maximum M-PLV is the estimated delay τ̂ of the system.
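A sketch of this scan is shown below; for brevity it averages over time within a single observation rather than over trials (a simplification of Eq. (4)), and the synthetic random-walk phases with a 5-sample delay are illustrative test data.

```python
import cmath, random

def mplv_lagged(phi_in, phi_out, m, n, tau):
    """Time-averaged phase locking with input phases delayed by tau samples;
    phi_in[l][t] and phi_out[t] are phase time series."""
    T = len(phi_out)
    acc = sum(cmath.exp(1j * (sum(m[l] * phi_in[l][t - tau] for l in range(len(m)))
                              - n * phi_out[t])) for t in range(tau, T))
    return abs(acc) / (T - tau)

def estimate_delay(phi_in, phi_out, m, n, max_tau):
    # tau_hat = argmax over the scanned lags
    return max(range(max_tau + 1),
               key=lambda tau: mplv_lagged(phi_in, phi_out, m, n, tau))

# synthetic random-walk phases coupled with a known 5-sample delay
rng = random.Random(2)
T, true_tau = 500, 5
phi1, phi2 = [0.0], [0.0]
for _ in range(T - 1):
    phi1.append(phi1[-1] + rng.uniform(0.0, 0.5))
    phi2.append(phi2[-1] + rng.uniform(0.0, 0.3))
phi_out = [0.0] * T
for t in range(true_tau, T):
    phi_out[t] = (2 * phi1[t - true_tau] + phi2[t - true_tau]) / 3
tau_hat = estimate_delay([phi1, phi2], phi_out, [2, 1], 3, 20)
```

At the true lag the phase difference is identically zero, so the scan peaks exactly at τ̂ = 5 samples.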
III. EXPERIMENTS
We tested the M-PLV in two scenarios where synthetic coupled signals are generated: (1) using white Gaussian signals alone, and (2) from a system comprised of multiple coupled Rössler oscillators. In these simulations the sampling frequency is 1 kHz. Notably, the numerical values used in the simulations are just example values for testing the proposed method; in real applications, different values could be used based on the experimental data. For example, we applied the M-PLV to check 1:1 (integer) and 2:1 (1/2, non-integer) coupling and to estimate the delay between electroencephalography (EEG) and electromyography (EMG) signals during a motor task, demonstrating a real application of the M-PLV (see Section III C), where the numerical values come from real data obtained in a human subject experiment [23].
A. Coupled white Gaussian signals
In this case, x(t) and y(t) are two independent white Gaussian signals (zero mean and unit variance). The synthetic signal y_c(t) is generated as follows:
y_c(t) = y(t) − y(f_Σ, t_c) + A_y(f_Σ, t_c) · [ x^{|m_1|}(f_1, t_c) x^{|m_2|}(f_2, t_c) ] / [ A_x^{|m_1|}(f_1, t_c) A_x^{|m_2|}(f_2, t_c) ]    (5)
where t is in the range [0.001, 10] s and x(t_c) denotes x(t) in the phase-coupling time window t_c = [2.501, 7.5] s. x(f_1, t_c) is a narrowband signal with spectrum centered at frequency f_1, obtained by passing x(t_c) through a Butterworth band-pass filter [27] centered at f_1 (bandwidth: 2 Hz, 6th order). A_x(f_1, t_c) is the envelope of the Hilbert transform of x(f_1, t_c). In order to eliminate the effect of the filter on the signal phase, a zero-phase-shift filter (Matlab function: filtfilt.m) is used in this study. Normalizing the signal x(f_1, t_c) by its envelope A_x(f_1, t_c) prevents abrupt changes in its amplitude.
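The envelope-and-phase step above relies on the analytic signal. The paper uses Matlab's Hilbert transform after zero-phase Butterworth filtering; as a dependency-free sketch of just the analytic-signal construction (an O(N^2) DFT toy, not production code), one can zero the negative frequencies and read off envelope and instantaneous phase:

```python
import cmath
import math

def analytic_signal(x):
    """Discrete analytic signal via the frequency domain (O(N^2) DFT sketch)."""
    n = len(x)
    X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]
    for k in range(n):
        if 0 < k < n // 2:
            X[k] *= 2.0      # double positive frequencies
        elif k > n // 2:
            X[k] = 0.0       # zero negative frequencies; keep DC and Nyquist
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

n, f = 64, 5
x = [math.cos(2 * math.pi * f * t / n) for t in range(n)]
z = analytic_signal(x)
envelope = [abs(v) for v in z]        # A_x(t), used to normalise the narrowband signal
phases = [cmath.phase(v) for v in z]  # instantaneous phase phi_x(t)
```

For a pure tone with an integer number of cycles in the window, the envelope is constant and the phase advances by 2π f / f_s per sample, which is what the normalization in Eq. (5) assumes.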
In these designed signals, there is phase coupling between y_c(t) and x(t) in the time interval t_c, following the rule f_Σ = m_1 f_1 + m_2 f_2; this serves as the ground truth in this "white-box" problem for testing the M-PLV for integer (n = 1) multi-frequency phase coupling with zero delay (τ = 0).
In order to check for the phase coupling between x(t) and y c (t), M-PLV is calculated based on Eq. (4), and the set of input frequencies includes f 1 and f 2 .
B. Coupled Rössler oscillators
In this case, y(t) is a white Gaussian signal, while the x_i(t) are obtained from a system of coupled Rössler oscillators in the chaotic regime, consisting of N − 1 independent oscillators coupled to the N-th oscillator. The system is characterized by the following equations:
ẋ_i = −( (1/n) Σ_{j=1}^{N−1} m_j ω_j ) u_i − z_i + ε_i ( (1/n) Σ_{j=1}^{N−1} m_j x_j − x_i )    (6)

u̇_i = ω_i x_i + a u_i    (7)

ż_i = c + z_i (x_i − b)    (8)
where ε_i = 0 for i < N (N a finite integer) and ω_j = 2π f_j. These coupled oscillators are designed to mimic a multiple-input-single-output (MISO) system. In this case, Eq. (5) can be generalized to a larger number of signals coupled at different frequencies, so that n f_Σ = Σ_{l=1}^{N} m_l f_l, and the coupled signal is obtained as follows:
y_c(t) = y(t) − y(n f_Σ, t_c) + A_y(n f_Σ, t_c) · [ x_1^{|m_1|}(f_1, t_c) / A_{x_1}^{|m_1|}(f_1, t_c) ] ⋯ [ x_N^{|m_N|}(f_N, t_c) / A_{x_N}^{|m_N|}(f_N, t_c) ]    (9)
where x_i(f_j, t_c) is obtained by passing x_i(t_c) through a Butterworth band-pass filter [27] centered at frequency f_j (bandwidth: 2 Hz, 6th order). In order to introduce a delay τ into the system, t_c is replaced by t_c − τ in the above equation. The coupling is evaluated between x_N(t) and y_c(t) by calculating the M-PLV according to Eq. (4).
The 95% significance threshold and the delay τ can be estimated through the procedures described in Sections 2.2 and 2.3.
C. EEG-EMG dataset
The real application of the proposed method is demonstrated on EEG and EMG data from four healthy participants, recorded in a previous study at Northwestern University, Chicago, USA [23]. In that study, the participants were recruited with written informed consent and the permission of the Northwestern University institutional review board. Participants were seated with the tested arm positioned at 85° shoulder abduction, 45° shoulder flexion and 90° elbow flexion in a Biodex pedestal. The maximum voluntary torque (MVT) of shoulder abduction (SABD) was measured at the beginning of the experiment for each participant. After that, the participants were asked to lift the tested arm and hold it for 10 seconds at 40% of SABD MVT in each trial. In total, the trials were repeated 25 times. 32-channel EEG (Biosemi, Inc., Active II, Amsterdam, the Netherlands) was recorded using the 10/20 recording system. The EMG from muscle activity at the intermediate deltoid of the tested arm was recorded simultaneously during the experiment. The brain and muscles are coupled during the movement task, since the brain controls/communicates with the muscles [19, 23]; thus, this dataset is suitable for testing the proposed method. The C3 (if the tested arm is the right arm) or C4 (if the left arm) channel of EEG was used to compute the coupling between EEG and EMG, since these channels lie over regions of the primary motor cortex controlling arm movements [28, 29]. Both EEG and EMG were sampled at 2048 Hz.
B. Coupled Rössler oscillators: integer and non-integer multi-frequency phase coupling with a delay
In these simulations, we set K = 400, N = 3, and the parameters of the coupled Rössler oscillators (Eqs. (6)-(8)) as a = 0.15, c = 0.2, b = 10, and ε_N = 0.1. Note that the proposed method works for larger N as well; without loss of generality, N = 3 is sufficient to show the capability of the proposed method, as detailed below.
To demonstrate the performance of the method for integer (n = 1) multi-frequency phase coupling with zero delay (τ = 0) in a MISO system, the oscillators are simulated for 80 seconds and two sets of 30 000 samples are obtained from the simulated signals, with t = [10.001, 40] s, t_c = [17.501, 32.5] s for the first set and t = t + 40 s, t_c = t_c + 40 s = [57.501, 72.5] s for the second set. In this case, f_1 = 3 Hz, f_2 = 5 Hz, m_1 = −1, m_2 = 2, so that f_Σ = −1 × 3 + 2 × 5 = 7 Hz. M-PLV is calculated for possible combinations of the frequencies f_1 = 3 Hz and f_2 = 5 Hz to examine whether significant M-PLV is detected only at the target frequency 7 Hz rather than at others (e.g. 2 × 3 − 5 = 1 Hz, 2 × 3 + 5 = 11 Hz, etc.). Fig. 3 and Fig. 4 show M-PLV for the first and second set, respectively. The coupling is detected in the time window t̂_c = [17.383, 32.286] s (error: 2.2%) for the first set and t̂_c = [57.484, 72.279] s (error: 1.6%) for the second set. Here, taking the coupling interval as t_c = [t1, t2] and the estimated interval as t̂_c = [t1', t2'], the estimation error is defined as 100 · (|t1 − t1'| + |t2 − t2'|)/(t2 − t1).

To demonstrate the performance of the method for non-integer multi-frequency phase coupling, the procedure is repeated for another case where f_1 = 7 Hz, f_2 = 13 Hz, m_1 = 1, m_2 = 1, and n = 5, so that f_Σ = (1 × 7 + 1 × 13)/5 = 4 Hz. Also, t = [10.001, 40] s and t_c = [17.501, 32.5] s. Fig. 5 shows the results obtained for f_Σ = 4 Hz. Using the 95% significance threshold, t̂_c = [17.295, 32.444] s (error: 1.7%).

To demonstrate the performance of the method for delay estimation, the synthetic signal is generated with τ set to 1 s. In this case, t = [10.001, 40] s, t_c = [17.501, 32.5] s, f_1 = 29 Hz, f_2 = 13 Hz, m_1 = 2, and m_2 = −1, so that f_Σ = 45 Hz. Fig. 6 shows the average M-PLV obtained for varying τ_i. The local maximum estimated over 10 such simulations is τ̂ = 0.994 ± 0.0568 s, with an average error of less than 5%.

C. Phase coupling between brain and muscle activities with a delay: a real application

For this case, the signals were first low-pass filtered using a sixth-order Butterworth filter with cutoff frequency 256 Hz and downsampled to 512 Hz. Then, we computed the 1:1 M-PLV for frequencies in the range 14-40 Hz, averaged over 25 trials for 4 subjects, for different values of the delay τ. Comparing the average M-PLV for various values of τ gives τ̂ = 25.4 ms. Fig. 7 shows M-PLV as a function of time for various frequencies. The estimated time delay is in line with the nerve conduction delay from the brain to the muscles reported in previous experimental studies [30, 31]. Next, we checked for 2:1 coupling between EEG and EMG signals with the same time delay, because, in healthy participants, nonlinear coupling is generated in the same motor descending pathway as the linear coupling [19]. Fig. 8 shows the results, where EEG at 20 and 26 Hz is coupled to EMG at 10 and 13 Hz.
Although a continuous shoulder abduction torque was generated during the motor control task, both linear and nonlinear parts do not show continuous coupling. This is likely related to the discontinuous firing patterns of neurons in the motor descending pathway which may be associated with the excitatory and inhibitory processes of the continuous motor command [32].
D. Comparison of M-PLV, MSPC, bPLV, and n:m PLV
When the time delay τ = 0, the proposed method can be used to detect and quantify simultaneous multi-frequency phase coupling. Additionally, if n = 1, M-PLV further reduces to MSPC, which measures simultaneous integer multi-frequency phase coupling:
MSPC(f_1, f_2, ..., f_L; m_1, m_2, ..., m_L; t) = | (1/K) Σ_{k=1}^{K} exp( j [ Σ_{l=1}^{L} m_l φ_k(f_l, t) − φ_k(f_Σ, t) ] ) |    (10)
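Eq. (10) can be transcribed directly (a sketch on hypothetical phase samples, with our own variable names; when the coupling rule Σ m_l φ_k(f_l) = φ_k(f_Σ) holds exactly, the value is 1):

```python
import cmath
import math
import random

def mspc(phases, m, phase_sum):
    """Eq. (10): |(1/K) sum_k exp(j*(sum_l m_l*phi_k(f_l) - phi_k(f_sum)))|."""
    K = len(phase_sum)
    s = sum(cmath.exp(1j * (sum(ml * phases[l][k] for l, ml in enumerate(m))
                            - phase_sum[k]))
            for k in range(K))
    return abs(s) / K

rng = random.Random(0)
K, m = 500, [2, -1]                                   # the f_sum = 2*f1 - f2 case
phi1 = [rng.uniform(0, 2 * math.pi) for _ in range(K)]
phi2 = [rng.uniform(0, 2 * math.pi) for _ in range(K)]
coupled = [2 * a - b for a, b in zip(phi1, phi2)]     # obeys the coupling rule exactly
uncoupled = [rng.uniform(0, 2 * math.pi) for _ in range(K)]
```

Here `mspc([phi1, phi2], m, coupled)` is 1, while for the uncoupled phases the value falls near the chance level of order 1/sqrt(K).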
Notably, bPLV [12] is essentially a special form of MSPC or M-PLV in which the interest is in quadratic phase coupling:

bPLV(f_1, f_2; t) = | (1/K) Σ_{k=1}^{K} exp( j [ (φ_k(f_1, t) + φ_k(f_2, t)) − φ_k(f_1 + f_2, t) ] ) |    (11)
When L = 1, M-PLV also reduces to the n:m PLV [11]:

PLV_{n:m}(f_n, f_m; m, n; t) = | (1/K) Σ_{k=1}^{K} exp( j [ m φ_k(f_n, t) − n φ_k(f_m, t) ] ) |    (12)
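The degenerate n:m case of Eq. (12) can be illustrated with a toy 2:1 example (hypothetical phase series, not the EEG-EMG data; names are ours):

```python
import cmath
import math
import random

def plv_nm(phi_n, phi_m, m, n):
    """Eq. (12): n:m phase locking value between two phase series."""
    K = len(phi_n)
    s = sum(cmath.exp(1j * (m * a - n * b)) for a, b in zip(phi_n, phi_m))
    return abs(s) / K

rng = random.Random(7)
K = 400
# 2:1 locking: the fast phase advances twice as fast, with a fixed 0.3 rad lag
phi_slow = [rng.uniform(0, 2 * math.pi) for _ in range(K)]
phi_fast = [(2 * p + 0.3) % (2 * math.pi) for p in phi_slow]
locked = plv_nm(phi_fast, phi_slow, m=1, n=2)
```

`locked` equals 1, because 1·φ_fast − 2·φ_slow is constant modulo 2π; testing a wrong ratio (e.g. n = 3) yields a value near chance level.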
As such, M-PLV not only allows the detection and quantification of delayed coupling and of non-integer and integer multi-frequency coupling, but also provides a generic mathematical framework that accommodates all common forms of phase coupling in the existing literature. Note that the simultaneous phase coupling measures MSPC, bPLV, and n:m PLV are not able to correctly detect delayed coupling (they show non-significant values), as in the cases of Sections IV B (simulation) and IV C (real data), since no time delay τ appears in their definitions. The comparison of M-PLV, MSPC, bPLV, and n:m PLV is summarized in Table I.
Methods      | Type of phase coupling                | Type of dynamic coupling
M-PLV        | All multi-frequency coupling          | Coupling with/without delays
MSPC [17]    | Integer multi-frequency coupling only | Coupling without delays only
bPLV [33]    | Quadratic coupling only               | Coupling without delays only
n:m PLV [11] | Two-frequency coupling only           | Coupling without delays only
V. CONCLUSION
In this paper, a new method for quantifying multi-frequency phase coupling has been proposed. It addresses the limitation of existing approaches that only allow the detection of coupling between two resonant frequencies (n:m PLV) or quadratic coupling among three frequencies (bPLV). The M-PLV allows us to quantify various types of phase coupling, including both integer and non-integer phase coupling across multiple frequencies, permitting the exploration of more complicated, even unreported, phase coupling phenomena in the real world. Simulation studies have been performed on synthetic coupled signals generated from white Gaussian signals and from a complex system comprised of multiple coupled Rössler oscillators. We also tested our approach in a real application, checking neural coupling between electrical brain (EEG) and muscle (EMG) signals. Our results suggest that the proposed method achieves a reliable estimate of the frequency combination as well as of the time window during which phase coupling is present. Furthermore, the method provides a precise estimate of the delay between the input and the output when delayed phase coupling is present between the oscillators. It has the potential to become a powerful new tool for exploring phase coupling in complex nonlinear dynamic systems such as the human motor system.
Conflicts of interest/Competing interests
The authors declare that they have no conflicts of interest.
IV. RESULTS AND DISCUSSION

A. Coupled white Gaussian signals: integer multi-frequency phase coupling with zero delay

The results are shown for f_1 = 29 Hz, f_2 = 13 Hz, m_1 = 2, and m_2 = −1, so that f_Σ = 2 × 29 − 1 × 13 = 45 Hz. Fig. 1 shows M-PLV plotted as a function of time and frequency for varying numbers of epochs K (K = 500, 750, 900). M-PLV is calculated for all possible linear combinations of the frequencies f_1 = 29 Hz and f_2 = 13 Hz with integer weights, to examine whether significant M-PLV is detected only at the target frequency 45 Hz rather than at other frequencies. It is observed that M-PLV shows significant values for f_Σ = 45 Hz in a time window t̂_c close to t_c = [2.501, 7.5] s, namely t̂_c = [2.492, 7.511] s, [2.421, 7.461] s, and [2.431, 7.6] s for K = 500, 750, and 900, respectively. The error of the time-window estimation can be defined as the difference between t̂_c and t_c divided by the window size; the errors are below 5% for all tested values of K. To further demonstrate the performance of M-PLV, Fig. 2 shows example plots of M-PLV for K = 600 for some possible combination frequencies of f_1 = 29 Hz and f_2 = 13 Hz (e.g. 29 − 2 × 13 = 3 Hz, 0 × 29 + 3 × 13 = 39 Hz, etc.). Significant M-PLV is only detected at the targeted frequency f_Σ = 45 Hz within the coupled time window.
FIG. 1: M-PLV as a function of time and frequency for K = (a) 500, (b) 750 and (c) 900.

FIG. 2: M-PLV for K = 600 as a function of time (unit: ms) for the set of frequencies (a) 3, (b) 39, (c) 45, (d) 55, (e) 71, and (f) 87 Hz.

FIG. 3: M-PLV for the first set of values of the coupled Rössler oscillators, as a function of time, for the set of frequencies (a) 1, (b) 7, (c) 9, (d) 11, (e) 13, and (f) 15 Hz.
FIG. 4: M-PLV for the second set of values of the coupled Rössler oscillators, as a function of time, for the set of frequencies (a) 1, (b) 7, (c) 9, (d) 11, (e) 13, and (f) 15 Hz.

FIG. 5: M-PLV as a function of time for n = 5 (non-integer multiples) at frequency 4 Hz.
FIG. 6: Average M-PLV as a function of delay τ. The local maximum occurs at τ̂ = 1.02 s in this case.

FIG. 7: 1:1 M-PLV for the EEG-EMG signals, as a function of time for frequencies between 14-40 Hz.
FIG. 8: 2:1 M-PLV for the EEG-EMG signals, as a function of time for frequencies (a) 20 and (b) 26 Hz.
This work was supported by NIH R21HD099710 and P20GM121312, OCAST HR21-164-1 and NSF RII Track-2 FEC 1539068. B. Vasudeva received a stipend from the S. N. Bose Scholars Program 2019.
TABLE I: Comparison of different phase coupling methods.
The brainweb: phase synchronization and largescale integration. F Varela, J.-P Lachaux, E Rodriguez, J Martinerie, Nature reviews neuroscience. 2229F. Varela, J.-P. Lachaux, E. Rodriguez, and J. Mar- tinerie, The brainweb: phase synchronization and large- scale integration, Nature reviews neuroscience 2, 229 (2001).
Dynamic models of large-scale brain activity. M Breakspear, Nature neuroscience. 20340M. Breakspear, Dynamic models of large-scale brain ac- tivity, Nature neuroscience 20, 340 (2017).
The functional role of cross-frequency coupling. R T Canolty, R T Knight, Trends in cognitive sciences. 14506R. T. Canolty and R. T. Knight, The functional role of cross-frequency coupling, Trends in cognitive sciences 14, 506 (2010).
Cross-frequency coupling between neuronal oscillations. O Jensen, L L Colgin, Trends in cognitive sciences. 11267O. Jensen and L. L. Colgin, Cross-frequency coupling be- tween neuronal oscillations, Trends in cognitive sciences 11, 267 (2007).
F He, Y Yang, Nonlinear system identification of neural systems from neurophysiological signals. 458213F. He and Y. Yang, Nonlinear system identification of neural systems from neurophysiological signals, Neuro- science 458, 213 (2021).
Theta-gamma coupling: a nonlinear dynamical model. A Sheremet, Y Zhou, J P Kennedy, Y Qin, S N Burke, A P Maurer, 10.1101/304238A. Sheremet, Y. Zhou, J. P. Kennedy, Y. Qin, S. N. Burke, and A. P. Maurer, Theta-gamma coupling: a non- linear dynamical model, bioRxiv 10.1101/304238 (2018).
Coupling functions: dynamical interaction mechanisms in the physical, biological and social sciences. T Stankovski, T Pereira, P V E Mcclintock, A Stefanovska, 10.1098/rsta.2019.0039Philosophical Transactions of the Royal Society A. 377T. Stankovski, T. Pereira, P. V. E. McClintock, and A. Stefanovska, Coupling functions: dynamical interac- tion mechanisms in the physical, biological and social sciences, Philosophical Transactions of the Royal Society A 377: 20190039, 10.1098/rsta.2019.0039 (2019).
A dynamical systems approach for estimating phase interactions between rhythms of different frequencies from experimental data. T Onojima, 10.1371/journal.pcbi.1005928PLoS computational biology. 14T. Onojima et al., A dynamical systems approach for es- timating phase interactions between rhythms of different frequencies from experimental data, PLoS computational biology 14,1 e1005928, 10.1371/journal.pcbi.1005928 (2018).
Phase and amplitude dynamics in large systems of coupled oscillators: Growth heterogeneity, nonlinear frequency shifts, and cluster states. W S Lee, E Ott, T M Antonsen, Chaos: An Interdisciplinary Journal of Nonlinear Science. 23, 033116 W. S. Lee, E. Ott, and T. M. Antonsen, Phase and amplitude dynamics in large systems of coupled oscillators: Growth heterogeneity, nonlinear frequency shifts, and cluster states, Chaos: An Interdisciplinary Journal of Nonlinear Science 23, 033116 (2013), https://doi.org/10.1063/1.4816361.
Phase and amplitude dynamics of nonlinearly coupled oscillators. P Cudmore, C A Holmes, 10.1063/1.4908604Chaos: An Interdisciplinary Journal of Nonlinear Science. 2523110P. Cudmore and C. A. Holmes, Phase and amplitude dy- namics of nonlinearly coupled oscillators, Chaos: An In- terdisciplinary Journal of Nonlinear Science 25, 023110 (2015).
Ermentrout, n: m phase-locking of weakly coupled oscillators. G B , Journal of Mathematical Biology. 12327G. B. Ermentrout, n: m phase-locking of weakly cou- pled oscillators, Journal of Mathematical Biology 12, 327 (1981).
Bi-phase locking -a tool for probing non-linear interaction in the human brain. F Darvas, J Ojemann, L Sorensen, 10.1016/j.neuroimage.2009.01.034NeuroImage. 46123F. Darvas, J. Ojemann, and L. Sorensen, Bi-phase locking -a tool for probing non-linear interaction in the human brain, NeuroImage 46, 123 (2009).
Partial phase synchronization for multivariate synchronizing systems. B Schelter, M Winterhalder, R Dahlhaus, J Kurths, J Timmer, Physical review letters. 96208103B. Schelter, M. Winterhalder, R. Dahlhaus, J. Kurths, and J. Timmer, Partial phase synchronization for mul- tivariate synchronizing systems, Physical review letters 96, 208103 (2006).
An improved index of phase-synchronization for electrophysiological data in the presence of volume-conduction, noise and sample-size bias. M Vinck, R Oostenveld, M Van Wingerden, F Battaglia, C M Pennartz, Neuroimage. 551548M. Vinck, R. Oostenveld, M. Van Wingerden, F. Battaglia, and C. M. Pennartz, An improved index of phase-synchronization for electrophysiological data in the presence of volume-conduction, noise and sample-size bias, Neuroimage 55, 1548 (2011).
Online epileptic seizure prediction using wavelet-based bi-phase correlation of electrical signals tomography. Z Vahabi, R Amirfattahi, F Shayegh, F Ghassemi, International journal of neural systems. 251550028Z. Vahabi, R. Amirfattahi, F. Shayegh, and F. Ghassemi, Online epileptic seizure prediction using wavelet-based bi-phase correlation of electrical signals tomography, In- ternational journal of neural systems 25, 1550028 (2015).
Chapter 9 phase synchronization: From theory to data analysis. M Rosenblum, A Pikovsky, J Kurths, C Schäfer, P Tass, 10.1016/S1383-8121(01)80012-9Neuro-Informatics and Neural Modelling. F. Moss and S. GielenNorth-Holland4M. Rosenblum, A. Pikovsky, J. Kurths, C. Schäfer, and P. Tass, Chapter 9 phase synchronization: From theory to data analysis, in Neuro-Informatics and Neural Mod- elling, Handbook of Biological Physics, Vol. 4, edited by F. Moss and S. Gielen (North-Holland, 2001) pp. 279 - 321.
A general approach for quantifying nonlinear connectivity in the nervous system based on phase coupling. Y Yang, T Solis-Escalante, J Yao, A Daffertshofer, A C Schouten, F C T van der Helm, International Journal of Neural Systems. 26, 1550031 Y. Yang, T. Solis-Escalante, J. Yao, A. Daffertshofer, A. C. Schouten, and F. C. T. van der Helm, A general approach for quantifying nonlinear connectivity in the nervous system based on phase coupling, International Journal of Neural Systems 26, 1550031 (2016), PMID: 26404514, https://doi.org/10.1142/S0129065715500318.
Nonlinear connectivity in the human stretch reflex assessed by cross-frequency phase coupling. Y Yang, T Solis-Escalante, J Yao, F C Van Der, J P Helm, A C Dewald, Schouten, International journal of neural systems. 261650043Y. Yang, T. Solis-Escalante, J. Yao, F. C. Van Der Helm, J. P. Dewald, and A. C. Schouten, Nonlinear connectivity in the human stretch reflex assessed by cross-frequency phase coupling, International journal of neural systems 26, 1650043 (2016).
Unveiling neural coupling within the sensorimotor system: directionality and nonlinearity. Y Yang, J P Dewald, F C Van Der Helm, A C Schouten, European journal of neuroscience. 482407Y. Yang, J. P. Dewald, F. C. van der Helm, and A. C. Schouten, Unveiling neural coupling within the sensori- motor system: directionality and nonlinearity, European journal of neuroscience 48, 2407 (2018).
Expectation and attention increase the integration of top-down and bottom-up signals in perception through different pathways. N Gordon, N Tsuchiya, R Koenig-Robert, J Hohwy, PLoS biology. 173000233N. Gordon, N. Tsuchiya, R. Koenig-Robert, and J. Ho- hwy, Expectation and attention increase the integration of top-down and bottom-up signals in perception through different pathways, PLoS biology 17, e3000233 (2019).
Signal processing with higher-order spectra. C L Nikias, J M Mendel, IEEE Signal processing magazine. 1010C. L. Nikias and J. M. Mendel, Signal processing with higher-order spectra, IEEE Signal processing magazine 10, 10 (1993).
Multi-frequency phase locking in human somatosensory cortex. A J Langdon, T W Boonstra, M Breakspear, Progress in biophysics and molecular biology. 10558A. J. Langdon, T. W. Boonstra, and M. Breakspear, Multi-frequency phase locking in human somatosensory cortex, Progress in biophysics and molecular biology 105, 58 (2011).
Assessing the usage of indirect motor pathways following a hemiparetic stroke. R Tian, J P Dewald, Y Yang, IEEE Transactions on Neural Systems and Rehabilitation Engineering. 291568R. Tian, J. P. Dewald, and Y. Yang, Assessing the us- age of indirect motor pathways following a hemiparetic stroke, IEEE Transactions on Neural Systems and Reha- bilitation Engineering 29, 1568 (2021).
Estimating and interpreting the instantaneous frequency of a signal. i. fundamentals. B Boashash, 10.1109/5.135376Proceedings of the IEEE. 80B. Boashash, Estimating and interpreting the instanta- neous frequency of a signal. i. fundamentals, Proceedings of the IEEE 80, 520 (1992).
Exact confidence interval for magnitude-squared coherence estimates. S Wang, M Tang, IEEE signal processing letters. 11326S. Wang and M. Tang, Exact confidence interval for magnitude-squared coherence estimates, IEEE signal processing letters 11, 326 (2004).
Spectral and cross-spectral analysis of uneven time series with the smoothed lomb-scargle periodogram and monte carlo evaluation of statistical significance. E Pardo-Igúzquiza, F J Rodríguez-Tovar, Computers & Geosciences. 49207E. Pardo-Igúzquiza and F. J. Rodríguez-Tovar, Spectral and cross-spectral analysis of uneven time series with the smoothed lomb-scargle periodogram and monte carlo evaluation of statistical significance, Computers & Geo- sciences 49, 207 (2012).
L O Chua, C A Desoer, E S Kuh, Linear and nonlinear circuits. McGraw-Hill CollegeL. O. Chua, C. A. Desoer, and E. S. Kuh, Linear and nonlinear circuits (McGraw-Hill College, 1987).
Nonlinear coupling between cortical oscillations and muscle activity during isotonic wrist flexion. Y Yang, T Solis-Escalante, M Van De Ruit, F C Van Der Helm, A C Schouten, Frontiers in computational neuroscience. 10126Y. Yang, T. Solis-Escalante, M. van de Ruit, F. C. van der Helm, and A. C. Schouten, Nonlinear coupling between cortical oscillations and muscle activity during isotonic wrist flexion, Frontiers in computational neuro- science 10, 126 (2016).
Eeg time-frequency analysis provides arguments for arm swing support in human gait control. J B Weersink, N M Maurits, B M De, Jong , Gait & posture. 7071J. B. Weersink, N. M. Maurits, and B. M. de Jong, Eeg time-frequency analysis provides arguments for arm swing support in human gait control, Gait & posture 70, 71 (2019).
Evidence for sustained cortical involvement in peripheral stretch reflex during the full long latency reflex period. M Perenboom, M Van De Ruit, J De Groot, A Schouten, C Meskers, Neuroscience letters. 584214M. Perenboom, M. Van de Ruit, J. De Groot, A. Schouten, and C. Meskers, Evidence for sustained cor- tical involvement in peripheral stretch reflex during the full long latency reflex period, Neuroscience letters 584, 214 (2015).
C L Witham, C N Riddle, M R Baker, S N Baker, Contributions of descending and ascending pathways to corticomuscular coherence in humans, The Journal of physiology. 5893789C. L. Witham, C. N. Riddle, M. R. Baker, and S. N. Baker, Contributions of descending and ascending path- ways to corticomuscular coherence in humans, The Jour- nal of physiology 589, 3789 (2011).
The discontinuous nature of motor execution. G Staude, R Dengler, W Wolf, Biological cybernetics. 8223G. Staude, R. Dengler, and W. Wolf, The discontinuous nature of motor execution, Biological cybernetics 82, 23 (2000).
A note on the phase locking value and its properties. S Aydore, D Pantazis, R M Leahy, 10.1016/j.neuroimage.2013.02.008NeuroImage. 74231S. Aydore, D. Pantazis, and R. M. Leahy, A note on the phase locking value and its properties, NeuroImage 74, 231 (2013).
arXiv:1711.01752
Quantum random number generator based on quantum tunneling effect
Junlin Li, Haihan Zhou, Dong Pan, and Guilu Long

State Key Laboratory of Low-Dimensional Quantum Physics, Tsinghua University, Beijing 100084, China
PACS numbers:
In this paper, we propose an experimental implementation of a quantum random number generator (QRNG) based on the inherent randomness of the quantum tunneling effect of electrons. We exploit InGaAs/InP diodes, whose valence band and conduction band share a quasi-constant energy barrier. We apply a bias voltage to the InGaAs/InP avalanche diode so that it works in Geiger mode, and trigger the tunneling events with a periodic pulse. Finally, after data collection and post-processing, our quantum random number generation rate reaches 8 Mb/s, and the final data pass the NIST and Diehard test suites. Our experiment offers an innovative low-cost, photonic-source-free, integratable, or even chip-achievable approach to quantum random number generation.
I. INTRODUCTION
Random numbers are crucial in many fields, for instance physical simulation [1], information processing [2], quantum communication protocols [3], quantum cryptography [4], and quantum computation [5]. Under most circumstances, security is directly associated with their unpredictability and uncopyability. However, the prevalent pseudo-random numbers and chaotic random numbers [6] are either theoretically pre-determined or not proven to be unpredictable. It is therefore of great significance to develop true random number generators. Fortunately, uncertainty is a fundamental property of quantum mechanics, and numerous studies on true random number generation have focused on applying quantum inherent uncertainty or probability, such as the path choice of a single photon with fixed polarization after passing through a PBS [7][8], the uncertainty of the arrival time of single photons [9][10], or the phase fluctuation of photons [11][12]. More recently, Bowles proposed a protocol for a self-testing quantum random number generator [13] based on the measurement of a 'dimension witness' [14], which gives a quantitative analysis of the true randomness of a given system. Also, Ma studied how to generate true randomness from an untrustworthy random source [15], and Xu introduced a robust quantum random number generator via high-dimensional interference [16]. The generation speed of these protocols varies from bps to Gbps. Notably, all of these prevailing protocols exploit photonic sources, or even single-photon sources.
In light of the difficulty of integrating photonic sources and their vulnerability to environmental influence, together with other flaws that impede the practical application of quantum random number generators, we focus on another intrinsic randomness of quantum mechanics, the tunneling probability of electrons [17][18][19], which abandons photonic sources altogether in favor of an electronic source. Consequently, our QRNG can be highly integratable.
In this paper, we introduce an efficient protocol for quantum random number generation that exploits the intrinsic indeterminacy of the quantum tunneling effect, and we experimentally realize this protocol with the widely used InGaAs/InP avalanche diode [20][21][22], reaching a generation speed of 8 Mb/s. Furthermore, higher speeds, up to 20 Mb/s, can be reached by changing the frequency of the trigger pulses. More generally, a system able to respond to higher trigger-pulse frequencies would further increase the generation speed, since stable high-frequency voltage pulses at GHz rates can be realized precisely with present-day electronics. Also, the post-processing program can easily be transplanted into our data-collecting FPGA, enabling real-time output of the quantum random number sequence.
II. PROTOCOL
Consider electrons trapped in a potential well. We apply a periodic bias voltage with peak value U_H to this system, which induces tunneling events with a constant probability p within each period. We then record a binary sequence: '0' when no tunneling occurs within a pulse period and '1' when it does. Here the tunneling probability p can be determined theoretically [23]. Finally, the sequence is post-processed to obtain the eligible random number sequence [24]. This protocol is summarized in Fig. 1.
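A minimal Python sketch of this raw-bit generation step (the tunneling probability p and pulse count below are illustrative, not the measured values):

```python
import random

def raw_tunneling_bits(n_pulses, p, seed=12345):
    """Simulate the raw record: '1' if a tunneling event occurs within a
    bias-pulse period (with constant probability p), '0' otherwise."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n_pulses)]

bits = raw_tunneling_bits(n_pulses=100_000, p=0.5)
print(sum(bits) / len(bits))  # fraction of '1's, close to p
```

This stream is the "raw data" that the post-processing stage of Section IV then distills into eligible random bits.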
We note that Shelan Khasro Tawfeeq has exploited the dark counts of an InGaAs/InP avalanche diode for random number generation [25]. However, our protocol is entirely distinct from hers; the difference is explained in the next section.
III. EXPERIMENTAL IMPLEMENTATION
The key problem in our experiment is the setup of the electron reservoir bounded by a stable potential barrier, since the electronic generation of precise bias voltage pulses is not a challenge. After several pre-tests, we chose the InGaAs/InP avalanche diode. Although the InGaAs/InP avalanche diode is prevalent in photon detectors, our experiment is totally independent of photonic sources; we merely take advantage of the quasi-static barrier it possesses. As concluded in [26], the energy band diagram of an InGaAs/InP avalanche diode consists of four sections. In our experiment, trigger signals were applied to accelerate electrons in the P+-InP section. These electrons tunneled through the junction between the P+-InP section and the n-InP section with a probability determined by the peak voltage U_H of the trigger signals. We then recorded the tunneling signals to obtain the raw data. The setup of our experiment is shown in Fig. 2, and Fig. 3 shows its circuit. We confined the InGaAs/InP diode in a sealed box, so no environmental photons could contribute to the signals received by the receptor module.
As mentioned above, the source of our QRNG is the tunneling effect of electrons in the InGaAs/InP avalanche diode, which is unrelated to photons and was considered part of the dark counts in previous studies [27].
Dark counts of an InGaAs/InP avalanche diode can be characterized into three kinds [28]: (1) dark counts induced by the thermal motion of electrons; (2) dark counts induced by the quantum tunneling effect; (3) dark counts induced by the after-pulse effect. When a proper bias voltage is applied to the InGaAs/InP avalanche diode, it works in Geiger mode [21]. Under this circumstance, the accelerated electrons trigger an avalanche in the accelerating section, which induces current signals. As the bias voltage varies, the tunneling probability changes, and so do the data properties (data entropy, data auto-correlation and the final data generation speed).
As mentioned in the previous section, our protocol focuses on the dark counts induced by the quantum tunneling effect, where the high-level voltage U_H of the trigger signal dominates the tunneling probability within a single period T. Shelan's work, by contrast, emphasized the pulse width of a fixed trigger signal; the after-pulse effect, which is not credited with quantum properties, could account for her results.

In order to restrain the first effect mentioned above, the working environment is kept at 200 K by a semiconductor cooling system. Under this condition, the type-I dark counts are reduced to 500/s, equivalent to 10^-5 per pulse in a system triggered by 50 MHz bias voltage pulses. Meanwhile, the type-III dark counts are partly circumvented by the dead time of the system: after each tunneling event there is a time interval ∆T during which the detector is forced offline, so after-pulses within ∆T are not counted. However, due to hardware limitations and the remnant after-pulses, our raw random number data require further optimization, and a post-processing program is applied to counteract this bias and subsequently generate true randomness.
The number of tunneling-induced signals counted in 1 s is displayed directly on a screen. In our experiment, the frequency of the bias voltage pulses was set to 50 MHz, and we adjusted the amplitude U of these pulses until the count reached 2.5 × 10^7 per second. Following [29], we model the tunneling probability as:
P(V) = A e^{-B/(V - V_0)}    (1)
Here A and B are parametric expressions determined by several device indexes, and V_0 is the critical voltage below which the tunneling probability is 0. In order to determine a proper operating voltage, we measured the mean and entropy of the output data from 49.25 V to 49.50 V and fitted the results to the above equation, obtaining Figures 4-7.
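For illustration, Eq. (1) can be evaluated numerically, reading its exponent as -B/(V - V_0) so that P vanishes as V approaches the critical voltage V_0 from above; A, B and V_0 below are placeholder values, not the fitted experimental ones:

```python
import math

def tunneling_probability(V, A, B, V0):
    """Eq. (1) read as P(V) = A * exp(-B / (V - V0)); P -> 0 as V -> V0+,
    matching the definition of the critical voltage V0."""
    if V <= V0:
        return 0.0
    return A * math.exp(-B / (V - V0))

# Illustrative parameters only (not the fitted experimental values):
A, B, V0 = 1.0, 0.2, 49.0
for v in (49.25, 49.40, 49.50):
    print(v, tunneling_probability(v, A, B, V0))  # increases with V
```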
We noticed that at U = 49.28 V, 49.29 V and 49.30 V the mean deviated markedly from the other data points, so we omitted these and arrived at Figures 5 and 7.
As the forward tunneling probability is too complicated to have an analytical expression [23], we fitted the data quantitatively with the curves shown above and finally chose U_H = 49.40 V. We designed an FPGA module to collect the tunneling data and save it into a .txt document. The speed of raw data collection is about 20 Mb/s, which is restricted by the USB communication serial port.
IV. POST-PROCESSING
The post-processing program is realized using a Toeplitz-hashing extractor [24].
Min-entropy estimation. We measure the min-entropy H_m of our raw data [30]:
H_m = -log_2(max_x P[x])    (2)
Here x ranges over all possible sequences in {0, 1}^n. In our scheme we took n = 8; that is, we divided the raw data into 8-bit segments, calculated the maximum probability among these segments, and obtained the min-entropy. In our experiments, H_m = 5.1204.

Toeplitz-hashing extractor. The min-entropy estimation characterizes the proportion of quantum randomness in the raw data: independent 2.8-bit quantum random codes can be extracted from each 8-bit raw data segment. Subsequently, we generate a Toeplitz matrix T from two independent random seeds s_A = {s_A1, s_A2, ..., s_Am} and s_B = {s_B1, s_B2, ..., s_Bn}. Here m and n are determined by the min-entropy and the raw data length l, and s_A and s_B form the row and column of T, respectively.
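The empirical min-entropy estimate of Eq. (2) over n-bit segments can be sketched as follows (sample sizes are illustrative; with few samples the estimate is noisy):

```python
import math
import random
from collections import Counter

def min_entropy(bits, n=8):
    """Empirical min-entropy H_m = -log2(max_x P[x]) over n-bit segments,
    as in Eq. (2); P[x] is estimated from segment frequencies."""
    segments = [tuple(bits[i:i + n]) for i in range(0, len(bits) - n + 1, n)]
    p_max = max(Counter(segments).values()) / len(segments)
    return -math.log2(p_max)

rng = random.Random(0)
uniform_bits = [rng.randint(0, 1) for _ in range(80_000)]
print(min_entropy(uniform_bits))   # approaches n = 8 for nearly uniform data
print(min_entropy([0] * 800))      # 0.0 for constant data
```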
n = l,    m = l × (H_m / H_0) - 2 log_2(ε)    (3)
Notice that ε is the security parameter, H_0 = log_2(l), and H_m is the min-entropy of the raw data.

Scheme of post-processing. For a raw data sequence d of length l, the eligible quantum random sequence d' is obtained as follows:
d × T = d':

(d_1, · · · , d_l) × [ s_A1 · · · s_Am ; · · · ; s_Bn · · · s_A1 ]_(l×m) = (d'_1, · · · , d'_m)    (4)
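The extraction step of Eq. (4) can be sketched in pure Python. The toy sizes and the bit-level layout of T below are illustrative assumptions; a production implementation would use bit-packed words or FFT-based multiplication for speed:

```python
import random

def toeplitz_extract(raw_bits, m, seed_bits):
    """Multiply the 1 x l raw-bit vector by an l x m binary Toeplitz matrix
    over GF(2), as in Eq. (4). The matrix needs l + m - 1 seed bits and is
    constant along diagonals: T[i][j] = seed_bits[i - j + m - 1]."""
    l = len(raw_bits)
    if len(seed_bits) != l + m - 1:
        raise ValueError("need l + m - 1 seed bits")
    out = []
    for j in range(m):
        acc = 0
        for i in range(l):
            acc ^= raw_bits[i] & seed_bits[i - j + m - 1]
        out.append(acc)
    return out

rng = random.Random(1)
l, m = 64, 16  # toy sizes, not the paper's l = 3000
raw = [rng.randint(0, 1) for _ in range(l)]
seed = [rng.randint(0, 1) for _ in range(l + m - 1)]
extracted = toeplitz_extract(raw, m, seed)
print(extracted)  # m extracted bits
```

A useful sanity check is that Toeplitz hashing is linear over GF(2): extracting from the XOR of two raw sequences equals the XOR of the two extractions.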
Here we noticed a systematic bias ascribed to the low peak-to-peak value of our QRNG. Briefly, the low level in our experiment was supposed to be low enough that no tunneling could occur and the after-pulse effect would be relieved. Unfortunately, restricted by the internal settings of the driver module in the InGaAs/InP trigger module, the difference between the high and low levels is fixed at ∆U = 4 V, which means that even the low level U_L = U_H - 4 V can produce a tunneling current. Inevitably, the InGaAs/InP APD self-protection program, which forces the APD out of the avalanche regime when it works in avalanche mode for a prolonged period, was activated automatically. Thus, we see a periodic set of 0s in our raw data.
V. DATA ANALYSIS
In our final experiment, we set the frequency of the trigger pulses to 50 MHz with 0 ns dead time, and the bias voltage was fixed at 49.40 V. As shown above, the min-entropy is H_m = 5.12 for 8-bit sequences. Combined with the security parameter ε = 2^-100 and data length l = 3000, the random bit generation rate was 8.3 Mb/s.
From a set of 5 Gb of final data, we applied the NIST-STS test and the Diehard test; see Figures 8 and 9. Aside from these two tests, the auto-correlation function was also computed from the data, as shown in Figure 10. More details of the test data are provided in the Appendix.
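The auto-correlation check can be sketched as a plain normalized autocorrelation on ±1-mapped bits (lags and sample sizes below are illustrative):

```python
import random

def autocorrelation(bits, max_lag=8):
    """Normalized autocorrelation of a ±1-mapped bit sequence; for good
    random data every lag > 0 should be close to zero."""
    x = [2 * b - 1 for b in bits]
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    acf = []
    for lag in range(1, max_lag + 1):
        cov = sum((x[i] - mean) * (x[i + lag] - mean)
                  for i in range(n - lag)) / (n - lag)
        acf.append(cov / var)
    return acf

rng = random.Random(7)
good = [rng.randint(0, 1) for _ in range(20_000)]
print(autocorrelation(good, 4))           # all close to 0
print(autocorrelation([0, 1] * 1000, 1))  # strong anti-correlation at lag 1
```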
It is apparent that all our random numbers passed these two tests, and the auto-correlation declined dramatically after post-processing with the Toeplitz-hashing extractor. Notably, feasible improvements to the data collection module and optimization of the trigger module would enable an even higher generation rate.
VI. CONCLUSION
In this paper, we proposed a QRNG protocol based on the tunneling effect in an InGaAs/InP avalanche diode, so that no photonic source is required in our experiments. With the application of the integrated modules of an InGaAs/InP single-photon detector, we implemented a photonic-source-free QRNG whose generation rate reaches 8.3 Mb/s; this rate can be lifted up to 20 Mb/s with the facilities we have. Our further study will focus on the following questions:
1. Designing a trigger source with higher frequency, shorter pulse width and larger peak-to-peak voltage.
2. Seeking a more stable and robust physical system as the tunneling source, in light of the disadvantages of our InGaAs/InP avalanche diode system.
3. Combining this tunneling protocol with other QRNGs, as mentioned in [13][15][16][31].
ACKNOWLEDGMENTS
We acknowledge Weixing Zhang and Hua Yuan for their assistance with the hardware design. We thank Prof. Xiongfeng Ma and Dr. Zhen Zhang for crucial discussions, and Xinyu Liu and Nan Jiang for their guidance on the application of several sets of test software. We also thank the NSFC for its financial support.
APPENDIX: DETAILED RESULTS OF RANDOMNESS TESTS
The detailed data analysis by the NIST test was obtained with the official program 'sts', version 2.1.2, as shown in Table I.
FIG. 1: Brief summary of the tunneling-based QRNG.

FIG. 2: Experimental setup of the tunneling-based QRNG. 1: optical input channel; 2: external clock input channel (trigger signal input channel); 3: final random number output channel.
FIG. 3: Circuit of the tunneling-based QRNG.
FIG. 4: Mean of output data under different voltages. R² = 0.964.
FIG. 5: Mean of output data under different voltages (without 49.28 V, 49.29 V and 49.30 V). R² = 0.9996.

FIG. 6: Entropy of output data under different voltages.
FIG. 7: Entropy of output data under different voltages (without 49.28 V, 49.29 V and 49.30 V).
FIG. 8: Upper: result of the NIST test of our final random sequence; lower: the pass rate of sequences decomposed from the original final sequence. The high-level voltage is V_h = 49.40 V and the data size of the original final sequence is 5 Gb.
FIG. 9: Result of the Diehard test of our final random sequence. The high-level voltage is V_h = 49.40 V and the data size of the original final sequence is 5 Gb.

FIG. 10: Auto-correlation of the foregoing final data.
TABLE I: Result of the NIST test for 5 Gb of final data. The minimum pass rate for each statistical test, with the exception of the random excursion (variant) test, is approximately 292 for a sample size of 301 binary sequences. The minimum pass rate for the random excursion (variant) test is approximately 168 for a sample size of 174 binary sequences. With confidence parameter α = 0.01, our data passed the NIST test.
The detailed data analysis by the Diehard test is shown in Table II:

Statistical Test                     P-value     Assessment
Birthday Test                        0.505898    Success
Overlapping Permutation              0.368835    Success
Ranks of 31 × 31 matrices            0.481990    Success
Ranks of 32 × 32 matrices            0.714278    Success
Ranks of 6 × 8 matrices              0.566601    Success
Bitstream Test                       0.01443     Success
OPSO                                 0.317900    Success
OQSO                                 0.592000    Success
DNA                                  0.883100    Success
Count 1s in the Stream of Bytes      0.648895    Success
Count 1s in the Special Bytes        0.790766    Success
Parking Lot Test                     0.058110    Success
Minimum Distance Test                0.762900    Success
3-D Spheres Test                     0.705579    Success
Squeeze Test                         0.634944    Success
Overlapping Sums Test                0.501220    Success
Runs                                 0.846631    Success
Craps                                0.242872    Success

TABLE II: Result of the Diehard test for 5 Gb of final data. All of these P-values lie in (0, 1); our data passed the Diehard test.
[1] J. E. Gentle, Random Number Generation and Monte Carlo Methods, pp. 1-60 (2003).
[2] J. Emerson, Y. S. Weinstein, M. Saraceno, S. Lloyd, and D. G. Cory, Science 302, 2098 (2003).
[3] F.-G. Deng, G. L. Long, and X.-S. Liu, Phys. Rev. A 68, 042317 (2003).
[4] C. H. Bennett, F. Bessette, G. Brassard, L. Salvail, and J. Smolin, Journal of Cryptology 5, 3 (1992).
[5] M. A. Nielsen and I. Chuang, Quantum Computation and Quantum Information (2002).
[6] T. Stojanovski and L. Kocarev, IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications 48, 281 (2001).
[7] J. Rarity, P. Owens, and P. Tapster, Journal of Modern Optics 41, 2435 (1994).
[8] A. Stefanov, N. Gisin, O. Guinnard, L. Guinnard, and H. Zbinden, Journal of Modern Optics 47, 595 (2000).
[9] Y.-Q. Nie, H.-F. Zhang, Z. Zhang, J. Wang, X. Ma, J. Zhang, and J.-W. Pan, Applied Physics Letters 104, 051110 (2014).
[10] M. Wahl, M. Leifgen, M. Berlin, T. Röhlicke, H.-J. Rahn, and O. Benson, Applied Physics Letters 98, 171105 (2011).
[11] B. Qi, Y.-M. Chi, H.-K. Lo, and L. Qian, Optics Letters 35, 312 (2010).
[12] F. Xu, B. Qi, X. Ma, H. Xu, H. Zheng, and H.-K. Lo, Optics Express 20, 12366 (2012).
[13] T. Lunghi, J. B. Brask, C. C. W. Lim, Q. Lavigne, J. Bowles, A. Martin, H. Zbinden, and N. Brunner, Physical Review Letters 114, 150501 (2015).
[14] J. Bowles, M. T. Quintino, and N. Brunner, Physical Review Letters 112, 140407 (2014).
[15] Y.-L. Tang, H.-L. Yin, Q. Zhao, H. Liu, X.-X. Sun, M.-Q. Huang, W.-J. Zhang, S.-J. Chen, L. Zhang, L.-X. You, et al., Phys. Rev. X 6, 011024 (2016).
[16] F. Xu, J. H. Shapiro, and F. N. Wong, Optica 3, 1266 (2016).
[17] A. O. Caldeira and A. J. Leggett, Physical Review Letters 46, 211 (1981).
[18] R. Banerjee and B. R. Majhi, Journal of High Energy Physics 2008, 095 (2008).
[19] D. Schwartz, B. Sen, C. N. Archie, and J. Lukens, Physical Review Letters 55, 1547 (1985).
[20] H. Kanbe, N. Susa, H. Nakagome, and A. Hiroaki, Electronics Letters 16, 163 (1980).
[21] D. Renker, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 567, 48 (2006).
[22] B. F. Aull, A. H. Loomis, D. J. Young, R. M. Heinrichs, B. J. Felton, P. J. Daniels, and D. J. Landers, Lincoln Laboratory Journal 13, 335 (2002).
[23] J. L. Moll (1964).
[24] X. Ma, F. Xu, H. Xu, X. Tan, B. Qi, and H.-K. Lo, Physical Review A 87, 062327 (2013).
[25] S. K. Tawfeeq, Journal of Lightwave Technology 27, 5665 (2009).
[26] N. Susa, H. Nakagome, O. Mikami, H. Ando, and H. Kanbe, IEEE Journal of Quantum Electronics 16, 864 (1980).
[27] A. Tosi, A. Dalla Mora, F. Zappa, S. Cova, M. Itzler, and X. Jiang, in SPIE OPTO: Integrated Optoelectronic Devices (International Society for Optics and Photonics, 2009), p. 72221G.
[28] G. Ribordy, J.-D. Gautier, H. Zbinden, and N. Gisin, Applied Optics 37, 2272 (1998).
[29] S. R. Forrest, R. F. Leheny, R. E. Nahory, and M. A. Pollack, Applied Physics Letters 37, 322 (1980).
[30] R. Konig, R. Renner, and C. Schaffner, IEEE Transactions on Information Theory 55, 4337 (2009).
[31] X. Ma, X. Yuan, Z. Cao, B. Qi, and Z. Zhang, arXiv:1510.08957 (2015).
Butterfly hysteresis loop and dissipative spin reversal in the S=1/2, V15 molecular complex

I. Chiorescu, W. Wernsdorfer, B. Barbara
Laboratoire de Magnétisme Louis Néel, CNRS, BP 166, 38042 Grenoble, France

A. Müller, H. Bögge
Fakultät für Chemie, Universität Bielefeld, D-33501 Bielefeld, Germany

18 Feb 2000. DOI: 10.1103/PhysRevLett.84.3454. arXiv: cond-mat/9910117.

Time resolved magnetization measurements have been performed on a spin 1/2 molecular complex, so called V15. Despite the absence of a barrier, magnetic hysteresis is observed over a timescale of several seconds. A detailed analysis in terms of a dissipative two level model is given, in which fluctuations and splittings are of same energy. Spin-phonon coupling leads to long relaxation times and to a particular "butterfly" hysteresis loop.
In this letter we study the dynamics of the magnetization reversal of a molecular crystal made of nanometric molecules with non-interacting S = 1/2 spins. Despite the absence of an energy barrier against spin reversal, this system shows hysteresis. These results are interpreted in detail assuming spin rotation in a phonon bath, which is different from the situation of large-spin molecules where only the spin bath is believed to be relevant [1][2][3][4]. Resonant phonon transitions are irrelevant, unless between states at different energies [5] or in the presence of a transverse field large enough to create a tunnel splitting of the order of the temperature energy scale [6].
The molecular complex K6[V(IV)15 As6 O42 (H2O)]·8H2O (so-called V15) [7] is made of molecules with fifteen V(IV) ions of spin S = 1/2, placed in a quasi-spherical layered structure formed of a triangle sandwiched by two hexagons. The symmetry is trigonal (space group R3c, a = 14.029 Å, α = 79.26°, V = 2632 Å^3). The unit cell contains two V15 clusters and is large enough that dipolar interactions between different molecules are negligible (a few mK). All intra-molecular exchange interactions being antiferromagnetic, the total spin of this molecule is S = 1/2. Such a small spin has zero energy barrier and a relatively large splitting in zero applied field (∼ 10^-2 K). Although spin entanglement results in 2^15 eigenstates per molecule, the magnetization curves will be interpreted in terms of a dissipative two-level model [8][9][10].
Time-resolved magnetization measurements were performed with the micro-SQUID technique (50 - 400 mK, 0 - 0.7 T/s) [11]. In order to maximize thermal contact with the bath, we chose a sample holder made of grease and silver powder and a small crystal of V15 (∼ 50 µm). As an example, a few hysteresis loops are given in Fig. 1a and Fig. 2a (only the positive parts are represented, the other ones being rigorously symmetrical).
When the field increases, coming from the negative saturation, the magnetization curve passes through the origin of the coordinates, reaches a plateau and then approaches saturation. This leads to a winged hysteresis loop characterized by the absence of irreversibility near zero field. Nevertheless, the initial susceptibilities being larger for faster sweeping fields, the magnetization is out of equilibrium also near zero field, where it appears to be reversible.

(Fig. 1 caption, continued: hysteresis loops for three temperatures and for a given field sweeping rate, 0.14 T/s. The plateau is more pronounced at low T. The inset is a schematic representation of a two-level system S_Z = ±1/2 with repulsion due to non-diagonal matrix elements. In a swept field the switching probability P is given by the Landau-Zener formula (see text). The two levels are broadened by the hyperfine fields, and the absorption or emission of phonons can switch the polarization state of spins.)
The wings depend sensitively on temperature T and field sweeping rate r. In Fig. 1a, where three hysteresis loops are presented at three different temperatures for a given sweeping rate, the plateau is higher and more pronounced at low temperature. The same tendency is observed at a given temperature for faster sweeping rates (Fig. 2a). When compared to its equilibrium limit (dotted curve in Fig. 2), each magnetization curve shows a striking feature: the plateau intersects the equilibrium curve and the magnetization becomes smaller than at equilibrium. Equilibrium is then reached in higher fields near saturation.

(Fig. 2 caption, continued: hysteresis loops for three field sweeping rates at T = 0.1 K. The observed plateau is more pronounced at high sweeping rate. The equilibrium curve can be approximated by the median of the two branches of the low-sweeping-rate hysteresis loop (dotted curve). The top inset plots the spin and phonon temperature T_S = T_ph for T = 0.1 K and r = 0.14 T/s, when the field is swept from negative values: T_S decreases until zero field, then increases linearly within the plateau region, overpasses the bath temperature and finally reaches equilibrium. The bottom inset plots the calculated number of phonons with ħω = ∆_H vs. the sweeping field modulus (note the arrows) at equilibrium (T_ph = T_S = T, dashed line) and out of equilibrium (n_{T_ph} = n_{T=T_S}, r = 0.14 T/s, black line). The difference between the two curves (thick segment ∆ω) suggests the moving hole in the phonon distribution, while their intersection gives the field at which the plateau intercepts the equilibrium magnetization curve.)
In order to interpret this magnetic behavior of the V 15 molecules, we will analyse how the level occupation numbers vary in this two level system (see Fig. 1b inset) when sweeping an external field. In the absence of dissipation, a 2-level model is well described by the bare Landau-Zener model, in the adiabatic or non-adiabatic case (low or high sweeping rates). The probability for the |1/2, −1/2 ↔ |1/2, 1/2 transition is
P = 1 - exp(-π∆_0² / 4ħµ_B r).
In such a Landau-Zener transition, the plateaus of Fig. 2 should decrease if the sweeping rate increases, which is contrary to the experiments. Taking the typical value r = 0.1 T/s and the zero-field splitting ∆_0 ≅ 0.05 K [12][13][14][15], one gets a ground-state switching probability very close to unity: in the absence of dissipation the spin 1/2 must adiabatically follow the field changes. Extremely large sweeping rates (≈ 10^9 T/s) would be needed to get into the quantum non-adiabatic regime P < 1. The mark of the V15 system is that the dissipative spin-phonon coupling acts also near zero applied field, because ħω ≈ ∆_0 is of the order of the bath temperature, which is not the case for large-spin molecules where ∆_0 << k_B T. The spin temperature T_S is such that n_1/n_2 = exp(∆_H/k_B T_S), where ∆_H = [∆_0² + (2µ_B B_0)²]^{1/2} is the field-dependent separation of the two levels, and n_{1,2} (n_{1,2eq}) are the out-of-equilibrium (equilibrium) level occupation numbers. In the magnetization curves at 0.1 K (Figs. 1, 2a), the spin temperature is significantly lower than the bath temperature T (n_1 > n_{1eq}, T_S < T) between -0.3 T (when the magnetization curve departs from the equilibrium one) and 0.15 T (the field at which the magnetization curve intersects the equilibrium one). After this intercept T_S is larger than the bath temperature (n_1 < n_{1eq}, T_S > T), and at sufficiently high fields (about 0.5 T) it reaches the equilibrium value (n_1 = n_{1eq}, T_S = T).
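A quick numerical check of these statements, using the quoted ∆_0 ≈ 0.05 K and r = 0.1 T/s with SI constants (a sketch, not part of the original analysis):

```python
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
HBAR = 1.054571817e-34    # reduced Planck constant, J*s
MU_B = 9.2740100783e-24   # Bohr magneton, J/T

def landau_zener(delta0_kelvin, rate_tesla_per_s):
    """P = 1 - exp(-pi * Delta0^2 / (4*hbar*mu_B*r)); returns (P, exponent)."""
    delta0 = delta0_kelvin * K_B  # zero-field splitting converted to joules
    exponent = math.pi * delta0 ** 2 / (4 * HBAR * MU_B * rate_tesla_per_s)
    return 1.0 - math.exp(-exponent), exponent

P, exponent = landau_zener(0.05, 0.1)
print(P, exponent)  # P = 1.0 to machine precision; exponent ~ 4e9
```

Since the exponent at r = 0.1 T/s is of order 10^9, sweeping rates of order 10^9-10^10 T/s would indeed be needed to leave the fully adiabatic regime, consistent with the text.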
In a direct process, the spins at temperature T_S should relax to the phonon temperature within a timescale τ_1, the phonons being at the bath temperature. However, even with a silver sample holder it is not possible to maintain the phonon temperature equal to the temperature of the bath. This is because in V15 below 0.5 K the heat capacity of the phonons C_ph is very much smaller than that of the spins C_S, so that the energy exchanged between spins and phonons very rapidly adjusts the phonon temperature T_ph to the spin temperature T_S. Furthermore, the energy is transferred from the spins only to those phonon modes with ħω = ∆_H (within the resonance linewidth). The number of such lattice modes being much smaller than the number of spins, energy transfer between the phonons and the sample holder must be very difficult, a phenomenon known as the phonon bottleneck [16]. Following [17], the number of phonons per molecule available for such resonant transitions is n_T = ∫_∆ω σ(ω)dω/(exp(ħω/kT) - 1), where σ(ω)dω = 3Vω²dω/(2π²v³) is the number of phonon modes between ω and ω + dω per molecule of volume V, v is the phonon velocity and ∆ω is the transition linewidth due to fast hyperfine field fluctuations (they broaden both energy levels) [18]. Taking the typical values v ≈ 3000 m/s, T ≈ 10^-1 K and ∆ω ≈ 5 × 10² MHz, we find n_T of the order of 10^-6 to 10^-8 phonons/molecule. Such a small number of phonons is very rapidly absorbed, burning a hole of width ∆ω in the phonon density of states at the energy ħω = ∆_H [16]. If this phonon density of states does not equilibrate fast enough, the hole must persist and move with the sweeping field, leading to a phonon bottleneck.
Now this description will be made quantitative. For a given splitting ∆_H, the time evolution of the two level populations n_{1,2} and of the phonon number n_{T_ph} at T_ph obeys the set of two differential equations [17]: (i) -ṅ_1 = ṅ_2 = P_12 n_1 - P_21 n_2 and (ii) ṅ_{T_ph} = -(n_{T_ph} - n_T)/τ_ph - P_12 n_1 + P_21 n_2, where P_12,21 are the transition probabilities between the two levels (themselves linear functions of n_{T_ph}) and τ_ph ≈ L/2v is the phonon-bath relaxation time (L is the sample size). Using the notations x = (n_1 - n_2)/(n_1eq - n_2eq) and y = (n_{T_ph} - n_T)/(n_T + n/2) with n = ∫_∆ω σ(ω)dω, we get: (i) ẋ = (1 - x - xy)/τ_1 and (ii) ẏ = -y/τ_ph + bẋ, where b = C_S/C_ph and 1/τ_1 = P_12 + P_21 is the direct spin-phonon relaxation rate. By solving this system numerically for typical values, e.g. τ_1 = 10^-2 s, τ_ph < 10^-6 s, b > 10^5, we can see that T_ph → T_S (the phonon bottleneck) very rapidly, as expected. This leads to y = 1/x - 1, and the second equation of the differential system becomes ẋ = (x - x²)/((1 + bx²)τ_ph). In the limit b >> 1 (in our case b ≈ 10^8 - 10^10) this equation has the solution:
-t/(bτ_ph) = x - x_0 + ln((x - 1)/(x_0 - 1)),    (0.1)
where x_0 = x(t = 0) and bτ_ph is the spin-phonon recovery relaxation time (T_ph = T_S → T). When the system is not far from equilibrium (x ∼ 1), we get an exponential decay of the magnetization with the same time constant τ_H = bτ_ph. For a spin-1/2 system [17]:
τ_H = α tanh²(∆_H/2k_B T) / ∆_H²,    (0.2)
with α = 2π²ħ²v³Nτ_ph/(3∆ω), where N is the molecule density. The dynamical magnetization curves calculated in this model are given in Fig. 1b and Fig. 2b. We started from equilibrium (x_0 = 1) in large negative fields. Then we let the system relax for a very short time δt and calculated x(δt) using Eq. 0.1; this value was taken as the initial value for the next field (the field step is rδt). The parameters have been chosen to mimic the measured curves of Fig. 1a and Fig. 2a [19]. The obtained similarity supports the possibility of the phonon bottleneck effect at the timescale of a few 0.1 s. In the Fig. 2a inset we show the variation of the calculated spin-phonon temperature T_S for T = 0.1 K and r = 0.14 T/s. We note a linear variation in the plateau region (small positive fields, n_1/n_2 ≈ const.), after a cooling in negative fields. The slope of this quasi-adiabatic linear region varies with the bath temperature and sweeping rate, and gives the plateau dependence on these two parameters (see Figs. 1, 2).
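As a consistency check of Eq. (0.1), one can integrate the rate equation ẋ = (x - x²)/((1 + bx²)τ_ph) numerically and verify that the implicit relation is satisfied; b and τ_ph below are illustrative values of the order quoted in the text:

```python
import math

def integrate_x(x0, b, tau_ph, t_end, n_steps=200_000):
    """Euler integration of dx/dt = (x - x^2) / ((1 + b*x^2) * tau_ph)."""
    x, dt = x0, t_end / n_steps
    for _ in range(n_steps):
        x += dt * (x - x * x) / ((1.0 + b * x * x) * tau_ph)
    return x

def residual_eq_01(x, x0, b, tau_ph, t):
    """Residual of Eq. (0.1): -t/(b*tau_ph) - [x - x0 + ln((x-1)/(x0-1))]."""
    return -t / (b * tau_ph) - (x - x0 + math.log((x - 1.0) / (x0 - 1.0)))

# Illustrative values: b*tau_ph = 100 s sets the recovery timescale.
b, tau_ph, x0, t = 1.0e8, 1.0e-6, 0.5, 50.0
x = integrate_x(x0, b, tau_ph, t)
print(x, residual_eq_01(x, x0, b, tau_ph, t))  # residual ~ 0
```

In the b >> 1 regime used here, the numerically integrated x(t) satisfies the implicit solution to within the integration error.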
In the Fig. 2b inset we show the calculated field evolution of the number of phonons at energy ħω = ∆_H at equilibrium (T_ph = T_S = T, dashed line) and out of equilibrium (n_{T_ph} = n_{T=T_S}, r = 0.14 T/s, black line). The difference between the two curves (thick segment ∆ω) suggests the moving hole in the phonon distribution, while their intersection gives the field at which the plateau intercepts the equilibrium magnetization curve (above which the hole disappears and T_ph = T_S > T). Note that in zero field the system is out of equilibrium even though the magnetization passes through the origin of coordinates (without a barrier, the switch between +1/2 and -1/2 follows the level structure shown in the Fig. 1 inset). At larger fields, in the plateau region, n_1/n_2 ≈ const. at timescales shorter than τ_H = bτ_ph (Eq. 0.2), even after the plateau crosses the equilibrium curve. Equilibrium is reached when τ_H becomes small enough.
Furthermore, we measured the relaxation of the magnetization of our crystal at different fields and temperatures, along the plateau region. The relaxation curves compare well with exponential decays, and the obtained relaxation times are presented in Fig. 3a. The comparison with those calculated (Fig. 3b) is acceptable, although a direct fit to Eq. 0.1 would necessitate larger values for α and ∆_0 (≈ 0.4−0.6 sK² and ≈ 0.2−0.3 K, respectively). Note that in V_15 we have bτ_ph > τ_1, and this leads to the phonon bottleneck regime. However, in other systems one might have bτ_ph < τ_1, in which case the phonons would be at equilibrium but still with a butterfly hysteresis loop (τ_H is a linear combination of τ_1 and bτ_ph [17]). This type of hysteresis loop is general and characterizes dissipative spin reversal in the absence of a barrier.
In conclusion, the V_15 molecular complex constitutes an example of a dissipative two-level system [8] of mesoscopic size. The total spin 1/2 being formed of a large number of interacting spins, its splitting results from the very structure of the molecule (intra-molecular hyperfine and Dzyaloshinsky-Moriya couplings) and it is rather large (a fraction of a kelvin) [12]. In V_15 and in other low-spin systems, splittings must be much larger than in large-spin molecules, where the presence of energy barriers lowers them by orders of magnitude (e.g. 10⁻¹¹ K in Mn_12 [1,2]). This is the reason why spin-phonon transitions within the tunneling gap are important in low-spin molecules and not relevant in high-spin ones, unless a large transverse field is applied [6] (it increases the tunnel splitting and probability), in which case we would also expect similar phenomena.
FIG. 1. Measured (a−top) and calculated (b−bottom).
FIG. 2. Measured (a−top) and calculated (b−bottom).
FIG. 3. The relaxation times τ_H, measured (a−top) and calculated (b−bottom, same parameters as in Figs. 1b and 2b).
ACKNOWLEDGMENTS

We are very pleased to thank P.C.E. Stamp, S.
L. Thomas, F. Lionti, R. Ballou, D. Gatteschi, R. Sessoli, B. Barbara, Nature 383, 145 (1996); J. R. Friedman, M. P. Sarachik, J. Tejada, R. Ziolo, Phys. Rev. Lett. 76, 3830 (1996). For a recent review see: B. Barbara, L. Thomas, F. Lionti, I. Chiorescu, A. Sulpice, J. Magn. Magn. Mat. 200, 167 (1999).
B. Barbara and L. Gunther, Physics World 12, 35 (1999); I. Tupitsyn and B. Barbara, to be published.
Quantum Tunneling of Magnetization − QTM '94, NATO ASI, edited by L. Gunther and B. Barbara, Series E 301 (Kluwer Publishing, 1995): A. Garg, 273, and N. V. Prokof'ev, P.C.E. Stamp, 347.
N. V. Prokof'ev and P.C.E. Stamp, Phys. Rev. Lett. 80, 5794 (1998); Tunneling in Complex Systems, Proc. Inst. for Nuclear Theory, vol. 5, edited by S. Tomsovic (World Scientific, 1998): P.C.E. Stamp, 101-197; G. Rose and P.C.E. Stamp, cond-mat/9810350.
F. Hartmann-Boutron, P. Politi and J. Villain, Int. J. Mod. Phys. B 10, 2577 (1996); M. Leuenberger and D. Loss, Europhys. Lett. 46, 692 (1999) and cond-mat/9907154.
G. Belessa, N. Vernier, B. Barbara, D. Gatteschi, Phys. Rev. Lett. 83, 416 (1999).
A. Müller, J. Döring, Angew. Chem. Int. Ed. Engl. 27, 1721 (1991); D. Gatteschi, L. Pardi, A. L. Barra, A. Müller, Molecular Engineering 3, 157-169 (1993); D. Gatteschi, L. Pardi, A. L. Barra, A. Müller, J. Döring, Nature 354, 465 (1991).
A. J. Leggett, S. Chakravarty, A. T. Dorsey, M. P. A. Fisher, A. Garg, W. Zwerger, Rev. Mod. Phys. 59, 1 (1987); M. Grifoni and P. Hänggi, Phys. Rep. 304, 5-6 (1998).
C. Zener, Proc. R. Soc. London, Ser. A 137, 696 (1932).
L. D. Landau, Phys. Z. Sowjetunion 2, 46 (1932).
H. De Raedt, S. Miyashita, K. Saito, D. Garcia-Pablos, N. Garcia, Phys. Rev. B 56, 11761 (1997).
S. Miyashita, J. Phys. Soc. Jpn. 64, 3207-3214 (1995); Y. Kayanuma and H. Nakayama, Phys. Rev. B 57, 13099 (1998); E. Shimshoni and A. Stern, Phys. Rev. B 47, 9523 (1993).
W. Wernsdorfer, E. Bonet Orozco, K. Hasselbach, A. Benoit, D. Mailly, O. Kubo, H. Nakano and B. Barbara, Phys. Rev. Lett. 79, 4014 (1997); W. Wernsdorfer et al., cond-mat/9912123.
For an isolated Kramers spin 1/2, ∆_0 = 0. However, the total spin 1/2 of V_15 comes from 15 coupled spins, and different intra-molecular couplings such as hyperfine (A I · S, I = 7/2, A ≈ 10 mK [15]) and Dzyaloshinsky-Moriya interactions [13,14] could generate a splitting. In particular, the two S = 1/2 low-lying degenerate doublets and the S = 3/2 first excited quartet could be slightly mixed by D-M interactions, removing the degeneracy of the S = 1/2 doublet. A value ∆_0 ≈ 50 mK is strongly supported by the experiment-to-model comparison in Figs. 1 and 2.
M. I. Katsnelson, V. V. Dobrovitski, B. N. Harmon, Phys. Rev. B 59, 6919 (1999), cond-mat/9906375.
B. Barbara, L. Thomas, F. Lionti, A. Sulpice, A. Caneschi, J. Magn. Magn. Mat. 177-181, 1324 (1998).
G. C. Carter, L. H. Bennett, D. J. Kahan, Metallic Shifts in NMR, Progr. in Mat. Sc. 20, part I, 364 (Pergamon Press Ltd., 1977).
J. H. Van Vleck, Phys. Rev. 59, 724 (1941); K. W. H. Stevens, Rep. Prog. Phys. 30, 189 (1967).
A. Abragam and B. Bleaney, Electronic Paramagnetic Resonance of Transition Ions (Clarendon Press, Oxford, 1970), chap. 10.
The fast hyperfine fluctuations are characterized by the transverse nuclear relaxation time T_2 associated with the dipolar internuclear interactions, and the total spread of the energy is given by hyperfine couplings [4].
α = 0.15 sK² (2.9 · 10⁻⁴⁷ sJ²), ∆_0 = 50 mK within a precision range of ≈ 20%. Taking L ∼ 30−50 µm, N ∼ 10²⁷ m⁻³, ∆ω ∼ 5 · 10⁸ s⁻¹, one gets a phonon velocity v ≈ 2800−3600 m/s, which is quite a reasonable value.
doi: 10.22323/1.292.0123
arXiv: 1712.07083 (https://arxiv.org/pdf/1712.07083v1.pdf)
Non-geometric heterotic backgrounds and 6D SCFTs/LSTs Non-geometric heterotic backgrounds and 6D SCFTs/LSTs
19 Dec 2017
Anamaría Font ([email protected])
Iñaki García-Etxebarria
Dieter Lüst ([email protected])
Stefano Massai ([email protected])
Christoph Mayrhofer ([email protected])

Facultad de Ciencias, Universidad Central de Venezuela, A.P. 20513, Caracas 1020-A, Venezuela
Max-Planck-Institut für Physik, Föhringer Ring 6, 80805 München, Germany
ASC for Theoretical Physics, Theresienstraße 37, 80333 München, Germany
Enrico Fermi Institute, University of Chicago, 5640 S Ellis Ave, Chicago, IL 60637, USA
Corfu Summer Institute 2016 "School and Workshops on Elementary Particle Physics and Gravity", 31 August - 23 September 2016, Corfu, Greece. *Speaker. Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0). http://pos.sissa.it/
We study N = (1, 0) six-dimensional theories living on defects of non-geometric backgrounds of the E 8 × E 8 and the Spin(32)/Z 2 heterotic strings. Such configurations can be analyzed by dualizing to F-theory on elliptic K3-fibered non-compact Calabi-Yau threefolds. The majority of the resulting dual threefolds turn out to contain singularities which do not admit a crepant resolution. When the singularities can be resolved crepantly, the theories living on the defect are explicitly determined and reveal a form of duality in which distinct defects are described by the same IR fixed point. In particular, a subclass of non-geometric defects corresponds to SCFTs/LSTs arising from small heterotic instantons on ADE singularities.
Introduction
Since the early studies of string compactifications, most work has been done in the supergravity regime. However, it is well known that string vacua can be much richer. Our motivation in this article is to take a step away from vacua whose background can be understood geometrically, as a classical supergravity compactification. The aim is both to learn more about the non-classical and non-geometrical properties of string theory, and to gain some insight into the broader set of allowed string vacua.
In this paper we consider a class of heterotic string vacua which are very non-classical, involving compactifications on "spaces" that cannot be globally described as geometries, while remaining accessible thanks to duality with F-theory [1]. In this way we can probe many of the properties of the heterotic string away from the classical regime where it is usually studied. More concretely, we will focus on cases where the compactification space for the heterotic string at a generic point is locally geometric, and described by a T 2 fibration. The non-classical nature of the background arises from the patching between local descriptions, which entails non-trivial elements of the T-duality group acting on the T 2 . Such fibrations will in general have defects where a local description in terms of the heterotic string on a smooth background is no longer possible.
For concreteness, we consider the compactification of the heterotic string to six dimensions, so that we have locally a T² fibration over a complex one-dimensional base. At certain points of the base there are defects, and our goal is to describe the low-energy dynamics living on the defects themselves. This is achieved by dualizing the configuration to F-theory, where the dynamics on the defects can be characterized by purely geometric means. Generically, the F-theory backgrounds dual to a given defect on the heterotic side are highly singular. In some cases we are able to resolve the singularity crepantly by performing a finite number of blow-ups in the base of the fibration. For all the cases where this resolution is possible we construct the resulting smooth geometry. The blow-ups correspond to giving vevs to tensor multiplets of the 6d (1,0) theory on the defect, such that it flows to a Lagrangian description in the IR. For the cases that can be resolved, the emerging theories can be related to 6d SCFTs, such as the long-known theories of small instantons on an ADE singularity [2,3], or 6d SCFTs that have been recently classified [4,5,6]. Actually, the resulting theories fall into configurations whose UV completions are conjectured to be 6d little string theories (LSTs) [7,8,9], since they have distinctive properties of LSTs [10] such as a mass scale and T-duality upon circle compactification. Moreover, there is a prescription to extract 6d SCFTs embedded in the LSTs [9].
The paper is organized as follows: In section 2, after recalling the basics of heterotic compactifications on T 2 , we review the formulation of heterotic/F-theory duality in terms of a map between genus-two curves and K3 surfaces. Moreover, we discuss how it can be used to study nongeometric heterotic backgrounds in terms of K3 fibered Calabi-Yau three-folds. In section 3, we first explain the procedure to resolve singularities and then apply the formalism to local heterotic degenerations which admit a geometric description in some duality frame. We also discuss truly non-geometric models and describe a kind of duality between different non-geometric and geometric defects. In section 4 we summarize the classification of all possible local heterotic models, both geometric and non-geometric, admitting F-theory duals that can be resolved crepantly into smooth Calabi-Yau three-folds. We end with some final comments.
Non-geometric heterotic vacua and F-theory
In this section we describe what we actually mean by non-geometric heterotic string vacua. We construct them in two steps: first we compactify the ten-dimensional string theory on a two-torus; then we use the duality group of the moduli to get a non-trivial, i.e. non-geometric, identification of these fields when going along a loop of non-trivial homotopy. We will also review the duality between the heterotic string and F-theory [11,12,13], in preparation for section 3 where we use the F-theory representation to get a better handle on the low-energy degrees of freedom of the non-geometric heterotic vacua.
Heterotic string on T 2
From the compactification of the heterotic string on a torus T 2 , we obtain the following moduli fields in eight dimensions:
• A complexified Kähler modulus ρ = ∫_{T²} (B + ω ∧ ω̄), with B the Kalb-Ramond two-form and ω the holomorphic one-form of the torus, which can be obtained from the metric on T².
• The complex structure modulus τ = ∮_b ω / ∮_a ω, where a and b denote the two generators of the non-trivial one-cycles of the torus.
• Furthermore, there are 16 complex Wilson line moduli from the Cartan generators of the non-abelian gauge group of the heterotic string, i.e. β^i = ∮_a A^i + i ∮_b A^i.
As is well known, there are dualities among torus compactifications. Therefore, the local moduli space O(2;R) × O(2+n_WL;R)\O(2,2+n_WL;R) of the heterotic T² compactification becomes the Narain space [14]

O(2;R) × O(2+n_WL;R)\O(2,2+n_WL;R)/O(2,2+n_WL;Z) ,   (2.1)

with n_WL the number of Wilson line moduli switched on. The cases of interest to us are those with none or one non-vanishing Wilson line modulus. In these situations the heterotic/F-theory duality map is known explicitly and can be used to analyse the heterotic vacua.
Vacua with varying moduli fields
To set the ground for the second step of our compactification, we rewrite the above moduli space. For n_WL = 1 we can map¹ the Narain moduli space to the Siegel upper half plane of genus-two curves
H_2 = { Ω = ( τ  β ; β  ρ ) | det(ℑ(Ω)) > 0 ∧ ℑ(ρ) > 0 }   (2.2)
quotiented by an Sp(4, Z)-action

Ω → (AΩ + B)(CΩ + D)^{−1} with ( A B ; C D ) ∈ Sp(4, Z) .   (2.3)

¹We should note that this map is a priori only well-defined from H_2/Sp(4, Z) to the Narain moduli space.

The advantage of this rewriting is firstly that the above moduli ρ, τ, and β are just the entries of Ω, as denoted in (2.2). Secondly, in this representation it is natural to assign to every moduli space point p a genus-two curve C_p with complex structure
Ω_ij = ∮_{b_i} ω_j .   (2.4)
Here a_i, b_i, and ω_j are respectively the non-trivial one-cycles and the holomorphic one-forms of C_p, with normalization ∮_{a_i} ω_j = δ_ij. In this way we obtain a geometrification of our moduli.
In the vein of F-theory, we use this geometrification to construct six-dimensional heterotic string vacua with varying moduli fields. Therefore, we let the heterotic torus fibration vary adiabatically along two real dimensions, or one complex dimension, which we parametrize by t. For the moduli to fulfill the (BPS) equations of motion, they must vary holomorphically in t. To obtain the wanted (globally) non-geometric configurations, we puncture the t-plane and allow for 'Sp(4, Z)-patchings' of the moduli fields when encircling the punctures, i.e. we identify dual theories when going along a non-contractible loop.² Since every Sp(4, Z)-orbit in H_2 is identified with exactly one genus-two curve, holomorphic genus-two fibrations are the natural candidates to encode such vacua. If such a fibration is non-trivial, it will degenerate in complex codimension one. These degeneration points are the punctures of the t-plane, and the kind of singularity is in one-to-one relation with a certain Sp(4, Z)-duality transformation on the moduli. Now, it is a happy coincidence that all the degenerations of genus-two curves were classified by Ogg-Namikawa-Ueno [15,16]. This mathematical result gives us a huge list of non-geometric heterotic string vacua. However, to understand them we have to make sense of the singularity loci of the fibration. In string theory we are used to the appearance of new light degrees of freedom which cure the theory when we run towards a seeming singularity. Since we do not know how to do such an analysis for these specific compactifications directly on the heterotic side, we use the duality with F-theory to study them. The localised physical objects which lie at the center of the genus-two degenerations are called T-fects, which is short for T-duality defects.
Mapping the setting to F-theory
Since the invention of F-theory [1] it has been known that the heterotic string on T² is dual to F-theory compactified on K3 [11]. This duality is best understood in the large volume/stable degeneration limit [17,11]. At this special point in moduli space, on the heterotic side we have ρ → i∞, and on the F-theory side the K3 degenerates into two 'del Pezzo nine' surfaces which intersect each other along a T². The identification of the moduli data is now as follows: τ is the complex structure of the F-theory T² at the intersection, and the Wilson lines are encoded in the intersection points (spectral cover data [18]) of the respective nine exceptional curves of the two dP_9's with the T². Unfortunately, such a detailed identification with all Wilson line moduli non-vanishing exists only for this special point in moduli space. However, for the case which we consider in this article, i.e. only one Wilson line non-vanishing, we can do even better. In this case the map between the moduli fields on both sides is known over the whole moduli space [19,12].
For n_WL = 1, in the E_8 × E_8 heterotic string, the hypersurface describing the elliptically fibered F-theory K3 takes the following form:

y^2 = x^3 + (a u^4 v^4 + c u^3 v^5) x z^4 + (b u^6 v^6 + d u^5 v^7 + u^7 v^5) z^6   (2.5)
where x, y, z and u, v are the homogeneous coordinates of the fiber ambient variety P_{2,3,1} and of the base P^1, respectively. This K3 has a II* singularity at v = 0 and a III* singularity at u = 0, which correspond to an E_8 and an E_7 gauge group, respectively, matching the remaining unbroken heterotic gauge group after switching on one Wilson line modulus. Furthermore, the Picard number of this manifold is 17. Therefore, its complex structure moduli space [20] exactly agrees with the heterotic moduli space.³ As mentioned already, the map between the complex structures of the F-theory K3 (2.5) and the heterotic moduli (2.2) is explicitly known and given by [19,12]:
a = −(1/48) ψ_4(Ω) ,   b = −(1/864) ψ_6(Ω) ,   c = −4 χ_10(Ω) ,   d = χ_12(Ω) .   (2.6)

with ψ_4, ψ_6, χ_10, and χ_12 the genus-two Siegel modular forms [21] of weight four, six, ten, and twelve, respectively. The modularity of the forms is meant with respect to the Sp(4, Z) transformation (2.3).
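Going back to the fibration (2.5), the fiber types quoted there, II* at v = 0 and III* at u = 0, can be double-checked symbolically from the vanishing orders of f, g and the discriminant ∆ = 4f³ + 27g² of the Weierstraß form; a short sketch (the helper name is ours):

```python
import sympy as sp

u, v = sp.symbols('u v')
a, b, c, d = sp.symbols('a b c d')

# Weierstrass data of the K3 (2.5): y^2 = x^3 + f x z^4 + g z^6
f = a*u**4*v**4 + c*u**3*v**5
g = b*u**6*v**6 + d*u**5*v**7 + u**7*v**5
Delta = 4*f**3 + 27*g**2          # discriminant (up to normalization)

def ord_at(poly, var):
    """Vanishing order of poly along var = 0."""
    return min(m[0] for m in sp.Poly(sp.expand(poly), var).monoms())

# Kodaira types from the vanishing orders (ord f, ord g, ord Delta):
assert (ord_at(f, v), ord_at(g, v), ord_at(Delta, v)) == (4, 5, 10)  # II*  -> E8
assert (ord_at(f, u), ord_at(g, u), ord_at(Delta, u)) == (3, 5, 9)   # III* -> E7
```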
As we pointed out already, we are interested in an understanding of the physics of the vacua given by the list of genus-two degenerations. In their classification Namikawa and Ueno [16] give the genus-two singularities explicitly in terms of fibrations of hyperelliptic curves, i.e. sextic equations of the form
y^2 = c_6(t) x^6 + c_5(t) x^5 + . . . + c_1(t) x + c_0(t)   (2.7)
with the c_i(t)'s functions (or sections) of t. Furthermore, all the hyperelliptic curve fibrations are in a canonical form, in the sense that the singularity lies at t = 0. Having the genus-two fibrations in the form of (2.7) is very convenient for us, because we can use the relations between the genus-two Siegel modular forms and the Igusa-Clebsch invariants⁴ I_2, I_4, I_6, I_10 of the sextic [23],

I_2(c_i) = χ_12(Ω)/χ_10(Ω) ,
I_4(c_i) = 2^{−4} · 3^{−2} ψ_4(Ω) ,
I_6(c_i) = 2^{−6} · 3^{−4} ψ_6(Ω) + 2^{−4} · 3^{−3} ψ_4(Ω) χ_12(Ω)/χ_10(Ω) ,
I_10(c_i) = 2^{−1} · 3^{−5} χ_10(Ω) ,   (2.8)
to write down the K3 coefficients a, b, c, d as functions of the sextic coefficients c_i. In the end, we obtain for every genus-two singularity a K3 fibration over the t-plane, with the K3 fibre degenerating at t = 0. In the next section we will look at these F-theory singularities and try to resolve them if possible. In this way we get some insight into the objects which live at these six-dimensional loci.
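Concretely, inverting the relations (2.8) and inserting into (2.6) expresses (a, b, c, d) directly in terms of the Igusa-Clebsch invariants of the sextic; the following sketch does this arithmetic with exact rationals (the function name and sample values are ours):

```python
from fractions import Fraction as F

def k3_coefficients(I2, I4, I6, I10):
    """K3 coefficients (a, b, c, d) of (2.5) from the Igusa-Clebsch
    invariants of the sextic, by inverting (2.8) and inserting into (2.6)."""
    chi10 = F(2 * 3**5) * I10                  # I10 = 2^-1 3^-5 chi10
    chi12 = I2 * chi10                         # I2  = chi12 / chi10
    psi4 = F(2**4 * 3**2) * I4                 # I4  = 2^-4 3^-2 psi4
    psi6 = F(2**6 * 3**4) * I6 - F(2**2 * 3) * psi4 * chi12 / chi10
    return (-psi4 / 48, -psi6 / 864, -4 * chi10, chi12)
```

For instance, a sextic with I_4 = 1 and I_2 = I_6 = 0, I_10 = 1 gives (a, b, c, d) = (−3, 0, −1944, 0).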
The F-theory K3 dual to the Spin(32)/Z_2 heterotic string compactified on T² with one Wilson line is also known [24,12]. It is described by:

y^2 = x^3 + v (u^3 + a u v^2 + b v^3) x^2 z^2 + v^7 (c u + d v) x z^4 .   (2.9)

³Note that this is obviously also true for the cases with n_WL > 1, and it is one of the reasons why these two theories are dual to each other.
⁴See appendix C of [22] for the explicit form of the Igusa-Clebsch invariants in terms of the coefficients of the sextic.
Putting the equation into Weierstraß form and computing the discriminant shows that this K3 has singularities of type I*_10 (SO(28)) at v = 0, and of type I_2 (SU(2)) at c u + d v = 0, for generic coefficients. Hence, the gauge group is Spin(28) × SU(2)/Z_2. When c ≡ 0 the group enhances to Spin(32)/Z_2.
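This computation can be checked symbolically: completing the cube in (2.9) gives the Weierstraß data, whose vanishing orders along v = 0 come out as (ord f, ord g, ord ∆) = (2, 3, 16), i.e. an I*_10 fiber, while the discriminant carries an exact factor (c u + d v)², i.e. an I_2 fiber. A sketch:

```python
import sympy as sp

u, v = sp.symbols('u v')
a, b, c, d = sp.symbols('a b c d')

# (2.9): y^2 = x^3 + A x^2 z^2 + B x z^4; the shift x -> x - A/3 brings it
# to Weierstrass form y^2 = x^3 + f x z^4 + g z^6.
A = v*(u**3 + a*u*v**2 + b*v**3)
B = v**7*(c*u + d*v)
f = sp.expand(B - A**2/3)
g = sp.expand(2*A**3/27 - A*B/3)
Delta = sp.expand(4*f**3 + 27*g**2)

def ord_at(poly, var):
    """Vanishing order of poly along var = 0."""
    return min(m[0] for m in sp.Poly(sp.expand(poly), var).monoms())

assert (ord_at(f, v), ord_at(g, v), ord_at(Delta, v)) == (2, 3, 16)   # I*_10
num, den = sp.fraction(sp.cancel(Delta / (c*u + d*v)**2))
assert den == 1                               # (cu + dv)^2 divides Delta: I_2
```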
Before we finish this section and go on to the resolutions, we should note that the map from the F-theory side to the heterotic side is much more involved; see for instance [25] for a first step in this direction.
Resolution of singularities: procedure and examples
Having established the duality map between the heterotic vacua and the F-theory vacua, we now look at the resolution of the singularities. Put differently, in F-theory language we move onto the tensor branch of the theory, to get an insight into the degrees of freedom which lie at the heart of the genus-two degenerations.
General strategy
We will always work with a Weierstraß model in the following, i.e. the elliptic fibration is always represented in terms of a hypersurface equation of the form:
y^2 = x^3 + f(ξ_i) x z^4 + g(ξ_i) z^6   (3.1)
where x, y, z are again the homogeneous coordinates of P_{2,3,1}, and f and g are sections of some line bundles over the base B ∋ ξ_i. For the elliptic fibration to be Calabi-Yau, the line bundles of f and g have to be K_B^{−4} and K_B^{−6}, respectively, with K_B the canonical bundle of the base. Throughout this article, we will call a singularity resolved if the elliptic fibration has only minimal singularities [26] (or Kodaira-type singularities) along the base, i.e. there are no points along the discriminant locus of (3.1) where f vanishes to order four or higher and simultaneously g to order six or higher. Furthermore, in our examples the base on the F-theory side is given by a (trivial) P^1 fibration over the t-plane. Since the coefficient in front of the u^7 v^5 term in (2.5) is constant, there is no such non-minimal singularity along v = 0. Therefore, we only have to look at the u-t-patch for such points and, as it turns out, in the beginning there is just one non-minimal singularity, namely at u = t = 0. To get rid of this non-minimal point we follow [2] and blow up the base at this point. However, we do this in a rather toric manner, by introducing the maximal amount of crepant⁵ blow-ups at once at the non-minimal point, and not in a blow-up-after-blow-up procedure. Afterwards we search for non-minimal points along the newly introduced exceptional curves and, if necessary, apply our 'toric blow-up procedure' again. The analysis can also be applied to the dual F-theory K3 of the Spin(32)/Z_2 heterotic string, described by the equation (2.9).

⁵Crepant in the sense that the proper transform of the hypersurface equation (2.5) after the base blow-up is still Calabi-Yau. We do not claim that the canonical class of the base does not change, which would obviously be wrong.
Toric blow-up procedure
As a first step, we choose local affine coordinates ξ_i on B such that the non-minimal singularity lies at ξ_1 = ξ_2 = 0. We expand the sections f and g in these coordinates,

f = ∑_i f_i ξ_1^{m_1^i} ξ_2^{m_2^i} ,   g = ∑_i g_i ξ_1^{l_1^i} ξ_2^{l_2^i} ,   (3.2)

and collect the minimal exponents m^i and l^i. Next we look for all 'toric' blow-up [27] directions n^j which are crepant. For the elliptic fibration to remain Calabi-Yau, the blow-up n must involve the fibre coordinates x and y too:
ξ_1, ξ_2, x, y → e^{n_1} ξ̃_1, e^{n_2} ξ̃_2, e^{2(n_1+n_2−1)} x̃, e^{3(n_1+n_2−1)} ỹ .   (3.3)
Hence, the canonical class of the ambient variety after the blow-up is given by E times the last column in the following table:
      ξ_1   ξ_2   x                y                e    ∑
E     n_1   n_2   2(n_1+n_2−1)     3(n_1+n_2−1)     −1   6(n_1+n_2−1)     (3.4)
where −E is the divisor class of the exceptional divisor e = 0. Since we demand that our resolution of the Weierstraß equation is crepant, e^{6(n_1+n_2−1)} must factor off the hypersurface equation (3.1). This leads to the inequalities

n_1 m̃_1 + n_2 m̃_2 ≥ 4 (n_1 + n_2 − 1) ,   n_1 l̃_1 + n_2 l̃_2 ≥ 6 (n_1 + n_2 − 1) ,   (3.5)

which must be fulfilled for all m̃_i and l̃_i. The solutions to these inequalities form the set of toric blow-ups that we introduce. After this resolution step, we have to check whether there are any non-minimal points along the just-introduced exceptional curves. If there are, we have to repeat the procedure at these points. We are done once we have gotten rid of all the non-Kodaira-type singularities.
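The crepancy requirement, namely that e^{6(n_1+n_2−1)} factor off the Weierstraß equation, is a set of linear inequalities on the direction (n_1, n_2): each monomial of f must scale with at least e^{4(n_1+n_2−1)} and each monomial of g with at least e^{6(n_1+n_2−1)}. A small enumeration sketch (the function name and the cutoff n_max are ours):

```python
from math import gcd

def crepant_directions(f_exps, g_exps, n_max=12):
    """Primitive directions n = (n1, n2), ni >= 1, for which the toric base
    blow-up is crepant: every monomial xi1^m1 xi2^m2 of f must satisfy
    n1*m1 + n2*m2 >= 4*(n1 + n2 - 1), and every monomial of g the analogous
    bound with 6 instead of 4, so that e^{6(n1+n2-1)} factors off."""
    sols = []
    for n1 in range(1, n_max + 1):
        for n2 in range(1, n_max + 1):
            if gcd(n1, n2) != 1:        # primitive rays only
                continue
            k = n1 + n2 - 1
            if all(n1*m1 + n2*m2 >= 4*k for m1, m2 in f_exps) and \
               all(n1*l1 + n2*l2 >= 6*k for l1, l2 in g_exps):
                sols.append((n1, n2))
    return sols
```

For example, a point where f contains the monomials ξ_1^4 and ξ_2^4 and g contains ξ_1^6 and ξ_2^6 admits only the ordinary blow-up n = (1, 1).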
Geometric models: small instantons on ADE singularities
Throughout this article, we study configurations which already cancel their NS5-charge locally, i.e. dH_3 ≡ 0. Hence,

∫_{B_4} dH_3 = ∫_{∂B_4} H_3 = 0 ,   (3.6)

where B_4 denotes the T²-fibration over the disc D², with the singularity of the fibration at its center. Now, the modified Bianchi identity for H_3 reads
dH_3 = (α′/4) ( tr F_e ∧ F_e − tr F_A ∧ F_A ) ,   (3.7)

with F_e and F_A the curvatures of the spin bundle and the gauge bundle, respectively. Therefore, the three-form flux can be written as

H_3 = dB_2 − (α′/4) (ω_3^A − ω_3^e) ,

where ω_3^A and ω_3^e are the Yang-Mills and Lorentz Chern-Simons forms. Since the boundary of B_4 is a T²-fibration over an S¹ encircling the singularity, and due to (3.7), equation (3.6) may be rewritten as⁶
∫_{∂B_4} [ dB_2 − (α′/4)(ω_3^A − ω_3^e) ] = [ ∫_{T²} B_2 ]_0^{2π} + (α′/4) ∫_{∂B_4} ω_3^e − (α′/4) ∫_{B_4} tr F_A ∧ F_A .   (3.8)

⁶Let us note here that B_2 is not an ordinary two-form but rather a gauge field; otherwise the first term on the right-hand side of (3.8) would vanish trivially.
The first and second terms on the right-hand side of (3.8) encode the shifts in the real parts of ρ and τ, respectively, and the last term is just the instanton number. Hence, a monodromy in ρ and τ, if not cancelled between them [25], has to be compensated by small instantons localized at the singularity. Now we want to consider, in this section, resolutions of heterotic models which on the genus-two side have a Namikawa-Ueno (NU) degeneration [I_{n−p−0}], [I_n − I*_p] and [K − I_n], with K = II*, III*, IV* [16]. Here we use the notation [K_1 − K_2 − 0] ≡ [K_1 − K_2] for the Namikawa-Ueno degenerations. Based on the monodromy action on the moduli and the modified Bianchi identity (3.7), these models are expected to describe heterotic compactifications with small instantons sitting at ADE singularities. For example, in the [II* − I_n] model the monodromy is
τ → −1/(1 + τ) ,   ρ → ρ + n − β^2/(1 + τ) ,   β → β/(1 + τ) .   (3.9)
When the Wilson line value β is turned off, this is precisely the monodromy of a II*-type fiber of the τ fibration. In general, it follows that in [K − I_n] models there is a number k = µ(c) of small instantons on top of the K-type singularity. The starting point is the genus-two model given in the NU classification. The next step is to compute the Igusa-Clebsch invariants that determine the a, b, c, and d coefficients entering the dual K3 on the F-theory side. In table 1 we collect the defining equations of the ADE NU models, together with the vanishing degrees µ of the coefficients a, b, c, and d. On the F-theory side there are points where the vanishing orders of f, g, and ∆ in the Weierstraß model are non-minimal, so we proceed to resolve as explained in the preceding section.
sing.     NU type        local model                              µ(a)   µ(b)   µ(c)     µ(d)
A_{p-1}   [I_{n-p-0}]    (t^n + x^2)(t^p + (x-α)^2)(x-1)          0      0      n+p      n+p
D_{p+4}   [I_n − I*_p]   (t^n + (x-1)^2)(t^{p+2} + x^2(x+t)^2)    2      3      6+n+p    6+n+p
E_6       [IV* − I_n]    (t^4 + x^3)(t^n + (x-1)^2)               4+n    4      8+n      8+n
E_7       [III* − I_n]   x(t^3 + x^2)(t^n + (x-1)^2)              3      6+n    9+n      9+n
E_8       [II* − I_n]    (t^5 + x^3)(t^n + (x-1)^2)               5+n    5      10+n     10+n

Table 1: Genus-two models for ADE singularities.
As mentioned before, in general the resolution consists of a series of base blow-ups. Each curve is characterized by an integer equal to minus its self-intersection number,^7 and by the gauge algebra factor it supports. This algebra is identified after checking for the presence of monodromies, following the formalism of [28]. In order to determine the matter content it is also important to give the intersection pattern of the blow-ups. Applying the resolution procedure, we obtain all these data. The results match those obtained in [2]. Below we present two typical examples, where we consider both the E_8 × E_8 and the Spin(32)/Z_2 heterotic string.
[II* − I_n] model and E_8 singularity

The pattern of curves and self-intersection numbers is efficiently determined using the toric geometry techniques reviewed in the preceding section. For the E_8 × E_8 heterotic we find:

sp(1)    g2    f4    g2  sp(1)        e8   sp(1)    g2    f4    g2  sp(1)
 1  2  2  3  1  5  1  3  2  2  1  |   12  1  2  2  3  1  5  1  3  2  2   ×

     e8   sp(1)    g2    f4    g2  sp(1)
  [  12  1  2  2  3  1  5  1  3  2  2  ]⊕(n−1)   ×                        (3.10)

     e8   sp(1)    g2    f4    g2  sp(1)
     12  1  2  2  3  1  5  1  3  2  2  1*  |  1 .
This result agrees with the theory of k = 10 + n, with n ≥ 1, pointlike instantons on the E_8 singularity as given in [2]. Deleting the node associated to t = 0 gives the tensor-branch description of a 6d SCFT embedded in the LST. This is the situation which was implicitly assumed in [22].
Starting from the Spin(32)/Z_2 heterotic string, the pattern of gauge factors and self-intersection numbers turns out to be:

                                              sp(3k-32)
                                                  1
                                                  |
sp(k)  so(4k-16)  sp(3k-24)  so(8k-64)  sp(5k-48)  so(12k-112)  sp(4k-40)  so(4k-32) .     (3.11)
The first factor sp(k) arises from the singularity at t = 0 which, before the base blow-ups, is non-minimal only at u = t = 0. The total number of base blow-ups is eight. Notice that the structure of the intersections conforms to the extended Dynkin diagram of E_8, in agreement with the analysis in [29]. Dropping the node corresponding to t = 0 gives the tensor branch of a 6d SCFT embedded in the LST, with the sp(k) remaining as a flavor symmetry.
[I*_0 − I_n] model and D_4 singularity

In the E_8 × E_8 case the resolution gives

sp(1)    g2               so(8)            g2  sp(1)
 1  2  2  3  1        [   4   1  ]⊕(n−1)   3  2  2  1* .        (3.12)

This result agrees with the theory of k = 6 + n, n ≥ 1, pointlike instantons on the D_4 singularity obtained in [2]. When n = 0 we instead find

sp(1)    g2    sp(1)
 1  2  2  2  2  2  1* .        (3.13)

For the Spin(32)/Z_2 heterotic string, the resolution leads to

n = 0 :   sp(6)  so(7)              n = 1 :   sp(7)  so(12)
           1*     1                            1*      1

                   sp(k-8)
                      1
                      |
n ≥ 2 :   sp(k)  so(4k-16)  sp(k-8)                             (3.14)
           1*        4         1
                      |
                      1
                   sp(k-8)
The number of blow-ups is one for n = 0, 1 and four for n ≥ 2.
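As a consistency check on the n ≥ 2 quiver in (3.14), one can verify that the standard 6d matter conditions (so(M) on a −4 curve needs M − 8 fundamental hypermultiplets, sp(m) on a −1 curve needs 2m + 8) are saturated by the D_4-shaped intersection pattern, with the central so(4k − 16) seeing sp(k) and three sp(k − 8) legs. This arithmetic is ours, not from the paper:

```python
def so_flavors_needed(M):     # so(M) supported on a -4 curve
    return M - 8

def sp_flavors_needed(m):     # sp(m) supported on a -1 curve
    return 2 * m + 8

for k in range(9, 30):        # k = 6 + n with n >= 2 (take k > 8 so sp(k-8) is non-trivial)
    M = 4 * k - 16
    # bifundamental halves seen by so(4k-16): sp(k) plus three sp(k-8) legs
    assert k + 3 * (k - 8) == so_flavors_needed(M)
    # each sp(k-8) leg sees M/2 fundamentals from the central so(4k-16) node
    assert M // 2 == sp_flavors_needed(k - 8)
print("anomaly counting matches the affine D4 shape of (3.14)")
```

The sp(k) node on the 1* curve sees only 2k − 8 flavors from so(4k − 16); the remaining 16 are expected to come from the Spin(32)/Z_2 flavor symmetry, consistent with this node belonging to the little string rather than to the SCFT on the tensor branch.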
Non-geometric models and duality web
As seen in the previous section, in models corresponding to small instantons on ADE singularities, the explicit formulation of heterotic/F-theory duality in terms of the map between genus-two and K3 fibrations confirms the results expected from the monodromies of the moduli fields. We now turn to heterotic models with monodromies which are non-geometric in all T-duality frames. This is the most interesting situation, since a priori it is not clear if such degenerations are allowed.
A simple example of a non-geometric degeneration is the Namikawa-Ueno [III − III] singularity which has monodromy:
$$\tau\to\frac{\rho}{\beta^2-\rho\tau}\,,\qquad \rho\to\frac{\tau}{\beta^2-\rho\tau}\,,\qquad \beta\to-\frac{\beta}{\beta^2-\rho\tau}\,.\tag{3.15}$$
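The involutive nature of this monodromy can be checked symbolically: applying (3.15) twice returns the original moduli, consistent with the fact that the square of a type III SL(2,Z) element is −1, which acts trivially on the moduli. A sympy sketch (our check, not from the text):

```python
import sympy as sp

tau, rho, beta = sp.symbols("tau rho beta")

def mirror(t, r, b):
    """The [III - III] monodromy (3.15)."""
    D = b**2 - r * t
    return r / D, t / D, -b / D

# Applying the transformation twice returns the original moduli.
t2, r2, b2 = mirror(*mirror(tau, rho, beta))
assert sp.simplify(t2 - tau) == 0
assert sp.simplify(r2 - rho) == 0
assert sp.simplify(b2 - beta) == 0

# At beta = 0 it reduces to the double-elliptic inversion tau -> -1/tau, rho -> -1/rho.
t1, r1, _ = mirror(tau, rho, 0)
assert sp.simplify(t1 + 1 / tau) == 0 and sp.simplify(r1 + 1 / rho) == 0
```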
When β = 0 we obtain a "double elliptic" fibration, for which encircling the heterotic degeneration produces the monodromy τ → −1/τ, ρ → −1/ρ. The equation of the hyperelliptic curve for the [III − III] singularity is:

$$y^2=x\,(x-1)\,(x^2+t)\,\big((x-1)^2+t\big).\tag{3.16}$$
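Setting t = 0 in (3.16), the sextic factors as y² = x³(x − 1)³, exhibiting one cuspidal (type III) degeneration on each of the two genus-one components, as the label [III − III] suggests. A quick symbolic check (ours):

```python
import sympy as sp

x, t = sp.symbols("x t")

# Right-hand side of the hyperelliptic model (3.16) for [III - III].
rhs = x * (x - 1) * (x**2 + t) * ((x - 1)**2 + t)

# Central fibre: the curve degenerates to y^2 = x^3 (x - 1)^3,
# i.e. a cusp over x = 0 and a cusp over x = 1.
assert sp.expand(rhs.subs(t, 0) - x**3 * (x - 1)**3) == 0
```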
Applying the resolution procedure gives the same six-dimensional theory as [I_0 − I*_0], cf. (3.13) and (3.14), namely the theory of six small instantons on a D_4 singularity. We have found that, in several non-geometric models of type 2 in the NU list, the dual CY admits a smooth resolution and, moreover, the resulting low-energy physics is described by the theory of small instantons on ADE singularities.
As explained in [22], models with the same resolution, such as [III − III] and [I_0 − I*_0], can be related by certain duality moves. As a rule, such dual models appear when the sum of the vanishing orders of the discriminant for their two Kodaira components, or equivalently the vanishing order µ(c), is the same. In table 2 we display all the models satisfying this condition and admitting dual smooth Calabi-Yau resolutions. For all the models in table 2 we explicitly performed the F-theory resolution. For both heterotic strings, we verified that all the degenerations in a given row yield the same theory. The [IV* − II] model was originally included among the duals at µ(c) = 10. However, in the Spin(32)/Z_2 heterotic string its resolution differs from that of [I_0 − II*], and closer inspection shows that this is also the case in the E_8 × E_8 heterotic string. Nonetheless, the theories could be connected by RG flow [4,30]. A similar situation arises for the [IV − IV] model at µ(c) = 8 [9].
µ(c)   dual models
 4     [I_0 − IV], [II − II]
 5     [IV − I_1], [II − III]
 6     [I_0 − I*_0], [III − III], [IV − II]
 7     [I*_0 − I_1], [IV − III]
 8     [I_0 − IV*], [I*_0 − II]
 9     [I_0 − III*], [I*_0 − III]
10     [I_0 − II*], [I*_0 − IV]
11     [II − III*], [IV* − III]
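The grouping in table 2 can be reproduced from standard Kodaira data alone: the vanishing order of the elliptic discriminant is n for I_n, 2, 3, 4 for II, III, IV, 6 for I*_0 and 8, 9, 10 for IV*, III*, II*, and within each row these orders sum to the common µ(c). A short verification script (our bookkeeping, not from the paper):

```python
# Vanishing order of the elliptic discriminant for each Kodaira type.
ORD = {"I0": 0, "I1": 1, "II": 2, "III": 3, "IV": 4,
       "I*0": 6, "IV*": 8, "III*": 9, "II*": 10}

TABLE2 = {
    4:  [("I0", "IV"), ("II", "II")],
    5:  [("IV", "I1"), ("II", "III")],
    6:  [("I0", "I*0"), ("III", "III"), ("IV", "II")],
    7:  [("I*0", "I1"), ("IV", "III")],
    8:  [("I0", "IV*"), ("I*0", "II")],
    9:  [("I0", "III*"), ("I*0", "III")],
    10: [("I0", "II*"), ("I*0", "IV")],
    11: [("II", "III*"), ("IV*", "III")],
}

for mu_c, row in TABLE2.items():
    for K1, K2 in row:
        assert ORD[K1] + ORD[K2] == mu_c, (K1, K2)
print("every [K1 - K2] in a row satisfies ord(K1) + ord(K2) = mu(c)")
```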
A catalog of T-fects
In this section, we summarize our findings for the Namikawa-Ueno models for which we could construct the dual CY resolution. Altogether there is a total of 49 models out of the 120 entries in the NU classification. The resulting patterns in the E 8 × E 8 case were thoroughly reported in [22] and for the Spin(32)/Z 2 heterotic string they appeared in [9]. Here we only present a few examples in both heterotic string theories.
Elliptic type 1
The elliptic type 1 NU degenerations are characterized by a monodromy action that mixes the three moduli. Even though the corresponding heterotic models lack a geometric interpretation, the dual F-theory resolutions are similar to those discussed in the previous section. In table 3 we gather the models whose F-theory duals admit a smooth CY resolution. For example, the resolution for the [IX − 1] degeneration in the E 8 × E 8 heterotic is given by su(2) so (7)
Elliptic type 2
Type 2 models in the NU list comprise degenerations of type [K_1 − K_2 − m], with m ≥ 0, where K_1 and K_2 are one of the Kodaira type singularities for the two genus-one components of the genus-two surface Σ, plus additional sporadic models denoted as [2K − m] and [K_1 − K_2 − α]. None of the latter, nor any of the models with m > 0, lead to a dual CY admitting a smooth crepant resolution. We find 20 models that can be resolved. They are displayed in table 4, using again the notation [K_1 − K_2 − 0] ≡ [K_1 − K_2].

NU model      µ(a)   µ(b)   µ(c)   µ(d)
[I_{0-0-0}]    0      0      0      0
[V]            2      3      5      6
[VII]          2      3      5      6
[VIII − 1]     ∞      ∞      4      ∞
[IX − 1]       ∞      ∞      8      ∞

NU model      µ(a)   µ(b)   µ(c)   µ(d)      NU model      µ(a)   µ(b)   µ(c)   µ(d)
[I_0 − I_0]    0      0      0      0        [II − IV]      3      3      6      6
[I_0 − II]     1      1      2      2        [I*_0 − II]    3      4      8      8
[I_0 − III]    1      2      3      3        [II − IV*]     5      5      10
The models of type [I 0 − K 2 ] correspond to a configuration of k = µ(c) pointlike instantons on the K 2 singularity. The remaining models are non-geometric since their monodromies involve non-trivial actions on the torus volume. However, as discussed in section 3.3, many of these models lead to the same resolutions as the geometric ones.
Parabolic type 3
In this class there are additional models for which the monodromy factorizes into the product of two monodromies, one Kodaira-type factor for each handle of Σ, one of which is either I_n or I*_n (the only parabolic elements in the Kodaira list) while the other is of elliptic type. There are also models labeled [K_1 − II_n] or [K_1 − II*_n] that mix all moduli but have a Kodaira type K_1 monodromy for τ. The 19 models that can be resolved are listed in table 5; they admit a resolution for all n. The models of type [I_n − K_2] or [K_1 − I_n] again correspond to k = µ(d) pointlike instantons on the K_i singularity. In this class we also discover dual models: concretely, starting with the fifth row in table 5, the models in the same row have the same resolution. We illustrate the resolutions in this class with the [IV* − II_n] model; its E_8 × E_8 and Spin(32)/Z_2 resolutions are given in (4.3) and (4.4). The type 4 resolution (4.5) below is similar to that of k > 12 instantons on a D_7 singularity [3]; such similarity is also observed in the Spin(32)/Z_2 heterotic string, where for the same [II_{n-3}], n > 3, degeneration one obtains (4.6).

NU model       µ(a)   µ(b)   µ(c)   µ(d)
[I_{n-0-0}]     0      0      n      n
[II − I_n]      1+n    1      2+n    2+n
[III − I_n]     1      2+n    3+n    3+n
[III − II_n]    1      2+n    3+n    4+n
[IV − I_n]      2+n    2      4+n    4+n
[IV − II_n]     2+n    2      4+n    5+n
[II_{n-0}]      2      3      5+n    6+n
[I_n − I*_0]    2      3      6+n    6+n

        sp(n+2)
           1
           |
sp(n+8)  so(4n+20)  sp(2n+2)  so(4n+4)  sp(2n-6)  su(2n-6)
  1*         4          1         4         1         2     .        (4.6)

NU model       µ(a)   µ(b)   µ(c)     µ(d)
[I_{n-p-0}]     0      0     n+p      n+p
[I_n − I*_p]    2      3     6+n+p    6+n+p
[II_{n-p}]      2      3     5+n+p    6+n+p
Parabolic type 5
The final class in the NU list is that of parabolic type 5 models, which includes just 6 degenerations. Only the two of them collected in table 7 admit a smooth resolution, presented in the following. The parabolic type 5 [II_{n-p}] is not the same as the one listed in table 6: their sextics are distinct and lead to different resolutions. For the E_8 × E_8 heterotic string, the resolution of the type 5 [II_{n-3}], n > 3, yields (4.7), whereas the resolution of the same model in the Spin(32)/Z_2 heterotic string gives

        sp(n+1)
           1
           |
sp(n+8)  so(4n+20)  sp(2n+3)  so(4n+8)  sp(2n-3)  su(2n-2)
  1*         4          1         4         1         2     .        (4.8)
The above results are evidently different from the resolutions of the type 4 [II n−3 ] displayed in (4.5) and (4.6).
NU model                          µ(a)   µ(b)   µ(c)         µ(d)
[I_{n-p-q}]                        0      0     n+p+q        n+p+q
[II_{n-p}]  (p = 2k+l, l = 0,1)    2      3     5+l+2k+n     6+l+2k+n

Table 7: Parabolic type 5 models.
Final comments
In this work, we have studied heterotic compactifications with six-dimensional T-fects leaving an E_8 × E_7 or a Spin(28) × SU(2)/Z_2 subgroup unbroken. We have focused on configurations which are locally described by a T^2 fibration over a complex one-dimensional base with a smooth (up to the degeneration points) SU(2) structure bundle, patched together using arbitrary elements of SO^+(2,3;Z), an order-four subgroup of the T-duality group O(2,3;Z). Generically, this gives rise to backgrounds without a global classical geometric interpretation. At certain points in the base, the fibration, or the bundle data on it, degenerates and no longer has, in any T-duality frame, an interpretation in terms of the heterotic string on a smooth T^2 with a smooth vector bundle on top. Our goal in this paper has been to characterize the physics arising from such singular points.
We have exploited the fact that for backgrounds preserving E 8 × E 7 or Spin(28) × SU (2)/Z 2 , the geometric data of the heterotic string on T 2 can be encoded in the geometry of a genus-two (sextic) Riemann surface. One can then define vacua by fibering this genus-two Riemann surface over a complex one-dimensional base. For monodromies in SO + (2, 3, Z), or equivalently Sp(4, Z), one can classify the ways in which such fibration can degenerate [15,16]. Using heterotic/F-theory duality to reinterpret these degenerations of the sextic as degenerations of dual F-theory K3s fibered over the same base, we are able to read off the low-energy physics at the degeneration point.
We performed a systematic analysis on the full set of sextic degenerations. Remarkably, we found that many (non-)geometric degenerations are described by the same low-energy physics. Often these are given by the long-understood configurations of pointlike instantons sitting on ADE singularities. It would be very interesting to understand the origin of this phenomenon in heterotic language without going to F-theory.
A second notable finding is that not all of the possible sextic degenerations admit an F-theory dual that can be smoothed out in a crepant way by a finite number of blow-ups. As explained in section 3, this follows from the fact that in these cases the F-theory configuration is associated with non-minimal Weierstraß models in complex codimension one (after some base blow-ups have already been performed). In these cases we cannot determine the low-energy physics using F-theory techniques, since the dynamics of F-theory on such backgrounds is unknown. Assuming that these vacua are consistent too, it would be very important to find out which kind of theories arise from these backgrounds in the IR. They may give rise to free or trivial theories, or alternatively to interacting SCFTs without a tensor branch, at least without a geometrically manifest one. Understanding these 'non-minimal models' is an open problem that deserves further attention.
on O(2; R) × O(3; R)\O(2, 3; R)/SO + (2, 3; Z), where SO + (2, 3; Z) is an order four subgroup of O(2, 3; Z), it becomes a well-defined bijective map.
3.1) when we take its proper transform after applying (3.3). This then amounts to the constraints $(m_i^1-4)n^1+(m_i^2-4)n^2=:\mathbf{m}_i\cdot\mathbf{n}\ge-4$ and $(l_i^1-6)n^1+(l_i^2-6)n^2=:\mathbf{l}_i\cdot\mathbf{n}\ge-6$ (3.5)
Table 2: Dual models: the NU degenerations in the same row give rise to the same theories after resolution of the dual F-theory model.
Table 3: Elliptic type 1 models.
Table 4: Elliptic type 2 models.
NU model        µ(a)   µ(b)   µ(c)   µ(d)
[I_0 − I*_n]     2      3     6+n    6+n
[IV* − I_n]      4+n    4     8+n    8+n
[II − I*_n]      3      4     8+n    8+n
[III* − I_n]     3      6+n   9+n    9+n
[III − I*_n]     3      5     9+n    9+n
[II* − I_n]      5+n    5     10+n   10+n
[IV − I*_n]      4      5     10+n   10+n
[IV* − II_n]     3+n    4     7+n    9+n
[II − II*_n]     3+n    4     7+n    9+3n
[III* − II_n]    3      5+n   8+n    11+n
[III − II*_n]    3      5+n   8+n    10+2n
Table 5: Parabolic type 3 models.
For the [IV* − II_n] model, in the E_8 × E_8 heterotic string we obtain

su(2)                 e6  su(3)         f4     g2  sp(1)
 1  2  3  2  1     [  6  1  3  1  ]⊕n   5  1  3  2  2  1* .        (4.3)

In the Spin(32)/Z_2 heterotic string we deduce

sp(n+7)  so(4n+16)  sp(3n+1)  su(4n+2)  su(2n+2)
  1*         4          1         2         2     .        (4.4)

4.4 Parabolic type 4

This class includes degenerations associated to parabolic Kodaira singularities for both genus-one components of Σ, of type [K_1 − K_2 − m] with K_{1,2} = I_n, I*_n, plus additional degenerations of type [2K_1 − m], [II_{n-p}], and [III_n]. Only the three models shown in table 6 admit a dual smooth resolution. For instance, the resolution of the [II_{n-3}], n > 3, singularity for the E_8 × E_8 heterotic string reads
su(2) so(7)    so(9) sp(1) so(11) sp(2) so(13) sp(3)          so(14) sp(3)
 1  2  3  1  4  1  4  1  4  1                            [   4     1   ]⊕(n−4)   ×

        so(13) sp(2) so(11) sp(1) so(9)    g2  sp(1)
   ×     4  1  4  1  4  1  3  2  2  1* .                                  (4.5)
Table 6: Parabolic type 4 models.
su(2) so(7) su(2) so(12)    sp(3) so(14) sp(3) so(14) sp(3)          so(14) sp(3)
 1  2  3  1  4  1  4  1  4  1                                   [   4     1   ]⊕(n−4)   ×

        so(13) sp(2) so(11) sp(1) so(9)    g2  sp(1)
   ×     4  1  4  1  4  1  3  2  2  1* ,                                  (4.7)
Non-geometric configurations usually refer to situations where the metric is identified with its inverse along a non-trivial path. However, in our case generically we have mixings of all three moduli of which ρ → 1/ρ is just a subgroup.
^7 After the resolutions, the curve t = 0 has self-intersection −1. We label this curve by 1*, instead of 1, to indicate that it existed already before the base blow-ups.
References

[1] C. Vafa, "Evidence for F theory," Nucl. Phys. B469 (1996) 403-418, hep-th/9602022.
[2] P. S. Aspinwall and D. R. Morrison, "Point-like instantons on K3 orbifolds," Nucl. Phys. B503 (1997) 533-564, hep-th/9705104.
[3] K. A. Intriligator, "New string theories in six-dimensions via branes at orbifold singularities," Adv. Theor. Math. Phys. 1 (1998) 271-282, hep-th/9708117.
[4] J. J. Heckman, D. R. Morrison, and C. Vafa, "On the Classification of 6D SCFTs and Generalized ADE Orbifolds," JHEP 05 (2014) 028, 1312.5746. [Erratum: JHEP 06, 017 (2015)].
[5] J. J. Heckman, D. R. Morrison, T. Rudelius, and C. Vafa, "Atomic Classification of 6D SCFTs," Fortsch. Phys. 63 (2015) 468-530, 1502.05405.
[6] M. Del Zotto, J. J. Heckman, A. Tomasiello, and C. Vafa, "6d Conformal Matter," JHEP 02 (2015) 054, 1407.6359.
[7] L. Bhardwaj, "Classification of 6d N = (1,0) gauge theories," JHEP 11 (2015) 002, 1502.06594.
[8] L. Bhardwaj, M. Del Zotto, J. J. Heckman, D. R. Morrison, T. Rudelius, and C. Vafa, "F-theory and the Classification of Little Strings," Phys. Rev. D93 (2016), no. 8 086002, 1511.05565.
[9] A. Font and C. Mayrhofer, "Non-Geometric Vacua of the Spin(32)/Z_2 Heterotic String and Little String Theories," 1708.05428.
[10] N. Seiberg, "New theories in six-dimensions and matrix description of M theory on T**5 and T**5/Z(2)," Phys. Lett. B408 (1997) 98-104, hep-th/9705221.
[11] D. R. Morrison and C. Vafa, "Compactifications of F theory on Calabi-Yau threefolds. 1," Nucl. Phys. B473 (1996) 74-92, hep-th/9602114.
[12] A. Malmendier and D. R. Morrison, "K3 surfaces, modular forms, and non-geometric heterotic compactifications," Lett. Math. Phys. 105 (2015), no. 8 1085-1118, 1406.4873.
[13] J. Gu and H. Jockers, "Nongeometric F-theory-heterotic duality," Phys. Rev. D91 (2015) 086007, 1412.5739.
[14] K. S. Narain, "New Heterotic String Theories in Uncompactified Dimensions < 10," Phys. Lett. B169 (1986) 41-46.
[15] A. P. Ogg, "On pencils of curves of genus two," Topology 5 (1966) 355-362.
[16] Y. Namikawa and K. Ueno, "The complete classification of fibres in pencils of curves of genus two," Manuscripta Math. 9 (1973), no. 2 143-186.
[17] P. Berglund and P. Mayr, "Heterotic string / F theory duality from mirror symmetry," Adv. Theor. Math. Phys. 2 (1999) 1307-1372, hep-th/9811217.
[18] R. Friedman, J. Morgan, and E. Witten, "Vector bundles and F theory," Commun. Math. Phys. 187 (1997) 679-743, hep-th/9701162.
[19] A. Clingher and C. F. Doran, "Lattice Polarized K3 Surfaces and Siegel Modular Forms," Adv. Math. 231 (2012) 172, arXiv:1004.3503 [math.AG].
[20] P. S. Aspinwall, "K3 Surfaces and String Duality," hep-th/9611137.
[21] J. H. Bruinier, G. van der Geer, G. Harder, and D. Zagier, The 1-2-3 of Modular Forms. Springer Berlin Heidelberg, 2008.
[22] A. Font, I. García-Etxebarria, D. Lust, S. Massai, and C. Mayrhofer, "Heterotic T-fects, 6D SCFTs, and F-Theory," JHEP 08 (2016) 175, 1603.09361.
[23] J.-I. Igusa, "On Siegel Modular Forms of Genus Two," American Journal of Mathematics 84 (1962), no. 1 175-200.
[24] J. McOrist, D. R. Morrison, and S. Sethi, "Geometries, Non-Geometries, and Fluxes," Adv. Theor. Math. Phys. 14 (2010), 1004.5447.
[25] I. García-Etxebarria, D. Lust, S. Massai, and C. Mayrhofer, "Ubiquity of non-geometry in heterotic compactifications," JHEP 03 (2017) 046, 1611.10291.
[26] K. Kodaira, "On compact analytic surfaces I-III," Ann. of Math. 71 (1960) 111-152; 77 (1963) 563-626; 78 (1963) 1-40.
[27] W. Fulton, Introduction to toric varieties. No. 131. Princeton University Press, 1993.
[28] A. Grassi and D. R. Morrison, "Anomalies and the Euler characteristic of elliptic Calabi-Yau threefolds," Commun. Num. Theor. Phys. 6 (2012) 51-127, 1109.0042.
[29] J. D. Blum and K. A. Intriligator, "New phases of string theory and 6-D RG fixed points via branes at orbifold singularities," Nucl. Phys. B506 (1997) 199-222, hep-th/9705044.
[30] J. J. Heckman, D. R. Morrison, T. Rudelius, and C. Vafa, "Geometry of 6D RG Flows," JHEP 09 (2015) 052, 1505.00009.
STABILITY OF ANTI-CANONICALLY BALANCED METRICS

Shunsuke Saito and Ryosuke Takahashi

1 Jan 2017 · arXiv:1607.05534 · doi:10.4310/ajm.2019.v23.n6.a9

Abstract. We study the asymptotic behavior of quantized Ding functionals along Bergman geodesic rays and prove that the slope at infinity can be expressed in terms of Donaldson-Futaki invariants and Chow weights. Based on the slope formula, we introduce a new algebro-geometric stability on Fano manifolds and show that the existence of anti-canonically balanced metrics implies our stability. The relation between our stability and others is also discussed. As another application of the slope formula, we get a lower bound estimate on the Calabi-like functionals on Fano manifolds.

2010 Mathematics Subject Classification. 53C25.
Introduction
In this paper, we study anti-canonically balanced metrics on Fano manifolds, introduced by Donaldson in [7, Section 2.2.2] as a finite dimensional analogue of Kähler-Einstein metrics.
Let X be an n-dimensional Fano manifold and fix k ≥ 1 so that −kK_X is very ample. Let H(X, −K_X) be the space of smooth fiber metrics φ on −K_X with positive curvature ω_φ := (√−1/2π)∂∂̄φ, and B_k the space of Hermitian metrics on the finite-dimensional vector space H^0(X, −kK_X), which is a finite-dimensional symmetric space of non-compact type. Following Donaldson [7], we define the quantization map Hilb_{k,ν} : H(X, −K_X) → B_k with respect to a volume form ν (with unit volume) by

$$\langle\,\cdot\,,\,\cdot\,\rangle_{\mathrm{Hilb}_{k,\nu}(\phi)}:=\int_X\langle\,\cdot\,,\,\cdot\,\rangle_{k\phi}\,d\nu$$

and the dequantization map FS_k : B_k → H(X, −K_X) by

$$FS_k(H):=\frac{1}{k}\log\Big(\frac{1}{N_k}\sum_{\alpha=1}^{N_k}|s_\alpha|^2\Big),$$
where N_k is the dimension of H^0(X, −kK_X) and (s_α) is an H-orthonormal basis of H^0(X, −kK_X). A Hermitian metric H ∈ B_k is called a k-balanced metric with respect to ν if it satisfies (Hilb_{k,ν} ∘ FS_k)(H) = H.
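Since two H-orthonormal bases differ by a unitary transformation, the Bergman function Σ_α |s_α|² entering FS_k(H) does not depend on the chosen basis. A small numerical sanity check of this invariance, with random placeholder values standing in for sections evaluated at points (our illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
Nk, npts = 3, 5
S = rng.standard_normal((npts, Nk)) + 1j * rng.standard_normal((npts, Nk))  # s_alpha(x_i)

# A positive-definite Hermitian metric Hm on the Nk-dimensional section space.
Y = rng.standard_normal((Nk, Nk)) + 1j * rng.standard_normal((Nk, Nk))
Hm = Y.conj().T @ Y + Nk * np.eye(Nk)

# Two different Hm-orthonormal bases: one from the Cholesky factor, one a
# unitary rotation of it.  Columns of B hold basis coefficients: B^* Hm B = Id.
L = np.linalg.cholesky(Hm)               # Hm = L L^*
B1 = np.linalg.inv(L).conj().T
Q, _ = np.linalg.qr(rng.standard_normal((Nk, Nk)) + 1j * rng.standard_normal((Nk, Nk)))
B2 = B1 @ Q                              # still Hm-orthonormal

bergman1 = np.sum(np.abs(S @ B1) ** 2, axis=1)   # sum_alpha |s~_alpha(x_i)|^2
bergman2 = np.sum(np.abs(S @ B2) ** 2, axis=1)
assert np.allclose(bergman1, bergman2)   # FS_k(H) is basis-independent
```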
The most well-understood balanced metrics are those with respect to the (normalized) Monge-Ampère measure
$$MA(\phi):=\frac{\omega_\phi^n}{(-K_X)^n},$$

where (−K_X)^n is the top intersection number of −K_X. These metrics are called k-balanced metrics. The important fact is that the existence of k-balanced metrics is equivalent to the Chow polystability of (X, −K_X) at level k (see [17, Theorem 4]). Note that these balanced metrics can be defined on general polarized manifolds and the theorem also holds. On a Fano manifold, a Hermitian metric φ ∈ H(X, −K_X) defines another volume form e^{−φ} under the identification of fiber metrics on −K_X with volume forms on X. Normalize e^{−φ} to be

$$\mu_\phi:=\frac{e^{-\phi}}{\int_X e^{-\phi}}$$
and simply write Hilb_k(φ) := Hilb_{k,µ_φ}(φ). As introduced in [7], a balanced metric defined by Hilb_k, that is, a Hermitian metric H ∈ B_k satisfying (Hilb_k ∘ FS_k)(H) = H, is called an anti-canonically k-balanced metric. We stress the point that anti-canonically balanced metrics make sense only on Fano manifolds, as the name suggests. The first study of anti-canonically balanced metrics was given by Berman-Boucksom-Guedj-Zeriahi [2]. They characterized anti-canonically balanced metrics as the critical points of the quantized Ding functional and showed that, on Kähler-Einstein manifolds with a discrete automorphism group, the anti-canonically k-balanced metric exists for sufficiently large k and the sequence of these metrics converges to the Kähler-Einstein metric, at least in the L^1-topology. Later, Berman-Witt Nyström [5] treated the case of a continuous automorphism group and proved the same conclusion under the vanishing of all the higher Futaki invariants.
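As a concrete illustration (ours, not from the paper), the fixed-point equation (Hilb_k ∘ FS_k)(H) = H can be iterated numerically for X = P¹ and k = 1, where H⁰(X, −K_X) = H⁰(P¹, O(2)) is spanned by 1, z, z². Rotation invariance keeps H diagonal along the iteration, which converges to H ∝ diag(1, 1/2, 1); the corresponding FS_1 potential is the round Kähler-Einstein metric:

```python
import numpy as np

N_QUAD = 20000                       # midpoint quadrature points
t = (np.arange(N_QUAD) + 0.5) / N_QUAD
s = t / (1.0 - t)                    # s = |z|^2 mapped from (0,1) to (0, infinity)
jac = 1.0 / (1.0 - t) ** 2           # ds = jac dt

def step(H):
    """One iteration H -> Hilb_1(FS_1(H)) on P^1 with sections 1, z, z^2.

    H = (a, b, c) is the diagonal of the Hermitian metric.  With the
    H-orthonormal basis z^alpha / sqrt(H_alpha), the FS weight satisfies
    e^{-phi} = 3 / (1/a + s/b + s^2/c), viewed as a density on the chart C.
    """
    a, b, c = H
    e_phi = 3.0 / (1.0 / a + s / b + s ** 2 / c)   # e^{-phi(s)}
    Z = np.dot(e_phi, jac)                         # total mass of e^{-phi}
    # <z^k, z^k> = int s^k e^{-phi} dmu_phi with dmu_phi = e^{-phi} ds / Z
    return np.array([np.dot(s ** k * e_phi, e_phi * jac) / Z for k in range(3)])

H = np.ones(3)
for _ in range(200):
    H = step(H)

print(H[0] / H[1], H[0] / H[2])      # -> approximately 2.0 and 1.0
```

The limit ratios reproduce the binomial coefficients of (1 + |z|²)², i.e. FS_1 of the balanced metric is the Fubini-Study (Kähler-Einstein) potential, consistent with the convergence results quoted above.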
We want to relate anti-canonically balanced metrics and algebro-geometric stability as in the case of balanced metrics. To do so, we study the slope at infinity of the quantized Ding functional along geodesic rays on B k . Theorem 1.1. Let X be a Fano manifold, (X , L) a normal test configuration for (X, −K X ) of exponent k and H ∈ B k a Hermitian metric on H 0 (X, −kK X ). Denoting by (H t ) t the Bergman geodesic ray associated with (X , L) and H as explained in Section 2.1, we have
$$\lim_{t\to\infty}\frac{d}{dt}D^{(k)}(H_t)+q=\frac{\mathrm{Fut}_k(\mathcal{X},\mathcal{L})}{kN_k},$$
where q is a non-negative rational number determined by the central fiber.
The quantity q vanishes if and only if X is Q-Gorenstein with L isomorphic to −kK X /C , and X 0 is reduced, and its normalization has at worst log terminal singularities.
The quantity in the right hand side is defined to be the sum of the Donaldson-Futaki invariant and the Chow weight of (X , L), and F ut k (X , L) is called the quantized Futaki invariant. Then, we introduce a new stability on a Fano manifold X, F-stability, using the quantized Futaki invariant and show the following: Theorem 1.2. Let X be a Fano manifold admitting an anti-canonically k-balanced metric. Then, X is F-polystable at level k.
We next compare Chow stability and our F-stability. Theorem 1.3. Asymptotic Chow polystability (resp. stability or semistability) implies asymptotic F-polystability (resp. stability or semistability).
We also discuss the relations between asymptotic F-stability with uniform K-stability and K-semistability in Section 5.
As an application of Theorem 1.1, we prove the lower bound estimate on the L q -norm of the function
$$B(\phi):=\frac{n!\,\mu_\phi}{\omega_\phi^n}-\frac{n!}{(-K_X)^n},\qquad \phi\in H(X,-K_X).$$
Note that φ is a Kähler-Einstein metric if and only if B(φ) = 0. In other words, B(φ) measures the deviation of φ from being a Kähler-Einstein metric. Theorem 1.4. Let p be a positive even integer and q the Hölder conjugate of p. Given a Hermitian metric φ ∈ H(X, −K_X) and a normal test configuration (X , L) for (X, −K_X) with non-zero p-norm, we have

$$\|B(\phi)\|_{L^q(\omega_\phi^n/n!)}\;\ge\;-\frac{DF(\mathcal{X},\mathcal{L})}{\|(\mathcal{X},\mathcal{L})\|_p},$$
where || · || L q (ω n φ /n!) denotes the L q -norm with respect to ω n φ /n!.
This is an analogue of the Donaldson's result [6,Theorem 2] in Fano case. Although this result was already proved by Hisamoto [11,Theorem 1.3] for any p ∈ [1, ∞] (see also [1,Theorem 4.3]), the viewpoints are different. We will prove it via a finite-dimensional argument following Donaldson, while he took an energy theoretic approach.
Acknowledgements. The first author is grateful to Dr. Yoshinori Hashimoto and Professor Yasufumi Nitta for stimulating discussion. The second author would like to thank Professor Shigetoshi Bando and Professor Ryoichi Kobayashi for useful conversations on this article. The first author is supported by JSPS KAKENHI Grant Number 15J06855 and the Program for Leading Graduate Schools, MEXT, Japan. The second author is supported by Grant-in-Aid for JSPS Fellows Number 25-3077 and 16J01211.
Preliminaries
2.1. Test configurations and Bergman geodesic rays. Throughout this section, (X, L) is a polarized manifold.
Definition 2.1. A test configuration for (X, L) of exponent k consists of the following data:
(a) a scheme X with a C*-action ρ;
(b) a C*-equivariant flat and proper morphism π : X → C, where C* acts on C by the standard multiplication;
(c) a C*-linearized π-very ample line bundle L on X ;
(d) an isomorphism (X_1, L_1) ≅ (X, kL).
A test configuration (X , L) is called a product configuration if X ≅ X × C, and a trivial configuration if in addition C* acts only on the second factor. A test configuration (X , L) is called normal if X is a normal variety. For a Fano manifold (X, −K_X) with the anti-canonical polarization, a test configuration (X , L) is said to be special if the central fiber X_0 is a normal variety with at worst log terminal singularities.
Fix k ≥ 1 so that kL is very ample. The following proposition relates test configurations with fixed exponent to finite-dimensional objects.
Proposition 2.2. There is a one-to-one correspondence between test configurations for (X, L) of exponent k and one-parameter subgroups of GL(H^0(X, kL)).
Proof. Let σ : C* → GL(H^0(X, kL)) be a one-parameter subgroup and Φ_{|kL|} : X ↪ PH^0(X, kL)* the closed embedding defined by |kL|. We define X by the Zariski closure of the image under the embedding X × C* ↪ PH^0(X, kL)* × C defined by (x, τ) ↦ (σ*(τ)Φ_{|kL|}(x), τ); that is, X_0 is defined as the flat limit of the image of X under σ* as τ → 0. We put L := O_X(1). This gives a test configuration for (X, L) of exponent k. The converse direction is spelled out below.
Let (X , L) be a test configuration for (X, L) of exponent k. The C * -action ρ on (X , L) induces an isomorphism ρ(τ, w) : H 0 (X w , L w ) → H 0 (X τ w , L τ w ) for any τ ∈ C * and w ∈ C. Put ρ(τ ) := ρ(τ, 1) : H 0 (X, kL) → H 0 (X τ , L τ ) and ρ 0 (τ ) := ρ(τ, 0) : H 0 (X 0 , L 0 ) → H 0 (X 0 , L 0 ). Let A k denote the infinitesimal generator of ρ 0 . Fix a Hermitian metric H ∈ B k on H 0 (X, kL).
Theorem 2.3. There exists an isomorphism Θ_k : H^0(X_0, L_0) → H^0(X, kL) satisfying:
(a) Θ_k is derived from a C^*-equivariant embedding (X , L) ֒→ (PH^0(X_0, L_0) × C, O(1))
whose restriction on the central fiber gives the closed embedding defined by |L 0 |; (b) A k is Hermitian with respect to H k := Θ * k H. The Hermitian metric H k is independent of Θ k . Moreover, such a Θ k is unique up to an isometry on (H 0 (X 0 , L 0 ), H k ) commuting with ρ 0 . Θ k is called a regular Hermitian generator.
Using a regular Hermitian generator Θ k , we can define a one-parameter subgroup λ : C * → GL(H 0 (X, kL)), so that Θ k is a C * -equivariant isomorphism. More concretely, for τ ∈ C * , we define
λ(τ) := Θ_k ∘ ρ_0(τ) ∘ Θ_k^{−1}.
Note that λ is independent of the choice of a regular Hermitian generator Θ_k. Indeed, for another regular Hermitian generator Θ'_k, there exists a unitary endomorphism U_k commuting with ρ_0 such that
Θ_k = Θ'_k ∘ U_k. Then, Θ_k ∘ ρ_0(τ) ∘ Θ_k^{−1} = (Θ'_k ∘ U_k) ∘ ρ_0(τ) ∘ (U_k^{−1} ∘ Θ'_k^{−1}) = Θ'_k ∘ ρ_0(τ) ∘ Θ'_k^{−1}.
This λ is the desired one-parameter subgroup corresponding to (X , L). Next, we explain how to associate a Bergman geodesic ray (i.e., a geodesic ray on B_k) with a test configuration (X , L) of exponent k. Let H, Θ_k be as above. For τ ∈ C^*, we define a Hermitian metric H_τ on H^0(X_τ, L_τ) by
H_τ := ((ρ(τ) ∘ Θ_k ∘ ρ_0(τ^{−1}))^{−1})^* H_k = (ρ(τ)^{−1})^* λ(τ)^* H.
Since λ is independent of Θ k , so is H τ . Note that according to Theorem 2.3 (b), H is S 1 -invariant. Then, we can use the real logarithmic coordinate t = − log |τ | 2 on the punctured unit disc ∆ * ⊂ C centered at the origin. By means of the isomorphism ρ(τ ), we get a geodesic
H_t := ρ(τ)^* H_τ = λ(e^{−t/2})^* H = e^{−tA} H on B_k, parametrized by t ∈ [0, ∞),
where A denotes the infinitesimal generator of λ. We will call it the Bergman geodesic ray associated with (X , L) and H.
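To make the ray H_t = e^{−tA} H concrete, here is a small numerical sketch (Python/NumPy; the Gram matrix and the weights below are toy data, not computed from any actual (X , L)). It checks that log det H_t is affine in t along the ray, which is the geodesic property underlying the energy functionals of Section 2.3.

```python
import numpy as np

# Toy model of a Bergman geodesic ray H_t = e^{-tA} H: H is a random
# Hermitian positive definite "Gram matrix" and the diagonal A plays the
# role of the infinitesimal generator with weights m_alpha.
rng = np.random.default_rng(0)

B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = B.conj().T @ B + 4 * np.eye(4)          # Hermitian positive definite
m = np.array([3.0, 1.0, -1.0, -2.0])        # hypothetical weights m_alpha of lambda

def H_t(t):
    """Pullback of H along the ray, H_t = lambda(e^{-t/2})^* H."""
    E = np.diag(np.exp(-0.5 * t * m))       # lambda(e^{-t/2}) in a weight basis
    return E @ H @ E

# log det H_t = log det H - t * sum(m_alpha): affine in t.
for t in (0.0, 0.7, 2.3):
    lhs = np.log(np.linalg.det(H_t(t)).real)
    rhs = np.log(np.linalg.det(H).real) - t * m.sum()
    assert abs(lhs - rhs) < 1e-9
print("log det H_t is affine in t")
```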
Finally, we prove the following lemma for later use, which says that the Bergman type metric defined by (H τ ) τ extends to τ = 0. Put (X ∆ ,
L ∆ ) := (π −1 (∆), L| π −1 (∆) ), and (X ∆ * , L ∆ * ) := (π −1 (∆ * ), L| π −1 (∆ * ) ).
Lemma 2.4. Let (X , L), H, and (H τ ) τ be as above. A locally bounded metric φ on L ∆ * defined by
φ τ := kF S k (H τ ), τ ∈ ∆ *
gives an S 1 -invariant locally bounded metric on L ∆ with positive curvature current.
Proof. Fix an H-orthonormal basis (s_α) for H^0(X, kL) consisting of weight vectors of λ:
λ(τ)s_α = τ^{m_α} s_α, where m_α is a weight of λ. Then, (τ^{−m_α} ρ(τ)s_α) is an H_τ-orthonormal basis. Hence,
φ_τ = log((1/N_k) Σ_{α=1}^{N_k} |τ^{−m_α} ρ(τ)s_α|²).
It suffices to show that each τ^{−m_α} ρ(τ)s_α extends holomorphically to τ = 0. We follow the argument of [21, Lemma 6.1]. To begin with, we fix notation. Define a holomorphic section s̃_α ∈ H^0(X \ X_0, L) by
s̃_α(ρ(τ)x) := ρ(τ)s_α(x), τ ∈ C^*, x ∈ X.
Let w denote the global coordinate on C, identified with the projection X → C. We also regard it as a section of the trivial line bundle over X. Then w^{−m_α} s̃_α is a holomorphic section of L over X \ X_0, and w^{−m_α} s̃_α = w^{−m_α} ρ(w)s_α on X_w for any w ∈ C^*. We now prove the claim. Since π_* L → C is C^*-equivariantly trivial, there exist global sections σ_1, . . . , σ_{N_k} of π_* L such that (a) for each w ∈ C, (σ_1(w), . . . , σ_{N_k}(w)) is a basis for H^0(X_w, L_w);
(b) there exists an invertible matrix (f_αβ(τ)) with coefficients in C[τ, τ^{−1}] satisfying
ρ(τ)σ_α = Σ_β f_αβ(τ) σ_β,    (1)
for any τ ∈ C * . We may assume that σ α (1) = s α for α = 1, . . . , N k . Then,
ρ 0 (τ )σ α (0) = τ mα σ α (0).
On the other hand, restricting (1) to the central fiber gives
ρ_0(τ)σ_α(0) = Σ_β f_αβ(τ) σ_β(0).
Combining them, we have f αβ (τ ) = τ mα δ αβ . Hence, for any w ∈ C * ,
s̃_α(w) = (ρ(w)σ_α)(w) = (w^{m_α} σ_α)(w), so that w^{−m_α} s̃_α = σ_α. Since σ_α is holomorphic over C, so is w^{−m_α} s̃_α, as desired.
2.2. Chow weights and Donaldson-Futaki invariants. In this section, we recall the definition of Chow weights and Donaldson-Futaki invariants.
Let (X, L) be an n-dimensional polarized manifold and (X , L) a test configuration for (X, L) of exponent k. Denote by N km the dimension of H 0 (X 0 , mL 0 ) and by w km the total weight of the C * -action on H 0 (X 0 , mL 0 ) induced by that on (X , L). For large m, we have expansions:
N_km = a_0 (km)^n + a_1 (km)^{n−1} + · · · + a_n,
w_km = b_0 (km)^{n+1} + b_1 (km)^n + · · · + b_{n+1}.
The Chow weight of (X , L) is defined by
Chow_k(X , L) := b_0/a_0 − w_k/(kN_k).
The Donaldson-Futaki invariant of (X , L) is
DF(X , L) := 2(a_1 b_0 − a_0 b_1)/a_0^2.
Note that these invariants are independent of the choice of a C * -linearization of L. We also note that the Donaldson-Futaki invariant is unchanged by replacing L with a tensor power, while the Chow weight is not. In fact, we have
Chow_km(X , mL) = b_0/a_0 − w_km/(kmN_km),
from which one can easily get
lim_{m→∞} km · Chow_km(X , mL) = (1/2) DF(X , L).    (2)
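Relation (2) can be checked symbolically. The following sketch (Python/SymPy; the dimension n = 3 and the coefficients a_0, a_1, b_0, b_1 are hypothetical free parameters, and lower-order terms of the expansions are dropped since they do not affect the limit) reproduces the computation.

```python
import sympy as sp

# Symbolic check of relation (2): km * Chow_km(X, mL) -> DF(X, L)/2.
a0, a1, b0, b1, k, m = sp.symbols('a0 a1 b0 b1 k m', positive=True)
n = 3  # hypothetical dimension; the identity holds for every n

N_km = a0 * (k*m)**n + a1 * (k*m)**(n - 1)      # dim H^0(X_0, mL_0), two leading terms
w_km = b0 * (k*m)**(n + 1) + b1 * (k*m)**n      # total weight, two leading terms

Chow_km = b0 / a0 - w_km / (k * m * N_km)       # Chow weight of (X, mL)
DF = 2 * (a1 * b0 - a0 * b1) / a0**2            # Donaldson-Futaki invariant

assert sp.simplify(sp.limit(k * m * Chow_km, m, sp.oo) - DF / 2) == 0
print("lim_{m->oo} km * Chow_km(X, mL) = DF(X, L)/2")
```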
With these invariants above, we can define Chow stability and K-semistability of polarized manifolds.
Definition 2.5. A polarized manifold (X, L) is said to be
(i) (a) Chow semistable at level k if Chow_k(X , L) ≥ 0 holds for any test configuration (X , L) for (X, L) of exponent k;
(b) Chow polystable at level k if (X, L) is Chow semistable at level k and Chow_k(X , L) = 0 if and only if (X , L) is product;
(c) Chow stable at level k if (X, L) is Chow semistable at level k and Chow_k(X , L) = 0 if and only if (X , L) is trivial;
(d) asymptotically Chow polystable (resp. stable or semistable) if there exists a k_0 > 0 such that (X, L) is Chow polystable (resp. stable or semistable) at level k for all k ≥ k_0;
(ii) K-semistable if DF(X , L) ≥ 0 holds for any test configuration (X , L) for (X, L).
Note that (2) gives the following relation between two semistabilities:
Proposition 2.6 ([19, Theorem 3.9]). Asymptotic Chow semistability implies K-semistability.
We briefly explain how higher Futaki invariants F Td (1) , . . . , F Td (n) , obstructions to asymptotic Chow semistability, are related to Chow weights, following Della Vedova-Zuddas [8,Proposition 2.2]. Let V be a holomorphic vector field on X whose real part generates S 1 . As explained in Proposition 2.2, V defines a product configuration (X , L) for (X, L) of exponent k. Using the equivariant Riemann-Roch theorem, we get
Chow_km(X , mL) = −b_{n+1}/(kmN_km) − (a_0/(kmN_km)) Σ_{p=1}^{n} ((a_0 b_p − a_p b_0)/a_0^2)(km)^{n+1−p}    (3)
= −(1/(kmN_km)) Σ_{p=1}^{n} ((km)^{n+1−p}/(n + 1 − p)!) F Td^{(p)}(V),
for sufficiently large m. Note that the smoothness of X implies b_{n+1} = 0. We end this section by defining norms of test configurations for later use. Let p ≥ 1. Given a test configuration as above, denote by A_km the infinitesimal generator of the C^*-action on H^0(X_0, mL_0) and by Ā_km the trace-free part of A_km. We define the p-norm ||(X , L)||_p to be the p-th root of the leading coefficient in tr(Ā_km^p) = ||(X , L)||_p^p (km)^{n+p} + O(m^{n+p−1}) for large m. This is unchanged if we replace L by a power.
2.3. Kempf-Ness type functionals and their quantizations. The aim of this section is to recall the definition of energy functionals. Let X be an n-dimensional Fano manifold. Fix a Hermitian metric φ_0 ∈ H(X, −K_X) and put ω_0 := ω_{φ_0}. For a smooth Hermitian metric φ ∈ H(X, −K_X), we define the Monge-Ampère energy E and the Ding functional D by
E(φ) := (1/(n+1)) Σ_{i=0}^{n} ∫_X (φ − φ_0) ω_φ^{n−i} ∧ ω_0^i,  D(φ) := −(1/(−K_X)^n) E(φ) + L(φ),  L(φ) := −log ∫_X e^{−φ}.
For a Hermitian metric H ∈ B k , we also define the quantized Monge-Ampère energy E (k) , the balancing energy Z k , and the quantized Ding functional D (k) by
E^{(k)}(H) := −(1/(kN_k)) log det H,
Z_k(H) := ((−K_X)^n/n!) k^{n+1} [(1/(−K_X)^n) E(FS_k(H)) − E^{(k)}(H)],
D^{(k)}(H) := −E^{(k)}(H) + L(FS_k(H)),
where the determinant is taken with respect to Hilb k (φ 0 ).
We collect some properties of these functionals.

Proposition 2.7. Let H ∈ B_k be a Hermitian metric on H^0(X, −kK_X).
(a) H is a critical point of D^{(k)} if and only if H is an anti-canonically k-balanced metric.
(b) D^{(k)} is convex along Bergman geodesic rays.
(c) We have
D^{(k)}(H) = D(FS_k(H)) + (n!/(k^{n+1}(−K_X)^n)) Z_k(H).
(d) Let (X , L) be a test configuration for (X, −K_X) of exponent k and (H_t)_t the Bergman geodesic ray associated with (X , L) and H. If D^{(k)}(H_t) is affine in t on [0, ∞), then (X , L) is a product configuration.
Proof. (a) and (b) were proved in Lemma 7.4 and Lemma 6.5 of [2], respectively. One could also prove them using Proposition 6.1. (c) is trivial. We now start the proof of (d). By combining our assumption on D (k) (H t ) with the convexity of D • F S k and Z k , (c) shows that Z k (H t ) is affine in t. To complete the proof, we need the explicit formula for the second derivative of Z k (H t ). Denote H t = e −tA H. Let V A be the holomorphic vector field on PH 0 (X, −kK X ) * defined by the Hermitian matrix A and V ⊥ A the normal part of V A with respect to the Fubini-Study metric induced by H t . It was proved in [9, Lemma 17] that
(d²/dt²) Z_k(H_t) = (k^n/n!) ∫_X |V_A^⊥|²_{kω_{FS_k(H_t)}} ω_{FS_k(H_t)}^n.
Since Z_k(H_t) is affine in t, the left-hand side vanishes identically, so V_A^⊥ = 0. This implies that V_A is tangent to the image of X under the closed embedding X ֒→ PH^0(X, −kK_X)^*, so that the central fiber X_0 is isomorphic to X by the proof of Proposition 2.2.
3. Quantized Futaki invariants and F-stability
In this section we introduce quantized Futaki invariants and F-stability. Let X be an n-dimensional Fano manifold. Fix k ≥ 1 so that −kK_X is very ample.

Definition 3.1. Given a test configuration (X , L) for (X, −K_X) of exponent k, the quantized Futaki invariant at level k is defined to be
F ut_k(X , L) := kN_k(DF(X , L) + Chow_k(X , L)).

We remark that this invariant is independent of the choice of a C^*-linearization of L, since so are the Donaldson-Futaki invariant and the Chow weight.
The following lemma explains why we call F ut_k(X , L) the quantized Futaki invariant.

Lemma 3.2. If (X , L) is a special test configuration of exponent k, then F ut_k(X , L) coincides with the quantized Futaki invariant introduced by Berman-Witt Nyström in [5, Section 4.4].

Remark 3.3. Before giving the proof, we should recall the definition of quantized Futaki invariants by Berman-Witt Nyström. Let (X , L) be a special test configuration of exponent k. By [1, Lemma 2.2], X is a normal Q-Gorenstein variety, and L is isomorphic to the relative pluri-anti-canonical divisor −kK_{X/C}. Then, we can lift the C^*-action on X automatically to the tangent bundle of the regular part of X, and eventually to −kK_{X/C}. This particular linearization of L = −kK_{X/C} is called "canonical". Note that this is not necessarily the same as the a priori linearization of L. Given a special test configuration (X , L) with the canonical linearization, Berman-Witt Nyström defined the quantized Futaki invariant at level k to be minus the total weight of the C^*-action on H^0(X_0, −kK_{X_0}). Let us stress that they only considered special test configurations with the canonical linearization. Our definition is thus a generalization of theirs.
Proof. We use the notation as used in Section 2.2. Since our F ut k (X , L) is independent of the linearization, we may choose the canonical linearization. Then, −w k is the quantized Futaki invariant defined by Berman-Witt Nyström.
The key point in the proof is the formula
DF(X , L) = −b_0/a_0.    (4)
Once we have established (4), we have
F ut_k(X , L) = kN_k(DF(X , L) + Chow_k(X , L)) = kN_k(−b_0/a_0 + b_0/a_0 − w_k/(kN_k)) = −w_k.
To prove (4), there are two ways. The first approach is to apply the equivariant Riemann-Roch formula to the normal variety X_0. Consult for example [22, Lemma 1.2]. The second one is to consider the compactification (X̄ , L̄) → P^1 of (X , L) whose ∞-fiber carries the trivial C^*-action and apply the two-term asymptotic Riemann-Roch theorem to the normal variety X̄. See for example [4, Proposition 3.12 (iv)]. The latter approach gives
w_km = (L̄^{n+1}/(k^{n+1}(n+1)!))(km)^{n+1} + ((−K_{X̄/P^1} · L̄^n)/(2k^n n!))(km)^n + O(m^{n−1})
= ((−K_{X̄/P^1})^{n+1}/(n+1)!)(km)^{n+1} + ((−K_{X̄/P^1})^{n+1}/(2n!))(km)^n + O(m^{n−1})
for large m. On the other hand, the two-term asymptotic Riemann-Roch theorem on (X, −kK X ) yields
N_km = ((−K_X)^n/n!)(km)^n + ((−K_X)^n/(2(n−1)!))(km)^{n−1} + O(m^{n−2}).
It follows that
b_1 = ((n+1)/2) b_0,  a_1 = (n/2) a_0,
which proves (4).
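The last step is elementary and can be verified symbolically; in the following sketch (Python/SymPy, with a_0, b_0, n treated as free symbols) we substitute a_1 = (n/2)a_0 and b_1 = ((n+1)/2)b_0 into the definition of DF and recover (4).

```python
import sympy as sp

# Verify relation (4): with the two-term Riemann-Roch coefficients
# a1 = (n/2) a0 and b1 = ((n+1)/2) b0, the Donaldson-Futaki invariant
# reduces to DF(X, L) = -b0/a0.
a0, b0, n = sp.symbols('a0 b0 n', positive=True)
a1 = n * a0 / 2
b1 = (n + 1) * b0 / 2

DF = 2 * (a1 * b0 - a0 * b1) / a0**2
assert sp.simplify(DF + b0 / a0) == 0
print("DF(X, L) = -b0/a0")
```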
Finally, we introduce a new stability of Fano manifolds.
Definition 3.4. A Fano manifold X is said to be
(a) F-semistable at level k if F ut_k(X , L) ≥ 0 holds for any test configuration (X , L) for (X, −K_X) of exponent k;
(b) F-polystable at level k if X is F-semistable at level k and F ut_k(X , L) = 0 if and only if (X , L) is a product configuration;
(c) F-stable at level k if X is F-semistable at level k and F ut_k(X , L) = 0 if and only if (X , L) is trivial;
(d) asymptotically F-polystable (resp. stable or semistable) if there exists a k_0 > 0 such that X is F-polystable (resp. stable or semistable) at level k for all k ≥ k_0.
We conclude by pointing out that to test F-stability, we only need to consider normal test configurations.

Lemma 3.5. Let (X , L) be a test configuration for (X, −K_X) of exponent k and (X̃ , L̃) its normalization. Then, we have F ut_k(X , L) ≥ F ut_k(X̃ , L̃).

Proof. This follows from [19, Proposition 5.1], which says that
DF(X , L) ≥ DF(X̃ , L̃),  Chow_k(X , L) ≥ Chow_k(X̃ , L̃).
4. Slope formula and F-stability of anti-canonically k-balanced metrics
In this section we prove Theorem 1.1 and Theorem 1.2. In view of Proposition 2.7 (c), we need the slope formulae of D and Z_k.

Theorem 4.1. Let (X , L) be a normal test configuration for (X, −K_X) of exponent k and φ an S^1-invariant locally bounded metric on (X , L) → ∆ with positive curvature current, where ∆ ⊂ C denotes the unit disc centered at the origin. Then, setting φ_t := ρ(τ)^* φ_τ / k, identified with a ray of metrics on −K_X using the C^*-action ρ on (X , L), we have
DF(X , L) = lim_{t→∞} (d/dt) D(φ_t) + q,
where q is a non-negative rational number determined by the central fiber.
The quantity q vanishes if and only if X is Q-Gorenstein with L isomorphic to −kK X /C , and X 0 is reduced, and its normalization has at worst log terminal singularities.
Theorem 4.2. Let (X , L) be a test configuration for (X, −K_X) of exponent k, H ∈ B_k a Hermitian metric, and (H_t)_t the Bergman geodesic ray associated with (X , L) and H. Then, we have
lim_{t→∞} (d/dt) Z_k(H_t) = ((−K_X)^n/n!) k^{n+1} Chow_k(X , L).
Proof of Theorem 1.1. Let (X , L) be a normal test configuration for (X, −K X ) of exponent k, H ∈ B k a Hermitian metric and (H t ) t the Bergman geodesic ray associated with (X , L) and H. As proved in Lemma 2.4, (H t ) t defines an S 1 -invariant locally bounded metric φ on (X ∆ , L ∆ ) with positive curvature current. Note that
φ_t = (1/k) ρ(τ)^* φ_τ = ρ(τ)^* FS_k(H_τ) = FS_k(ρ(τ)^* H_τ) = FS_k(H_t).
Applying Theorem 4.1 and Theorem 4.2, we have
lim_{t→∞} (d/dt) D^{(k)}(H_t) = lim_{t→∞} (d/dt) D(FS_k(H_t)) + (n!/(k^{n+1}(−K_X)^n)) lim_{t→∞} (d/dt) Z_k(H_t)
= lim_{t→∞} (d/dt) D(φ_t) + Chow_k(X , L)
= F ut_k(X , L)/(kN_k) − q.
Proof of Theorem 1.2. Suppose that X admits an anti-canonically k-balanced metric H ∈ B k . Let (X , L) be a normal test configuration for (X, −K X ) of exponent k, and (H t ) t the Bergman geodesic ray associated with H and (X , L). Since H is a critical point of D (k) , D (k) is convex along (H t ) and q is non-negative, we have
F ut_k(X , L)/(kN_k) = lim_{t→∞} (d/dt) D^{(k)}(H_t) + q ≥ lim_{t→+0} (d/dt) D^{(k)}(H_t) ≥ 0.
This proves the F-semistability of X. Assume F ut_k(X , L) = 0. Since q is non-negative and D^{(k)}(H_t) is convex, all the inequalities above are then equalities, so D^{(k)}(H_t) is affine in t. Then, Proposition 2.7 (d) forces (X , L) to be a product configuration.
Example 4.3. Let X 0 be the Mukai-Umemura 3-fold, which is a compactification of the quotient of SL(2, C) by the icosahedral group and X a suitable small deformation of X 0 . Both of them are Fano manifolds and h(X 0 ) = sl(2, C) but X does not admit non-trivial holomorphic vector fields, where h(X 0 ) denotes the Lie algebra of all holomorphic vector fields on X 0 . Tian constructed in [20, Section 7] a special test configuration (X , L) for (X, −K X ) of exponent 1 whose central fiber is (X 0 , −K X 0 ). Let V be a holomorphic vector field on X 0 induced by the C * -action of (X , L). Fix a sufficiently large integer m, and consider a test configuration (X , mL). The expression (3) shows that
F ut_m(X , mL) = DF(X , L) mN_m − Σ_{p=1}^{n} (m^{n+1−p}/(n+1−p)!) F Td^{(p)}(V),    (5)
where F Td (p) denotes the p-th higher Futaki invariant on X 0 . Since all the higher Futaki invariants are Lie algebra characters, h(X 0 ) = sl(2, C) is semisimple, and DF (X , L) is a multiple of F Td (1) (V ), we have F ut m (X , mL) = 0. Hence, X is not asymptotically F-polystable and consequently does not admit any sequence of anti-canonically balanced metrics by Theorem 1.2. Although we will show in Proposition 5.4 that higher Futaki invariants are obstructions to asymptotic F-polystability, they do not work well in this example because of the absence of non-trivial holomorphic vector fields.
5. F-stability and other stabilities
The aim of this section is to clarify the relation between asymptotic F-stability and other stabilities such as K-semistability, uniform K-stability, and asymptotic Chow stability.
Theorem 5.1. Asymptotic F-semistability implies K-semistability.
Proof. This is proved along the same lines as Proposition 2.6. Since the Chow weight converges to 0 as the exponent increases, we have
lim_{m→∞} F ut_km(X , mL)/(kmN_km) = DF(X , L).
Theorem 5.2. Let X be a Fano manifold. Suppose that the Ding functional of X is J-coercive modulo Aut 0 (X) and all the higher Futaki invariants of X vanish. Then, X is asymptotically F-polystable.
Indeed, Berman-Witt Nyström proved that under the same assumptions, X admits an anti-canonically k-balanced metric for sufficiently large k ([5, Theorem 1.7]). Combining this with Theorem 1.2, we get the conclusion.
In [3, Theorem A], it was proved that a uniformly K-stable Fano manifold satisfies the assumption of Theorem 5.2, and so we have the following:
Corollary 5.3. If a Fano manifold (X, −K X ) is uniformly K-stable, then X is asymptotically F-stable.
For the definition of uniform K-stability, see [4]. This is an analogue of [13,Main Theorem], in which strong K-stability and asymptotic Chow stability are treated.
We turn to asymptotic Chow stability.
Proof of Theorem 1.3. This actually follows from the very definition of quantized Futaki invariants. Suppose that (X, −K_X) is asymptotically Chow semistable. By Proposition 2.6, this implies the K-semistability of (X, −K_X). Then for any test configuration (X , L) with sufficiently large exponent k, we have
F ut_k(X , L) = kN_k(DF(X , L) + Chow_k(X , L)) ≥ 0,
and the equality holds if and only if DF(X , L) = Chow_k(X , L) = 0. This proves the asymptotic F-semistability of X. If we further assume that (X, −K_X) is asymptotically Chow polystable (resp. stable), then (X , L) is a product (resp. trivial) configuration. This completes the proof.
The following proposition says that higher Futaki invariants also obstruct at least asymptotic F-polystability.
Proposition 5.4. If X is asymptotically F-polystable, then all the higher Futaki invariants vanish on a maximal reductive subalgebra h_r(X) of h(X).
Proof. Let V be a holomorphic vector field on X whose real part generates S^1. Consider the product configuration (X , L) defined by V with exponent k. The asymptotic F-polystability of X forces F ut_km(X , mL) = 0 for sufficiently large m. Using the expression (5), we get
DF(X , L) kmN_km − Σ_{p=1}^{n} ((km)^{n+1−p}/(n+1−p)!) F Td^{(p)}(V) = 0,
which proves the proposition.
In the presence of Kähler-Einstein metrics, the converse of Theorem 1.3 is also true.
Theorem 5.5. Let X be a Fano manifold. Suppose that X admits a Kähler-Einstein metric. Then, the following are equivalent:
(a) (X, −K X ) is asymptotically Chow polystable.
(b) X is asymptotically F-polystable.
(c) All the higher Futaki invariants on X vanish on a maximal reductive subalgebra h_r(X) of h(X).

Proof. The implication (a) ⇒ (b) has been proved by Theorem 1.3 and (b) ⇒ (c) by Proposition 5.4. Note that these proofs do not use the existence of Kähler-Einstein metrics. (c) ⇒ (a) is proved in [10, Corollary 4.2].

Example 5.6. In [15], Ono-Sano-Yotsutani proved that there exists a toric Fano 7-manifold X with Kähler-Einstein metrics whose p-th higher Futaki invariant F Td^{(p)} does not vanish for p = 2, . . . , 7. By Theorem 5.5, X is not asymptotically F-polystable, and it does not admit any sequence of anti-canonically balanced metrics by Theorem 1.2.
6. Lower bounds on the Calabi-like functionals
We devote this section to proving Theorem 1.4 as an application of Theorem 1.1. Our approach is based on [6].
Let X be an n-dimensional Fano manifold, and fix k ≥ 1 so that −kK X is very ample.
To begin with, we collect definitions. Let H ∈ B_k and (s_α) be an H-orthonormal basis for H^0(X, −kK_X). We define a self-adjoint matrix M(H) with entries
M(H)_{αβ} := k^n ∫_X ⟨s_α, s_β⟩_{kFS_k(H)} µ_{FS_k(H)} = k^n ∫_X (⟨s_α, s_β⟩_{kφ} / Σ_γ |s_γ|²_{kφ}) µ_{FS_k(H)},
where φ ∈ H(X, −K_X) is any Hermitian metric on −K_X. Let M̄(H) be the trace-free part of M(H), that is,
M̄(H) = M(H) − (k^n/N_k) id.
This matrix appears in the derivative of the quantized Ding functional:
Proposition 6.1. The derivative of D^{(k)} along a Bergman geodesic ray (H_t = e^{−tA} H)_t is given by
(d/dt) D^{(k)}(H_t) = (1/k^{n+1}) tr(A M̄(H_t)).
Proof. Define s_α^t := e^{(t/2)A} s_α, so that (s_α^t)
is an H t -orthonormal basis. Let (a αβ ) denote the matrix representation of A with respect to (s α ). Then,
(d/dt) L(FS_k(H_t)) = ∫_X (d/dt) FS_k(H_t) µ_{FS_k(H_t)} = (1/k) ∫_X (Σ_{α,β} a_{αβ} ⟨s_β^t, s_α^t⟩_{kφ} / Σ_γ |s_γ^t|²_{kφ}) µ_{FS_k(H_t)} = (1/k^{n+1}) tr(A M(H_t)).
On the other hand,
(d/dt) E^{(k)}(H_t) = −(1/(kN_k)) (d/dt) log det e^{−tA} = tr(A)/(kN_k).
Combining them, we get the conclusion.
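The trace-free normalization above can be made concrete: since µ_{FS_k(H)} is a probability measure, the diagonal entries of M(H) sum to k^n, so subtracting (k^n/N_k) id kills the trace. A small numerical model (Python/NumPy; the values of k, n, N_k and the random matrix are toy data):

```python
import numpy as np

# Toy model of the trace normalization of M(H): tr M(H) = k^n, so
# M(H) - (k^n/N_k) id is trace-free.
rng = np.random.default_rng(3)
k, n, N_k = 3, 2, 6                      # hypothetical level, dimension, h^0

B = rng.standard_normal((N_k, N_k))
M = B @ B.T                              # self-adjoint, positive semi-definite
M *= k**n / np.trace(M)                  # normalize so that tr M = k^n

M_bar = M - (k**n / N_k) * np.eye(N_k)   # trace-free part
assert abs(np.trace(M_bar)) < 1e-12
print("trace-free part has vanishing trace")
```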
Note that this proves Proposition 2.7 (a). We recall the definition of q-norm of self-adjoint matrices for q ≥ 1. For a self-adjoint matrix A, we define
||A||_q := (Σ_α |λ_α|^q)^{1/q},
where λ α denote the eigenvalues of A, repeated according to multiplicity.
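A minimal numerical implementation of this q-norm (Python/NumPy, applied to a random toy matrix): for q = 2 it reduces to the Frobenius norm, and the norms are non-increasing in q.

```python
import numpy as np

rng = np.random.default_rng(1)

def q_norm(A, q):
    """q-norm of a self-adjoint matrix: (sum_alpha |lambda_alpha|^q)^(1/q)."""
    lam = np.linalg.eigvalsh(A)          # real eigenvalues, with multiplicity
    return (np.abs(lam) ** q).sum() ** (1.0 / q)

B = rng.standard_normal((5, 5))
A = B + B.T                              # a random self-adjoint test matrix

# For q = 2 this is the Frobenius (Hilbert-Schmidt) norm ...
assert abs(q_norm(A, 2) - np.linalg.norm(A, 'fro')) < 1e-10
# ... and the q-norms are non-increasing in q.
assert q_norm(A, 1) >= q_norm(A, 2) >= q_norm(A, 4)
print("q-norms computed from eigenvalues")
```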
Proposition 6.2. For any q > 1 and any Hermitian metric φ ∈ H(X, −K_X), we have
||M̄(Hilb_k(φ))||_q ≤ k^{n/q} ||B(φ)||_{L^q(ω_φ^n/n!)} + O(k^{(n/q)−1}).
Given a Hermitian metric φ ∈ H(X, −K_X) on −K_X, we define the Bergman kernel to be
ρ_k(ω_φ) := Σ_{α=1}^{N_k} |s_α|²_{kφ},
where (s_α) is a Hilb_k(φ)-orthonormal basis for H^0(X, −kK_X). We also use a scaled version of ρ_k(ω_φ) defined by
ρ̄_k(ω_φ) := (1/N_k) ρ_k(ω_φ).
One of the key ingredients in the proof of Proposition 6.2 is the asymptotic expansion of the Bergman kernels:

Theorem 6.3 ([12, Theorem 4.1.1]). We have the asymptotic expansions
ρ_k(ω_φ) = (k^n + O(k^{n−1})) ω_φ^n/(n! µ_φ),  ρ̄_k(ω_φ) = (1 + O(k^{−1})) ω_φ^n/(n! µ_φ),
valid in C l for any positive integer l.
We define T_k := FS_k ∘ Hilb_k. Since e^{−T_k(φ)} = ρ_k(ω_φ)^{−1/k} e^{−φ}, we have
∫_X e^{−T_k(φ)} = ∫_X e^{−φ} + O(k^{−1})
and
µ_{T_k(φ)} = e^{−T_k(φ)} / ∫_X e^{−T_k(φ)} = ρ_k(ω_φ)^{−1/k} e^{−φ} / (∫_X e^{−φ} + O(k^{−1})) = (1 + O(k^{−1})) µ_φ
as k → ∞.
Proof of Proposition 6.2. Let (s_α) be a Hilb_k(T_k(φ))-orthogonal, Hilb_k(φ)-orthonormal basis for H^0(X, −kK_X). Then, M̄(Hilb_k(φ)) is a diagonal matrix with entries
M̄(Hilb_k(φ))_{αα} = k^n ∫_X (|s_α|²_{kφ} / ρ_k(ω_φ)) µ_{T_k(φ)} − (k^n/N_k) ∫_X |s_α|²_{kφ} µ_φ
= ∫_X |s_α|²_{kφ} (k^n/ρ_k(ω_φ)) (1 + O(k^{−1})) µ_φ − (k^n/N_k) ∫_X |s_α|²_{kφ} µ_φ
= ∫_X |s_α|²_{kφ} (k^n/ρ_k(ω_φ) − k^n/N_k) µ_φ + O(k^{−1})
= ∫_X |s_α|²_{kφ} (n! µ_φ/ω_φ^n − n!/(−K_X)^n + O(k^{−1})) µ_φ + O(k^{−1})
= ∫_X |s_α|²_{kφ} B(φ) µ_φ + O(k^{−1}),
where we have used the uniform boundedness of k^n/ρ_k(ω_φ) in k. Let η and ν be diagonal matrices with entries
η_{αα} := ∫_X |s_α|²_{kφ} B(φ) µ_φ,  ν_{αα} := M̄(Hilb_k(φ))_{αα} − η_{αα} = O(k^{−1}).
Write
|s_α|²_{kφ} |B(φ)| = |s_α|^{2/p}_{kφ} · |s_α|^{2/q}_{kφ} |B(φ)|,
where p is the Hölder conjugate of q. Applying the Hölder inequality, we have
|η_{αα}| ≤ (∫_X |s_α|²_{kφ} µ_φ)^{1/p} (∫_X |s_α|²_{kφ} |B(φ)|^q µ_φ)^{1/q}.
Since (s α ) is Hilb k (φ)-orthonormal, this shows
||η||_q^q = Σ_α |η_{αα}|^q ≤ ∫_X ρ_k(ω_φ) |B(φ)|^q µ_φ ≤ k^n ||B(φ)||_{L^q(ω_φ^n/n!)}^q + O(k^{n−1}).
On the other hand, since ν_{αα} = O(k^{−1}), we get ||ν||_q^q = N_k · O(k^{−q}) = O(k^{n−q}), and so ||ν||_q = O(k^{(n/q)−1}). Consequently, we have
||M̄(Hilb_k(φ))||_q ≤ ||η||_q + ||ν||_q ≤ k^{n/q} ||B(φ)||_{L^q(ω_φ^n/n!)} + O(k^{(n/q)−1}).
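A matrix analogue of the Hölder inequality, |tr(AM)| ≤ ||A||_p ||M||_q for self-adjoint A, M with 1/p + 1/q = 1, drives the proof of Proposition 6.4 below; it can be spot-checked numerically (Python/NumPy; the matrices are random toy data and the choice p = 4 is arbitrary).

```python
import numpy as np

rng = np.random.default_rng(2)

def q_norm(A, q):
    """(sum |lambda_i|^q)^(1/q) for a self-adjoint matrix A."""
    lam = np.linalg.eigvalsh(A)
    return (np.abs(lam) ** q).sum() ** (1.0 / q)

p = 4.0
q = p / (p - 1.0)                 # Hölder conjugate: 1/p + 1/q = 1

for _ in range(100):
    X1 = rng.standard_normal((5, 5)); A = X1 + X1.T
    X2 = rng.standard_normal((5, 5)); M = X2 + X2.T
    # |tr(AM)| <= ||A||_p ||M||_q  (von Neumann trace inequality + Hölder)
    assert abs(np.trace(A @ M)) <= q_norm(A, p) * q_norm(M, q) + 1e-12
print("Hölder trace inequality verified on random samples")
```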
Proposition 6.4. Let p be the Hölder conjugate of q. Given a normal test configuration (X , L) of exponent k and H ∈ B k , we have
||Ā||_p · ||M̄(H)||_q ≥ −k^{n+1} F ut_k(X , L)/(kN_k),
where A denotes the infinitesimal generator of the C * -action on H 0 (X, −kK X ) corresponding to (X , L).
Proof. Put H_t := e^{−tA} H, so that (H_t) is the Bergman geodesic ray associated with (X , L) and H. By Theorem 1.1 and Proposition 2.7 (b), we get
F ut_k(X , L)/(kN_k) ≥ F ut_k(X , L)/(kN_k) − q = lim_{t→∞} (d/dt) D^{(k)}(H_t) ≥ lim_{t→+0} (d/dt) D^{(k)}(H_t) = (1/k^{n+1}) tr(A M̄(H)) = (1/k^{n+1}) tr(Ā M̄(H)) ≥ −(1/k^{n+1}) ||Ā||_p ||M̄(H)||_q,
where in the last line we have used the Hölder inequality.

Proof of Theorem 1.4. Let k be an exponent of (X , L), and set H_km := Hilb_km(φ) ∈ B_km for large m. Denote by A_km the infinitesimal generator of the C^*-action on H^0(X, −kmK_X) corresponding to (X , mL). Applying Proposition 6.4 to them, we get
||Ā_km||_p · ||M̄(H_km)||_q ≥ −(km)^{n+1} F ut_km(X , mL)/(kmN_km) = −(km)^{n+1}(DF(X , L) + O(m^{−1})).
By Proposition 6.2,
||M̄(H_km)||_q ≤ (km)^{n/q} ||B(φ)||_{L^q(ω_φ^n/n!)} + O(m^{(n/q)−1}).
Since p is even, the definition of the p-norm of test configurations gives
||Ā_km||_p = tr(Ā_km^p)^{1/p} = ||(X , L)||_p (km)^{(n/p)+1} + O(m^{n/p}).
Putting the pieces together, we have
||(X , L)||_p · ||B(φ)||_{L^q(ω_φ^n/n!)} ≥ −DF(X , L) + O(m^{−1}).
Taking the limit as m → ∞ finishes the proof.
References

[1] R. J. Berman, K-polystability of Q-Fano varieties admitting Kähler-Einstein metrics, Invent. Math., 203 (2016), no. 3, 973-1025.
[2] R. J. Berman, S. Boucksom, V. Guedj and A. Zeriahi, A variational approach to complex Monge-Ampère equations, Publ. Math. de l'IHÈS, 117 (2013), 179-245.
[3] R. J. Berman, S. Boucksom and M. Jonsson, A variational approach to the Yau-Tian-Donaldson conjecture, arXiv preprint, arXiv:1509.04561 (2015).
[4] S. Boucksom, T. Hisamoto and M. Jonsson, Uniform K-stability, Duistermaat-Heckman measures and singularities of pairs, arXiv preprint, arXiv:1504.06568 (2015), to appear in Ann. Inst. Fourier.
[5] R. J. Berman and D. Witt Nyström, Complex optimal transport and the pluripotential theory of Kähler-Ricci solitons, arXiv preprint, arXiv:1401.8264 (2014).
[6] S. K. Donaldson, Lower bounds on the Calabi functional, J. Diff. Geom., 70 (2005), 453-472.
[7] S. K. Donaldson, Some numerical results in complex differential geometry, Pure Appl. Math., 5 (2009), 571-618.
[8] A. Della Vedova and F. Zuddas, Scalar curvature and asymptotic Chow stability of projective bundles and blowups, Trans. Amer. Math. Soc., 364 (2012), no. 12, 6495-6511.
[9] J. Fine, Calabi flow and projective embeddings, J. Diff. Geom., 84 (2010), no. 3, 489-523, with an appendix by K. Liu and X. Ma.
[10] A. Futaki, Asymptotic Chow semistability and integral invariants, Int. J. Math., 15 (2004), 967-979.
[11] T. Hisamoto, On the limit of spectral measures associated to a test configuration of a polarized Kähler manifold, J. reine angew. Math., 713 (2016), 129-148.
[12] X. Ma and G. Marinescu, Holomorphic Morse Inequalities and Bergman Kernels, Progr. Math., vol. 254, Birkhäuser Verlag, Basel, 2007.
[13] T. Mabuchi and Y. Nitta, Strong K-stability and asymptotic Chow-stability, in Geometry and Analysis on Manifolds, In Memory of Professor Shoshichi Kobayashi (eds. T. Ochiai et al.), Progress in Mathematics, 308 (2015), 405-411, Birkhäuser.
[14] D. Mumford, Stability of projective varieties, Enseignement Math., 23 (1977), 39-110.
[15] H. Ono, Y. Sano and N. Yotsutani, An example of an asymptotically Chow unstable manifold with constant scalar curvature, Annales de l'Institut Fourier, 62 (2012), no. 4, 1265-1287.
[16] S. T. Paul, Geometric analysis of Chow Mumford stability, Adv. Math., 182 (2004), 333-356.
[17] D. H. Phong and J. Sturm, Stability, energy functionals, and Kähler-Einstein metrics, Commun. Anal. Geom., 11 (2003), no. 3, 565-597.
[18] D. H. Phong and J. Sturm, Test configurations for K-stability and geodesic rays, J. Symplectic Geom., 5 (2007), 221-247.
[19] J. Ross and R. P. Thomas, A study of the Hilbert-Mumford criterion for the stability of projective varieties, J. Algebraic Geom., 16 (2007), 201-255.
[20] G. Tian, Kähler-Einstein metrics with positive scalar curvature, Invent. Math., 130 (1997), 1-39.
[21] D. Witt Nyström, Test configurations and Okounkov bodies, Compositio Math., 148 (2012), no. 6, 1736-1756.
[22] F. Wang, B. Zhou and X. H. Zhu, Modified Futaki invariant and equivariant Riemann-Roch formula, Adv. Math., 289 (2016), 1205-1235.
Graduate School of Mathematical Sciences, The University of Tokyo, 3- Komaba, Meguro-ku, Tokyo 153-8914, Japan
E-mail address: [email protected]

Mathematical Institute, Tohoku University, 6-3, Aoba, Aramaki, Aoba-ku, Sendai, 980-8578, Japan
E-mail address: [email protected]
"National Physical Laboratory\nHampton RoadTW11 0LWTeddingtonUK",
"University College London\nTorrington PlaceWC1E 7JELondonUK",
"National Physical Laboratory\nHampton RoadTW11 0LWTeddingtonUK",
"National Physical Laboratory\nHampton RoadTW11 0LWTeddingtonUK",
"National Physical Laboratory\nHampton RoadTW11 0LWTeddingtonUK",
"Cavendish Laboratory\nUniversity of Cambridge\nJ. J. Thomson AvenueCB3 0HECambridgeUK",
"Cavendish Laboratory\nUniversity of Cambridge\nJ. J. Thomson AvenueCB3 0HECambridgeUK",
"Department of Physics and Astronomy\nHunter College of the City University of New York\n695 Park Avenue10065New YorkNew YorkUSA",
"Cavendish Laboratory\nUniversity of Cambridge\nJ. J. Thomson AvenueCB3 0HECambridgeUK"
] |
[] |
We present experimental studies of the current pumped through a dynamic quantum dot over a wide range of magnetic fields. At low fields we observe repeatable structure indicating increased confinement of the electrons in the dynamic dot. At higher fields (B > 3 T), we observe structure which changes markedly from device to device suggesting that in this regime the transport is sensitive to local disorder. The results are significant for the development of dynamic quantum dot pumps as quantum standards of electrical current.
|
10.1063/1.3578685
|
[
"https://arxiv.org/pdf/1009.0203v1.pdf"
] | 119,172,955 |
1009.0203
|
499214c0df1791f8c6f3077789d6288fcdc714b0
|
Single-and few-electron dynamic quantum dots in a perpendicular magnetic field
S J Wright
Cavendish Laboratory
University of Cambridge
J. J. Thomson AvenueCB3 0HECambridgeUK
Cambridge Research Laboratory
Toshiba Research Europe Ltd
208 Science Park, Milton RoadCB4 0WECambridgeUK
A L Thorn
Cavendish Laboratory
University of Cambridge
J. J. Thomson AvenueCB3 0HECambridgeUK
National Physical Laboratory
Hampton RoadTW11 0LWTeddingtonUK
M D Blumenthal
Cavendish Laboratory
University of Cambridge
J. J. Thomson AvenueCB3 0HECambridgeUK
National Physical Laboratory
Hampton RoadTW11 0LWTeddingtonUK
S P Giblin
National Physical Laboratory
Hampton RoadTW11 0LWTeddingtonUK
M Pepper
University College London
Torrington PlaceWC1E 7JELondonUK
T J B M Janssen
National Physical Laboratory
Hampton RoadTW11 0LWTeddingtonUK
M Kataoka
National Physical Laboratory
Hampton RoadTW11 0LWTeddingtonUK
J D Fletcher
National Physical Laboratory
Hampton RoadTW11 0LWTeddingtonUK
G A C Jones
Cavendish Laboratory
University of Cambridge
J. J. Thomson AvenueCB3 0HECambridgeUK
C A Nicoll
Cavendish Laboratory
University of Cambridge
J. J. Thomson AvenueCB3 0HECambridgeUK
Godfrey Gumbs
Department of Physics and Astronomy
Hunter College of the City University of New York
695 Park Avenue10065New YorkNew YorkUSA
D A Ritchie
Cavendish Laboratory
University of Cambridge
J. J. Thomson AvenueCB3 0HECambridgeUK
Single-and few-electron dynamic quantum dots in a perpendicular magnetic field
(Dated: September 2, 2010)
We present experimental studies of the current pumped through a dynamic quantum dot over a wide range of magnetic fields. At low fields we observe repeatable structure indicating increased confinement of the electrons in the dynamic dot. At higher fields (B > 3 T), we observe structure which changes markedly from device to device suggesting that in this regime the transport is sensitive to local disorder. The results are significant for the development of dynamic quantum dot pumps as quantum standards of electrical current.
I. INTRODUCTION
A quantized charge transport device can generate electrical current given by I = nef, where f is the repetition frequency of an applied potential, e is the electron charge and n is the number of charges transported in one cycle. This type of device is of great interest to electrical metrologists because it could form the basis of a new definition of the SI base unit ampere, linking current to frequency via a defined value of the electron charge [1]. Pumps based on chains of metal-oxide tunnel barriers have been researched extensively, and have demonstrated pumping accuracy at the 10^-8 level required by metrological applications [2]. Unfortunately the time constant of the tunnel junctions limits the current in these devices to the level of a few pA. Recently, a new type of pump based on metal-oxide-superconductor barriers has demonstrated parallel scaling of 10 devices [3], but this device must be operated at finite bias voltage, thereby requiring stringent control of leakage currents if metrological accuracy is to be reached.
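For scale, the relation I = nef can be evaluated directly at the drive frequencies used in this work (73 MHz for the pump maps, 340 MHz for the precision run of [13]); this quick check, which is not part of the original paper, shows why single-channel pumps deliver at most tens of pA:

```python
# Quick numerical check of I = n*e*f, the quantized-current relation above.
E_CHARGE = 1.602176634e-19  # electron charge in coulombs (exact SI value)

def pumped_current(n, f_hz):
    """Current in amperes when n electrons are transported per cycle at f_hz."""
    return n * E_CHARGE * f_hz

print(f"n=1, f=73 MHz  -> {pumped_current(1, 73e6) * 1e12:.1f} pA")   # ~11.7 pA
print(f"n=1, f=340 MHz -> {pumped_current(1, 340e6) * 1e12:.1f} pA")  # ~54.5 pA
```

Even at n = 2 and 340 MHz the current stays near 100 pA, which is why parallel operation of pumps (as in [9]) matters for metrology.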
The semiconductor-based dynamic quantum dot (DyQD) pump, in contrast, can be operated at zero bias, and relatively high frequency [4]. The DyQD pump, like earlier Surface Acoustic Wave (SAW)-based pumps [5,6], transports electrons between a source and drain lead by modulation of the electrostatic potential in a reduced-dimensional semiconductor system. In the SAW pumps the potential modulation is produced by a SAW launched from a tuned transducer, whereas in the DyQD pump the modulation signal is applied directly to one of the potential-defining gates. The DyQD pump avoids heating effects present in the SAW pumps [7], and can be driven at a wide range of frequencies. Under the application of a perpendicular magnetic field, the performance of the DyQD pump was shown to be enhanced [8,12]. Recent measurements at B = 5 T and f = 340 MHz did not resolve any error in the pump current within the 15 parts per million uncertainty in the current measurement system [13]. Furthermore, parallel operation of two pumps has been demonstrated with no noticeable loss of accuracy [9]. The DyQD pump is therefore a strong candidate for the realization of a quantum standard of electrical current.
In this paper, we describe the effect of a perpendicular magnetic field on the current produced by a DyQD pump. For fields of B ≤ 3 T the pumps exhibit phenomena that are reproducible from device to device. The risers between plateaus become sharper and the plateaus become flatter, indicating enhanced quantization. Transitions in the number of electrons transported per cycle shift in gate voltage, demonstrating the ability of the field to act as an extra control parameter to tune the pump system. At fields of B > 5 T an anomalous structure is observed in the quantized current that is reminiscent of earlier single-electron capacitance spectroscopy (SECS) measurements with static quantum dots (QDs) [10,11]. The observation that the details of this structure are device-dependent suggests that they originate from local disorder which is unique to each device. The magnetic field appears to strengthen the effect of disorder on the measured pumped current.
II. TUNABLE-BARRIER ELECTRON PUMP
The DyQD pump devices are fabricated in a GaAs/AlGaAs high electron mobility transistor (HEMT) heterostructure where a two-dimensional electron gas (2DEG) exists 90 nm below the surface. A scanning electron microscope (SEM) image of a similar device to the ones tested in this work is presented in Fig. 1. Ohmic contacts were made to the source (S) and drain (D) areas of 2DEG. Transverse confinement was provided by the horizontal narrow channel, created through shallow wet chemical etching. Metallic gates were deposited on the surface of the device, perpendicular to the channel. The left-most gate will be referred to hereafter as the entrance gate, and the middle gate as the exit gate. The right-most gate was grounded and not used. A sinusoidal radio frequency (RF) voltage signal V RF was added to the static DC offset voltage V ent using a bias tee, as shown, resulting in a total instantaneous entrance gate voltage V TOT ent . When tuned correctly, a DyQD is periodically formed at the position of the red dot in Fig. 1 at the repetition frequency of V RF . A well-defined number of electrons can be captured by the DyQD from the source. As the pump cycle progresses and the potential is tilted, a controlled number of the captured electrons are ejected over the exit gate and into the drain, contributing to the measured current. The direction of electron transport is shown by the white arrow in the figure.
A plot of the numerical derivative of the pumped current in V exit and V ent ,
[(dI_pump/dV_exit)^2 + (dI_pump/dV_ent)^2]^(1/2),
is presented in the main left panel of Fig. 2. Here, V RF was set to a frequency of f = 73 MHz with an amplitude at the source of −9 dBm. All measurements in this work were performed in a dilution refrigerator with a base temperature of ∼ 50 mK. Transitions in the number of electrons transported per cycle manifest in dark lines in the plot. We will refer to this type of plot as a pump map hereafter. The blue dashed lines mark directions of line scans in V ent and V exit , seen to the right and bottom of the main panel respectively. I pump is plotted in each case.
The line scans exhibit plateaus at values corresponding to an integer number of electrons being transported per cycle of the RF signal. This is the signature of quantized charge transport.
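The pump map is the magnitude of the numerical gradient of the pumped current with respect to the two gate voltages, so plateau transitions appear as sharp lines and plateaus as flat regions. A minimal NumPy sketch of how such a map can be built; the synthetic current, transition positions and widths below are illustrative, not measured data:

```python
import numpy as np

# Synthetic pumped current (in units of e*f) on a (V_ent, V_exit) grid:
# two smoothed plateau transitions with different V_exit slopes, loosely
# mimicking the loading and ejection lines described in the text.
v_ent = np.linspace(-0.2, -0.1, 120)
v_exit = np.linspace(-0.25, -0.15, 120)
VE, VX = np.meshgrid(v_ent, v_exit, indexing="ij")

def smooth_step(x, width=0.002):
    return 0.5 * (1.0 + np.tanh(x / width))

i_pump = smooth_step(VE + 0.15 + 0.1 * VX) + smooth_step(VE + 0.21 + 0.3 * VX)

# Pump map: magnitude of the numerical derivative of I_pump in the two gates.
dI_dVe, dI_dVx = np.gradient(i_pump, v_ent, v_exit)
pump_map = np.hypot(dI_dVe, dI_dVx)

print("grid:", pump_map.shape, " peak |gradient|:", f"{pump_map.max():.0f}")
```

`np.gradient` takes the coordinate arrays as spacing arguments and returns one derivative per axis; `np.hypot` combines them into the gradient magnitude that is plotted as the pump map.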
The value of the current on the plateaus is proportional to the number of electrons n e ejected into the drain per cycle of the RF signal. Aspects of a model for the mechanism of operation of DyQD pumps have been discussed in previous works [4,15,16]. In order to correctly interpret the results presented in this paper we present a detailed description of a model to explain the features seen in the pump map of Fig. 2.
We draw the reader's attention to the four areas of the pump map along the entrance gate line scan (direction of constant V exit ). These areas are labeled by the number of electrons captured and ejected in each case, (n c , n e ). Schematic diagrams of the barriers defined by the entrance and exit gates in each area are presented in the right panel of Fig. 2. Here, E F is the Fermi energy in the source (S) and drain (D) of the channel. In order to generate pumped current, it is necessary for V exit to be negative enough at all points in the pump map for the barrier defined by the exit gate to always be opaque. The left and right schematics for each area represent the minimum and maximum barrier heights defined by the entrance gate during the pump cycle respectively.
In area (0,0), the barrier defined by V ent is too large over the whole pump cycle to allow electrons to enter the DyQD from the source. As the DyQD is never populated, we measure I pump = 0 in this region.
As V ent is made less negative the pump transitions into the (2,2) area where the entrance barrier drops enough to allow electrons to enter the DyQD from the source. When the entrance barrier subsequently rises as the pump cycle progresses we reach a point where the DyQD is isolated from the source. We refer to this point in the pump cycle as the capture point, shown by the middle schematic. In this case the DyQD captures two electrons. By changing V exit , the size of the DyQD at the capture point can be altered and hence more or fewer electrons are captured. The captured electrons are subsequently ejected into the drain as the entrance barrier rises to its highest point. This results in a measured current in the (2,2) area of I pump = 2ef .
As V ent becomes even less negative the pump switches to the (2,1) area where I pump = ef is measured for the same exit gate voltage. The size of the DyQD at the capture point is expected to be the same as the previous case, so we assume that two electrons are again captured here but only one is ejected, with the other remaining confined within the DyQD. We therefore measure I pump = ef in this area.
Finally, in area (2,0), the entrance barrier never rises high enough to push any of the captured electrons over the exit barrier and into the drain. The current measured in this region is therefore I pump = 0.
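The four regions along an entrance-gate scan reduce to simple threshold logic: does the entrance barrier ever drop low enough to load the dot, and does it subsequently rise high enough to eject some or all of the captured electrons? A toy sketch of that logic; the threshold voltages and the two-electron capture are illustrative choices, not device parameters (in the real device the thresholds also shift with V_exit through capacitive coupling):

```python
# Toy model of the four (n_c, n_e) regions along a V_ent scan at fixed V_exit.
V_LOAD = -0.19        # above this, the barrier dips low enough to load the dot
V_EJECT_ALL = -0.15   # above this, the barrier no longer ejects all electrons
V_EJECT_NONE = -0.12  # above this, no captured electron is ejected at all

def region(v_ent, n_captured=2):
    """Return (n_c, n_e) for a given static entrance-gate offset."""
    if v_ent < V_LOAD:
        return (0, 0)                        # barrier always too high: never loads
    if v_ent < V_EJECT_ALL:
        return (n_captured, n_captured)      # all captured electrons ejected
    if v_ent < V_EJECT_NONE:
        return (n_captured, n_captured - 1)  # last electron stays confined
    return (n_captured, 0)                   # barrier never high enough to eject

for v in (-0.20, -0.17, -0.13, -0.10):
    print(v, region(v))
```

The printed scan reproduces the (0,0), (2,2), (2,1), (2,0) sequence described above; the measured current in each region would be I_pump = n_e * e * f.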
The applied V ent necessary for the entrance barrier to drop low enough to allow population of the dot (going from (0,0) to (2,2) in Fig. 2) should be independent of V exit . The measured slope of the transition highlighted by the green dashed line in Fig. 2 arises from capacitive coupling between the gates.
We measure a different slope in the transitions corresponding to when n e differs from n c . The transition from n e = n c to n e = n c − 1 (going from pumping all to pumping all but one of the captured electrons) is highlighted by the red dashed line in Fig. 2. We believe this slope arises because the shape of the potential, at the stage in the pump cycle where the electrons are ejected into the drain, is controlled by both V ent and V exit .
III. PUMPING IN B ⊥
We next present data from measurements of the pumped current under the application of a perpendicular magnetic field to the device. We propose that information about the dynamics of the system may be extracted by monitoring changes in the pump map. Figure 3 shows the evolution of the pump map upon increasing B ⊥ . The pumping frequency was set to 73 MHz and the amplitude of V RF at the source was −9 dBm, as before.
The upper panel of Fig. 3 shows that the transitions between plateaus become sharper (i.e. darker) as B ⊥ is increased. This suggests an enhancement of the quantization [8,12]. The lower panel of Fig. 3 supplements this observation, where linescans in V exit for V ent = −0.17 V at each field increment are shown. It follows that the error mechanisms that give rise to deviations from perfectly quantized current at zero field must be suppressed at higher fields. A recent theoretical framework predicts the contribution of back-tunneling errors arising during the capture process [17]. In a perpendicular magnetic field we expect that the increased confinement of the captured electrons would lead to a reduction in the radial extent of the wave function [8]. We therefore expect a smaller overlap of the wave function with the leads and thus a lower probability of back-tunneling, resulting in an enhanced pumping accuracy.
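The expected field-induced shrinking of the wave function can be estimated with the standard Fock-Darwin model of a parabolically confined dot, in which the effective confinement frequency is Omega = sqrt(omega0^2 + (omega_c/2)^2) with omega_c = eB/m*. This model and the bare confinement energy used below are illustrative assumptions, not parameters taken from this paper:

```python
import math

HBAR = 1.054571817e-34            # reduced Planck constant, J s
E_CHARGE = 1.602176634e-19        # electron charge, C
M_EFF = 0.067 * 9.1093837015e-31  # GaAs effective mass, kg

def ground_state_radius(b_tesla, hbar_omega0_mev=2.0):
    """Fock-Darwin ground-state length l = sqrt(hbar / (m* Omega)), in metres.

    hbar_omega0_mev is an assumed bare confinement energy (placeholder value,
    not a measured parameter of these devices).
    """
    omega0 = hbar_omega0_mev * 1e-3 * E_CHARGE / HBAR
    omega_c = E_CHARGE * b_tesla / M_EFF
    omega_eff = math.sqrt(omega0 ** 2 + (omega_c / 2.0) ** 2)
    return math.sqrt(HBAR / (M_EFF * omega_eff))

for b in (0.0, 3.0, 9.0):
    print(f"B = {b:>3} T: l = {ground_state_radius(b) * 1e9:.1f} nm")
```

With these numbers the ground-state radius shrinks from roughly 24 nm at 0 T to below 20 nm at 3 T, consistent with the reduced lead overlap invoked above; the absolute values depend entirely on the assumed omega0.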
The lower panel of Fig. 3 shows line scans in V exit at each magnetic field for V ent = −0.17 V. The plateaus are flatter in higher fields, as discussed. They are also longer, indicating enhanced robustness of the pumping mechanism [8]. In a field of B ⊥ = 5 T these DyQD pumps, operating at f = 340 MHz, were shown to be accurate to better than 1.5 parts in 10^5 [13]. This result is important for quantum metrology and the development of a quantum standard for current.
At higher fields we observe quantized current plateaus corresponding to a larger number of electrons robustly transported per cycle. The blue and red crosses in Fig. 3 are placed at the same coordinates in each plot, and they highlight the use of the field as an effective tuning parameter. In the case of the blue crosses the field is able to turn the pumping on in an area of the pump map where our model suggests that the dot is too small to capture electrons at 0 T. As illustrated in Fig. 2, during the first part of the pump cycle the DyQD is coupled to the source, and so electrons are able to easily leave the DyQD and return to the source as the RF cycle progresses and the DyQD becomes smaller. The perpendicular magnetic field has the effect of increasing the effective confinement potential experienced by electrons in the DyQD, and so there is an enhanced probability of an electron remaining in the DyQD at the capture point. This explains the gradual increase in the pumped current from zero to ef at the point indicated by the blue cross in Fig. 3 as B ⊥ is increased from 0 T to 3 T.
A similar explanation can be applied to the pumped current at the point indicated by the red cross in Fig. 3. At 0 T, the red cross resides in the (1,1) region of the map, indicating that no electrons remain in the DyQD at the end of the pump cycle. Conversely, at a field of 3 T the red cross is in the (2,1) region. Here, we see that the increased confinement has led to a transition from capturing n c electrons to capturing n c + 1 electrons, as above, whilst also enabling the DyQD to confine a single electron at the end of the pump cycle (n c − n e = 0 at 0 T, but n c − n e = 1 at 3 T).
The green dots in the B = 0 T and 2 T pump maps of Fig. 3 serve to further illustrate this behaviour. In zero field the DyQD captures and ejects three electrons per cycle. Upon increasing the field to B = 2 T the DyQD was able to capture and eject five electrons for the same electrically defined DyQD. A full explanation of the evolution of the pump map in a magnetic field will require a more detailed computational study of electron dynamics in this device [14].
IV. PUMPING IN HIGH B ⊥
At fields of B > 3 T the pump maps begin to exhibit phenomena that are no longer reproducible from device to device. Figure 4 shows pumping maps at fields of B = 5 T and 9 T for two different samples. The data presented earlier in this work was collected with sample B. Sample A was fabricated using a different HEMT wafer and had a slightly different etched channel geometry. Sample A's pumping frequency was f = 306.7 MHz.
In each sample we observe an anomalous structure in the pumped current at high fields. For sample A the plateau corresponding to capturing two electrons and ejecting one electron (the last electron remaining confined within the DyQD at the end of the pump cycle) is no longer present at 9 T, as can be seen in the upper-right pump map of Fig. 4. The last two electrons appear to exit into the drain for the same entrance barrier height. Similar findings have been reported in SECS measurements where electrons were seen to tunnel into and out of static QDs in pairs and bunches over a range of B ⊥ [10,11]. Several theories which rely on disorder have been developed to explain this behavior [18-20] but the origin remains unclear.
We did not observe identical behavior in sample B, but we did see other plateaus disappear at similar fields as transitions in the number of ejected electrons begin to merge. The white dashed ellipses in Fig. 3 highlight regions in the pump map where this merging can be observed. Different lines are seen to merge in each device. This behavior is also reminiscent of earlier SECS measurements where the addition spectra of different QDs displayed pairing and bunching of certain energy levels. In one experiment, artificial disorder was created by tuning the coupling of two nearby QDs. The pairing/bunching behavior was shown to be strongly dependent on the inter-dot coupling, and hence upon disorder [21]. Bunching behavior in our devices generally occurs for magnetic fields of at least ∼ 5 T. In disordered systems it is expected that the field enhances disorder: the wave function shrinks, leading to an enhancement of the effects of a localization potential (for a review, see [22]).
A full plateau structure persisted up to the maximum readily achievable fields in our measurement system of 15 T. Our results are very different from those published by Kaestner et al. [12], where at 10.2 T only one n e = 1 plateau was observed with all n e > 1 plateaus being completely suppressed. For certain frequencies, RF signal amplitudes and field strengths we did see similar patterns to those of Kaestner et al. which we attribute to anomalous rectified biases that appear to be not only frequency dependent but also magnetic field dependent. The origin of rectification in our devices is not fully understood at present but is likely to be due to a complicated response of the sample holder, bond wires and ohmic contacts to the applied RF signal.
V. CONCLUSIONS
In summary, we have presented experimental observations of the effect of a perpendicular magnetic field on the quantized current produced by DyQD electron pumps.
The pumping accuracy was shown to be enhanced by the field, suggesting a suppression of the error mechanisms associated with a loss of quantization. The field was shown to be an effective extra control parameter in the tuning of the pump. As we increased the field to B = 3 T the pump could be turned on in a region of the pump map where no pumped current was generated at zero field. Our observations suggest the magnetic field is adding an extra confinement potential to the gate-defined DyQD. For B > 5 T we detected anomalous structure in the quantized current. We observed the onset of a pairing behavior in the ejected electrons reminiscent of SECS measurements, where several theories predict pair tunneling in QDs can arise from disorder unique to each individual QD. Our data suggests that local disorder, unique to each DyQD, affects the pumping more strongly for higher magnetic fields. We hope our findings will promote DyQDs as useful tools for probing few-electron dynamics in many fundamental investigations.
FIG. 1: SEM image of the device and schematic of electrical connections. The oscillating voltage signal V RF is added to the static DC voltage Vent and applied to the left (entrance) gate. Vexit is applied to the middle (exit) gate. The right gate is grounded and not used. A DyQD is periodically formed in the channel at the position of the red dot. Electrons are transported by the DyQD from source (S) to drain (D) reservoirs in the direction of the white arrow.
FIG. 2: Left: the response of the pumped current to changes in Vent and Vexit. The main panel shows the numerical derivative in Vexit and Vent of the pumped current, highlighting transitions in the number of pumped electrons. The blue dashed lines show directions of line scans in Vent and Vexit, seen to the right and bottom of the main panel respectively. Dashed black lines correspond to the expected plateau values. Right: schematic diagrams of the barriers defined by the gates in each of the four (nc, ne) regions, where nc is the number of electrons captured by the DyQD and ne is the number ejected into the drain.
FIG. 3: Upper panel: evolution of the pump map in the main left panel of Fig. 2 under the application of a perpendicular magnetic field B ⊥. Lower panel: line scans for Vent = −0.17 V at each field increment. The dashed black lines mark the expected values for each plateau.
FIG. 4: Pump maps for large B ⊥. Lower panel: continuation of the data presented in Fig. 3. The upper panel shows data collected with a different device, processed using a different HEMT wafer. V RF for sample A was set to f = 306.7 MHz at an amplitude of −9.6 dBm. Numbers in the plateaus correspond to the number of electrons transported per cycle in those regions.
We gratefully acknowledge Bernd Kaestner, Christoph Leicht and Philipp Mirovsky for useful discussions. SJW acknowledges support from the EPSRC and Toshiba Research Europe Ltd. The work of MDB was supported by the UK National Measurement Systems Quantum Metrology Programme. The work of GG was supported by contract FA9453-07-C-0207 of AFRL. CAN acknowledges support from the EPSRC QIP IRC (GR/S82176/01).
[1] M. J. T. Milton, J. M. Williams, and S. J. Bennett, Metrologia 44, 356 (2007).
[2] M. W. Keller, J. M. Martinis, N. M. Zimmerman, and A. H. Steinbach, Appl. Phys. Lett. 69, 1804 (1996).
[3] V. F. Maisi, Yu. A. Pashkin, S. Kafanov, J. S. Tsai, and J. P. Pekola, New J. Phys. 11, 113057 (2009).
[4] M. D. Blumenthal, B. Kaestner, L. Li, S. Giblin, T. J. B. M. Janssen, M. Pepper, D. Anderson, G. Jones, and D. A. Ritchie, Nature Physics 3, 343 (2007).
[5] J. M. Shilton, V. I. Talyanskii, M. Pepper, D. A. Ritchie, J. E. F. Frost, C. J. B. Ford, C. G. Smith, and G. A. C. Jones, J. Phys. Cond. Mat. 8, L531 (1996).
[6] M. Kataoka, M. R. Astley, A. L. Thorn, D. K. L. Oi, C. H. W. Barnes, C. J. B. Ford, D. Anderson, G. A. C. Jones, I. Farrer, D. A. Ritchie, and M. Pepper, Phys. Rev. Lett. 102, 156801 (2009).
[7] R. J. Schneble, M. Kataoka, C. J. B. Ford, C. H. W. Barnes, D. Anderson, G. A. C. Jones, I. Farrer, D. A. Ritchie, and M. Pepper, Appl. Phys. Lett. 89, 122104 (2006).
[8] S. J. Wright, M. D. Blumenthal, Godfrey Gumbs, A. L. Thorn, M. Pepper, T. J. B. M. Janssen, S. N. Holmes, D. Anderson, G. A. C. Jones, C. A. Nicoll, and D. A. Ritchie, Phys. Rev. B 78, 233311 (2008).
[9] S. J. Wright, M. D. Blumenthal, M. Pepper, D. Anderson, G. A. C. Jones, C. A. Nicoll, and D. A. Ritchie, Phys. Rev. B 80, 113303 (2009).
[10] R. C. Ashoori, H. L. Stormer, J. S. Weiner, L. N. Pfeiffer, S. J. Pearton, K. W. Baldwin, and K. W. West, Phys. Rev. Lett. 68, 3088 (1992).
[11] N. B. Zhitenev, R. C. Ashoori, L. N. Pfeiffer, and K. W. West, Phys. Rev. Lett. 79, 2308 (1997).
[12] B. Kaestner, C. Leicht, V. Kashcheyevs, K. Pierz, U. Siegner, and H. W. Schumacher, Appl. Phys. Lett. 94, 012106 (2009).
[13] S. P. Giblin, S. J. Wright, J. Fletcher, M. Kataoka, M. Pepper, T. J. B. M. Janssen, D. A. Ritchie, C. A. Nicoll, D. Anderson, and G. A. C. Jones, New J. Phys. 12, 073013 (2010).
[14] S. J. Wright, A. L. Thorn et al., manuscript in preparation.
[15] B. Kaestner, V. Kashcheyevs, S. Amakawa, M. D. Blumenthal, L. Li, T. J. B. M. Janssen, G. Hein, K. Pierz, T. Weimann, U. Siegner, and H. W. Schumacher, Phys. Rev. B 77, 153301 (2008).
[16] B. Kaestner, V. Kashcheyevs, G. Hein, K. Pierz, U. Siegner, and H. W. Schumacher, Appl. Phys. Lett. 92, 192106 (2008).
[17] V. Kashcheyevs and B. Kaestner, Phys. Rev. Lett. 104, 186805 (2010).
[18] Yi Wan, Gerardo Ortiz, and Philip Phillips, Phys. Rev. Lett. 75, 2879 (1995).
[19] M. E. Raikh, L. I. Glazman, and L. E. Zhukov, Phys. Rev. Lett. 77, 1354 (1996).
[20] C. M. Canali, Phys. Rev. Lett. 84, 3934 (2000).
[21] M. Brodsky, N. B. Zhitenev, R. C. Ashoori, L. N. Pfeiffer, and K. W. West, Phys. Rev. Lett. 85, 2356 (2000).
[22] M. Pepper, Journal of Non-Crystalline Solids 32, 161 (1979).
|
[] |
[
"Mathematics Subject Classification. 14J29 (primary), 14L30, 14Q99",
"Mathematics Subject Classification. 14J29 (primary), 14L30, 14Q99"
] |
[
"Giovanna Carnovale ",
"Francesco Polizzi "
] |
[] |
[] |
A smooth, projective surface S is said to be isogenous to a product if there exist two smooth curves C, F and a finite group G acting freely on C × F so that S = (C × F )/G. In this paper we classify all surfaces with pg = q = 1 which are isogenous to a product.
|
10.1515/advgeom.2009.015
|
[
"https://arxiv.org/pdf/0704.0446v2.pdf"
] | 2,329,234 |
0704.0446
|
002a3ce87c292c737208732c9dedacac1f888639
|
Mathematics Subject Classification. 14J29 (primary), 14L30, 14Q99
August 3. 2008. 2000
Giovanna Carnovale
Francesco Polizzi
2000 Mathematics Subject Classification: 14J29 (primary), 14L30, 14Q99.
Date: August 3, 2008.
Key words and phrases: surfaces of general type, isotrivial fibrations, actions of finite groups.
A smooth, projective surface S is said to be isogenous to a product if there exist two smooth curves C, F and a finite group G acting freely on C × F so that S = (C × F )/G. In this paper we classify all surfaces with pg = q = 1 which are isogenous to a product.
Introduction
The classification of smooth, complex surfaces S of general type with small birational invariants is quite a natural problem in the framework of algebraic geometry. For instance, one may want to understand the case where the Euler characteristic χ(O S ) is 1, that is, when the geometric genus p g (S) is equal to the irregularity q(S). All surfaces of general type with these invariants satisfy p g ≤ 4. In addition, if p g = q = 4 then the self-intersection K 2 S of the canonical class of S is equal to 8 and S is the product of two genus 2 curves, whereas if p g = q = 3 then K 2 S = 6 or 8 and both cases are completely described ([CCML98], [HP02], [Pir02]). On the other hand, surfaces of general type with p g = q = 0, 1, 2 are still far from being classified. We refer the reader to the survey paper [BaCaPi06] for a recent account on this topic and a comprehensive list of references. A natural way of producing interesting examples of algebraic surfaces is to construct them as quotients of known ones by the action of a finite group. For instance Godeaux constructed in [Go31] the first example of surface of general type with vanishing geometric genus taking the quotient of a general quintic surface of P 3 by a free action of Z 5 . In line with this, Beauville proposed in [Be96, p. 118] the construction of a surface of general type with p g = q = 0, K 2 S = 8 as the quotient of a product of two curves C and F by the free action of a finite group G whose order is related to the genera g(C) and g(F) by the equality |G| = (g(C) − 1)(g(F) − 1). Generalizing Beauville's example we say that a surface S is isogenous to a product if S = (C × F)/G, for C and F smooth curves and G a finite group acting freely on C × F. A systematic study of these surfaces has been carried out in [Ca00]. They are of general type if and only if both g(C) and g(F) are greater than or equal to 2 and in this case S admits a unique minimal realization where they are as small as possible.
From now on, we tacitly assume that such a realization is chosen, so that the genera of the curves and the group G are invariants of S. The action of G can be seen to respect the product structure on C × F. This means that such actions fall into two cases: the mixed one, where there exists some element in G exchanging the two factors (in this situation C and F must be isomorphic), and the unmixed one, where G acts faithfully on both C and F and diagonally on their product.

After [Be96], examples of surfaces isogenous to a product with p_g = q = 0 appeared in [Par03] and [BaCa03], and their complete classification was obtained in [BaCaGr06]. The next natural step is therefore the analysis of the case p_g = q = 1. Surfaces of general type with these invariants are the irregular ones with the lowest geometric genus, and for this reason it would be important to provide their complete description. So far, this has been obtained only in the cases K^2_S = 2, 3 ([Ca81], [CaCi91], [CaCi93], [Pol05], [CaPi06]). The goal of the present paper is to give the full list of surfaces with p_g = q = 1 that are isogenous to a product. Our work has to be seen as the sequel to the article [Pol07], which describes all unmixed cases with G abelian and some unmixed examples with G nonabelian. Apart from the complete list of the genera and groups occurring, our paper contains the first examples of surfaces of mixed type with q = 1. The mixed cases turn out to be much less frequent than the unmixed ones and, as when p_g = q = 0, they occur for only one value of the order of G. However, in contrast with what happens when p_g = q = 0, the mixed cases do not correspond to the maximum value of |G| but appear for a rather small order, namely |G| = 16. Our classification procedure involves arguments from both geometry and computational group theory. We will give here a brief account of how the result is achieved.
If S is any surface isogenous to a product and satisfying p_g = q then |G|, g(C), g(F) are related as in Beauville's example and we have K^2_S = 8. Besides, if p_g = q = 1 such surfaces are necessarily minimal and of general type (Lemma 2.1). If S = (C × F)/G is of unmixed type then the two projections π_C : C × F → C, π_F : C × F → F induce two morphisms α : S → C/G, β : S → F/G, whose smooth fibres are isomorphic to F and C, respectively. Moreover, the geometry of S is encoded in the geometry of the two coverings h : C → C/G, f : F → F/G, and the invariants of S impose strong restrictions on g(C), g(F) and |G|. Indeed we have 1 = q(S) = g(C/G) + g(F/G), so we may assume that E := C/G is an elliptic curve and F/G ≅ P^1. Then α : S → E is the Albanese morphism of S and the genus g_alb of the general Albanese fibre equals g(F). It is proven in [Pol07, Proposition 2.3] that 3 ≤ g(F) ≤ 5; in particular this allows us to control |G|. The covers f and h are determined by two suitable systems of generators for G, which we call V and W, respectively. Besides, in order to obtain a free action of G on C × F and a quotient S with the desired invariants, V and W are subject to strict conditions of combinatorial nature (Proposition 2.2). The geometry also imposes strong restrictions on the possible W and the genus of C, so the existence of V and W and the compatibility conditions can be verified through a computer search. It is worth mentioning that the classification of finite groups of automorphisms acting on curves of genus less than or equal to 5 could have also been retrieved from the existing literature ([Br90], [Ki03], [KuKi90], [KuKu90]).
If S = (C × C)/G is of mixed type then the index two subgroup G • of G corresponding to transformations that do not exchange the coordinates in C × C acts faithfully on C. The quotient E = C/G • is isomorphic to the Albanese variety of S and g alb = g(C) (Proposition 2.5).
Moreover g(C) may only be 5, 7 or 9, hence |G| is at most 64 (Proposition 2.10). The cover h : C −→ E is determined by a suitable system of generators V for G • and since the action of G on C × C is required to be free, combinatorial restrictions involving the elements of V and those of G \ G • have to be imposed (Proposition 2.6). Our classification is obtained by first listing those groups G • for which V exists and then by looking at the admissible extensions G of G • . We find that the only possibility occurring is for g(C) = 5 so that |G| is necessarily 16 (Propositions 4.1, 4.2, 4.3).
In the last part of the paper we examine the structure of the subset of the moduli space corresponding to surfaces isogenous to a product with p g = q = 1. It can be explicitly described by calculating the number of orbits of the direct product of certain mapping class groups with Aut(G) acting on the set (of pairs) of systems of generators (Proposition 5.1). In particular it is possible to determine the number of irreducible connected components and their respective dimensions, see the forthcoming article [Pe08].
Our computations were carried out by using the computer algebra program GAP4, whose database includes all groups of order less than 2000, with the exception of 1024 (see [GAP4]). For the reader's convenience we included the scripts in the Appendix. Now let us state the main result of this paper.
Main Theorem. Let S = (C × F )/G be a surface with p g = q = 1, isogenous to a product of curves. Then S is minimal of general type and the occurrences for g(F ), g(C), G, the dimension D of the moduli space and the number N of its connected components are precisely those in the table below. Here IdSmallGroup(G) denotes the label of the group G in the GAP4 database of small groups. The calculation of N is due to Penegini and Rollenske, see [Pe08], except for the cases marked with ( * ), which were already studied in [Pol07]. The cases marked with ( * * ) also appeared in [Pol07], but the computation of N was missing. This work is organized as follows. In Section 1 we collect the basic facts about surfaces isogenous to a product, following the treatment given by Catanese in [Ca00] and we fix the algebraic setup. In Section 2 we apply the structure theorems of Catanese to the case p g = q = 1 and this leads to Propositions 2.2 and 2.6, that provide the translation of our classification problem from geometry to algebra. All these results are used in Sections 3 and 4, which are the core of the paper and give the complete lists of the occurring groups and genera in the unmixed and mixed cases, respectively. Finally, Section 5 is devoted to the description of the moduli spaces.
g(F) = g_alb   g(C)   G            IdSmallGroup(G)   Type           D   N
3              3      (Z_2)^2      G(4, 2)           unmixed (*)    5   1
3              5      (Z_2)^3      G(8, 5)           unmixed (*)    4   1
3              5      Z_2 × Z_4    G(8, 2)           unmixed (*)    3   2
3              9      Z_2 × Z_8    G(16, 5)          unmixed (*)    2   1
3              5      D_4          G(8, 3)           unmixed        3   1
3              7      D_6          G(12, 4)          unmixed (**)   3   1
3              9      Z_2 × D_4    G(
Notations and conventions. All varieties, morphisms, etc. in this article are defined over C. By "surface" we mean a projective, non-singular surface S, and for such a surface K S denotes the canonical class, p g (S) = h 0 (S, K S ) is the geometric genus, q(S) = h 1 (S, K S ) is the irregularity and χ(O S ) = 1 − q(S) + p g (S) is the Euler characteristic. Throughout the paper we use the following notation for groups:
• Z n : cyclic group of order n.
• D_{p,q,r} = Z_p ⋉ Z_q = ⟨x, y | x^p = y^q = 1, xyx^{−1} = y^r⟩: split metacyclic group of order pq.
The group D_{2,n,−1} is the dihedral group of order 2n and it will be denoted by D_n.
• S_n, A_n: symmetric, alternating group on n symbols.

Acknowledgements. The authors wish to thank M. Penegini and S. Rollenske for giving them a preliminary version of [Pe08] and for kindly allowing them to include their results in the Main Theorem. Moreover they are indebted to the referee for several valuable comments and suggestions to improve this article.
• If x, y ∈ G, their commutator is defined as [x, y] = xyx^{−1}y^{−1}.
• If x ∈ G we denote by Int_x the inner automorphism of G defined as Int_x(g) = xgx^{−1}.
• IdSmallGroup(G) indicates the label of the group G in the GAP4 database of small groups.
Basic on surfaces isogenous to a product
In this section we collect for the reader's convenience some basic results on groups acting on curves and surfaces isogenous to a product, referring to [Ca00] for further details.
Definition 1.1. A complex surface S of general type is said to be isogenous to a product if there exist two smooth curves C, F and a finite group G acting freely on C ×F so that S = (C ×F )/G. There are two cases: the unmixed one, where G acts diagonally, and the mixed one, where there exist elements of G exchanging the two factors (and then C, F are isomorphic).
In both cases, since the action of G on C × F is free, we have
$$K^2_S = \frac{K^2_{C\times F}}{|G|} = \frac{8(g(C)-1)(g(F)-1)}{|G|}\,,\qquad \chi(\mathcal{O}_S) = \frac{\chi(\mathcal{O}_{C\times F})}{|G|} = \frac{(g(C)-1)(g(F)-1)}{|G|}\,, \tag{1}$$
hence K 2 S = 8χ(O S ). Let C, F be curves of genus ≥ 2. Then the inclusion Aut(C × F ) ⊃ Aut(C) × Aut(F ) is an equality if C and F are not isomorphic, whereas Aut(C × C) = Z 2 ⋉ (Aut(C) × Aut(C)), the Z 2 being generated by the involution exchanging the two coordinates. If S = (C × F )/G is a surface isogenous to a product, we will always consider its unique minimal realization. This means that
• in the unmixed case, we have G ⊂ Aut(C) and G ⊂ Aut(F ) (i.e. G acts faithfully on both C and F );
• in the mixed case, where C ∼ = F , we have G • ⊂ Aut(C), for G • := G∩(Aut(C)×Aut(C)). (See [Ca00, Corollary 3.9 and Remark 3.10]).
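For concreteness, the arithmetic in (1) is easy to replay; the short sketch below (plain Python, nothing specific to this paper beyond the formulas above; the helper name `invariants` is ours) confirms K^2_S = 8χ(O_S) and recovers χ = 1 whenever |G| = (g(C) − 1)(g(F) − 1):

```python
from fractions import Fraction

def invariants(gC, gF, order_G):
    """(K_S^2, chi(O_S)) for S = (C x F)/G with G acting freely, as in (1)."""
    K2 = Fraction(8 * (gC - 1) * (gF - 1), order_G)
    chi = Fraction((gC - 1) * (gF - 1), order_G)
    return K2, chi

# Beauville's example: g(C) = g(F) = 6, |G| = 25, so chi = 1 and K^2 = 8.
assert invariants(6, 6, 25) == (8, 1)

# When |G| = (g(C) - 1)(g(F) - 1), as in the p_g = q = 1 case treated here:
for gC, gF in [(3, 3), (5, 3), (13, 3)]:
    assert invariants(gC, gF, (gC - 1) * (gF - 1)) == (8, 1)
```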
Definition 1.2. Let G be a finite group and let g' ≥ 0 and m_r ≥ m_{r−1} ≥ . . . ≥ m_1 ≥ 2 be integers. A generating vector for G of type (g' | m_1, . . . , m_r) is a (2g' + r)-ple of elements
V = {g_1, . . . , g_r; h_1, . . . , h_{2g'}} such that: the set V generates G; |g_i| = m_i; and $g_1 g_2 \cdots g_r \prod_{i=1}^{g'}[h_i, h_{i+g'}] = 1$. If such a V exists, then G is said to be (g' | m_1, . . . , m_r)-generated.
For convenience we make abbreviations such as (4 | 2^3, 3^2) for (4 | 2, 2, 2, 3, 3) when we write down the type of the generating vector V. By Riemann's existence theorem a finite group G acts as a group of automorphisms of some compact Riemann surface X of genus g with quotient a Riemann surface Y of genus g' if and only if there exist integers m_r ≥ m_{r−1} ≥ . . . ≥ m_1 ≥ 2 such that G is (g' | m_1, . . . , m_r)-generated and g, g', |G| and the m_i are related by the Riemann-Hurwitz formula. Moreover, if V = {g_1, . . . , g_r; h_1, . . . , h_{2g'}} is a generating vector for G, the subgroups ⟨g_i⟩ and their conjugates are precisely the nontrivial stabilizers of the G-action ([Br90, Section 2], [Bre00, Chapter 3], [H71]). The description of surfaces isogenous to a product can therefore be reduced to finding suitable generating vectors. Requiring that S has given invariants p_g and q imposes numerical restrictions on the order of the group G and the genus of the curves C and F. Our goal is to classify all surfaces with p_g = q = 1 isogenous to a product. The aim of the next section is to translate this classification problem from geometry to algebra.
2. The case p g = q = 1. Building data
Lemma 2.1. Let S = (C × F)/G be a surface isogenous to a product with p_g = q = 1. Then
(i) K^2_S = 8;
(ii) |G| = (g(C) − 1)(g(F) − 1);
(iii) S is a minimal surface of general type.
Proof. Claims (i) and (ii) follow from (1). Now let us consider (iii). Since C × F is minimal and the cover C × F → S is étale, S is minimal as well. Moreover (ii) implies either g(C) = g(F) = 0 or g(C) ≥ 2, g(F) ≥ 2. The first case is impossible, otherwise S = P^1 × P^1 and p_g = q = 0; thus the second case occurs, hence S is of general type.
2.1. Unmixed case. If S = (C × F )/G is a surface with p g = q = 1, isogenous to an unmixed product, then g(C) ≥ 3, g(F ) ≥ 3 and up to exchanging F and C one may assume F/G ∼ = P 1 and C/G ∼ = E, where E is an elliptic curve. Moreover α : S −→ C/G is the Albanese morphism of S and g alb = g(F ), see [Pol07, Proposition 2.2]. This leads to
Proposition 2.2. ([Pol07, Proposition 3.1]) Let G be a finite group which is both (0 | m 1 , . . . , m r )
and (1 | n 1 , . . . , n s )-generated, with generating vectors V = {g 1 , . . . , g r } and W = {ℓ 1 , . . . , ℓ s ; h 1 , h 2 }, respectively. Let g(F ), g(C) be the positive integers defined by the Riemann-Hurwitz relations
$$2g(F)-2 = |G|\Bigl(-2+\sum_{i=1}^{r}\Bigl(1-\frac{1}{m_i}\Bigr)\Bigr),\qquad 2g(C)-2 = |G|\sum_{j=1}^{s}\Bigl(1-\frac{1}{n_j}\Bigr). \tag{2}$$
Assume moreover that g(C) ≥ 3, g(F) ≥ 3, |G| = (g(C) − 1)(g(F) − 1) and
$$(U)\qquad \Bigl(\bigcup_{\sigma\in G}\bigcup_{i=1}^{r}\sigma\langle g_i\rangle\sigma^{-1}\Bigr)\cap\Bigl(\bigcup_{\sigma\in G}\bigcup_{j=1}^{s}\sigma\langle \ell_j\rangle\sigma^{-1}\Bigr)=\{1_G\}.$$
Then there is a free, diagonal action of G on C × F such that the quotient S = (C × F )/G is a minimal surface of general type with p g = q = 1, K 2 S = 8. Conversely, every surface with p g = q = 1, isogenous to an unmixed product, arises in this way.
Here, condition (U ) ensures that the G-action on C × F is free. Set m := (m 1 , . . . , m r ) and n := (n 1 , . . . , n s ); if S = (C × F )/G is a surface with p g = q = 1 which is constructed by using the recipe in Proposition 2.2, it will be called an unmixed surface of type (G, m, n).
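As a small sanity check of the Riemann-Hurwitz relations (2), the sketch below replays the arithmetic for the type (D_{2,12,5}, (2, 4, 12), (2^2)) occurring later in the classification (helper names are ours; all rational arithmetic is exact via `Fraction`):

```python
from fractions import Fraction

def genus_F(order_G, m):
    """g(F) from the first relation in (2) (base curve P^1)."""
    s = Fraction(-2) + sum(1 - Fraction(1, mi) for mi in m)
    return (order_G * s + 2) / 2

def genus_C(order_G, n):
    """g(C) from the second relation in (2) (base an elliptic curve)."""
    s = sum(1 - Fraction(1, nj) for nj in n)
    return (order_G * s + 2) / 2

# Unmixed type (G, m, n) = (D_{2,12,5}, (2, 4, 12), (2, 2)) with |G| = 24:
gF, gC = genus_F(24, (2, 4, 12)), genus_C(24, (2, 2))
assert (gF, gC) == (3, 13)
# consistency with |G| = (g(C) - 1)(g(F) - 1):
assert (gC - 1) * (gF - 1) == 24
```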
$$\mathbb{C} = H^0(\Omega^1_S) = \bigl(H^0(\Omega^1_C)\oplus H^0(\Omega^1_C)\bigr)^G = \bigl(H^0(\Omega^1_C)^{G^\bullet}\oplus H^0(\Omega^1_C)^{G^\bullet}\bigr)^{G/G^\bullet} = \bigl(H^0(\Omega^1_E)\oplus H^0(\Omega^1_E)\bigr)^{G/G^\bullet}.$$
Since S is of mixed type, the quotient Z 2 = G/G • exchanges the last two summands, whence h 0 (Ω 1 E ) = 1. Thus E is an elliptic curve and there is a commutative diagram
$$\begin{array}{ccc}
C\times C & \stackrel{\rho}{\longrightarrow} & E\times E\\
\downarrow{\scriptstyle\pi} & & \downarrow{\scriptstyle\varepsilon}\\
S & \stackrel{\bar\rho}{\longrightarrow} & E^{(2)}\\
 & \searrow{\scriptstyle\alpha} & \downarrow{\scriptstyle\hat\alpha}\\
 & & E
\end{array} \tag{3}$$
showing that the Albanese morphism α of S factors through the Abel-Jacobi map $\hat\alpha$ of the double symmetric product E^{(2)} of E.
By Lemma 2.1 we have |G| = (g(C) − 1)^2. In this case [Ca00, Proposition 3.16] becomes

Proposition 2.6. Assume that G^• is a (1 | n_1, . . . , n_s)-generated finite group with generating vector V = {ℓ_1, . . . , ℓ_s; h_1, h_2} and that there is a nonsplit extension
(4) 1 −→ G • −→ G −→ Z 2 −→ 1
which gives an involution [ϕ] in Out(G^•). Let g(C) ∈ N be defined by the Riemann-Hurwitz relation $2g(C) - 2 = |G^\bullet|\sum_{j=1}^{s}\bigl(1-\frac{1}{n_j}\bigr)$. Assume, in addition, that |G| = (g(C) − 1)^2 and that
(M1) for all g ∈ G \ G^• we have {ℓ_1, . . . , ℓ_s} ∩ {gℓ_1g^{−1}, . . . , gℓ_sg^{−1}} = ∅;
(M2) for all g ∈ G \ G^• we have
$$g^2 \notin \bigcup_{j=1}^{s}\bigcup_{\sigma\in G^\bullet}\sigma\langle\ell_j\rangle\sigma^{-1}.$$
Then there is a free, mixed action of G on C × C such that the quotient S = (C × C)/G is a minimal surface of general type with p g = q = 1, K 2 S = 8. Conversely, every surface S with p g = q = 1, isogenous to a mixed product, arises in this way.
Here, conditions (M 1) and (M 2) ensure that the G-action on C × C is free.
Remark 2.7. The surface S is not covered by elliptic curves because it is of general type (Lemma 2.1), so the map C −→ C/G • = E is ramified. Therefore condition (M 1) implies that G is not abelian.
Remark 2.8. The exact sequence (4) is non split if and only if the number of elements of order 2 in G equals the number of elements of order 2 in G • .
Proposition 2.9. Let S = (C × C)/G be a surface with p g = q = 1, isogenous to a mixed product. Then g alb = g(C).
Proof. Let us look at diagram (3). The Abel-Jacobi map $\hat\alpha$ gives E^{(2)} the structure of a P^1-bundle over E ([CaCi93]); let f be the generic fibre of this bundle and F^* := ρ^*ε^*(f). If F_alb is the generic Albanese fibre of S we have F_alb = π(F^*). Let n = (n_1, . . . , n_s) be such that G^• is (1 | n_1, . . . , n_s)-generated and $2g(C) - 2 = |G^\bullet|\sum_{j=1}^{s}\bigl(1-\frac{1}{n_j}\bigr)$. The (G^• × G^•)-cover ρ is branched exactly along the union of s "horizontal" copies of E and s "vertical" copies of E; moreover for each i there are one horizontal copy and one vertical copy whose branching number is n_i. Since ε^*(f) is an elliptic curve that intersects all these copies of E transversally in one point, by the Riemann-Hurwitz formula applied to F^* → ε^*(f) we obtain
$$2g(F^*) - 2 = |G^\bullet|^2\cdot\sum_{j=1}^{s}2\Bigl(1-\frac{1}{n_j}\Bigr).$$
On the other hand the G-cover π is étale, so we have
$$2g(F_{alb}) - 2 = \frac{1}{|G|}\bigl(2g(F^*) - 2\bigr) = |G^\bullet|\sum_{j=1}^{s}\Bigl(1-\frac{1}{n_j}\Bigr) = 2g(C) - 2,$$
whence g alb = g(C).
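The numerical bookkeeping in this proof can be replayed for the one mixed case that actually occurs (g(C) = 5, n = (2^2), |G^•| = 8, |G| = 16; see Section 4). A minimal sketch, with all intermediate genera computed exactly:

```python
from fractions import Fraction

# Mixed case with g(C) = 5: n = (2, 2), |G^o| = 8, |G| = 16.
n = (2, 2)
ordGo, ordG = 8, 16
theta = sum(1 - Fraction(1, nj) for nj in n)        # sum (1 - 1/n_j) = 1

gC = (ordGo * theta + 2) / 2                        # from 2g(C) - 2 = |G^o| * theta
g_Fstar = (ordGo ** 2 * 2 * theta + 2) / 2          # 2g(F*) - 2 = |G^o|^2 * sum 2(1 - 1/n_j)
g_alb = ((2 * g_Fstar - 2) / ordG + 2) / 2          # etale G-cover of degree |G|

assert (gC, g_Fstar, g_alb) == (5, 65, 5)           # so g_alb = g(C), as claimed
```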
If S = (C × C)/G is a surface with p_g = q = 1 which is constructed by using the recipe of Proposition 2.6, it will be called a mixed surface of type (G, n). The analogue of Proposition 2.3 in the mixed case is

Proposition 2.10. Let S = (C × C)/G be a mixed surface of type (G, n). Then there are at most the following possibilities:
• g(C) = 5, n = (2^2), |G| = 16;
• g(C) = 7, n = (3), |G| = 36;
• g(C) = 9, n = (2), |G| = 64.
Proof. By Proposition 2.6 we have $2g(C) - 2 = |G^\bullet|\sum_{j=1}^{s}\bigl(1-\frac{1}{n_j}\bigr)$ and $|G^\bullet| = \frac{1}{2}(g(C)-1)^2$, so g(C) must be odd and we obtain $4 = (g(C)-1)\sum_{j=1}^{s}\bigl(1-\frac{1}{n_j}\bigr)$. Therefore $4 \geq \frac{1}{2}(g(C)-1)$ and the only possibilities are g(C) = 3, 5, 7, 9. The case g(C) = 3 is ruled out because G cannot be abelian by Remark 2.7. If g(C) = 5 then $\sum_{j=1}^{s}\bigl(1-\frac{1}{n_j}\bigr) = 1$, so n = (2^2) and |G| = 16. If g(C) = 7 then $\sum_{j=1}^{s}\bigl(1-\frac{1}{n_j}\bigr) = \frac{2}{3}$, so n = (3) and |G| = 36. If g(C) = 9 then $\sum_{j=1}^{s}\bigl(1-\frac{1}{n_j}\bigr) = \frac{1}{2}$, so n = (2) and |G| = 64.
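The case analysis at the end of this proof is a finite check, and it can be brute-forced. In the sketch below (the helper name `types_for` and the bound n_j < 20 are our choices; the bound is harmless because the target sums 1, 2/3, 1/2 only admit solutions with small n_j), each surviving value of g(C) is shown to force n uniquely:

```python
from fractions import Fraction
from itertools import combinations_with_replacement

def types_for(gC, bound=20):
    """All nondecreasing tuples n with sum(1 - 1/n_j) = 4/(g(C)-1), n_j >= 2."""
    target = Fraction(4, gC - 1)
    found = []
    max_len = int(target * 2)                 # each summand is at least 1/2
    for s in range(1, max_len + 1):
        for n in combinations_with_replacement(range(2, bound), s):
            if sum(1 - Fraction(1, nj) for nj in n) == target:
                found.append(n)
    return found

# g(C) = 3 is excluded (G would be abelian); the remaining cases are rigid:
assert types_for(5) == [(2, 2)]
assert types_for(7) == [(3,)]
assert types_for(9) == [(2,)]
```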
We will see in Section 4 that only the case g(C) = 5 actually occurs.
The unmixed case
The classification of surfaces of general type with p_g = q = 1 isogenous to an unmixed product is carried out in [Pol07] when the group G is abelian. Therefore in this section we assume that G is nonabelian.

$(3^2, 5)_{15}$, $(2, 4, 12)_{12}$, $(2, 6^2)_{12}$, $(3^2, 6)_{12}$, $(3, 4^2)_{12}$, $(2, 5, 10)_{10}$, $(3^2, 9)_{9}$, $(2, 8^2)_{8}$, $(4^3)_{8}$, $(3, 6^2)_{6}$, $(5^3)_{5}$, $(2^3, 3)_{12}$, $(2^3, 4)_{8}$, $(2^3, 6)_{6}$, $(2^2, 3^2)_{6}$, $(2^2, 4^2)_{4}$, $(3^4)_{3}$, $(2^5)_{4}$, $(2^6)_{2}$
Proof. This follows by combining [BaCaGr06, Proposition 1.4] with Lemma 2.4.

By abuse of notation, we write m ∈ T instead of $m_{\alpha(m)} \in T$.
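Assuming $\Theta(m) := -2 + \sum_{i=1}^{r}(1 - \frac{1}{m_i})$ and $\alpha(m) := 2/\Theta(m)$, as in [BaCaGr06], the subscripts in T can be spot-checked, together with the constraints α(m) = g(C) − 1 ∈ N and m_i | α(m). A minimal sketch (the helper name `alpha` is ours):

```python
from fractions import Fraction

def alpha(m):
    """alpha(m) = 2/Theta(m), with Theta(m) = -2 + sum(1 - 1/m_i)."""
    theta = Fraction(-2) + sum(1 - Fraction(1, mi) for mi in m)
    assert theta > 0
    return Fraction(2) / theta

# spot-check some entries m_alpha(m) of the set T in Proposition 3.1
assert alpha((2, 3, 7)) == 84
assert alpha((2, 4, 12)) == 12
assert alpha((3, 3, 5)) == 15
assert alpha((2,) * 6) == 2
# integrality and divisibility: alpha(m) = g(C) - 1 in N and m_i | alpha(m)
for m in [(2, 3, 7), (2, 4, 12), (3, 3, 5)]:
    a = alpha(m)
    assert a.denominator == 1 and all(a % mi == 0 for mi in m)
```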
Now we analyze the three cases in Proposition 2.3 separately, according to the value of g(F ). Note that if g(F ) = 3, 4, 5 then |Aut(F )| ≤ 168, 120, 192, respectively ([Bre00, p. 91]).
Proposition 3.2. If g(F ) = 3 we have precisely the following possibilities.
G                    IdSmallGroup(G)   m
D_4                  G(8, 3)           (2^2, 4^2)
D_6                  G(12, 4)          (2^3, 6)
Z_2 × D_4            G(16, 11)         (2^3, 4)
D_{2,12,5}           G(24, 5)          (2, 4, 12)
Z_2 × A_4            G(24, 13)         (2, 6^2)
S_4                  G(24, 12)         (3, 4^2)
Z_2 ⋉ (Z_2 × Z_8)    G(32, 9)          (2, 4, 8)
Z_2 × S_4            G(48, 48)         (2, 4, 6)
Proof. Since n = (2^2) it follows that G is (1 | 2^2)-generated, and by the second relation in (2) we have |G| = 2(g(C) − 1). So we must describe all unmixed surfaces of type (G, m, n) with m ∈ T, n = (2^2) and |G| = 2α(m). By a computer search through the r-tuples in Proposition 3.1 we can therefore list all possibilities, proving our statement. See GAP4 script 1 in the Appendix for how this procedure applies to an explicit example.
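To illustrate the kind of search the proof delegates to GAP4, here is a small pure-Python analogue (a sketch; we realize D_4 as the symmetries of a square acting on {0, 1, 2, 3}, and all helper names are ours). It confirms that D_4 admits generating vectors of types (0 | 2^2, 4^2) and (1 | 2^2), as required for the row (D_4, G(8, 3)) above:

```python
from itertools import product

E = (0, 1, 2, 3)                       # identity permutation

def compose(p, q):                     # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(4))

def order(p):
    k, q = 1, p
    while q != E:
        q, k = compose(q, p), k + 1
    return k

def generated(gens):
    seen, frontier = {E}, [E]
    while frontier:
        new = []
        for a in frontier:
            for g in gens:
                b = compose(g, a)
                if b not in seen:
                    seen.add(b)
                    new.append(b)
        frontier = new
    return seen

r, s = (1, 2, 3, 0), (3, 2, 1, 0)      # rotation and a reflection
D4 = sorted(generated([r, s]))
assert len(D4) == 8
inv = {p: next(q for q in D4 if compose(p, q) == E) for p in D4}

def comm(a, b):                        # [a, b] = a b a^-1 b^-1
    return compose(compose(a, b), compose(inv[a], inv[b]))

# type (0 | 2, 2, 4, 4): orders (2, 2, 4, 4), product 1, generating D_4
found_0 = any(
    tuple(map(order, t)) == (2, 2, 4, 4)
    and compose(compose(t[0], t[1]), compose(t[2], t[3])) == E
    and len(generated(t)) == 8
    for t in product(D4, repeat=4))

# type (1 | 2, 2): l1 l2 [h1, h2] = 1 with |l1| = |l2| = 2, generating D_4
found_1 = any(
    order(l1) == order(l2) == 2
    and compose(compose(l1, l2), comm(h1, h2)) == E
    and len(generated((l1, l2, h1, h2))) == 8
    for l1, l2, h1, h2 in product(D4, repeat=4))

assert found_0 and found_1
```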
Proposition 3.3. If g(F) = 4 we have precisely the following possibilities.

G                IdSmallGroup(G)   m
S_3              G(6, 1)           (2^6)
D_6              G(12, 4)          (2^5)
Z_3 × S_3        G(18, 3)          (2^2, 3^2)
Z_3 × S_3        G(18, 3)          (3, 6^2)
S_4              G(24, 12)         (2^3, 4)
S_3 × S_3        G(36, 10)         (2, 6^2)
Z_6 × S_3        G(36, 12)         (2, 6^2)
Z_4 ⋉ (Z_3)^2    G(36, 9)          (3, 4^2)
A_5              G(60, 5)          (2, 5^2)
Z_3 × S_4        G(72, 42)         (2, 3, 12)
S_5              G(120, 34)        (2, 4, 5)
Proof. Since n = (3) it follows that G is (1 | 3)-generated, and by the second relation in (2) we have |G| = 3(g(C) − 1). Therefore our statement can be proven by a computer search through all unmixed surfaces of type (G, m, n) with m ∈ T, n = (3), |G| = 3α(m) and α(m) ≤ 40.
Proposition 3.4. If g(F ) = 5 we have precisely the following possibilities.
G                  IdSmallGroup(G)   m
D_4                G(8, 3)           (2^6)
A_4                G(12, 3)          (3^4)
Z_4 ⋉ (Z_2)^2      G(16, 3)          (2^2, 4^2)
Z_2 × A_4          G(24, 13)         (2^2, 3^2)
Z_2 × A_4          G(24, 13)         (3, 6^2)
Z_8 ⋉ (Z_2)^2      G(32, 5)          (2, 8^2)
Z_2 ⋉ D_{2,8,5}    G(32, 7)          (2, 8^2)
Z_4 ⋉ (Z_4 × Z_2)  G(32, 2)          (4^3)
Z_4 ⋉ (Z_2)^3      G(32, 6)          (4^3)
(Z_2)^2 × A_4      G(48, 49)         (2, 6^2)
Z_4 ⋉ (Z_2)^4      G(64, 32)         (2, 4, 8)
Z_5 ⋉ (Z_2)^4      G(80, 49)         (2, 5^2)
Proof. Since n = (2), it follows that G is (1 | 2)-generated and by the second relation in (2) we have |G| = 4(g(C) − 1). Therefore our statement can be proven searching by computer calculation all unmixed surfaces of type (G, m, n) with m ∈ T, n = (2), |G| = 4α(m) and α(m) ≤ 48.
The mixed case
In this section we use Proposition 2.6 in order to classify the surfaces with p g = q = 1 isogenous to a mixed product. By Proposition 2.10 we have g(C) = 5, 7 or 9. Let us consider the three cases separately.
4.1. The case g(C) = 5, |G| = 16.
Proposition 4.1. If g(C) = 5, |G| = 16 we have precisely the following possibilities.
G^•          IdSmallGroup(G^•)   G                IdSmallGroup(G)
D_4          G(8, 3)             D_{2,8,3}        G(16, 8)
Z_2 × Z_4    G(8, 2)             D_{2,8,5}        G(16, 6)
(Z_2)^3      G(8, 5)             Z_4 ⋉ (Z_2)^2    G(16, 3)
Proof. In this case n = (2 2 ), so our first task is to find all nonsplit sequences of type (4) for which G • is a (1 | 2 2 )-generated group of order 8. The three abelian groups of order 8 and D 4 are (1 | 2 2 )-generated whereas the quaternion group Q 8 is not.
Since Z_8 has only one element ℓ of order 2, condition (M1) in Proposition 2.6 cannot be satisfied for any choice of V. By Remark 2.7 we are left to analyze the possible embeddings of Z_2 × Z_4, D_4 and (Z_2)^3 in nonabelian groups of order 16. The groups Z_2 × Z_4, D_4 and (Z_2)^3 have 3, 5 and 7 elements of order 2, respectively. Therefore, if n_2 denotes the number of elements of order 2 in G, by Remark 2.8 we must consider only those groups G of order 16 with n_2 ∈ {3, 5, 7}. The nonabelian groups of order 16 with n_2 = 3 are D_{2,8,5}, Z_2 × Q_8 and D_{4,4,−1}, and they all contain a copy of Z_2 × Z_4. The only nonabelian group of order 16 with n_2 = 5 is D_{2,8,3}, and it contains a subgroup isomorphic to D_4. The nonabelian groups of order 16 with n_2 = 7 are Z_4 ⋉ (Z_2)^2 = G(16, 3) and Z_2 ⋉ Q_8, and only the former contains a subgroup isomorphic to (Z_2)^3 (cf. [Wi05]). Summarizing, we are left with the following cases:
G^•          G
D_4          D_{2,8,3}
Z_2 × Z_4    D_{2,8,5}
Z_2 × Z_4    Z_2 × Q_8
Z_2 × Z_4    D_{4,4,−1}
(Z_2)^3      Z_4 ⋉ (Z_2)^2
Let us analyze them separately.
• G^• = D_4, G = D_{2,8,3} = ⟨x, y | x^2 = y^8 = 1, xyx^{−1} = y^3⟩.
We consider the subgroup G^• := ⟨x, y^2⟩ ≅ D_4. Set ℓ_1 = ℓ_2 = x and h_1 = h_2 = y^2. Condition (M1) holds because C_G(x) = ⟨x, y^4⟩ ⊂ G^•. Condition (M2) is satisfied because the conjugacy class of x in G^• is contained in the coset x⟨y^2⟩, while for every g ∈ yG^• we have g^2 ∈ ⟨y⟩. Therefore this case occurs by Proposition 2.6.
• G^• = Z_2 × Z_4, G = D_{2,8,5} = ⟨x, y | x^2 = y^8 = 1, xyx^{−1} = y^5⟩.
We consider the subgroup G^• := ⟨x, y^2⟩ ≅ Z_2 × Z_4. Set ℓ_1 = ℓ_2 = x and h_1 = h_2 = y^2. Conditions (M1) and (M2) are verified as in the previous case, so this possibility occurs.
• G^• = Z_2 × Z_4, G = Z_2 × Q_8 and G^• = Z_2 × Z_4, G = D_{4,4,−1}.
All elements of order 2 in G are central, so condition (M1) cannot be satisfied and these cases do not occur.
• G^• = (Z_2)^3, G = Z_4 ⋉ (Z_2)^2 = ⟨x, y, z | x^4 = y^2 = z^2 = 1, xyx^{−1} = yz, [x, z] = [y, z] = 1⟩.
We consider the subgroup G^• := ⟨y, z, x^2⟩ ≅ (Z_2)^3. Set ℓ_1 = ℓ_2 = y and h_1 = z, h_2 = x^2. Condition (M1) holds because G^• is abelian and [x, y] ≠ 1. Condition (M2) is satisfied because if g ∈ xG^• then g^2 ∈ ⟨z, x^2⟩. Therefore this case occurs.
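The verifications of (M1) and (M2) in the first case above can be double-checked in a few lines (a sketch; we encode D_{2,8,3} as pairs (a, b) ↦ x^a y^b, with the multiplication forced by xyx^{−1} = y^3 and 3^2 ≡ 1 (mod 8); helper names are ours):

```python
# D_{2,8,3}: x^a y^b encoded as (a, b), a mod 2, b mod 8, with x y x^-1 = y^3.
def mul(g, h):
    (a1, b1), (a2, b2) = g, h
    return ((a1 + a2) % 2, (b1 * pow(3, a2, 8) + b2) % 8)

G = [(a, b) for a in range(2) for b in range(8)]
Go = [(a, b) for (a, b) in G if b % 2 == 0]        # G^o = <x, y^2> of index 2

def inv(g):
    return next(h for h in G if mul(g, h) == (0, 0))

def conj(s, t):                                     # s t s^-1
    return mul(mul(s, t), inv(s))

x = (1, 0)                                          # l_1 = l_2 = x
# (M1): no g outside G^o fixes x under conjugation
assert all(conj(g, x) != x for g in G if g not in Go)
# (M2): for g outside G^o, g^2 misses every G^o-conjugate of <x> = {1, x}
conjugates = {conj(s, t) for s in Go for t in [(0, 0), x]}
assert all(mul(g, g) not in conjugates for g in G if g not in Go)
```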
4.2. The case g(C) = 7, |G| = 36.
Proposition 4.2. The case g(C) = 7, |G| = 36 does not occur.
Proof. In this case n = (3), so G^• is a group of order 18 which is (1 | 3)-generated. There are five groups of order 18 up to isomorphism. By computer search or direct calculation we see that the only one which is (1 | 3)-generated is Z_3 × S_3 = G(18, 3). Thus G would fit into a short exact sequence
(5) 1 −→ Z 3 × S 3 −→ G −→ Z 2 −→ 1.
A computer search shows that the only groups of order 36 containing a subgroup isomorphic to Z 3 × S 3 are G(36, 10) = S 3 × S 3 and G(36, 12) = Z 6 × S 3 (see GAP4 script 2 in the Appendix). They contain 15 and 7 elements of order 2, respectively. On the other hand Z 3 × S 3 contains 3 elements of order 2, so by Remark 2.8 all possible extensions of the form (5) are split and this case cannot occur.
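Remark 2.8 reduces the nonsplit question to counting involutions, and the counts used in this proof are easy to reproduce (a sketch; S_3 is modelled as permutations of {0, 1, 2}, and in a direct product A × B with t(·) involutions one has t(A × B) = (t(A) + 1)(t(B) + 1) − 1):

```python
from itertools import permutations

E3 = (0, 1, 2)

def involutions_S3():
    return sum(1 for p in permutations(range(3))
               if p != E3 and tuple(p[p[i]] for i in range(3)) == E3)

def involutions_Zn(n):
    return sum(1 for k in range(1, n) if (2 * k) % n == 0)

def involutions_product(tA, tB):
    # (a, b) has order 2 iff a^2 = b^2 = 1 and (a, b) is not the identity
    return (tA + 1) * (tB + 1) - 1

t = involutions_S3()
assert t == 3                                          # so Z_3 x S_3 also has 3
assert involutions_product(involutions_Zn(3), t) == 3  # Z_3 x S_3
assert involutions_product(t, t) == 15                 # S_3 x S_3 = G(36, 10)
assert involutions_product(involutions_Zn(6), t) == 7  # Z_6 x S_3 = G(36, 12)
# 3 differs from both 15 and 7, so by Remark 2.8 every extension (5) splits.
```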
4.3. The case g(C) = 9, |G| = 64.
Proposition 4.3. The case g(C) = 9, |G| = 64 does not occur.
The proof will be the consequence of the results below. First notice that, since n = (2), the group G • must be (1 | 2)-generated.
Computational Fact 4.4. There exist precisely 8 groups of order 32 which are (1 | 2)-generated, namely G(32, t) for t ∈ {2, 4, 5, 6, 7, 8, 12, 17}. The number n_2 of their elements of order 2 is given in the table below:

t               2   4   5   6    7    8   12   17
n_2(G(32, t))   7   3   7   11   11   3   3    3
Proof. Slightly modifying the first part of GAP4 script 1 in the Appendix we easily find that the groups of order 32 which are (1 | 2)-generated are exactly those in the statement. The number of elements of order 2 in each case is found by a quick computer search: see again the Appendix, GAP4 script 3.

Lemma 4.6. Assume g(C) = 9 and that one of the following situations occurs:
• [G, G]_2 ⊆ Z(G);
• there exists some element y ∈ G \ G^• commuting with all elements in [G^•, G^•]_2.
Then given any generating vector V = {ℓ 1 ; h 1 , h 2 } of type (1 | 2) for G • , condition (M 1) in Proposition 2.6 cannot be satisfied.
Proof. Since ℓ_1 ∈ [G^•, G^•]_2 ⊆ [G, G]_2,

G^• = ⟨x, y, z | x^8 = y^2 = z^2 = 1, [y, z] = [x, z] = 1, [x, y] = z⟩.
Its derived subgroup contains exactly one element of order 2, namely z. It follows that if {ℓ 1 ; h 1 , h 2 } is any generating vector of type (1 | 2) for G • , then ℓ 1 = z. Since [G • , G • ] is characteristic in G • , condition (M 1) cannot be satisfied for any embedding of G • into G.
By using the two instructions P:=PresentationViaCosetTable(G) and TzPrintRelators(P) and setting in the output x := f1, y := f2, z := f3, w := f4, v := f5, u := f6 one obtains the following presentations for G(64, 33), G(64, 35) and G(64, 37).

(7) G(64, 33) = ⟨x, y, z, w, v, u | z^2 = w^2 = v^2 = u^2 = 1, x^2 = w, y^2 = u, [x, zy] = z, [x, vz] = v, [x, vu] = u, [y, z] = [y, v] = [z, v] = [w, v] = [x, u] = 1⟩

(8) G(64, 35) = ⟨x, y, z, w, v, u | w^2 = v^2 = u^2 = 1, z^2 = y^2 = u, x^2 = w, [y, z] = [z, w] = u, [x, yz] = z, [x, z] = uv, [y, v] = [z, v] = [w, v] = [x, u] = 1⟩

(9) G(64, 37) = ⟨x, y, z, w, v, u | v^2 = u^2 = 1, w^2 = z^2 = y^2 = u, x^2 = w,

Computational Fact 4.9. Referring to presentations (7), (8) and (9), we have the following facts.
• The group G(64, 33) contains exactly one subgroup N_1 isomorphic to G(32, 6) and one subgroup N_2 isomorphic to G(32, 7), namely N_1 := ⟨x, z, w, v, u⟩, N_2 := ⟨xy, z, w, v, u⟩.
• The group G(64, 35) contains exactly two subgroups N_3, N_4 isomorphic to G(32, 6), namely N_3 := ⟨x, z, w, v, u⟩, N_4 := ⟨xy, z, w, v, u⟩.
• The group G(64, 37) contains exactly two subgroups N_5, N_6 isomorphic to G(32, 8), namely N_5 := ⟨x, z, w, v, u⟩, N_6 := ⟨xy, z, w, v, u⟩.
In addition, for every i ∈ {1, . . . , 6} we have
(a) [N_i, N_i] = ⟨v, u⟩ ≅ Z_2 × Z_2;
(b) y ∉ N_i and y commutes with all elements in [N_i, N_i].
Proof. See the GAP4 script 7 in the Appendix.
| σ_iσ_{i+1}σ_i = σ_{i+1}σ_iσ_{i+1}, σ_iσ_j = σ_jσ_i if |i − j| ≥ 2,
σ_{r−1}σ_{r−2} · · · σ_1^2 · · · σ_{r−2}σ_{r−1} = 1⟩,

Mod_{1,1} := ⟨t_α, t_β, t_γ | t_αt_βt_α = t_βt_αt_β, (t_αt_β)^3 = 1⟩,

Mod_{1,[2]} := ⟨t_α, t_β, t_γ, ρ | t_αt_βt_α = t_βt_αt_β, t_αt_γt_α = t_γt_αt_γ, t_βt_γ = t_γt_β, (t_αt_βt_γ)^4 = 1, t_αρ = ρt_α, t_βρ = ρt_β, t_γρ = ρt_γ⟩.
One can prove that […], where Σ_1 is the torus S^1 × S^1 ([Schn03], [CattMu04]). This implies that we can define actions of these groups on the sets of generating vectors for G of type (0 | m_1, . . . , m_r), (1 | n) and (1 | n^2), respectively. […] This is done using GAP4 as below; the output tells us that there is only one nonabelian group of order 24 which is (0 | 2, 4, 12)-generated, namely G = G(24, 5).
gap> # -------------- SCRIPT 1 ------------------
gap> s:=NumberSmallGroups(24);; set:=[1..s];
[1..15]
gap> for t in set do
> c:=0; G:=SmallGroup(24,t);
> Ab:=IsAbelian(G);
> for g1 in G do
> for g2 in G do
> g3:=(g1*g2)^-1;
> H:=Subgroup(G, [g1,g2]);
> if Order(g1)=2 and Order(g2)=4 and Order(g3)=12 and
> Order(H)=Order(G) and
> Ab=false then
> c:=c+1; fi;
> if Order(g1)=2 and Order(g2)=4 and Order(g3)=12 and
> Order(H)=Order(G) and
> Ab=false and c=1 then
> Print(IdSmallGroup(G)," ");
> fi; od; od; od; Print("\n");
[24,5]

By using the two instructions P:=PresentationViaCosetTable(G) and TzPrintRelators(P) we see that G has the presentation ⟨x, y | x^2 = y^{12} = 1, xyx^{−1} = y^5⟩, hence it is isomorphic to the metacyclic group D_{2,12,5}. In order to speed up further computations, we define the sets G2, G4 given by the elements of G having order 2 and 4, respectively.

gap> G2:=[];; G4:=[];;
gap> for g in G do
> if Order(g)=2 then Add(G2,g); fi;
> if Order(g)=4 then Add(G4,g); fi; od;
Then we check whether G is actually (1 | 2^2)-generated; if not, it should be excluded.

gap> c:=0;;
gap> for l2 in G2 do
> for h1 in G do
> for h2 in G do
> l1:=(l2*h1*h2*h1^-1*h2^-1)^-1;
> K:=Subgroup(G, [l2, h1, h2]);
> if Order(l1)=2 and Order(K)=Order(G) then
> Print(IdSmallGroup(G), " is (1 | 2,2)-generated", "\n"); c:=1; fi;
> if c=1 then break; fi; od;
> if c=1 then break; fi; od;
> if c=1 then break; fi; od;
[24,5] is (1 | 2,2)-generated

We finish the proof by checking whether the surface S actually exists; the procedure is to look for a pair (V, W) of generating vectors for G satisfying the assumptions of Proposition 2.2.

gap> c:=0;;
gap> for g1 in G2 do
> for g2 in G4 do
> g3:=(g1*g2)^-1;
> H:=Subgroup(G, [g1, g2]);
> for l2 in G2 do
> for h1 in G do
> for h2 in G do
> l1:=(l2*h1*h2*h1^-1*h2^-1)^-1;
> K:=Subgroup(G, [l2, h1, h2]);
> Boole1:=l1 in ConjugacyClass(G, g1);
> Boole2:=l1 in ConjugacyClass(G, g2^2);
> Boole3:=l1 in ConjugacyClass(G, g3^6);
> Boole4:=l2 in ConjugacyClass(G, g1);
> Boole5:=l2 in ConjugacyClass(G, g2^2);
> Boole6:=l2 in ConjugacyClass(G, g3^6);
> if Order(g3)=12 and Order(l1)=2 and
> Order(H)=Order(G) and Order(K)=Order(G) and
> Boole1=false and Boole2=false and Boole3=false and
> Boole4=false and Boole5=false and Boole6=false then
> Print("The surface exists "); c:=1; fi;
> if c=1 then break; fi; od;
> if c=1 then break; fi; od;
> if c=1 then break; fi; od;
> if c=1 then break; fi; od;
> if c=1 then break; fi; od;
The surface exists
The script above can be easily modified in order to obtain the list of all admissible pairs (V, W); for instance, one such pair is given by g_1 = x, g_2 = xy^{−1}, g_3 = y; ℓ_1 = xy^2, ℓ_2 = xy^2, h_1 = y, h_2 = y.
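This admissible pair is easy to verify independently (a sketch; D_{2,12,5} is encoded as pairs (a, b) ↦ x^a y^b, using 5^2 ≡ 1 (mod 12); helper names are ours):

```python
# D_{2,12,5}: x^a y^b encoded as (a, b), a mod 2, b mod 12, with x y x^-1 = y^5.
def mul(g, h):
    (a1, b1), (a2, b2) = g, h
    return ((a1 + a2) % 2, (b1 * pow(5, a2, 12) + b2) % 12)

def power(g, k):
    p = (0, 0)
    for _ in range(k):
        p = mul(p, g)
    return p

def order(g):
    k, p = 1, g
    while p != (0, 0):
        p, k = mul(p, g), k + 1
    return k

x, y = (1, 0), (0, 1)
g1, g2, g3 = x, mul(x, power(y, 11)), y            # g2 = x y^-1
l1 = l2 = mul(x, power(y, 2))                      # l1 = l2 = x y^2
h1 = h2 = y

assert (order(g1), order(g2), order(g3)) == (2, 4, 12)
assert mul(mul(g1, g2), g3) == (0, 0)              # g1 g2 g3 = 1
commutator = mul(mul(h1, h2), mul(power(h1, 11), power(h2, 11)))
assert order(l1) == 2 and mul(mul(l1, l2), commutator) == (0, 0)
```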
Finally, here are the GAP4 scripts used in Section 4.
indicates the label of the group G in the GAP4 database of small groups. For instance, IdSmallGroup(D_4) = G(8, 3), and this means that D_4 is the third in the list of groups of order 8.
Proposition 2.3. Let S = (C × F)/G be an unmixed surface of type (G, m, n). Then there are exactly the following possibilities:
(a) g(F) = 3, n = (2^2);
(b) g(F) = 4, n = (3);
(c) g(F) = 5, n = (2).

The following lemma gives a restriction on m instead.

Lemma 2.4. Let S = (C × F)/G be an unmixed surface of type (G, m, n). Then every m_i divides |G|/(g(F) − 1).

Proof. Since ⟨g_i⟩ is a stabilizer for the G-action on F and since G acts freely on C × F, the subgroup ⟨g_i⟩ ≅ Z_{m_i} acts freely on C. By the Riemann-Hurwitz formula applied to the cover C → C/⟨g_i⟩ we have g(C) − 1 = m_i(g(C/⟨g_i⟩) − 1). Thus m_i divides g(C) − 1 = |G|/(g(F) − 1).

2.2. Mixed case.

Proposition 2.5. Let S = (C × C)/G be a surface with p_g = q = 1 isogenous to a mixed product. Then E := C/G^• is an elliptic curve isomorphic to the Albanese variety of S.

Proof. We have (see [Ca00, Proposition 3.15])
Following [BaCaGr06, Section 1.2], for an r-ple m = (m_1, . . . , m_r) ∈ N^r we set $\Theta(m) := -2 + \sum_{i=1}^{r}\bigl(1 - \frac{1}{m_i}\bigr)$ and, when Θ(m) > 0, $\alpha(m) := \frac{2}{\Theta(m)}$. If S is an unmixed surface of type (G, m, n) then we necessarily have 2 ≤ m_1 ≤ . . . ≤ m_r and Θ(m) > 0. Besides, by Proposition 2.2 we have α(m) = |G|/(g(F) − 1) = g(C) − 1 ∈ N, and by Lemma 2.4 each integer m_i divides α(m). Then we get

Proposition 3.1. Let S = (C × F)/G be a surface with p_g = q = 1 isogenous to an unmixed product of type (G, m, n). Then the possibilities for m and α(m), written in the format $m_{\alpha(m)}$, lie in the set T below: $(2, 3, 7)_{84}$, $(2, 3, 8)_{48}$, $(2, 4, 5)_{40}$, $(2, 3, 9)_{36}$, $(2, 3, 10)_{30}$, $(2, 3, 12)_{24}$, $(2, 4, 6)_{24}$, $(3^2, 4)_{24}$, $(2, 5^2)_{20}$, $(2, 3, 18)_{18}$, $(2, 4, 8)_{16}$,
Computational Fact 4.5. Let t ∈ {2, 4, 5, 6, 7, 8, 12, 17}. A nonsplit extension of the form

(6)  1 → G(32, t) → G(64, s) → Z_2 → 1

exists if and only if the pair (t, s) is one of the following: ..., (12, 13), (12, 14), (12, 15), (12, 16), (12, 126), (12, 127), (12, 143), (12, 156), (12, 158), (12, 160), (17, 28), (17, 43), (17, 45), (17, 46).

By Fact 4.4, in order to detect all the groups G(64, s) fitting in some nonsplit extension of type (6) with t = 2, it is sufficient to select from the previous list the groups containing exactly n_2 = 7 elements of order 2. This can be done with the GAP4 script 5 in the Appendix, proving the claim in the case t = 2. The proof for the other values of t may be carried out exactly in the same way. Let us denote by [G, G]_2 and [G°, G°]_2 the subsets of elements of order 2 in [G, G] and [G°, G°], respectively.
In any of the above situations C_G(l_1) is not contained in G°, so (M1) cannot hold.

Computational Fact 4.7. Let G = G(64, s) be one of the groups appearing in the list of Computational Fact 4.5. Then [G, G]_2 is not contained in Z(G) if and only if s = 5, 33, 35, 37.

Proof. See the GAP4 script 6 in the Appendix.

Computational Facts 4.5, 4.7 and Lemma 4.6 imply that we only need to analyze the following pairs (G°, G):

G°        G
G(32, 5)  G(64, 5)
G(32, 6)  G(64, 33)
G(32, 7)  G(64, 33)
G(32, 6)  G(64, 35)
G(32, 8)  G(64, 37)

Proposition 4.8. The case G° = G(32, 5) does not occur.

Proof. A presentation for the group G° is

[y, z] = [z, w] = u, [x, yz] = z, [x, z] = uv, [y, v] = [z, v] = [w, v] = 1.
Proposition 4.10. The cases G° = G(32, 6), G(32, 7), G(32, 8) do not occur.

Proof. By Lemma 4.6 and Computational Fact 4.9 it follows that, given any nonsplit extension of type (6) with G° as above, condition (M1) in Proposition 2.6 cannot be satisfied.

Summing up, we finally obtain

Proof of Proposition 4.3. It follows from Propositions 4.8 and 4.10.

5. Moduli spaces

Let M_{a,b} be the moduli space of smooth minimal surfaces of general type with χ(O_S) = a, K^2_S = b; by an important result of Gieseker, M_{a,b} is a quasiprojective variety for all a, b ∈ N (see [Gie77]). Obviously, our surfaces are contained in M_{1,8} and we want to describe their locus there. We denote by M(G, m, n) the moduli space of unmixed surfaces of type (G, m, n) and by M(G, n) the moduli space of mixed surfaces of type (G, n). We know that n = (2^2), (3) or (2) in the unmixed case, whereas n = (2^2) in the mixed one. By a general result of Catanese ([Ca00]), both M(G, m, n) and M(G, n) consist of finitely many irreducible connected components of M_{1,8}, all of the same dimension. More precisely, we have dim M(G, m, n) = r + s − 3, dim M(G, n) = s.

Consider the mapping class groups in genus zero and one:

Mod_{0,[r]} := π_0 Diff^+(P^1 − {p_1, . . . , p_r}),
Mod_{1,1} := π_0 Diff^+(Σ_1 − {p}),
Mod_{1,[2]} := π_0 Diff^+(Σ_1 − {p, q}),

where Mod_{0,[r]} is generated by elements σ_1, . . . , σ_r.
gap> # -------------- SCRIPT 2 ------------------
gap> s:=NumberSmallGroups(36);; set:=[1..
od; od; Print("\n");
[36,10] [36,12]
respectively. If V := {g_1, . . . , g_r} is of type (0 | m_1, . . . , m_r) then the action is given by ... If W := {l_1; h_1, h_2} is of type (1 | n) then t_α: ... These are called Hurwitz moves and the induced equivalence relation on generating vectors is said to be Hurwitz equivalence (see [BaCa03], [BaCaGr06], [Pol07]).

Proof. We can repeat exactly the same argument used in [BaCaGr06, Propositions 5.2 and 5.5]; we must just replace, where necessary, the mapping class group of P^1 with the mapping class group of the elliptic curve E.

Proposition 5.1 in principle allows us to compute the number of connected components of the moduli space in each case. In practice, this task may be too hard to achieve by hand, but it is not out of reach if one uses the computer. Recently, M. Penegini and S. Rollenske developed a GAP4 script that solves this problem in a rather short time. We put the result of their calculations in the Main Theorem (see Introduction), referring the reader to the forthcoming paper [Pe08] for further details.

Appendix

In this Appendix we include, for the reader's convenience, some of the GAP4 scripts that we have used in our computations; all the others are similar and can easily be obtained by modifying the ones below.

Let us show how the procedure in the proof of Proposition 3.2 applies to an explicit example, namely m_{α(m)} = (2, 4, 12)_12. First we find all the nonabelian groups of order 24 that are ...
> G0:=SmallGroup(32,t);
> for g in G0 do
>   if Order(g)=2 then n2:=n2+1; fi;
> od;
> Print(IdSmallGroup(G0), " "); Print(n2, " ");
> od; Print("\n");

> N:=NormalSubgroups(G);
> if IdSmallGroup(G0)=[32,2] and c=1 then
>   Print(IdSmallGroup(G), " ");
> fi; od; od; Print("\n");

> for g in G do
>   if Order(g)=2 then n2:=n2+1; fi;
> od;
> if n2=7 then
>   Print(IdSmallGroup(G), " ");
> fi; od; Print("\n");
[64,9] [64,57] [64,59] [64,63] [64,64] [64,68] [64,70] [64,72] [64,76] [64,79] [64,81] [64,82]

gap> # -------------- SCRIPT 6 ------------------
gap> set:=[5,7,9,11,13,14,15,16,28,33,35,37,43,45,46,
> 57,59,63,64,68,70,72,76,79,81,82,112,113,114,122,126,
> 127,132,143,156,158,160,164,165,166,172,182];; c:=0; G:=SmallGroup(64,t);
> D:=DerivedSubgroup(G);
> B:=d in Center(G);
> if Order(d)=2 and B=false then c:=c+1; fi;
> if Order(d)=2 and B=false and c=1 then
>   Print(IdSmallGroup(G), " ");
> fi; od; od; Print("\n");

> G:=SmallGroup(64, s[i]);
> for N in NormalSubgroups(G) do
>   if IdSmallGroup(N) in r[i] then
>     Print(N, "=");
>     Print(DerivedSubgroup(N), "\n");
>   fi;
> od; Print("\n"); od;
[64,33] Group( [ f1*f2, f3, f4, f5, f6 ] )=[32,7] Group( [ f5, f6 ] )
Group( [ f1, f3, f4, f5, f6 ] )=[32,6] Group( [ f5, f6 ] )
Group( [ f1, f3, f4, f5, f6 ] )=[32,6] Group( [ f5, f6 ] )
Group( [ f1, f3, f4, f5, f6 ] )=[32,8] Group( [ f5, f6 ] )
I. Bauer, F. Catanese: Some new surfaces with p_g = q = 0, Proceedings of the Fano Conference (Torino, 2002).
I. Bauer, F. Catanese, F. Grunewald: The classification of surfaces with p_g = q = 0 isogenous to a product of curves, e-print math.AG/0610267 (2006), to appear in Pure Appl. Math. Q., volume in honour of F. Bogomolov's 60th birthday.
I. Bauer, F. Catanese, R. Pignatelli: Complex surfaces of general type: some recent progress (2006), to appear in Global methods in complex geometry, 1-58, Springer-Verlag.
A. Beauville: Complex algebraic surfaces, Cambridge University Press, 1996.
T. Breuer: Characters and Automorphism Groups of Compact Riemann Surfaces, Cambridge University Press, 2000.
S. A. Broughton: Classifying finite group actions on surfaces of low genus, J. Pure Appl. Algebra 69 (1990), 233-270.
F. Catanese: On a class of surfaces of general type, in Algebraic Surfaces, CIME, Liguori (1981), 269-284.
F. Catanese: Fibred surfaces, varieties isogenous to a product and related moduli spaces, American J. of Math. 122 (2000), 1-44.
F. Catanese, C. Ciliberto: Surfaces with p_g = q = 1, Sympos. Math. XXXII (1991), 49-79.
F. Catanese, C. Ciliberto: Symmetric product of elliptic curves and surfaces of general type with p_g = q = 1, J. Algebraic Geom. 2 (1993), 389-411.
F. Catanese, R. Pignatelli: Fibrations of low genus I, Ann. Sci. École Norm. Sup. (4) 39 (2006), 1011-1049.
F. Catanese, C. Ciliberto, M. M. Lopes: On the classification of irregular surfaces of general type with non birational bicanonical map, Trans. Amer. Math. Soc. 350 (1998), 275-308.
A. Cattabriga, M. Mulazzani: (1,1)-knots via the mapping class group of the twice punctured torus, Adv. Geom. 4 (2004), 263-277.
The GAP Group: GAP - Groups, Algorithms, and Programming, Version 4.4; 2006, http://www.gap-system.org.
D. Gieseker: Global moduli for surfaces of general type, Invent. Math. 43 (1977), 233-282.
L. Godeaux: Sur une surface algébrique de genre zero et bigenre deux, Atti Accad. Naz. Lincei 14 (1931), 479-481.
W. J. Harvey: On the branch loci in Teichmüller space, Trans. Amer. Math. Soc. 153 (1971), 387-399.
C. Hacon, R. Pardini: Surfaces with p_g = q = 3, Trans. Amer. Math. Soc. 354, no. 7 (2002), 2631-2638.
H. Kimura: Classification of automorphism groups, up to topological equivalence, of compact Riemann surfaces of genus 4, J. Algebra 264 (2003), 26-54.
A. Kuribayashi, H. Kimura: Automorphism groups of compact Riemann surfaces of genus five, J. Algebra 134 (1990), no. 1, 80-103.
I. Kuribayashi, A. Kuribayashi: Automorphism groups of compact Riemann surfaces of genera three and four, J. Pure Appl. Algebra 65 (1990), no. 3, 277-292.
R. Pardini: The classification of double planes of general type with K^2_S = 8 and p_g = 0, J. Algebra 259 (2003), no. 3, 95-118.
M. Penegini: Surfaces with p_g = q = 2 isogenous to a product of curves: a computational approach. With an appendix of S. Rollenske. Work in progress.
G. P. Pirola: Surfaces with p_g = q = 3, Manuscripta Math. 108, no. 2 (2002), 163-170.
F. Polizzi: On surfaces of general type with p_g = q = 1, K^2_S = 3, Collect. Math. 56, no. 2 (2005), 181-234.
F. Polizzi: On surfaces of general type with p_g = q = 1 isogenous to a product of curves, e-print math.AG/0601063, to appear in Comm. Algebra.
L. Schneps: Special loci in moduli spaces of curves, in Galois groups and fundamental groups, 217-275, Math. Sci. Res. Inst. Publ. 41, Cambridge Univ. Press, Cambridge, 2003.
M. Wild: The groups of order sixteen made easy, Amer. Math. Monthly 112, no. 1 (2005), 20-31.

Dipartimento di Matematica Pura ed Applicata, Università di Padova, Via Trieste 63, 35121 Padova, Italy. E-mail address: [email protected]

Dipartimento di Matematica, Università della Calabria, Via Pietro Bucci, 87036 Arcavacata di Rende (CS), Italy. E-mail address: [email protected]
PARAMETER OPTIMISATION OF A VIRTUAL SYNCHRONOUS MACHINE IN A MICROGRID

Timo Dewenter (Institut für Physik, Carl von Ossietzky Universität Oldenburg, D-26111 Oldenburg, Germany), Wiebke Heins (Institut für Elektrische Informationstechnik, TU Clausthal, D-38678 Clausthal-Zellerfeld, Germany; Zentrum für Technomathematik, Universität Bremen, D-28359 Bremen, Germany), Benjamin Werther (Institut für Elektrische Energietechnik und Energiesysteme, TU Clausthal, D-38678 Clausthal-Zellerfeld, Germany), Alexander K. Hartmann (Institut für Physik, Carl von Ossietzky Universität Oldenburg, D-26111 Oldenburg, Germany), Christian Bohn (Institut für Elektrische Informationstechnik, TU Clausthal, D-38678 Clausthal-Zellerfeld, Germany), Hans-Peter Beck (Institut für Elektrische Energietechnik und Energiesysteme, TU Clausthal, D-38678 Clausthal-Zellerfeld, Germany)

Manuscript received 23 June 2016. DOI: 10.2316/journal.203.2016.4.203-6270. arXiv: 1606.07357 (https://arxiv.org/pdf/1606.07357v1.pdf).

Keywords: Inverter-Based Microgrid; Virtual Synchronous Machine; Stochastic Optimisation; Parallel Tempering

Abstract: Parameters of a virtual synchronous machine in a small microgrid are optimised. The dynamical behaviour of the system is simulated after a perturbation, where the system needs to return to its steady state. A cost functional evaluates the system behaviour for different parameters; this functional is minimised by Parallel Tempering. Two perturbation scenarios are investigated and the resulting optimal parameters agree with analytical predictions. Depending on the focus of the optimisation, different optima are obtained for each perturbation scenario. During the transient the system leaves the allowed voltage and frequency bands only for a short time if the perturbation is within a certain range.
Introduction
The number of renewable distributed energy sources (DER) has increased in the last decades, driven by political, ecological, and economical aspects. Many DER are attached to the low-voltage grid by inverters, whose increased usage is accompanied by the need to find suitable control strategies and parameters for, e.g., frequency-power droop control in autonomous microgrids. Simulation methods, models and stability conditions for microgrids based on droop-controlled inverters are investigated in [1-7]. A rigorous stability analysis is done in [8], in which conditions on the droop gains are derived. Simulations [9-12] have been used to obtain optimal control parameters of inverters or distributed generators in microgrids. Particle swarm optimisation, in which a "swarm" of solutions moves in the search-space, is used in [13-19].
To enhance stability in microgrids, one can use programmable inverters with storage, such as the virtual synchronous machine (VISMA) [20]. It is a hysteresis-controlled three-phase inverter whose setpoints are determined by a synchronous machine model implemented on a control computer. Inertia to improve the transient stability of the grid is provided by a storage device. The VISMA is able to control (re-)active power bidirectionally and can be adjusted to meet specific power system requirements.
Here, the VISMA as grid-building element in a low-voltage islanded microgrid with voltage source inverters is investigated. The basic control strategy is droop control [6,8] for both voltage and frequency.
We use the parallel tempering method [21,22] for the optimisation of the VISMA parameters under varying transient loads (see e.g. [23]). Parallel Tempering allows one to find (near-)optimal solutions for complex optimisation problems [24,25] efficiently. The objective of our analysis is to show that the optimisation method is generally applicable to determine optimal control parameters in microgrids. Furthermore, the different types of optima allow insights into the effects of the VISMA in combination with regular droop-controlled inverters in microgrids for the first time.
In Sec. 2, the simulation model is described. The optimisation problem is stated in Sec. 3 and Sec. 4 explains the implementation. Results are presented in Sec. 5, a conclusion is given in Sec. 6.
Model of an Inverter-Based Microgrid with VISMA
Lines and Loads
Lines are modelled as algebraic equations describing the relation between voltage angles θ_i(t) and voltage magnitudes V_i(t) at grid node i ∈ [1, n] and (re-)active power flows [26]. Magnitudes and angles for all grid nodes are gathered in V(t) = [V_1(t), V_2(t), ..., V_n(t)]^T and θ(t) = [θ_1(t), θ_2(t), ..., θ_n(t)]^T. The (re-)active power injected at node i is then described by the power balance equations:

P_i(V(t), θ(t)) = 3 [ G_ii V_i(t)^2 − Σ_{k∈N(i)} V_i(t) V_k(t) (G_ik cos(θ_i(t) − θ_k(t)) + B_ik sin(θ_i(t) − θ_k(t))) ]   (1)

Q_i(V(t), θ(t)) = 3 [ −B_ii V_i(t)^2 − Σ_{k∈N(i)} V_i(t) V_k(t) (G_ik sin(θ_i(t) − θ_k(t)) − B_ik cos(θ_i(t) − θ_k(t))) ]   (2)

Here, G_ii = Ĝ_ii + Σ_{k∈N(i)} G_ik and B_ii = B̂_ii + Σ_{k∈N(i)} B_ik, where k ∈ N(i) denotes summation over the neighbours k of node i, Ŷ_ii = Ĝ_ii + jB̂_ii is the shunt admittance of node i, and Y_ik = G_ik + jB_ik the admittance of line ik.
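The power balance equations (1)-(2) translate directly into code. The following Python function is our illustration, not the authors' implementation; branch admittances are passed as dictionaries keyed by ordered node pairs.

```python
import math

def power_injections(V, theta, G_branch, B_branch, G_shunt, B_shunt):
    """Active/reactive power injections P_i, Q_i per equations (1)-(2).

    V, theta           : lists of voltage magnitudes and angles per node
    G_branch, B_branch : dicts {(i, k): value} of branch conductance/susceptance
                         (both dicts must share the same keys, in both directions)
    G_shunt, B_shunt   : lists of shunt conductance/susceptance per node
    Returns (P, Q) lists.
    """
    n = len(V)
    nbr = [[] for _ in range(n)]            # neighbour lists from branch keys
    for (i, k) in G_branch:
        nbr[i].append(k)
    P, Q = [0.0] * n, [0.0] * n
    for i in range(n):
        Gii = G_shunt[i] + sum(G_branch[(i, k)] for k in nbr[i])
        Bii = B_shunt[i] + sum(B_branch[(i, k)] for k in nbr[i])
        p = Gii * V[i] ** 2
        q = -Bii * V[i] ** 2
        for k in nbr[i]:
            d = theta[i] - theta[k]
            p -= V[i] * V[k] * (G_branch[(i, k)] * math.cos(d)
                                + B_branch[(i, k)] * math.sin(d))
            q -= V[i] * V[k] * (G_branch[(i, k)] * math.sin(d)
                                - B_branch[(i, k)] * math.cos(d))
        P[i], Q[i] = 3 * p, 3 * q
    return P, Q

# toy example: two nodes joined by a lossless, purely inductive line
P, Q = power_injections([1.0, 1.0], [0.1, 0.0],
                        {(0, 1): 0.0, (1, 0): 0.0},
                        {(0, 1): -2.0, (1, 0): -2.0},
                        [0.0, 0.0], [0.0, 0.0])
print(P, Q)
```

For a lossless line, the active power leaving the node with the leading angle equals the power arriving at the other node, as the test case confirms.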
The load is modelled as an external disturbance S load (t) = P load (t) + jQ load (t). An algebraic constraint is introduced for the node k to which the load is connected, so P load (t) = P k (V , θ ), Q load (t) = Q k (V , θ ).
Droop-Controlled Inverters
Following [8], inverters are modelled as controllable AC voltage sources described by differential equations. Each inverter is connected to the grid via an LCL-filter with inductance L inv on the inverter side, filter capacitance C f and coupling inductance L C . Here, V i (t) and θ i (t) denote time-varying voltage magnitudes and angles over filter capacitances C f , assuming that these are the voltages controlled by the inverter.
Droop frequency and voltage control is based on decentralized proportional controllers. Its adaption to inverter-based microgrids has been investigated extensively [2, 4-6, 27]. Because droop control is purely proportional, an offset error occurs as soon as the system is permanently disturbed. The objective of the control strategy is that in the steady state (denoted by *) of the closed-loop system, devices participating in droop control share the additional (re-)active power caused by the disturbance according to the equations:
k_P,i (P_nom,i − P_i*(V*, θ*)) = ω_i* − ω_nom,   k_Q,i (Q_nom,i − Q_i*(V*, θ*)) = V_i* − V_nom   (3)
Here, P nom,i and Q nom,i denote the nominal active and reactive power injections of each device. V nom and ω nom denote the nominal voltage magnitude and frequency, respectively. The coefficients k P,i and k Q,i are parameters which determine the desired power sharing among devices. A common approach for the choice of droop coefficients k P,i and k Q,i is proportional load sharing (see [8] for analysis). Based on the power rating S i of each device and taking into account the legal limits for grid frequency and voltage magnitudes (49.8 Hz to 50.2 Hz, and 207 V to 253 V, respectively), we obtain:
k_P,i = (0.4 · 2π)/(2 S_i) rad/s = (0.4π/S_i) rad/s,   k_Q,i = 46 V/(2 S_i) = 23 V/S_i   ∀ i   (4)
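Equation (4) can be sanity-checked in a few lines (our helper, with a hypothetical 5 kVA device): a power deviation of the full rating S_i produces exactly a 0.2 Hz frequency deviation and a 23 V voltage deviation, i.e. the one-sided legal margin.

```python
import math

def droop_gains(S):
    """Droop coefficients per (4): k_P in rad/(s*W), k_Q in V/VAr."""
    k_P = 0.4 * math.pi / S   # = 0.4*2*pi / (2*S)
    k_Q = 23.0 / S            # = 46 / (2*S)
    return k_P, k_Q

k_P, k_Q = droop_gains(5000.0)          # hypothetical 5 kVA inverter
df = k_P * 5000.0 / (2 * math.pi)       # frequency deviation (Hz) at full rating
dV = k_Q * 5000.0                       # voltage deviation (V) at full rating
print(df, dV)
```

Because the gains are inversely proportional to the rating S_i, two devices share a disturbance in proportion to their ratings, which is the proportional-load-sharing rationale behind (4).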
In [8], voltage source inverters are modelled with instantaneous frequency dθ_i(t)/dt = ω_sp,i(t), and first-order-delay voltage control T_inv dV_i(t)/dt = −V_i(t) + V_sp,i(t), where ω_sp,i(t) and V_sp,i(t) denote freely adjustable frequency and voltage setpoints. Furthermore, power measurements are processed by low-pass filters with time constants T_i ≫ T_inv. Choosing setpoints ω_sp,i(t) and V_sp,i(t) according to (3) gives (see [8] for details):

dθ_i(t)/dt = ω_i(t)   (5)

T_i dω_i(t)/dt = −ω_i(t) + ω_nom + k_P,i (P_nom,i − P_i(V(t), θ(t)))   (6)

T_i dV_i(t)/dt = −V_i(t) + V_nom + k_Q,i (Q_nom,i − Q_i(V(t), θ(t)))   (7)
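To see that the droop law (3) really is the stationary point of (6)-(7), one can integrate the filter equations for constant measured powers. The numbers below are made up for illustration:

```python
import math

# made-up example values
T_i, k_P, k_Q = 0.5, 2.5e-4, 4.6e-3
omega_nom, V_nom = 2 * math.pi * 50, 230.0
P_nom, Q_nom = 0.0, 0.0
P_meas, Q_meas = 2000.0, 500.0          # constant measured powers

omega, V = omega_nom, V_nom
dt = 1e-3
for _ in range(int(20 / dt)):           # 20 s, i.e. 40 filter time constants
    domega = (-omega + omega_nom + k_P * (P_nom - P_meas)) / T_i   # eq. (6)
    dV = (-V + V_nom + k_Q * (Q_nom - Q_meas)) / T_i               # eq. (7)
    omega += dt * domega
    V += dt * dV

# steady state matches the droop law (3)
print(omega - omega_nom, k_P * (P_nom - P_meas))
print(V - V_nom, k_Q * (Q_nom - Q_meas))
```

After many time constants the first-order filters have settled and the frequency and voltage offsets equal the droop terms of (3) exactly.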
The Virtual Synchronous Machine (VISMA) with Droop- and Secondary Frequency Control
The VISMA [20] is a programmable inverter which mimics the dynamics of a synchronous machine. It uses the three-phase grid voltages as input and the three-phase currents as output. Its machine model, which defines how the programmable inverter is supposed to act, is adapted to fit in the overall model:
dθ_i(t)/dt = ω_i(t)   (8)

J dω_i(t)/dt = −(k_d/T_d) ω_i(t) − (k_d/T_d) d(t) + (1/ω_i(t)) [P_inject(t) − P_i(V(t), θ(t))]   (9)

dd(t)/dt = −(1/T_d) ω_i(t) − (1/T_d) d(t)   (10)
Parameters are the virtual moment of inertia J > 0, the mechanical damping factor k_d > 0, and the damping time constant T_d > 0. Compared to the VISMA model as stated in [28], this model was obtained by defining a "damping state" d(t) = (T_d/k_d) M_d(t) − ω_i(t) and replacement of the momentum M_mech(t) by M_mech(t) = P_inject(t)/ω_i(t), with P_inject(t) denoting the active power injected into the grid by the VISMA. In this setup, it is used for the purpose of droop and secondary frequency control, i.e. P_inject(t) = P_droop(t) + P_secondary(t). According to (3) it is P_droop(t) = P_nom,i + (1/k_P,i)(ω_nom − ω_i(t)). Secondary frequency control is only performed by the VISMA and realized via an integral controller:
dx(t)/dt = K_I (ω_nom − ω_i(t)),   P_secondary(t) = x(t)   (11)
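A quick numerical experiment (ours; the parameter values are illustrative, not the optimised ones from the paper) shows the effect of the integral controller (11) on the machine model (9)-(10): after a constant load step, the droop term first takes over the power imbalance, and the secondary controller then shifts it into x(t), restoring the nominal frequency. The network equations are replaced by a constant load P_load here.

```python
import math

# illustrative parameters (not the optimised values from the paper)
J, k_d, T_d, K_I = 6.3, 1.0, 0.5, 500.0
k_P = 0.4 * math.pi / 5000.0            # droop gain of a 5 kVA device, eq. (4)
omega_nom = 2 * math.pi * 50
P_nom, P_load = 0.0, 2000.0             # constant active power drawn at the node

omega, d, x = omega_nom, -omega_nom, 0.0
dt = 1e-3
for _ in range(int(100 / dt)):
    P_inject = P_nom + (omega_nom - omega) / k_P + x   # droop + secondary
    domega = (-(k_d / T_d) * (omega + d)
              + (P_inject - P_load) / omega) / J       # eq. (9)
    dd = -(omega + d) / T_d                            # eq. (10)
    dx = K_I * (omega_nom - omega)                     # eq. (11)
    omega += dt * domega
    d += dt * dd
    x += dt * dx

# secondary control restores nominal frequency and covers the load step
print(omega - omega_nom, x)
```

In steady state (10) forces d = −ω, (11) forces ω = ω_nom, and (9) then requires P_inject = P_load, so the integrator state x must carry the entire load of 2000 W.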
The voltage E_P [28] is represented here by the voltage magnitude V_i(t) > 0 of the VISMA. Voltage dynamics of the VISMA are assumed as a first-order delay with a fast time constant T_inv,i. A specific voltage control strategy for the VISMA [29] is implemented using the root mean square value V_grid,i(t) obtained from the grid voltage measurement between stator and grid (cf. Fig. 1) as:

T_inv,i dV_i(t)/dt = −V_i(t) + V_nom + k_V (V_nom − V_grid,i(t))   (12)

Furthermore, the VISMA stator equations [28] are simplified as quasi-static and represented via an algebraic equation Y_VISMA = 1/(R_S + jωL_S) with stator resistance R_S > 0 and stator inductance L_S > 0.
Overall Simulation Model with Respect to a Reference Node
We choose the VISMA node (node 1) as reference node. All voltage angles are replaced by their difference to the reference node's voltage angle via ∆θ_i(t) := θ_i(t) − θ_1(t) ∀ i. Naturally, we have ∆θ_1(t) ≡ 0, and therefore the state ∆θ_1(t) and (8) are not needed to describe the full system. A new vector is defined for the angle states as ∆θ(t) = [∆θ_2(t), ..., ∆θ_n(t)]^T ∈ R^{n−1}. For lines and loads, θ_i(t) can be directly replaced by ∆θ_i(t) ∀ i. For the inverters, assuming that none of them is connected to node 1, (5) is replaced by d∆θ_i(t)/dt = ω_i(t) − ω_1(t). Given the complex power S_i = P_i + jQ_i, the complex coupling admittance Y_coupl = 1/(R_S + jωL_S) (or, for the inverters, Y_coupl = 1/(jωL_C)) and the complex voltage V_i = V_i e^{j∆θ_i}, we obtain the complex voltage V_grid,i = V_grid,i e^{j∆θ_grid,i} between VISMA stator or inverter filters and grid as

V_grid,i(t) = V_i(t) − S̄_i(t) V_i(t) / (3 Y_coupl |V_i(t)|^2),

where S̄_i denotes the complex conjugate of S_i.
Problem Statement
Optimisation Constraints
The objective of the optimisation is to find parameters J, k d , T d , and K I for the VISMA which positively influence the overall system behaviour after a perturbation. In order to avoid undesired or physically impossible behaviour, the optimisation variables have to be bounded by user-defined constraints.
The first constraint assures that the VISMA does not react faster than the other inverters. Therefore, we investigate the dynamics of the VISMA (cf., (9)- (10)). For the purpose of deriving a simple model as reference for the optimisation constraints, the following simplifications are used: Only the machine model is investigated, grid and stator equations, differential equations of voltage dynamics and secondary control are not considered. Taking P 1 (V (t), ∆θ (t)) as system input u(t), linearising around the equilibrium point u * = P nom,1 , ω * 1 = ω nom and d * = −ω nom , and applying the Laplace transform gives the transfer function:
G_VISMA,lin(s) = − k_P,1 (T_d s + 1) / ( (1/Ω^2) s^2 + (2D/Ω) s + 1 ),
c := 1/(k_P,1 ω_nom),   D := ( (1/c)(k_d + J) + T_d ) / ( 2 √((1/c) J T_d) ),   Ω := 1/√((1/c) J T_d)   (13)

The poles are real because D > 1 for any choice of parameters (this follows from the arithmetic-geometric mean inequality applied to (1/c)J and T_d, together with k_d > 0), therefore:

s_pole,1 = −Ω (D + √(D^2 − 1)),   s_pole,2 = −Ω (D − √(D^2 − 1))   (14)
From linear system theory it is known that τ_{1/2} = −1/s_pole,{1/2} determines the exponential decay rate. This results in the constraint max_i(T_i) ≤ min(τ_1, τ_2), where T_i is the time constant of the regular inverters. Assuming stable system configurations, i.e., s_pole,1, s_pole,2 < 0, we conclude that τ_1 < τ_2, and therefore:

max_i(T_i) ≤ 1 / ( Ω (D + √(D^2 − 1)) )   (15)
A second constraint defines an upper bound for the parameter K_I of the integral controller (11); based on the linearised model (13), it is given by:

K_I ≤ J ω_nom / (3 τ_2) = (1/3) J ω_nom Ω (D − √(D^2 − 1))   (16)
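The constraints (15)-(16) only involve D, Ω and the pole time constants, so they are easy to evaluate numerically (our sketch). Near the parameter choice T_d ≈ max_i T_i, k_d ≈ 0, J ≈ c · max_i T_i discussed below, both poles merge at −1/T_d and the bound (16) approaches 1/(3 k_P,1).

```python
import math

def vismalin(J, k_d, T_d, k_P1, omega_nom=2 * math.pi * 50):
    """D, Omega and pole time constants of the linearised VISMA (13)-(14)."""
    c = 1.0 / (k_P1 * omega_nom)
    D = ((k_d + J) / c + T_d) / (2 * math.sqrt(J * T_d / c))
    Omega = 1.0 / math.sqrt(J * T_d / c)
    tau1 = 1.0 / (Omega * (D + math.sqrt(D * D - 1)))
    tau2 = 1.0 / (Omega * (D - math.sqrt(D * D - 1)))
    return D, Omega, tau1, tau2

def constraints_ok(J, k_d, T_d, K_I, k_P1, T_max, omega_nom=2 * math.pi * 50):
    """Check (15): T_max <= tau1, and (16): K_I <= J*omega_nom/(3*tau2)."""
    D, Omega, tau1, tau2 = vismalin(J, k_d, T_d, k_P1, omega_nom)
    return T_max <= tau1 and K_I <= J * omega_nom / (3 * tau2)

k_P1 = 0.4 * math.pi / 5000.0           # droop gain of a 5 kVA VISMA, eq. (4)
omega_nom = 2 * math.pi * 50
c = 1.0 / (k_P1 * omega_nom)
T = 0.5                                  # assumed max inverter time constant
# near the predicted optimum: k_d ~ 0, J ~ c*T, T_d ~ T
D, Omega, tau1, tau2 = vismalin(c * T, 1e-6, T, k_P1)
K_I_bound = (c * T) * omega_nom / (3 * tau2)
print(tau1, tau2)                        # both close to T (near-double pole)
print(K_I_bound, 1 / (3 * k_P1))         # bound approaches 1/(3 k_P,1)
```

The numerical check also confirms D > 1 and τ_1 < τ_2 for generic parameter values.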
Cost Functional
The cost functional to be optimised contains three parts with parameters α > 0 and β > 0:
E[∆f, ∆V, δ_f, δ_V, α, β] = t_final + α · (k_d + J) + Σ/β,   Σ := ∆f/δ_f + ∆V/δ_V   (17)

The first part, t_final, is the time after which the system has settled into a steady state again. The second part drives the scaled maximum deviations ∆f/δ_f + ∆V/δ_V towards a minimum, where ∆f = max_{i∈{1,2,3}, t>t_0} |f_i(t) − f_i(t_0)| is the maximum frequency deviation, f_i being the frequency at node i. The maximum voltage deviation is ∆V = max_{i∈{1,2,3,4}, t>t_0} |V_grid,i(t) − V_grid,i(t_0)|, with grid voltage V_grid,i(t). Third, a trade-off must be found between the required storage capacity of the VISMA, which should be as small as possible, and the energy that is used to keep up its virtual inertia. By setting M_mech = 0 and integrating (9) over time, the energy provided to or taken from the microgrid by the VISMA is:

E_VISMA = −(1/2)(J + k_d) [ω(t_2)^2 − ω(t_1)^2] + T_d ∫_{t_1}^{t_2} ω_i(t) (dM_d(t)/dt) dt   (18)

Since this energy scales with J + k_d, the term α · (k_d + J) is included in (17). Minimising mainly this term will give results close to T_d ≈ max_{i=1,2} T_i, k_d ≈ 0 and J ≈ c · max_{i=1,2} T_i. Minimising mainly t_final on the other hand leads to a large value within the limits given by (16), namely K_I ≈ 1/(3 k_P,1).
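In code form, (17) is a three-term sum (our sketch; the scales δ_f, δ_V and the weights α, β are free choices):

```python
def cost(t_final, k_d, J, delta_f_max, delta_V_max,
         delta_f_scale, delta_V_scale, alpha, beta):
    """Cost functional (17): settling time + energy term + scaled peak term."""
    sigma = delta_f_max / delta_f_scale + delta_V_max / delta_V_scale
    return t_final + alpha * (k_d + J) + sigma / beta

# example: t_final = 3 s, k_d + J = 2, both scaled peaks equal to 0.5
print(cost(3.0, 0.5, 1.5, 0.2, 10.0, 0.4, 20.0, 2.0, 4.0))
```

Choosing the voltage scale δ_V very large makes the voltage term negligible, which is how the optimisation in Sec. 5 focuses on the frequency peak.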
Implementation
The energy landscape (see Fig. 2) is rough with many local minima, in particular due to a stochastic term needed for the initial conditions when solving the differential equations, preventing the application of standard, e.g., gradient-based, methods. Instead, we use Parallel Tempering here, which also works for harder optimisation problems, but is easy to implement.
Parallel Tempering
The optimisation algorithm works as follows. The configurations of the system are sampled according to the Boltzmann probability distribution P(E_i) = (1/Z) exp(−E_i/Θ), where Z is a normalization constant, E_i is the energy of configuration i and Θ an artificial temperature. This is achieved via a special Monte Carlo (MC) sampling, the Metropolis algorithm [30], where in each iteration a new candidate configuration with corresponding energy E_2 is created and accepted with probability

p_Metr = min{1, e^{−(E_2 − E_1)/Θ}}.   (19)

This sampling is performed in parallel for n different temperatures [21,22], where a random walk in temperature space is performed. To preserve detailed balance and equilibrium for an infinite number of iterations, the Metropolis criterion [21] with energies E(·) is used for the swap probability

p_Swap = min{1, exp[(1/Θ_k − 1/Θ_{k+1}) (E(y_k) − E(y_{k+1}))]}.   (20)

Two neighbouring configurations with temperatures Θ_k, Θ_{k+1} (k ∈ [1, n − 1]) can be exchanged. In each such swap, k ∈ {1, 2, ..., n − 1} is chosen at random with equal probability.
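A minimal, self-contained Parallel Tempering loop on a one-dimensional rugged test landscape illustrates the interplay of the Metropolis acceptance (19) and the swap acceptance (20). This toy code is ours; the actual optimisation couples such a loop to the microgrid ODE simulation.

```python
import math, random

def parallel_tempering(energy, temps, x0, sweeps=2000, step=0.3, seed=1):
    """Minimal Parallel Tempering: one Metropolis walker per temperature,
    plus neighbour swaps accepted with probability (20)."""
    rng = random.Random(seed)
    xs = [x0] * len(temps)
    best_x, best_E = x0, energy(x0)
    for _ in range(sweeps):
        # Metropolis move at every temperature, acceptance (19)
        for k, T in enumerate(temps):
            x_new = xs[k] + rng.uniform(-step, step)
            dE = energy(x_new) - energy(xs[k])
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                xs[k] = x_new
            if energy(xs[k]) < best_E:
                best_x, best_E = xs[k], energy(xs[k])
        # attempt one swap of a random neighbouring pair, acceptance (20)
        k = rng.randrange(len(temps) - 1)
        a = (1 / temps[k] - 1 / temps[k + 1]) * (energy(xs[k]) - energy(xs[k + 1]))
        if a >= 0 or rng.random() < math.exp(a):
            xs[k], xs[k + 1] = xs[k + 1], xs[k]
    return best_x, best_E

# rugged test landscape: deep minima near x = +1 and x = -1, ripples on top
rough = lambda x: (x * x - 1.0) ** 2 + 0.1 * (1 - math.cos(25 * x)) \
                  + 100.0 * ((x < -5) or (x > 5))
x, E = parallel_tempering(rough, [0.01, 0.05, 0.2, 1.0, 5.0], x0=4.0)
print(x, E)
```

High temperatures explore the landscape while low temperatures refine; the swaps let good configurations migrate down the temperature ladder, so the walker started far away at x = 4 still ends up in one of the deep basins.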
Implementation of the Optimisation Algorithm
Within an optimisation procedure, one of two perturbation scenarios is considered. Both are based on a step in load. During the transient it is checked whether the usual frequency and voltage ranges are met (see (4)). If these are not fulfilled, the parameter set is rejected, i.e., E = ∞. Before (17) is calculated in the simulation, the constraints (15) and (16) are checked. If they are not fulfilled, the parameter set is rejected.
We use from the GSL [32] a hybrid method (Newton and dogleg step) for solving the steady-state equations; the results are used as initial conditions for the Runge-Kutta-Fehlberg method to solve the differential equations. For parallelisation we use OpenMPI [33]. The 12 temperatures used in the simulations are Θ_i ∈ {0.01, 0.02, 0.07, 0.2, 0.5, 1, 3, 7, 20, 50, 100, 10^9}, where 10^9 corresponds to the acceptance of every new state except the ones that violate (15), (16) or lead to an unstable system state. For each temperature Θ_i the MC sampling is performed in the following way:
1. Calculate the value of the cost functional E_1 with the given parameter set Φ = (J, k_d, T_d, K_I).
2. Choose one parameter O of the four parameters in Φ uniformly at random.
3. Calculate O' = O · m with m = |1 + R_perc · r|, where r ∈ [−1, 1]

The steps 2.-5. are repeated N_params · 2 = 4 · 2 = 8 times. After two such sweeps have been performed for each temperature, n − 1 swap attempts are done. For each of these attempts the procedure is the following: first, choose a configuration k ∈ [1, n − 1] uniformly at random; then, exchange the two configurations y_k and y_{k+1} with the swap probability given by (20). For each parameter set (α, β), 200 swaps are performed with R_perc = 0.8. The minimum of E is found by taking the minimum over all temperatures Θ_i, leading to Φ_min. Another simulation [34] with Φ_min as the initial parameter set is then started with 200 swaps and R_perc = 0.4.
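One such sweep over the four VISMA parameters can be sketched as follows; a minimal Python illustration, with a placeholder `energy_fn` standing in for the full grid simulation that evaluates (17):

```python
import math
import random

def metropolis_sweep(phi, energy_fn, theta, r_perc, rng=random):
    """One sweep of the single-parameter updates: each move perturbs one of the
    parameters in phi multiplicatively, O' = O * |1 + R_perc * r| with r uniform
    in [-1, 1], and accepts with the Metropolis probability (19)."""
    e1 = energy_fn(phi)                        # current cost E_1
    for _ in range(len(phi)):
        name = rng.choice(list(phi))           # pick one parameter at random
        m = abs(1.0 + r_perc * rng.uniform(-1.0, 1.0))
        candidate = dict(phi)
        candidate[name] = phi[name] * m        # multiplicative perturbation
        e2 = energy_fn(candidate)              # cost E_2 of the candidate
        # Metropolis acceptance (19); the exp underflows to 0 for very bad moves
        if e2 <= e1 or rng.random() < math.exp(-(e2 - e1) / theta):
            phi, e1 = candidate, e2
    return phi, e1
```

In the paper two such sweeps are done per temperature before the swap attempts; rejected parameter sets that violate the constraints would simply be assigned E = ∞ inside `energy_fn`.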
Results
Optimisation Results for Different Perturbation Scenarios
For the optimisation, a microgrid in a radial topology and in island mode is considered, see Fig. 1. We neglect ohmic grid losses and focus on the frequency peak, i.e., δ_V = 10^40. The perturbation is a jump in active load. Table 1 shows the parameters that remain the same for all scenarios. In this scenario, a step in the load power is applied; see Table 2 for all parameters. The last three columns of Table 3 give the values of the parts of the cost functional (17), which reflect the part of the functional on which the optimisation has been focused. For the first minimum (#1) the three parts of (17) have the same weight, the second minimum (min.) focuses on t_final, the third on the value of J + k_d, and the fourth on small frequency peaks (Σ). For the second, third and fourth min. the focus is reflected in the result,
Table 1 column headings: L_lines, L_S, R_lines, R_S, L_C, T_1, T_2/3, k_V, Q_nom,1/2/3, K_awu.
i.e., min. #2 has the smallest value of t_final, min. #3 the smallest of J + k_d. Min. #4 gives a comparably small value for Σ, but apparently only a local min. was found, since in min. #2 it is even smaller.

Figure 3. Comparison of the four minima for scenario 1 given in Table 3.
Scenario 2: Different nominal powers and load jump of 7 kW
In this scenario, we assume different rated and nominal active powers of the devices (see Table 4). The optimal parameter sets obtained for this setup are shown in Table 5. Fig. 5 shows the system behaviour for min. #1 of Scenario 2. Different time constants of the VISMA and the two inverters cause a different system behaviour than in Scenario 1. The VISMA's reaction is very slow due to its high virtual inertia (for min. #1, J ≈ 11.5). Directly after the load jump the inverters have to balance the sudden power demand.
This forces them to provide active power at a value (∼ 8 kW) above their nominal rated power values S 2/3 .
Inverters for island grids allow this for a short amount of time.
Conclusion
Parameters of a VISMA in an islanded microgrid with radial topology containing two inverters and a load have been optimised using Parallel Tempering. By varying additional parameters in the cost functional, the focus of the optimisation was shifted. For two perturbation scenarios, minima of the cost functional were found which are stable solutions within the prescribed boundaries. The results show that this optimisation procedure is in general applicable to the task of parameter optimisation in a microgrid. It is also shown that through the proper setting of the VISMA's parameters, its functionality can be adapted to different participants in the grid. The values obtained by the analytical investigation in Sec. 3.2 seem to offer a "rule of thumb" for a good parameter region. However, the effects of each parameter in complex situations are not obvious. For other topologies, devices or disturbances, totally different parameter sets might be needed.
In order to find a good parameter set for a given microgrid setup, various disturbances should be analyzed.
In future research, other forms of the constraints for the VISMA parameters and cost functionals should be investigated. Larger microgrids with other power generating systems can be put under scrutiny. The proposed approach could be transferred to the optimisation of parameters in other control strategies. An extension would be the study of different disturbance scenarios, where one uses e.g., a series of steps in load. Finally, theoretical results should be confirmed by measurements in an appropriate laboratory.
Figure 1. Scheme of the microgrid setup for simulation in perturbation scenarios.
based on the design preference that integral control action should occur only after the first part of the transient caused by droop control is finished. For the simplified model (13), more than 95 % of the absolute value of the step size is reached after 3 max_{i=1,2}(τ_i) because of the exponential character of the linearised system's step response. The response time of the integral controller should be larger. A lower bound for the response time 1/K_I of the integral controller, by using that x(t) is scaled by 1/(Jω_nom) in
(17): α and β allow to shift the focus of the optimisation. First, we want the transient after a perturbation to be as short as possible, i.e., t_relax → min. The time t_relax is the relaxation time of the system after a perturbation. It is defined as the largest of the moments when the frequencies reach 49.999 Hz again. The time interval t_final = t_relax − t_0 considers the moment t_0 when the jump in load occurs. Second, we consider the peak depth in the transients of frequency and voltage. We want them to be as small as possible to avoid damage to electronic devices, i.e., ∆ f
(18): T_d is responsible for scaling the energy loss due to damping, and the other part of the energy is dominated by J + k_d. To avoid unnecessarily large storage capacities, k_d + J is minimised. The constraints of the optimisation problem allow some insights into the structure of the optima in advance. Choosing a very small k_d and T_d close to max_{i=1,2} T_i gives values as close as possible to the lower bound of (15). On the one hand, this indicates that an optimisation with focus on keeping the transient behaviour of the VISMA close to that of regular inverters (i.e., minimising the virtual inertia J + k_d and the transient time t_final)
Figure 2. 2D projection of the energy landscape (value of the cost functional E (17)) for scenario 1, close to min. #2. The parameters T_d = 0.6, k_d = 2.6 · 10^−4, and K_I = 1060 are fixed, whereas J is varied.
Table 1 values: 1.514 mH, 42.0 mH, 0.0 Ω, 0.3 Ω, 1.8 mH, 0.01 s, 0.5 s, 10.
Figure 4. Minimum #1 of scenario 1.
Figure 5. Minimum #1 of scenario 2.
E_1 is the energy of the current state. As (19) fulfills detailed balance, sampling according to a Boltzmann distribution is ensured. From physics we know that for Θ → 0 the energy attains a minimum, E → E_min. This leads to the idea of Simulated Annealing [31], where the temperature of an MC simulation is gradually decreased until a minimum is found. This approach can get stuck in a local minimum. An improvement is to simulate the system at various temperatures Θ_i. This can be done by Parallel Tempering [21,22].
is a random number. 4. Calculate the new value E_2 of the cost functional with the modified parameter set Φ' in which O has been replaced by O'. 5. Accept the new parameter set Φ' with the Metropolis probability (19).
Table 1. Parameters used for the optimisation in all scenarios. Values for T_2/3 taken from [8].
Table 2. Parameters used for the optimisation in scenario 1.

S_1/2/3: 4000.0 VA
P_nom,1/2/3: 500.0 W
k_P,1/2/3: 3.1416·10^−4 rad/(s·VA)
k_Q,2/3: 5.75·10^−3 V/VA
P_load before jump: 1500.0 W
P_load after jump: 4500.0 W

Table 3. Optimal parameter sets for scenario 1 (R_perc = 0.4, focus on frequency peak: δ_f = 0.05, δ_V = 10^40). Errors of E resulting from 50 runs with different initial conditions.
#  J        k_d/10^−4  T_d     K_I      E           α     β          J + k_d  Σ      t_final  α(J + k_d)  Σ/β
1  5.0895   1.1857     0.5029  1054.56  108.93(6)   7     0.027      5.090    0.994  36.483   35.627      36.820
2  91.479   2.5800     0.5917  1060.97  35.12(2)    0.07  2.7        91.480   0.817  28.415   6.4036      0.3026
3  5.0692   1.0071     0.5163  975.67   3624.89(9)  700   0.027      5.069    1.000  39.379   3548.494    37.026
4  50.894   10.1498    1.2539  1053.54  3425(46)    7     2.7·10^−4  50.895   0.820  32.913   356.265     3036.54
In Sec. 3.2 it was stated that T_d ≈ max_{i=1,2} T_i = 0.5, k_d ≈ 10^−4 (since it is bounded by this value) and J ≈ c · max_{i=1,2} T_i ≈ 10.13 · 0.5 ≈ 5.07 in cases where J + k_d and t_final are minimised with equal weighting, and K_I ≈ 1/(3 k_p,1) ≈ 1061.03 if the minimisation focuses on t_final. These values are close to min. #1 and #3, and to min. #2 for focussing on t_final. Weightings on other parts of the cost functional (e.g., min. #4) lead to minima which are further away from the bounds of the optimisation constraints.
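The last three columns of Tables 3 and 5 indicate that E is evaluated as the sum of the three weighted parts. A hypothetical helper reproducing the tabulated values (the split E = t_final + α(J + k_d) + Σ/β is inferred from the table columns; the exact definition of (17) is given earlier in the paper):

```python
def cost_functional(t_final, j, k_d, peak_sum, alpha, beta):
    """Value of E as reflected by the last three columns of Table 3:
    relaxation time + weighted virtual inertia + weighted peak term Sigma."""
    return t_final + alpha * (j + k_d) + peak_sum / beta
```

Plugging in row #1 of Table 3 (t_final = 36.483, J = 5.0895, k_d = 1.1857·10^−4, Σ = 0.994, α = 7, β = 0.027) reproduces E ≈ 108.93.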
The results from Table 3 are visualized (see Fig. 3) by a comparison of the VISMA frequencies and voltages for all four minima. The remaining grid values are shown only for the first minimum, see Fig. 4.
Table 4. Parameters used for the optimisation in scenario 2.

S_1: 9.0 kVA
S_2: 3.0 kVA
S_3: 1.0 kVA
P_nom,1: 1.0 kW
P_nom,2: 1.5 kW
P_nom,3: 0.5 kW
k_P,1: 1.3963·10^−4 rad/(s·VA)
k_P,2: 4.1888·10^−4 rad/(s·VA)
k_P,3: 12.5664·10^−4 rad/(s·VA)
k_Q,2: 7.67·10^−3 V/VA
k_Q,3: 23.0·10^−3 V/VA
P_load before jump: 3.0 kW
P_load after jump: 10.0 kW

Despite the different system behaviour compared to Scenario 1, the effect of the optimisation with different weights is clearly reflected in the values of the cost functional (see Table 5). The very small t_final for min. #2 is due to an overshoot in frequencies (figure not shown): after reaching 49.999 Hz for the first
time, f_1 leaves the range of [49.999, 50.001] again before returning to 50 Hz. Optimisations for the same scenarios with ohmic losses in lines have been performed (results not shown). Though the transients show different characteristics, almost the same parameter sets are found.

Table 5. Optimal parameter sets for scenario 2 (R_perc = 0.4, focus on frequency peak: δ_f = 0.2, δ_V = 10^40). Errors of E resulting from 50 runs with different initial conditions.

#  J        k_d/10^−4  T_d     K_I      E           α      β          J + k_d  Σ      t_final  α(J + k_d)  Σ/β
1  11.4986  1.1595     0.5035  2379.26  59.26(1)    1.7    0.045      11.499   0.902  19.671   19.548      20.046
2  59.3043  4.2760     0.9356  2387.04  14.99(1)    0.017  4.5        59.305   0.887  13.781   1.0082      0.1971
3  11.4076  1.0139     0.5064  2348.85  1979.32(1)  170    0.045      11.408   0.902  19.967   1939.309    20.040
4  16.8974  13.2759    0.6524  2382.76  2039.5(5)   1.7    4.5·10^−4  16.899   0.896  19.076   28.728      1991.7
[1] N. Pogaku, M. Prondanovic, & T. Green, Modeling, analysis, and testing of autonomous operation of an inverter-based microgrid, IEEE Trans. Power Electron., 22, 2007, 613.
[2] E. Coelho, P. Cortizo, & P. Garcia, Small signal stability for parallel connected inverters in stand-alone AC supply systems, IEEE Trans. Ind. Appl., 38, 2002, 533.
[3] M. Marwali, J. Jung, & A. Keyhani, Stability analysis of load sharing control for distributed generation systems, IEEE Trans. Energy Convers., 22, 2007, 737.
[4] E. Barklunk et al., Energy management in autonomous microgrid using stability-constrained droop control of inverters, IEEE Trans. Power Electron., 23, 2008, 2346.
[5] N. L. Soultanis, S. A. Papathanasiou, & N. D. Hatziargyriou, A stability algorithm for the dynamic analysis of inverter dominated unbalanced LV microgrids, IEEE Trans. Power Syst., 22(1), 2007, 294-304.
[6] A. Engler, Applicability of droops in low voltage grids, DER Journal, 1, 2005, 1.
[7] J. W. Simpson-Porco, F. Dörfler, & F. Bullo, Synchronization and power sharing for droop-controlled inverters in islanded microgrids, Automatica, 49(9), 2013, 2603-2611.
[8] J. Schiffer et al., Conditions for stability of droop-controlled inverter-based microgrids, Automatica, 50, 2014, 2457-2469.
[9] A. Raghami, M. T. Ameli, & M. Hamzeh, Online droop tuning of a multi-DG microgrid using cuckoo search algorithm, Electr. Pow. Compo. Sys., 43(14), 2015, 1583-1595.
[10] R. Godoy et al., Differential-evolution-based optimization of the dynamic response for parallel operation of inverters with no controller interconnection, IEEE Trans. Ind. Electron., 59, 2012, 2859.
[11] S. Mishra, G. Mallesham, & A. Jha, Design of controller and communication for frequency regulation of a smart microgrid, IET Renewable Power Gener., 6, 2011, 248.
[12] M. J. Sanjari & G. B. Gharehpetian, Game-theoretic approach to cooperative control of distributed energy resources in islanded microgrid considering voltage and frequency stability, Neural Comput. Appl., 25, 2013, 343.
[13] I.-Y. Chung, W. Liu, D. A. Cartes, & S.-I. Moon, Control parameter optimization for multiple distributed generators in a microgrid using particle swarm optimization, Eur. Trans. Electr. Power, 21, 2011, 1200.
[14] M. Hassan & M. Abdio, Optimal design of microgrids in autonomous and grid-connected modes using particle swarm optimization, IEEE Trans. Power Electron., 26, 2011, 755.
[15] W. Al-Saedi, S. Lachowicz, D. Habibi, & O. Bass, Power quality enhancement in autonomous microgrid operation using particle swarm optimization, Int. J. Electr. Power Energy Syst., 42, 2012, 139.
[16] H. Bevrani et al., Intelligent frequency control in an AC microgrid: Online PSO-based fuzzy tuning approach, IEEE Trans. Smart Grid, 3, 2012, 1935.
[17] L. Yu et al., A novel information exchange particle swarm optimization for microgrid multi-objective dynamic optimization control, J. Renew. and Sustain. Ener., 6(2), 2014, 023114.
[18] Y. Zeng & S. Yanguang, Enhanced multi-objective particle swarm optimization for optimal reactive power dispatch considering voltage stability, Int. J. of Power and Energy Systems, 34, 2014, 116-24.
[19] P. Wang et al., Control parameter optimization for AP1000 reactor using particle swarm optimization, Ann. Nucl. Energy, 87, Part 2, 2016, 687-695.
[20] H.-P. Beck & R. Hesse, Virtual synchronous machine, IEEE 9th Intern. Conf. on Electrical Power Quality and Utilisation (EPQU), Barcelona, Spain, 2007.
[21] K. Hukushima & K. Nemoto, Exchange Monte Carlo method and application to spin glass simulations, J. Phys. Soc. Jpn., 65, 1996, 1604.
[22] R. H. Swendsen & J.-S. Wang, Replica Monte Carlo simulation of spin-glasses, Phys. Rev. Lett., 57, 1986, 2607.
[23] M. S. Rahman Tito, T. T. Lie, & T. Anderson, Sizing optimization of wind-photovoltaic hybrid energy systems under transient load, Int. J. of Power and Energy Systems, 33, 2013, 168-74.
[24] A. K. Hartmann & H. Rieger, Optimization Algorithms in Physics (Berlin: Wiley-VCH, 2001), 1st edition.
[25] A. K. Hartmann & H. Rieger (Eds.), New Optimization Algorithms in Physics (Weinheim: Wiley-VCH, 2004).
[26] P. Kundur, Power System Stability and Control (New York: McGraw-Hill, 1994).
[27] M. Chandorkar, D. Divan, & R. Adapa, Control of parallel connected inverters in standalone AC supply systems, IEEE Trans. Ind. Appl., 29(1), 1993, 136-143.
[28] T. Dewenter, B. Werther, A. K. Hartmann, & H.-P. Beck, Optimierung des dynamischen Verhaltens netzstützender Anlagen am Beispiel der VISMA (Optimization of the dynamic behavior of grid-supporting devices for the example of the VISMA), Proc. 13. Symp. Energieinnovation (EnInnov2014), Graz, Austria, 2014.
[29] Y. Chen, R. Hesse, D. Turschner, & H.-P. Beck, Improving the grid power quality using VISMAs, Intern. Conf. on Power Engineering, Energy and Electrical Drives (POWERENG), Malaga, Spain, 2011.
[30] N. Metropolis et al., Equation of state calculations by fast computing machines, J. Chem. Phys., 21, 1953, 1087-1092.
[31] S. Kirkpatrick, C. D. Gelatt, & M. P. Vecchi, Optimization by simulated annealing, Science, 220(4598), 1983, 671-680.
[32] M. Galassi et al., GNU Scientific Library Reference Manual, 3rd edition, 2009.
[33] E. Gabriel et al., Open MPI: Goals, concept, and design of a next generation MPI implementation, Proc. 11th Europ. PVM/MPI Users' Group Meet., Budapest, Hungary, 2004.
[34] A. K. Hartmann, Big Practical Guide to Computer Simulations (Singapore: World Scientific, 2015).
TWO STEP RECOVERY OF JOINTLY SPARSE AND LOW-RANK MATRICES: THEORETICAL GUARANTEES

Sampurna Biswas, Sunrita Poddar, Soura Dasgupta, Raghuraman Mudumbai, Mathews Jacob
Department of Electrical and Computer Engineering, The University of Iowa, IA, USA

DOI: 10.1109/isbi.2015.7164019; arXiv: 1412.2669 (https://arxiv.org/pdf/1412.2669v1.pdf)

Index Terms: Low rank, joint sparsity, RIP, dynamic MRI

Abstract: We introduce a two step algorithm with theoretical guarantees to recover a jointly sparse and low-rank matrix from undersampled measurements of its columns. The algorithm first estimates the row subspace of the matrix using a set of common measurements of the columns. In the second step, the subspace aware recovery of the matrix is solved using a simple least square algorithm. The results are verified in the context of recovering CINE data from undersampled measurements; we obtain good recovery when the sampling conditions are satisfied.
INTRODUCTION
The recovery of matrices that are simultaneously low-rank and jointly sparse from few measurements has received considerable attention in recent years, mainly in the context of dynamic MRI reconstruction [1,2]. In this context, the columns of the matrix correspond to vectorized image frames, while the rows are the temporal profiles of each voxel. While there is considerable theoretical progress on problems such as recovering jointly sparse vectors or low-rank matrices, the recovery of matrices that are simultaneously low-rank and jointly sparse has received considerably less attention.
Recently, Golbabee et al. [3] have developed theoretical guarantees for the recovery of a matrix of rank r which has only k non-zero rows, using low rank and joint sparsity priors, from its random Gaussian dense measurements. Unfortunately, the dense measurement scheme, where each measurement is a linear combination of all matrix entries, is not practical in dynamic imaging; each measurement can only depend on a single column of the matrix. Another alternative is the multiple measurement vector scheme (MMV), where all the columns are measured by the same sampling operator [4]. This scheme offers a factor of two gain over the independent recovery of the columns when the matrix is full rank; the gain is minimal when the rank of the matrix is far lower than the number of columns. This is clearly undesirable since the columns are highly redundant in the low-rank setting; one would expect significant gains in this case. (This work is in part supported by US NSF grants EPS-1101284, ECCS-1150801, CNS-1329657, CCF-1302456, CCF-1116067, NIH 1R21HL109710-01A1, ACS RSG-11-267-01-CCE, and ONR grant N00014-13-1-0202.)
We consider a two step strategy to recover a simultaneously low-rank and jointly sparse matrix from the measurements of its columns. Specifically, we propose to first recover the row subspace of the matrix from a set of common measurements made on the columns. Once the row subspace is estimated, the subspace aware recovery of the column subspace simplifies to a simple linear problem. This work is motivated by two-step algorithms used in dynamic MRI, where the temporal basis functions are first recovered from navigator signals or central k-space samples [1]. While excellent reconstruction performance is reported in a range of dynamic and spectroscopic MRI applications [1], theoretical guarantees on the recovery of the matrix using this two-step strategy are lacking. A key difference of the proposed formulation from [1] is the assumption of joint sparsity, which plays a key role in ensuring perfect recovery. The joint sparsity of the matrix columns/image frames is a reasonable assumption in dynamic imaging, where the edges in the images are approximately restricted to the same image regions.
Our results show that the row subspace can be robustly recovered from a few measurements, which are common for all the columns. The number of common measurements depends on the joint sparsity or the rank, whichever is smaller. We also develop a sufficient condition to guarantee perfect subspace aware recovery of the matrix, once the row subspace is known. We verify the results using numerical simulations and demonstrate the utility of the scheme in recovering free breathing cardiac CINE MRI data. We observe that good recovery is possible when the number of measurements is comparable to the theoretical guarantees. We also observe that, in addition to providing good guarantees on recovering the matrix, joint sparsity provides a significant improvement in performance in practical applications.
PROPOSED APPROACH
We consider the recovery of X ∈ R n×N that is k-jointly sparse (has only k non-zero rows) and has a rank of r. In the context of dynamic imaging, n is the number of pixels in the image, while N is the number of frames in the time series. The skinny singular value decomposition (SVD) of this matrix is specified by X = UΣV H , where the columns of U ∈ R n×r and V ∈ R N ×r are orthonormal. We consider measurements that are only dependent on columns of the matrix, denoted by x i :
[z_i; y_i] = [Φ; A_i] x_i = D_i x_i,   (1)

where z_i = Φ x_i, y_i = A_i x_i, and D_i denotes the stacked measurement matrix.
The measurement matrix Φ ∈ C s×n is common for all columns, while different measurement matrices A i are chosen for different columns.
We introduce a two-step algorithm to recover the matrix from its measurements y i ; i = 0, .., N − 1.
1. We show that the row subspace matrix Q, with Q^H = RV^H, can be estimated from the common measurements Z = ΦX via the eigen decomposition of Z^H Z. Here, R is an arbitrary invertible matrix whose condition number is bounded under simple conditions on Φ.
2. The subspace aware recovery of X = PQ^H in (1) simplifies to a linear system of equations. This system is invertible if the matrix is k-jointly sparse and satisfies the condition spark(X) = r + 1. The latter sufficient condition implies that every set of r columns of the matrix is linearly independent, which is a bit pessimistic. In reality, considerably weaker conditions suffice; these will be the focus of our future work.
We will now derive conditions for the success of the above two steps.
Recovery of the row subspace
The common measurements Z are related to the row subspace vectors V as
Z = Φ U Σ V^H = R V^H,  with R := Φ U Σ.   (2)
We propose to estimate the subspace from the eigen decomposition of
Z^H Z = V R^H R V^H.   (3)
Note that if R is a full rank matrix, R^H R is positive definite and has a singular value decomposition WΛW^H, where W ∈ R^{r×r} is an orthonormal matrix and all the diagonal entries of Λ are positive. Thus, the eigen decomposition of Z^H Z yields
Z^H Z = (VW) Λ (VW)^H,  with Q := VW.   (4)
Note that span(q_i ; i = 0, .., r − 1) = span(v_i ; i = 0, .., r − 1), since W is orthonormal. We now present conditions on Φ that guarantee R to be full rank.
Theorem 1. Let X be k-jointly sparse. The row subspace of X is uniquely recovered from the measurements Z = ΦX if and only if spark(Φ) ≥ k + 1.
We now show that the recovery of the subspace is also robust, when X is k-jointly sparse and the measurement matrix Φ satisfies the restricted isometry property (RIP) for k sparse vectors.
Theorem 2. Suppose the measurement matrix Φ satisfies the restricted isometry conditions for k-sparse vectors
(1 − δ_k) ||x||_2^2 ≤ ||Φx||_2^2 ≤ (1 + δ_k) ||x||_2^2   (5)
then the condition number κ(R) of R is bounded by
κ(R) ≤ [(1 + δ_k)/(1 − δ_k)] κ(X)   (6)
The above conditions guarantee good recovery of the matrix when the measurement matrix Φ satisfies the RIP conditions for k-sparse vectors. In many practical applications, the rank of X is much smaller than k. We now show that the row subspace can be reliably recovered using a Φ with a considerably lower number of measurements than k.
Theorem 3. The row subspace Q of any matrix X can be uniquely recovered from the measurements Z = ΦX for almost all matrices Φ ∈ C s×n , if s ≥ r.
The next theorem shows that ΦU is well conditioned, when Φ has complex Gaussian random entries; the condition number of R is bounded as long as X is well-conditioned.
The constant M, defined in [5], depends on r and s and is phrased as an expectation. Note that the probability that the condition number exceeds c declines rapidly with growing c, depending on s − r + 1. The proofs will be provided in a future work.
The above theorems guarantee the recovery of the row subspace of X from the common measurements of its columns, acquired by Φ. The number of common measurements depend upon the joint sparsity k or the rank r, depending on which is smaller. In many dynamic imaging applications, r << k and hence the number of common measurements is dependent on the rank. This implies that very few common measurements are required to recover the subspace.
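As a quick numerical check of this first step, the subspace estimate via the eigen decomposition of Z^H Z can be sketched with numpy. The sizes and real-valued data below are synthetic, not the paper's MRI setting:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, r, k, s = 60, 40, 3, 10, 8           # s >= r common measurements

# k-jointly-sparse, rank-r ground truth X = U V^H
U = np.zeros((n, r))
U[rng.choice(n, k, replace=False)] = rng.standard_normal((k, r))
V, _ = np.linalg.qr(rng.standard_normal((N, r)))
X = U @ V.T

Phi = rng.standard_normal((s, n))          # common measurement matrix
Z = Phi @ X                                # common measurements of all columns

# eigen decomposition of Z^H Z; the top-r eigenvectors span the row subspace
_, W = np.linalg.eigh(Z.T @ Z)
Q = W[:, -r:]                              # estimated row-subspace basis

# compare estimated and true row subspaces via their orthogonal projectors
err = np.linalg.norm(Q @ Q.T - V @ V.T)
print(err)                                 # should be near machine precision
```

With s = 8 ≥ r = 3 common measurements, the estimated projector matches the true one up to numerical error, illustrating that the rank, not the sparsity, dictates the number of common measurements here.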
Subspace aware recovery of X
Once Q ∈ R^{N×r}, with Q^H = RV^H, is obtained from the common measurements of the columns, the recovery of the matrix simplifies to the estimation of the coefficient matrix P ∈ C^{n×r}. Vectorizing both sides of the second row of equation (1), we obtain
X = (U Σ R^{−1}) (R V^H) = P Q^H,  with P := U Σ R^{−1},  Q^H := R V^H.   (8)

[y_1; . . . ; y_N] = [q_11 A_1, · · · , q_r1 A_1 ; . . . ; q_1N A_N, · · · , q_rN A_N] [p_1; . . . ; p_r],   (9)

i.e., vec(Y) = B vec(P), where B denotes the block matrix above, p_j the j-th column of P, and q_ji the entries of Q.
Since X is jointly k-sparse, the sparsity of vec(P) is kr. We now introduce the sufficient condition

spark(X) = r + 1   (10)
to guarantee the recovery of P from (1). This condition implies that every collection of r columns of X is linearly independent. In the absence of such a condition, there might exist columns of X that are orthogonal to all other columns of X.
To obtain perfect recovery of all the columns in this worst case scenario, we require spark(A_i) = 2k for all i; there is no benefit over the independent recovery of the columns or from the knowledge of the subspace. We now present a sufficient condition on the measurement matrices to guarantee the subspace aware recovery of X that is k-jointly sparse, has rank r, and satisfies (10).
Theorem 5. Let n = (p + 1)r, where p is an arbitrary integer, and let the measurement matrices be chosen as

$$\begin{aligned} \mathbf{C}_1 &= \mathbf{A}_1 = \mathbf{A}_2 = \dots = \mathbf{A}_r \\ &\;\;\vdots \\ \mathbf{C}_p &= \mathbf{A}_{pr+1} = \mathbf{A}_{pr+2} = \dots = \mathbf{A}_N \end{aligned} \tag{11}$$

Here, $\mathbf{C}_i \in \mathbb{R}^{s_i \times n}$, i = 1, …, p. Then, P can be uniquely determined from (9) if

$$\mathrm{spark}\underbrace{\begin{bmatrix} \mathbf{C}_1 \\ \vdots \\ \mathbf{C}_p \end{bmatrix}}_{\mathbf{C}} \geq 2k. \tag{12}$$
The classical MMV scheme requires a total of (2k − r + 1)N measurements for the unique recovery of a matrix of dimension n × N and rank r. The total number of measurements required by the dense measurement scheme is considerably lower and of the order of the degrees of freedom in the matrix [3]. Combining the results in the above subsections, the proposed scheme requires of the order of (2k − r + N)r measurements for unique recovery, or equivalently r + 2kr/N measurements per frame; this is comparable to the degrees of freedom in the matrix and to the best possible scenario involving dense measurement matrices. Considering that the dense measurement scheme is impractical in a dynamic imaging setting, the gains offered by the practical and efficient two-step strategy are quite significant.

Figure 2. Recovery error vs # variable radial lines
Algorithm
We pose the recovery of the jointly sparse vector P from the linear measurements (9) as an ℓ1 minimization scheme:

$$\hat{\mathbf{P}} = \arg\min_{\mathbf{P}} \left\| \mathbf{B}\,\mathrm{vec}(\mathbf{P}) - \mathrm{vec}(\mathbf{Y}) \right\|_2^2 + \left\| \mathbf{T}\mathbf{P} \right\|_{\ell_1\text{-}\ell_2} \tag{13}$$

Here, T is an appropriately chosen transform or frame operator, while the ℓ1-ℓ2 norm is the mixed norm used to encourage joint sparsity. In this work, we use T as the finite difference operator. We solve the above problem using the alternating direction method of multipliers (ADMM) algorithm.
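Inside ADMM, the mixed-norm term in (13) is handled by a shrinkage step: row-wise soft thresholding, the proximal operator of the ℓ1-ℓ2 norm, which zeroes weak rows jointly across frames. The threshold and test matrix below are illustrative values, not ones used in this work:

```python
import numpy as np

def prox_l1_l2(Z, tau):
    """Row-wise soft thresholding: prox of tau * sum_i ||Z[i, :]||_2.
    Rows with l2 norm below tau are zeroed jointly; the remaining rows
    are shrunk towards zero while preserving their direction."""
    row_norms = np.linalg.norm(Z, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(row_norms, 1e-12), 0.0)
    return scale * Z

Z = np.array([[3.0, 4.0],     # row norm 5 -> shrunk to norm 4
              [0.1, 0.1]])    # row norm ~0.14 -> zeroed jointly
print(prox_l1_l2(Z, 1.0))
```

Shrinking whole rows at once is what couples the frames: a pixel is either active in all frames or suppressed in all of them, which is the joint-sparsity prior.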
RESULTS
We first validate our results using numerical simulations on PINCAT phantom corresponding to CINE MRI data, before using the framework to recover free breathing CINE data.
Numerical simulations
We consider a PINCAT phantom with dimensions of 128 × 128 × 200 and a rank of 20. In this case, the rank r is far less than the sparsity k. We first determine the accuracy of the subspace matrix recovered from the common lines. We use the projection error between two subspaces V1 and V2, defined as

$$E = \frac{\|(\mathbf{I} - \mathbf{V}_1\mathbf{V}_1^H)\mathbf{V}_2\|_2^2 + \|(\mathbf{I} - \mathbf{V}_2\mathbf{V}_2^H)\mathbf{V}_1\|_2^2}{\|\mathbf{V}_1\|_2^2 + \|\mathbf{V}_2\|_2^2}, \tag{14}$$
as the metric for comparing two subspaces. In Fig. 1 we plot the projection error vs the number of common Gaussian samples (left) and common points on radial Fourier measurements (right). A noiseless setting and a noisy setting with an SNR of 35 dB are compared. We observe that the projection error drops to zero when the number of samples equals the rank of 20 in the noiseless case. We also observe that good estimates of the subspaces can be obtained with more measurements in the noisy setting, indicating that the recovery is robust to noise. In Fig. 2, we consider the subspace-aware recovery of the matrix using the subspace estimated from 4 common radial lines. We recovered the images using joint sparse TV recovery. Fig. 2 plots the normalized recovery error as a function of the number of radial lines used in each frame. We observe a recovery error of 1% when eight radial lines/frame are used; this corresponds to an acceleration of approximately 10.7. As expected, the error goes down as more lines are used. We show the reconstructions corresponding to 4 common radial lines and 5 variable lines in Fig. 3. The rows in Fig. 3 correspond to the reconstructions obtained when P is recovered with no regularization, standard spatial TV regularization, and the proposed joint sparsity regularization. The first two columns show the reconstructed image and the error image w.r.t. the original phantom in the noiseless case. The corresponding noisy cases are shown in the last two columns with an output SNR of 50 dB.
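The metric (14) is straightforward to evaluate; a small numpy sketch (assuming V1 and V2 have orthonormal columns, as produced by an SVD):

```python
import numpy as np

def projection_error(V1, V2):
    """Projection error (14) between the column spans of V1 and V2
    (both N x r with orthonormal columns)."""
    I = np.eye(V1.shape[0])
    num = (np.linalg.norm((I - V1 @ V1.conj().T) @ V2) ** 2
           + np.linalg.norm((I - V2 @ V2.conj().T) @ V1) ** 2)
    den = np.linalg.norm(V1) ** 2 + np.linalg.norm(V2) ** 2
    return num / den

rng = np.random.default_rng(1)
Q1, _ = np.linalg.qr(rng.standard_normal((50, 3)))
Q2, _ = np.linalg.qr(rng.standard_normal((50, 3)))
print(projection_error(Q1, Q1))   # identical subspaces: ~0 up to round-off
print(projection_error(Q1, Q2))   # unrelated subspaces: strictly positive
```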
Recovery of free breathing cardiac CINE data
We demonstrate the utility of the algorithm in recovering free breathing CINE data in Fig. 4. The data was acquired using an SSFP sequence with an 18-channel coil array, with TR/TE of 4.2/2.1 ms, a matrix size of 512 × 512, a FOV of 300 mm × 300 mm and a slice thickness of 5 mm on a 3T Siemens Trio scanner. We considered 12 radial lines of k-space to reconstruct each image frame, 4 of which were navigator lines. This translated to a temporal resolution of 50 ms. The acquisition time was 25 s, which corresponds to 500 image frames. The rows of Fig. 4 correspond to the reconstructions obtained when P is recovered with no regularization, standard TV regularization and the proposed joint sparsity regularization.
CONCLUSION
We introduced a two-step algorithm with recovery guarantees to reconstruct a low-rank and jointly sparse matrix from its undersampled measurements. The results show that under simple assumptions, the two-step recovery scheme is guaranteed to provide perfect recovery of the matrix. The application of the scheme to the recovery of free breathing CINE data demonstrates its utility in practical applications.
Theorem 4. [5, Theorem 3.2] Suppose the entries of Φ are independent, zero-mean, complex Gaussian with unit variance. Then for a constant M independent of c and for every c > 1,

$$\Pr[\kappa(\Phi \mathbf{U}) > c] \leq M c^{-2(s-r+1)}.$$
Fig. 1. Projection error between subspaces vs # common Gaussian samples (left) and common points on radial Fourier lines (right).
Fig. 3. Reconstructed PINCAT phantom. Top: no regularization; Middle: standard TV regularization; Bottom: joint sparsity regularized. Noiseless reconstruction and error images in the first two columns and the corresponding noisy case (SNR of 50 dB) in the last two columns.
Fig. 4. Reconstructed free breathing CINE data. Top: no regularization; Middle: standard TV regularization; Bottom: joint sparsity regularized. Last column shows the time profile along the myocardium.

The last column shows the time profile along a vertical line. The results show the utility of the proposed scheme in providing good reconstruction of free breathing CINE MRI data.
[1] Z. Liang, "Spatiotemporal imaging with partially separable functions," in ISBI, 2007, pp. 181-182.

[2] S. G. Lingala, Y. Hu, E. DiBella, and M. Jacob, "Accelerated dynamic MRI exploiting sparsity and low-rank structure: k-t SLR," IEEE Transactions on Medical Imaging, vol. 30, no. 5, pp. 1042-1054, 2011.

[3] M. Golbabaee and P. Vandergheynst, "Compressed sensing of simultaneous low-rank and joint-sparse matrices," arXiv preprint arXiv:1211.5058, 2012.

[4] J. Chen and X. Huo, "Theoretical results on sparse representations of multiple-measurement vectors," IEEE Transactions on Signal Processing, vol. 54, no. 12, pp. 4634-4643, 2006.

[5] A. Edelman and B. Sutton, "Tails of condition number distributions," SIAM J. of Matrix Anal. and Applic.
A semi-continuous model for transmission of SARS-CoV-2 and other respiratory viruses in enclosed spaces via multiple pathways to assess risk of infection and mitigation strategies

Panagiotis Demis, Ishanki De Mel, Hayley Wragg, Michael Short, Oleksiy V. Klymenko ([email protected])

Department of Chemical and Process Engineering, University of Surrey, Guildford, UK
Department of Mathematical Sciences, University of Bath, Bath, UK
Abstract

The Covid-19 pandemic has taken millions of lives, demonstrating the tragedy and disruption of respiratory diseases, and how difficult they can be to manage. However, there is still significant debate in the scientific community as to which transmission pathways are most significant and how settings and behaviour affect risk of infection, which all have implications for which mitigation strategies are most effective. This study presents a general model to estimate the rate of viral transfer between individuals, objects, and the air. The risk of infection to individuals in a setting is then computed considering the behaviour and interactions of individuals between themselves and the environment in the setting, survival times of the virus on different surface types and in the air, and mitigating interventions (ventilation, hand disinfection, surface cleaning, etc.). The model includes discrete events such as touch events, individuals entering/leaving the setting, and cleaning events. We demonstrate the model capabilities on three case studies to quantify and understand the relative risk associated with the different transmission pathways and the effectiveness of mitigation strategies in different settings. The results show the importance of considering all transmission pathways and their interactions, with each scenario displaying different dominant pathways depending on the setting and behaviours of individuals therein. The flexible model, which is freely available, can be used to quickly simulate the spread of any respiratory virus via the modelled transmission pathways and the efficacy of potential mitigation strategies in any enclosed setting by making reasonable assumptions regarding the behaviour of its occupants. It is hoped that the model can be used to inform sensible decision-making regarding viral infection mitigations that are targeted to specific settings and pathogens.
arXiv:2109.00977v1 [math.DS] 2 Sep 2021
https://arxiv.org/pdf/2109.00977v1.pdf
place when an infected individual secretes large droplets (> 5 µm) via vocalising, coughing, sneezing etc. that are directly deposited onto susceptible individuals' mucous membranes via close proximity. Early in the COVID-19 pandemic, this was thought to be the main transmission pathway and this is mitigated quite effectively through the use of facemasks 3,4 .
Since then, the aerosol/microdroplet pathway has also been identified as a potential respiratory transmission pathway 5 . In this pathway, small airborne particles secreted by infected individuals remain suspended in the air and can be inhaled by susceptible individuals. This has been observed for other viruses such as influenza A 6 and has also been conjectured to have been responsible for SARS-CoV-2 transmission events, such as in a Chinese restaurant 7 , a cruise ship 8 , and the Skagit Valley Chorale superspreading event 9 . Mitigation of this pathway is challenging, but increasing ventilation rates with more outside air, discouraging high concentrations of people indoors and limiting the time spent indoors are most effective 1,10 . It has also been shown in Leung et al. 11 that face coverings can reduce aerosol emissions. However, other studies have found that this effect can be limited due to small aerosol particles either penetrating the mask or leaking through gaps around the cheeks and nose, which results in apparent filtration efficiencies of face coverings from cloth masks to N95-grade respirators in the range of 10-60%. 12,13

Lastly, the contact route is a mechanism in which the surface of an object is contaminated with viral particles, either via deposition from the air (through particle settling or droplets) or from an individual's contaminated hand. 14 Such a contaminated object is called a fomite. Susceptible individuals can then transfer the virus from the fomite to their mucous membranes by touching the object and then their face. 15 This mechanism has been anecdotally recognised by several hospitals 16 , and the WHO and UK governmental advice to frequently wash hands and disinfect surfaces shows that this mechanism is believed to be important. Figure 1 represents the transmission pathways for viral infections.
Most epidemiological studies related to viral disease transmission have focused on macroscopic, large-scale transmission and often use modified classical susceptible-infected-recovered (SIR) modelling frameworks. 18,19 There have been some efforts to use these models to determine which transmission routes are likely to be dominant 20,21 , however they are not able to consider environment-specific risks. Many researchers have also performed detailed transmission studies and simulations to understand and quantify the specific aerosol, [22][23][24] droplet 2,25,26 , and contact routes 27,28 , however few modelling efforts exist for quantifying overall risks of infection based on the combination of transmission routes.
Fomite transmission routes have seen significant study via models and simulations. Zhao et al. 29 developed an Environmental Infection Transmission System model to quantify risks of infection from droplet-contaminated and hand-contaminated routes, concluding that public, large-surface area droplet-contaminated surfaces have the highest transmission potential.
Beamer et al. 30 used micro-activity empirical data to validate their model for fomite pathogen transmission to test different workplace strategies for reducing infection risks from rhinovirus and rotavirus. The effects of increasing hand hygiene to reduce potential infection by SARS-CoV-2 was investigated by Pham et al. 31 and their results show that event-based hand washing (such as after touching an object) is more effective than frequency washing (every 30 minutes, for example). Kraay et al. 15 developed an ordinary differential equation (ODE) model for fomite-mediated virus transmission, with results highlighting that fomites play an important role in virus transmission. The study also simulated cleaning events on hands and surfaces, demonstrating the efficacy of different cleaning strategies.
To understand influenza transmission routes, Xiao et al. 32 used a multi-agent modelling framework to understand a nosocomial outbreak in a Hong Kong hospital. Their detailed spatio-temporal model identified that long-range airborne (94 %) and fomite routes (6 %) were both likely to have played a role in the outbreak. Xiao et al. 33 used a similar multi-agent approach to understand the transmission of a norovirus outbreak in a UK hotel restaurant in 1998 and found that, out of the multiple pathways examined, fomite transmission played the largest role in the outbreak.
Lei et al. 34 developed a multi-route disease transmission model to study in-flight outbreaks of norovirus, SARS-CoV and influenza A H1N1. They used the detailed seat positions of those infected to consider all 3 routes, modelling each of the 3 transmission routes separately. Markov Chains were built to model fomite transmission routes in a surface contamination network via transfer efficiencies and transition matrices. Their study concluded that for H1N1 transmission, the close contact and aerosol routes were more important, but for SARS-CoV, fomites are slightly more important than the other transmission routes, although all 3 play a key role. Finally, for norovirus, fomites are the largest contributor to infection risk. Using a similar Markov Chain modelling approach, Azimi et al. 8 showed that it was likely that multiple transmission routes played a role in the SARS-CoV-2 outbreak on the Diamond Princess cruise ship. Another study 35 used a similar approach to attempt to quantify the relative contributions of different pathways for healthcare personnel in patient care, determining that all pathways played a part; however, respiratory pathways dominated during the short patient interactions. These studies highlight the importance of developing models to understand the various transmission pathways, quantify the importance of each route, and the environment-specific risks.
Zhang and Li 17 developed a model considering the three different routes of transmission for the Influenza A virus in a student office (see Figure 1). They used observations from camera recordings to track interactions between students and model the risk of infections to individuals. Zhang and Li 17 present two equations to determine the total quantity of virus on a hand V H and on a surface V S at time t:
$$\frac{dV_H}{dt} = R_{SH} A_c \frac{V_S}{A_S} - R_{HS} A_c \frac{V_H}{A_H}, \qquad \frac{dV_S}{dt} = R_{HS} A_c \frac{V_H}{A_H} - R_{SH} A_c \frac{V_S}{A_S}$$

where $R_{SH}$ is the fraction of the viral load within the contact area $A_c$ transferred from the surface to the hand in one touch, $R_{HS}$ is the corresponding hand-to-surface transfer fraction, and $A_S$ and $A_H$ are the surface and hand areas.
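These two equations conserve the total amount of virus and relax to an equilibrium set by the transfer efficiencies and areas. A minimal integration sketch (all parameter values invented for illustration, not taken from Zhang and Li):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters: per-contact transfer efficiencies (contact
# frequency folded in), contact area, and surface/hand areas in cm^2.
R_SH, R_HS = 0.3, 0.3
A_c, A_S, A_H = 10.0, 1000.0, 200.0

def rhs(t, y):
    V_H, V_S = y
    to_hand = R_SH * A_c * V_S / A_S
    to_surf = R_HS * A_c * V_H / A_H
    return [to_hand - to_surf, to_surf - to_hand]

# Start with all virus on the surface and a clean hand.
sol = solve_ivp(rhs, (0.0, 500.0), [0.0, 1000.0], rtol=1e-8)
V_H, V_S = sol.y[:, -1]
print(V_H + V_S)   # total virus is conserved: the two rates are equal and opposite
print(V_H, V_S)    # equilibrium satisfies R_HS * V_H / A_H = R_SH * V_S / A_S
```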
Mathematical model
In the following, we will be considering a generic enclosed setting such as an office, a classroom, a retail outlet, a gym or public transport (e.g., train carriage, bus, etc.) which individuals can enter at different times and remain therein for different periods of time.
The way individuals interact with each other and objects in the setting is parameterised in the model since it is dependent on setting type and associated behavioural patterns. For example, in an office environment each individual interacts primarily with few objects on and around their desk ('private surfaces' following the terminology of Zhang and Li 17 ) and only occasionally touches shared objects (or 'public surfaces' 17 ) such as door handles, light switches, water fountains, etc. On the other hand, people visiting retail outlets or gyms or using public transport interact mainly with shared (public) objects, and the associated touching behaviour is markedly different from that in a typical office.
The model tracks the time-dependent spread of virus in the setting from one or more 'sources' (infected individuals) to mucous membranes and respiratory tracts of susceptible individuals through the transmission routes shown in Figure 1.

Consider an enclosed space (setting) characterised by volume $Vol_a$ that is visited by $N_p$ individuals over a period of T hours. Individuals can arrive at different times $t^0_j$ and remain in the setting for the duration $\Delta t_j$. The presence of an individual in a setting can be described by the following indicator function:
$$I_j(t) = \Theta(t - t^0_j) \times \Theta(t^0_j + \Delta t_j - t), \quad j = 1, \dots, N_p \tag{1}$$
where Θ(·) is the Heaviside step function. This formalism allows modelling people getting on and off public transport, entering and leaving shops, offices and other enclosed settings, and can be easily extended to multiple visits by an individual to the same setting.
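The indicator (1) is simple to evaluate; a short sketch (taking Θ(0) = 1 so that the entry and exit instants both count as present):

```python
import numpy as np

def presence(t, t0, dt):
    """I_j(t) = Theta(t - t0) * Theta(t0 + dt - t): 1 while individual j
    is in the setting, 0 otherwise."""
    return np.heaviside(t - t0, 1.0) * np.heaviside(t0 + dt - t, 1.0)

t = np.linspace(0.0, 8.0, 9)             # hourly grid over the period T
print(presence(t, t0=2.0, dt=3.0))       # 1 on [2, 5], 0 elsewhere
```

Multiple visits by the same individual would simply sum several such indicator windows.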
The status of an individual is represented by a binary parameter $\psi_j$, $j = 1, \dots, N_p$, which takes the value 1 if they are infected or 0 if they are susceptible:

$$\psi_j = \begin{cases} 1 & \text{if } j \text{ is infected} \\ 0 & \text{otherwise} \end{cases}$$
We assume that at the time of entry susceptible individuals are free of the virus, i.e., their viral loads on the hands, $V_{H_j}(t)$, and mucous membranes, $V_{M_j}(t)$, are zero at t = 0, while infected individuals carry a significant amount of the virus on their mucous membranes that is assumed to remain constant throughout the period of interest, and the viral load on their hands at t = 0 is assumed to be in equilibrium with their mucous membranes (see Section 3.5).
The setting is assumed to contain $N_S$ objects that can be touched or handled by people.

The risk of infection for a susceptible individual j is estimated from the cumulative viral load $V_{M_j}(t)$ on their mucous membranes as

$$p^I_j = 1 - \exp\left( -\frac{V_{M_j}(t)}{k_j} \right) \tag{2}$$
which depends on the dose response parameter k j .
We note that the latter measure may not be reliable due to the high uncertainty in the value of the dose response parameter, $k_j$, measured in units of (virus quantity)$^{-1}$.
Therefore both the risk of infection and the cumulative viral load, V M j , are considered as important outputs of the model presented.
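The dose-response relation (2) maps the accumulated mucosal dose to a probability; a minimal sketch with a hypothetical dose-response parameter:

```python
import math

def infection_risk(dose, k):
    """Exponential dose-response model of eq. (2): p_I = 1 - exp(-dose/k)."""
    return 1.0 - math.exp(-dose / k)

k = 400.0                                  # hypothetical dose-response parameter
for dose in (0.0, 40.0, 400.0, 4000.0):
    print(dose, infection_risk(dose, k))   # risk grows with dose and saturates at 1
```

The saturation behaviour is why the cumulative dose itself remains a useful model output even when k is poorly known.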
The above-mentioned transmission mechanisms along with the underlying modelling assumptions are described mathematically below. In the following subsections we are developing a continuous model formulation relying on average frequencies of discrete events such as touching fomite surfaces, breathing, coughing, etc., unless otherwise stated.
Aerosol pathway
During their stay in the setting, individuals are assumed to expel droplets of mucus of various sizes via their respiratory activities (including breathing, coughing and sneezing) and
vocalisation. The corresponding rate of continuous viral shedding through these activities by individual j is denoted in the following as $r^a_j$. While this rate should clearly be dependent on the viral load on an individual's mucous membranes, 40 the form of this dependence is unknown. There are, however, experimental measurements of the rate of virus shedding by infected individuals through this pathway. 41 Therefore, $r^a_j$ is treated as a parameter in our model, the values of which are presented in Table 3 of Appendix A.
Following the widely-adopted terminology, we will classify droplets expelled through respiratory activities and vocalisation into large and small. 17,34 This subdivision is based on the propensity of small droplets to remain aerosolised for prolonged periods of time in that they are either small enough when expelled or rapidly lose their water content through evaporation before hitting the ground leaving aerosolised nuclei. By contrast, large droplets do not reduce in size sufficiently so as to become aerosolised and land on nearby surfaces, which may lead to their contamination.
In the model, the fraction of large droplets expelled by individual j is denoted as $\varepsilon_j$. These droplets are involved in the close contact transmission pathway, which will be discussed in Section 3.2.
Using this notation, the rate at which an individual j expels small droplets, which contribute to the viral load suspended in the air and can subsequently be inhaled by other individuals, can be expressed as
$$(1 - \varepsilon_j)\, r^a_j$$
Note that we do not consider individual respiratory events, so the shedding of droplets of mucus is treated as a continuous process for the purpose of modelling.
We assume that the air in the setting is perfectly mixed such that aerosol generated anywhere in the room is instantly dispersed equally throughout the room. This approximation is often used in modelling aerosol distribution in enclosed settings since the time scale of air homogenisation is often significantly shorter than other time scales, particularly the duration of time individuals spend in the setting.
The deposition of small virus-laden droplets/aerosols onto fomite surfaces as well as on human skin has also been included in the model. The dynamics between airborne and fomite virus concentrations are discussed in Zhang and Li 17 , including loss of the virus from surfaces through resuspension. The rate of virus deposition onto surface i is proportional to its surface area and the viral concentration in air so that
$$r^{SD}_i = k_d A_{S_i} \frac{V_a}{Vol_a}$$
where k d is the rate constant for small droplet deposition.
While we include in our model, for the sake of generality, the terms describing the rates of deposition of small droplets, $r^{SD}_i$, and resuspension of the virus from surface i back into the air, $r^{resp}_{S_i}$, there is insufficient evidence in the literature to quantify these phenomena. Thus, although the corresponding terms are included in the formulation, they are kept at zero in all the simulations reported below.
Close contact pathway
Large droplets expelled by an infected individual may travel only a short distance from the source before landing on nearby surfaces, other people's skin, or mucous membranes. The viral load deposited on an object through this pathway depends on the following:
• The rate at which large droplets are emitted by the infected individual, $\varepsilon_j r^a_j$ (see Section 3.1).
• The amount of time the infected individual j spends in close proximity to the object or other individual x. This is represented in the model by the fraction of the time, θ jx , that the pairs spend in close proximity.
• The fraction of large droplets emitted by the infected individual j that land on object x, $\pi_{jx}$, which depends on the relative positions and the distance between the source of the droplets and the acceptor surface. 37 The latter can also change with time, so this parameter should be interpreted as the average fraction of large droplets transferred while in close proximity.
Thus, the rate of viral deposition from source j onto surface x through the close contact pathway is given by
$$r^{LD}_{jx} = \theta_{jx}\, \pi_{jx}\, \varepsilon_j\, r^a_j$$
while both the individual j and acceptor x are present in the setting. As with the aerosol generation, the deposition of large droplets is treated as a continuous process, and the above rate can be considered as the average rate of large droplet deposition throughout the duration of close contact between the individual j and acceptor surface x.
Viral particles deposited onto surfaces with large droplets can then be spread further by individuals through touch (fomite-mediated transmission), which is discussed in 3.3.
Fomite transmission pathway
The fomite transmission pathway involves the transfer of viral particles between an individual's hand and a fomite surface or their own mucous membranes (mouth, nose or eyes). In the following, we will be treating the surface touch behaviour as a continuous process as opposed to individual touch events. We assume that the rate of transfer from a donor object x to an acceptor object y is determined by the frequency of contacts between x and y, $f_{xy}$, and the contact area, $A^c_{xy}$, specific to the way objects x and y come into contact. $V_x$ is the viral load on x and $A_x$ is its surface area. Using this notation, the rate $r^{touch}_{xy}$ of viral transfer from object x to object y can be described using the following formalism:
where R xy is the fraction of the viral load on surface x within the area of contact A c xy that is transferred to y upon one contact, and m xy = R xy f xy A c xy /A x is the overall transfer rate constant from x to y. Note that f xy = f yx and A c xy = A c yx for any x and y.
In our case studies presented in Section 4 we assume that there are no direct contacts between individuals (i.e., the hands and mucous membranes of one individual do not come into contact with either hands or mucous membranes of another), which corresponds to physical distancing being enforced. Therefore,
R H i M j = R M i H j = R M i M j = 0 if i = j. Direct
contacts between fomites are also not considered, so R S i S j = 0 for any i and j.
Removal and inactivation of the virus 3.4.1 Continuous formulation
Natural inactivation of the virus is assumed to be exponential in air and on all surfaces, including skin, so that the absolute rates of inactivation can be described as
r in x = k in x V x where k in x is the inactivation rate constant which can be defined through virus half-life, τ in x , on surface x or in air as k in x = ln(2)/τ in x .
The latter parameter is dependent on the environmental conditions (temperature and relative humidity) and, for fomites, on their material type (e.g., stainless steel, copper, plastic, paper, etc.) and structure of the surface (porous or non-porous). These dependencies can be readily accounted for in the model using an Arrhenius-type inactivation model such as the one proposed by Yap et al. 42 .
In the continuous formulation, the rates of any washing/cleaning interventions are assumed to be described by continuous rates which, on average, correspond to the frequency of cleaning events.
The absolute rate of decrease of viral load on the hands due to washing (or using hand sanitiser gel, wipes, etc.) can be described as:
r wash H j (t) = α j f wash H j V H j
where α j is the 'efficiency' of virus removal during hand-washing, and f wash H j is the frequency of hand washing. If the fraction of the virus removed in a hand-washing event,α j , is given the two parameters are related through 1
α j = ln(α j )
and ifα j is represented using the log 10 reduction value (LRV),α j = 10 LRV j , (10) A similar expression can be used for the cleaning of fomite surfaces:
α j = LRV j lnr clean S i = β i f clean S i V S i
where β i is the virus deactivation efficiency on surface i, and f clean S i is the frequency of cleaning 1 The rate of viral load reduction due to hand washing expressed using the continuous formulation is
dV Hj (t) dt = −r wash Hj = −α j f wash Hj V Hj which describes an exponential decay V Hj (t) = V Hj (t = 0) e −αj f wash H j t
The relative reduction of the viral load between hand washing events is then
V Hj t = 1/f wash Hj V Hj (t = 0) = e −αj =α −1 j = 10 −LRVj the surface.
If the fraction of the virus removed in a single intervention,β i , is given, the two parameters are related through β i = ln β i and ifβ i is represented using log 10 reduction value (LRV),β i = 10 LRV i ,
β i = LRV i ln(10)
Mixed continuous-discrete formulation
Some or all of the events such as touching surfaces, washing hands and cleaning of surfaces can be modelled as discrete occurrences. This can be combined with continuous formulations for other transmission rates, however the form of some of the terms would change.
If the frequency of contacts between the hands of individuals and various surfaces is much higher than the rate of handwashing and cleaning, it is justified to describe only the latter two using a discrete formulation. Hand washing (or using hand sanitiser gel, wipes, etc.) at discrete times t w k is assumed to lead to an instantaneous drop in virus concentration on the hands by a factorα j (see the footnote on page 15) so that the absolute rate of viral removal is:
$$r^{\mathrm{wash}}_{H_j} = \sum_{k=1}^{N^w_j} \tilde{\alpha}_j V_{H_j}\, \delta(t - t^w_k) \qquad (4)$$
where $N^w_j$ is the total number of times individual $j$ washes their hands, and $\delta$ is the Dirac delta function.
A similar expression can be used for the cleaning of fomite surfaces:
$$r^{\mathrm{clean}}_{S_i} = \sum_{k=1}^{N^c_i} \tilde{\beta}_i V_{S_i}\, \delta(t - t^c_k) \qquad (5)$$
where $N^c_i$ is the number of times surface $i$ is cleaned, and $\tilde{\beta}_i$ is the fraction of the virus deactivated by each of the interventions taking place at times $t^c_k$.
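Numerically, integrating eq. (4) or eq. (5) across each delta function amounts to an instantaneous multiplicative drop of the viral load by $10^{-\mathrm{LRV}}$ at every event time, interleaved with continuous inactivation between events. A minimal Python illustration (function and parameter names are ours, not from the reference implementation):

```python
import numpy as np

def viral_load_with_washes(v0, decay_rate, wash_times, lrv, t_end, dt=0.01):
    """Viral load on a hand under continuous inactivation (rate
    `decay_rate`, per hour) with instantaneous washes at `wash_times`,
    each multiplying the load by 10**(-lrv). Illustrative sketch only."""
    times = np.arange(0.0, t_end + dt, dt)
    v = np.empty_like(times)
    v[0] = v0
    pending = sorted(wash_times)
    for n in range(1, len(times)):
        # continuous first-order inactivation over one step
        v[n] = v[n - 1] * np.exp(-decay_rate * dt)
        # apply any wash event falling inside this step as a discrete drop
        while pending and times[n - 1] < pending[0] <= times[n]:
            v[n] *= 10.0 ** (-lrv)
            pending.pop(0)
    return times, v
```

For example, with no natural decay and two washes at LRV = 2, the final load is $10^{-4}$ of the initial one, consistent with the footnote relation $e^{-\alpha_j} = 10^{-\mathrm{LRV}_j}$ applied twice.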
Full model
Using the above descriptions of the rates of the three viral transmission pathways, we can formulate the following ODEs describing the evolution of the viral loads on fomite surfaces, and the hands and mucous membranes of individuals present/visiting the setting:
$$\frac{dV_{S_i}(t)}{dt} = \sum_{k=1}^{N_p} I_k(t)\left(r^{\mathrm{touch}}_{H_k S_i} - r^{\mathrm{touch}}_{S_i H_k}\right) - r^{\mathrm{in}}_{S_i} - r^{\mathrm{resp}}_{S_i} - r^{\mathrm{clean}}_{S_i} + r^{\mathrm{SD}}_{S_i} + \sum_{k=1}^{N_p} I_k(t)\, r^{\mathrm{LD}}_{k S_i} \qquad \text{(6a)}$$

$$\frac{dV_{H_j}(t)}{dt} = I_j(t)\left[\sum_{i=1}^{N_S}\left(r^{\mathrm{touch}}_{S_i H_j} - r^{\mathrm{touch}}_{H_j S_i}\right) + r^{\mathrm{touch}}_{M_j H_j} - r^{\mathrm{touch}}_{H_j M_j} - r^{\mathrm{in}}_{H_j} - r^{\mathrm{wash}}_{H_j} + r^{\mathrm{SD}}_{H_j} + \sum_{k=1}^{N_p} I_k(t)\, r^{\mathrm{LD}}_{k H_j}\right] \qquad \text{(6b)}$$

$$\frac{dV_{M_j}(t)}{dt} = I_j(t)\,(1 - \psi_j)\left[r^{\mathrm{touch}}_{H_j M_j} - r^{\mathrm{touch}}_{M_j H_j} - r^{\mathrm{in}}_{M_j} - r^{a}_{j} + \frac{V_a}{\mathrm{Vol}_a}\, r^{\mathrm{resp}}_{j} + \sum_{\substack{k=1 \\ k \neq j}}^{N_p} I_k(t)\, r^{\mathrm{LD}}_{k M_j}\right] \qquad \text{(6c)}$$
The right-hand side of eq. (6a) accounts for the fomite pathway (transfer from and to the hands of individuals while they are present in the setting; see eq. (1)), the reduction of the viral load due to natural inactivation, resuspension from the surface and cleaning as well as the deposition of aerosol (small droplets, SD) and large droplets (LD) emitted by any individual in close proximity.
In eqs. (6b) and (6c), the right-hand side is non-zero only when individual j is present in the setting (i.e., then the indicator function eq. (1) is non-zero) and the model is able to track their interactions with the environment. Owing to this formulation, the viral loads on both their hands and mucous membranes remain constant when the individual is outside the setting (either before they enter it or after they leave) since the model is unaware of the individuals' whereabouts and interactions when they are not in the enclosed setting being modelled.
When individual j is in the setting, the viral load on their hands follows eq. (6b) which involves transfer to/from fomite surfaces and the individuals' own mucous membranes, natural deactivation of the virus and its removal through hand washing as well as the deposition onto the hands of small and large droplets.
In eq. (6c) we assume that the viral load on the mucous membranes of infected individuals (for whom ψ j = 1) is so high that no interactions with the surroundings can either increase or reduce it. Therefore, the right-hand side of equation eq. (6c) is multiplied by (1 − ψ j )
to 'freeze' the viral load on the mucosa of infected individuals at its initial level and avoid any artificial decrease in their viral loads due to shedding (also, the timescales considered here are assumed to be shorter than those leading to significant changes in the condition of infected individuals).

Variations in the viral load within the total volume of air, $\mathrm{Vol}_a$, of the setting are described by the following ODE:
$$\frac{dV_a}{dt} = \sum_{j=1}^{N_p} I_j\,(1 - \epsilon_j)\left(r^{a}_{j} - \frac{V_a}{\mathrm{Vol}_a}\, r^{\mathrm{resp}}_{j}\right) + \sum_{i=1}^{N_S}\left(r^{\mathrm{resp}}_{S_i} - r^{\mathrm{SD}}_{S_i}\right) - \sum_{i=1}^{N_p} r^{\mathrm{SD}}_{H_i} - r^{\mathrm{in}}_{\mathrm{air}} - r^{\mathrm{vent}} \qquad (7)$$
where the only positive contributions are those due to aerosol produced by infected individuals while they are in the setting and possible resuspension of the virus from fomite surfaces.
Aerosolised viral particles can be removed from the air by people inhaling them, via the physical removal of the virus due to ventilation with the rate $r^{\mathrm{vent}} = q_{\mathrm{vent}} V_a / \mathrm{Vol}_a$ ($q_{\mathrm{vent}}$ is the ventilation air flow in m³ h⁻¹), or through its natural inactivation. Note that, although the terms describing the deposition of small droplets, $r^{\mathrm{SD}}_{S_i}$, and resuspension of the virus back into the air, $r^{\mathrm{resp}}_{S_i}$, are included in this formulation, their magnitudes can be assumed to be negligible in comparison with the rate of natural inactivation in air and those of other transmission events. Therefore, their values are set to zero in all the simulations reported below.
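To illustrate how a system of the form of eqs. (6a)-(6c) and (7) can be integrated numerically, the following Python sketch solves a heavily simplified, hypothetical instance: one infected person (whose shedding is folded into a constant aerosol source), one susceptible person, one fomite surface and the room air. All rate constants below are illustrative placeholders, not the calibrated values from Tables 1 and 3, and the first-order transfer terms stand in for the full eq. (3) rates:

```python
import numpy as np
from scipy.integrate import odeint

# Illustrative rate constants (1/h unless noted); NOT values from the paper.
k_hs, k_sh = 1.0, 0.5          # hand -> surface and surface -> hand transfer
k_hm, k_mh = 0.2, 0.2          # hand -> mucosa and mucosa -> hand transfer
k_in_s, k_in_h, k_in_a = 0.5, 1.0, 0.6   # inactivation: surface, hand, air
r_shed = 1.0e3                 # aerosol shedding by the infected person (PFU/h)
ach = 1.0                      # ventilation rate, air changes per hour
k_inh = 0.05                   # inhalation uptake by the susceptible person

def rhs(y, t):
    v_s, v_h, v_m, v_a = y     # surface, susceptible hand, mucosa, air loads
    dv_s = k_hs * v_h - k_sh * v_s - k_in_s * v_s
    dv_h = k_sh * v_s - k_hs * v_h + k_mh * v_m - k_hm * v_h - k_in_h * v_h
    dv_m = k_hm * v_h - k_mh * v_m + k_inh * v_a
    dv_a = r_shed - (ach + k_in_a + k_inh) * v_a
    return [dv_s, dv_h, dv_m, dv_a]

t = np.linspace(0.0, 8.0, 801)              # an 8 h working day
sol = odeint(rhs, [0.0, 0.0, 0.0, 0.0], t)  # everything uncontaminated at t = 0
```

In this sketch the air load approaches its steady state $r_{\mathrm{shed}}/(\mathrm{ACH} + k^{\mathrm{in}}_a + k_{\mathrm{inh}})$ within a few hours, while the mucosal load of the susceptible person grows monotonically, qualitatively mirroring the behaviour discussed in the case studies.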
We assume that at the start of the observation period the setting is uncontaminated, so that the initial conditions for the viral loads on fomite surfaces and in the air are zero:

$$V_{S_i}(t=0) = 0, \quad i = 1, \ldots, N_S, \qquad V_a(t=0) = 0$$
Similarly, susceptible individuals entering the setting are assumed to have had no prior exposure to the virus:
$$V_{H_j}(t=0) = V_{M_j}(t=0) = 0, \quad \forall j = 1, \ldots, N_p,\ \psi_j = 0$$
On the other hand, infected individuals carry significant viral loads $V_{M_0}$ (see Table 3 of Appendix A) on their mucous membranes, which are assumed to remain unchanged throughout the simulated period of $T$ hours:

$$V_{M_j}(t=0) = V_{M_0}, \quad \forall j = 1, \ldots, N_p,\ \psi_j = 1$$
Prior to entering the setting, infected individuals are also assumed to have touched their faces and mucous membranes with typical frequencies but not any fomites, so that an equilibrium has been established between the viral loads on the individuals' mucous membranes and hands:

$$V_{H_j}(t=0) = \frac{r^{\mathrm{touch}}_{M_j H_j} + r^{\mathrm{LD}}_{j H_j}}{m_{H_j M_j} + k^{\mathrm{in}}}$$

The doses accumulated on the mucous membranes of a susceptible individual over the period $T$ via the aerosol, fomite and close-contact (large droplet) pathways are, respectively:

$$D^{a}_{m_j} = \int_0^T I_j(t)\, \frac{V_a}{\mathrm{Vol}_a}\, r^{\mathrm{resp}}\, dt, \qquad D^{f}_{m_j} = \int_0^T I_j(t)\, r^{\mathrm{touch}}_{H_j M_j}\, dt, \qquad D^{\mathrm{LD}}_{m_j} = \int_0^T \sum_{\substack{k=1 \\ k \neq j}}^{N_p} I_k(t)\, r^{\mathrm{LD}}_{k M_j}\, dt$$
Now the risk of infection can be defined in the following way:

$$p^{I}_j = 1 - \exp\left(-\frac{D^{f}_{m_j} + D^{a}_{m_j} + D^{\mathrm{LD}}_{m_j}}{k_j}\right)$$
where we are explicitly assuming that the values of the dose response parameters k j associated with the different pathways are the same.
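A minimal sketch of this dose-response calculation in Python (the trapezoidal integration scheme and the dose-response value used in the example are illustrative placeholders; $k_j$ must come from a dose-response model for the pathogen of interest):

```python
import numpy as np

def pathway_dose(t, influx):
    """Accumulated dose: time integral of a sampled influx to the mucous
    membranes, via the trapezoidal rule."""
    t = np.asarray(t, dtype=float)
    influx = np.asarray(influx, dtype=float)
    return float(np.sum(0.5 * (influx[1:] + influx[:-1]) * np.diff(t)))

def infection_risk(d_f, d_a, d_ld, k):
    """p_I = 1 - exp(-(D_f + D_a + D_LD)/k), with a single dose-response
    parameter k shared by all three pathways."""
    return 1.0 - np.exp(-(d_f + d_a + d_ld) / k)

# Example: a constant fomite influx of 2 PFU/h over an 8 h day.
t = np.linspace(0.0, 8.0, 9)
d_f = pathway_dose(t, np.full(9, 2.0))        # dose of 16 PFU
risk = infection_risk(d_f, 0.0, 0.0, 410.0)   # k = 410 is a placeholder
```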
Model implementation
The mathematical model presented in this section was implemented in both MATLAB and
Python. Refer to Appendix B for relevant implementation details.
Model Demonstrations and Case Studies
To

Table 1 and Table 3 of Appendix A, the rate of transmission from contaminated hands to the objects should be highest for the document, followed by the door handle and then by the desk. However, the viral loads in

into each other's face. 26 The results show that the viral exposure decreases most rapidly with the increase in the number of cleaning events. When the number of cleaning events is greater than two (e.g., disinfection is performed more often than once every two hours), suppressing the direct transmission through large droplets can bring about a further reduction in the accumulated viral dose and hence the risk of infection.

Table 1 of Appendix A for the other parameter values. The results of simulating the same scenarios as in Case Study 1 are presented in Figure 6.
The change in the way the individuals interact with each other and their environment leads to a different pattern of fomite contamination, shown in Figure 6a, in which the document is less than half as contaminated as during prolonged close contact in Case Study 1 and the desk of the susceptible individual is over 40 times less contaminated than that of the infected person. There is no difference, however, in the contamination of the air (see Figure 6b), since the aerosol pathway is not affected by the changes related to the fomite and close contact pathways.
Compared to Case Study 1, there is a nearly 7-fold decrease in the total viral exposure of the susceptible person at the end of the 8 h-long working day (see Figure 6c). This effect is explained by drastically reduced contributions of the fomite and close contact pathways due to reduced handling of shared objects (i.e., each other's desks and the document) as well as lower fomite contamination through the deposition of large droplets from the infected individual. This results in a different pattern of relative contributions of the three transmission pathways, as shown in Figure 6d, in which the fomite pathway is still dominant overall, while the aerosol pathway dominates over relatively short times between approximately 0.25 h to 2 h. However, it is important to emphasise here that the absolute viral exposure due to the aerosol pathway in this case is exactly the same as in Case Study 1 while the total exposure, and hence the risk of infection, are significantly lower than those predicted for Case Study 1.
Hand and fomite surface disinfection at t = 4 h leads to a significant reduction in the viral loads on surfaces as well as a 19% decrease in the total exposure of the susceptible individual ( Figure 6a,c,d). The relative reduction in exposure is lower, however, than in Case Study 1 because of the lower contribution of the fomite transmission pathway into the total exposure.
An additional cleaning event at 2 h leads to an overall decrease in viral exposure by 30%.
The effect of combining mitigation strategies on the total viral exposure is reported in Figure 5d-f. Figure 5f shows that ventilation has a more pronounced influence on the total exposure than in Case Study 1 due to the higher relative importance of the aerosol pathway (although the absolute exposure is significantly lower, as mentioned above). At very low ventilation rates (<1 ACH), hand and surface cleaning become ineffective at reducing exposure since the aerosol pathway under such conditions becomes dominant. As the ventilation rate increases, the aerosol pathway becomes less important and disinfection becomes the main mitigation.
Case Study 3: Graduate Student Office
To demonstrate the model's capability to track all transmission pathways while considering a large number of individuals and objects, a test case considering a graduate student office with one infected student and 38 susceptible students is used. It is based on the work and data of Zhang and Li 17 . While some of the features described in the original test case are retained, such as the categorisation of objects into private and public surfaces depending on personal or public use, several modifications have been made to demonstrate the model's wider applicability:
• Only desks and chairs are considered as private surfaces; all other personal belongings such as mugs, bags, etc. are ignored.
• Public surfaces include cabinet handles of 3 cabinets, printer, water dispenser, and door handle.
• Each cabinet is used by 13 students, where cabinet 1 is used by students 1 -13, cabinet 2 by students 14 -26, and cabinet 3 by students 27 -39.
• For each public object, the surface area of the surface or part of the object with the highest touch frequency is considered. For example, the considered surfaces are: printer touch screen, water dispenser button, and cabinet handle.
• Average values are used for all contact frequencies, as opposed to individual contact frequencies based on individual behaviour; however, the model is also able to consider discrete touch events.

The parameters and respective values used in this test case are given in Table 2.
Acknowledgements
The authors would like to express their gratitude to all those that participated in the Royal Society's Rapid Action in Modelling the Pandemic (RAMP), particularly those involved in Subgroup 4. The interactions and discussions in these meetings were always thought-provoking and inspired much of this work. We would also like to thank Dr Marco-Felipe
King for his insights and encouragement during the initial model development. Finally, we are grateful to Dr Nan Zhang for providing us access to the data from their extensive work in Zhang et al. 25 .
Appendix A Parameter details for SARS-CoV-2 Case Studies
A.1 Human behaviour and Objects
The parameters used to describe human behaviour and interactions with objects are described in this section, for all three case studies. Table 1 contains the parameters specific to Case Studies 1 and 2, where two individuals interact with one another and fomites during a one-to-one meeting or while working alongside each other. Table 3 contains parameters used in all three case studies, primarily those associated with viral transfer, transmission, and inactivation.
Appendix B Implementation notes
Both the continuous and discrete formulations of the model have been implemented in MATLAB and Python. The resulting system of ODEs is solved numerically using the ode15s solver in MATLAB or SciPy's odeint() in the Python implementation.
To avoid discontinuities on the right-hand sides of eqs. (6a) to (6c) and (7) due to the appearance of both the Dirac delta function in eq. (4) and eq. (5) and the indicator functions eq. (1), we replace them with continuous approximations, using a triangular function to approximate the Dirac delta function:

$$\tilde{\delta}(t) = \begin{cases} 0, & |t| > t_\epsilon \\ \dfrac{1}{t_\epsilon}\left(1 - \dfrac{|t|}{t_\epsilon}\right), & |t| \le t_\epsilon \end{cases}$$

and the following piecewise linear approximation of the Heaviside function:

$$\tilde{\Theta}(t) = \begin{cases} 0, & t < 0 \\ t/t_\epsilon, & 0 \le t \le t_\epsilon \\ 1, & t > t_\epsilon \end{cases}$$
The discontinuous indicator function eq. (1) is then replaced in the code by the following trapezoidal function:

$$\tilde{I}_j(t) = \tilde{\Theta}(t - t^0_j)\, \tilde{\Theta}(t^0_j + \Delta t_j + t_\epsilon - t), \qquad j = 1, \ldots, N_p$$

One, however, must ensure that the parameter $t_\epsilon$ is much smaller than the shortest duration of occupancy:

$$t_\epsilon \ll \min_{j=1,\ldots,N_p} \Delta t_j$$

and that the maximum integration step size used by ode15s or odeint() is significantly less than $t_\epsilon$.
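These smoothed functions are straightforward to implement; a Python sketch (function names are ours, with `eps` playing the role of $t_\epsilon$):

```python
import numpy as np

def delta_tri(t, eps):
    """Triangular approximation of the Dirac delta: peak 1/eps at t = 0,
    support [-eps, eps], unit integral."""
    return np.where(np.abs(t) <= eps, (1.0 - np.abs(t) / eps) / eps, 0.0)

def step_lin(t, eps):
    """Piecewise-linear approximation of the Heaviside step function:
    0 for t < 0, t/eps on [0, eps], 1 for t > eps."""
    return np.clip(t / eps, 0.0, 1.0)

def indicator(t, t0, dt_occ, eps):
    """Trapezoidal approximation of the occupancy indicator I_j(t):
    ramps up over [t0, t0 + eps] and down over
    [t0 + dt_occ, t0 + dt_occ + eps]."""
    return step_lin(t - t0, eps) * step_lin(t0 + dt_occ + eps - t, eps)
```

For instance, `indicator(t, 1.0, 2.0, 0.01)` is 1 throughout an occupancy window from 1 h to 3 h and 0 well outside it, with 0.01 h ramps at the edges.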
Figure 1: Graphical illustration of the three potential transmission pathways for respiratory viruses. [adapted from Zhang and Li 17]
while accounting for the effects of natural deactivation of the virus in air and on fomite surfaces 39 as well as a range of mitigation interventions. The latter can be either preventative (e.g. the wearing of face coverings) or mitigatory (physical removal of the virus through, e.g., ventilation, air filtration, cleaning and handwashing, or viral deactivation via disinfection, sanitising, UV irradiation, etc.).
present in the setting leading to their contamination, the level of which is characterised by viral loads V S i (t), i = 1, . . . , N S . Surface contamination can also occur through the deposition of viral particles contained in droplets/aerosol expelled by infected individuals. Contamination of air in the setting by virus-laden aerosol is characterised by the overall viral load in the air, V a (t). A susceptible individual can also pick up virus from one surface with their hands and transfer it to another, which leads to the spread of the virus through the environment. The purpose of the model is to quantify the viral load, V M j (t), accumulated by a susceptible individual j via all three transmission pathways while in the setting as well as their associated risk of developing COVID-19:
Figure 2 shows the possible fomite transmission pathways between surfaces, a hand and a mucous membrane as well as the deposition of virus-laden droplets and aerosol.

Figure 2: Extended fomite transmission path. Dashed lines indicate an arbitrary deposition mechanism of the virus onto surfaces. Arrows represent transmission routes characterised by transmission rates in the model.
$x \in [S_1, \ldots, S_{N_S}, H_1, \ldots, H_{N_p}, M_1, \ldots, M_{N_p}]$ to an acceptor object $y \in [S_1, \ldots, S_{N_S}, H_1, \ldots, H_{N_p}, M_1, \ldots, M_{N_p}]$ (i.e., these can be surfaces, hands, or mucous membranes) is proportional to the average concentration of the virus on $x$, $V_x/A_x$, the average frequency of contacts between $x$
of infected individuals). For susceptible individuals, their mucous membrane viral load can be affected by self-inoculation with their hands, inhalation of virus-laden aerosol and the deposition of large droplets from infected individuals while in close contact with them, as well as viral inactivation and possible physical removal through respiratory activities before the virus can reach the lower respiratory tract.
we defined the risk of infection in eq. (2) through the viral load accumulated on a susceptible individual's mucous membranes as is commonly done in the literature 34 . However, the ODE describing the viral load on mucous membranes in eq. (6c) contains negative rates on the right-hand side describing the removal of viral particles through touch, respiratory activities and viral inactivation. To make sure the risk of infection is not underestimated, we will redefine it taking into account only the influx of viral particles to the mucous membranes while also considering separately the exposure through the aerosol, D a m j , fomite, D f m j , and close contact (large droplet), D LD m j , transmission pathways, respectively:
illustrate the generality and predictive ability of the model developed in section 3, we consider three scenarios of different complexity. Two of these, presented in section 4.1, are small-scale to enable a detailed analysis and interpretation of the effect of model parameters on the outputs, while the third scenario involves a larger group of people and explores how interactions within and between subgroups of a larger group affect individual risk of infection. The values of model parameters describing the viral loads and shedding rate for infected individuals, viral transmission through droplets and between hands and fomites, survival of the virus on surfaces and in the air, etc., have been carefully collected from a number of literature sources and are summarised in
4.1 Case Study 1: One-to-one Meeting

In this case study we consider a small office or meeting room with an air volume of 40 m³ (which corresponds to a floor area of 16 m² with a 2.5 m ceiling height), wherein two individuals, one of whom is infected, have a 4 h-long face-to-face meeting (see Figure 3a). While in the room, the individuals come into contact with three objects: a door handle, a desk and a document they are jointly working on. The individuals spend 90% of the time they are together in close contact so that the large respiratory droplets from the infected person can contaminate the desk, document and hands of the susceptible individual as well as directly deposit onto their mucous membranes. After the meeting, the infected individual leaves while the susceptible person remains in the room for another 4 hours. The room is ventilated at 1 air change per hour (ACH) or 40 m³/h unless otherwise stated.
Figure 3: Small office Case Studies 1 and 2. Infected individual (red) spends 4 h in the setting out of 8 h total duration. (a) individuals share fomites and spend 90% of the time they are in the office together in close proximity; (b) individuals share the document and door handle but not the desks and spend 5% of the time they are in the office together in close proximity.

In addition to ventilation, which continually dilutes the virus in the air, we also introduce the application of disinfecting agents on fomite surfaces and hands of both individuals as a way of viral inactivation. Note that, unlike hard surfaces, the shared document cannot be disinfected in this way. Hand and surface disinfection is performed either once after the meeting (i.e., at t = 4 h), twice at t = 2 h and at t = 4 h, or not at all. According to the mixed continuous-discrete formulation presented in section 3.4, the viral loads on hands and fomites are instantly reduced upon disinfection by the factors $\tilde{\alpha}_j = 10^{\mathrm{LRV}_{H_j}}$ and $\tilde{\beta}_i = 10^{\mathrm{LRV}_{S_i}}$ according to eq. (4) and eq. (5), respectively. In the simulations reported below these factors have the value of 100.
Figure 4 reports the simulated viral loads on the shared objects and in air, as well as the exposure of the susceptible individual to the virus. In the base case, when no surface and hand cleaning is performed, the viral loads on the shared objects increase throughout the duration of the meeting. The desk is characterised by the highest viral load, with the document accumulating approximately 2.5 times less virus, while the number of viral particles on the door handle at t = 4 h is 65 times lower than on the desk. Considering the traditional interpretation of the fomite transmission pathway, the difference in the viral loads should be directly proportional to the contact frequency, contact area and transferred fraction while being inversely proportional to an object's surface area according to eq. (3). Given the parameter values in
Figure 4a contradict this conclusion. The difference between the expected and simulated fomite viral loads is explained by the contribution of large droplets, which is the dominant route of contamination for the desk and the document that remain in close proximity of the infected individual for 90% of the duration of the meeting. When considering viral concentrations per unit of surface area, however, the most contaminated object is the document, followed by the door handle and then by the desk (data not shown), which is due to the differences in the fomite surface areas over which the viral particles are deposited.

After the meeting the viral loads on fomite surfaces begin to decrease as a result of natural viral deactivation in the absence of a source of contamination. The rate of decrease in each case is determined by the type of fomite surface, which affects the average survival time of the virus.
Figure 4a also shows that disinfection of surfaces and hands at t = 4 h leads to a 100-fold decrease in the viral loads on hard surfaces (dashed lines). It is noteworthy that, even though the infected individual is no longer in the room after t = 4 h, the viral loads begin to increase again. This is due to the continuing interactions of the susceptible individual with the still-contaminated document, which helps to spread the pathogen onto the disinfected surfaces. An additional cleaning event at t = 2 h results in a further decrease in the viral loads on fomite surfaces at later times, although the effect of an additional cleaning event is smaller.

As expected, air contamination remains independent of fomite disinfection events, as shown in Figure 4b. The viral load increases throughout the meeting, almost reaching a steady state when the rate of shedding of small contaminated droplets is approximately balanced by the combined rate of their removal by ventilation and natural viral inactivation in air. When the infected individual leaves the setting, the viral load in the air decreases exponentially owing to the latter two mechanisms.

It is of particular interest to observe the viral exposure of the susceptible person shown in Figure 4c along with the contributions from the fomite, close contact, and aerosol pathways. The viral exposure continues to increase throughout the simulated 8 h duration, even after the infected person leaves the meeting, albeit at a slower rate. Indeed, for t > 4 h only one of the transmission pathways (close contact) is fully eliminated while the virus persists on fomite surfaces and in the air. It is also evident that the contribution of the fomite pathway is dominant throughout the 8 h period. As mentioned above, this is due primarily to shared surface contamination by large droplets, i.e., due to the fomite and large droplet pathways being intimately linked during prolonged periods of close proximity between infected and susceptible individuals.
Furthermore, the aerosol pathway quickly becomes insignificant due to a combination of ventilation and a relatively short half-life of the virus in air. This is further illustrated in Figure 4d, showing how the relative contributions of the aerosol and close contact routes (i.e., direct transfer of large droplets to mucous membranes) diminish with time compared to that of the fomite pathway.
Figure 4: Viral loads and exposure of the susceptible individual for Case Study 1 (Figure 3a). The subplots show the temporal evolution of viral loads (a) on fomites and (b) in the air, (c) total viral exposure and contributions due to the three transmission pathways, and (d) relative contributions of the three pathways into total exposure. The effect of surface and hand disinfection is shown with dashed lines for one cleaning/hand washing event at 4 h, and dash-dot lines for two cleaning/hand washing events at 2 h and 4 h.

Owing to the importance of the fomite pathway in this case study, the effect of surface and hand cleaning on viral exposure is expected to be significant, which is corroborated by the dashed and dash-dotted lines in Figure 4c. Indeed, a single cleaning event at the end of the meeting leads to a 34% reduction in the viral load accumulated by the susceptible person by the end of the working day, while two cleaning events give an overall 49% reduction in viral exposure. The relative contributions of the different pathways also change: the contribution of fomite transmission decreases with enhanced cleaning and those due to close contact and aerosol increase (see Figure 4d).

Such an important contribution of transmission via fomite surfaces warrants a further study into the effects of different mitigation strategies on the viral exposure of the susceptible individual. It is clear that increasing the frequency of hand and surface disinfection should lead to a significant reduction in exposure. However, it is also of interest to investigate how cleaning affects viral exposure in combination with other parameters such as (i) the close contact duration, (ii) face covering efficacy, and (iii) ventilation rate.
Figure 5a shows the synergistic effect on the final viral exposure of surface and hand disinfection (between 0 and 8 times during the 4 h-long meeting) and variable close contact duration (i.e., the fraction of time when large droplets emitted by the infected person can land on mucous membranes of the susceptible one). Note that in this case the amount of large droplets landing on fomite surfaces and hands of the two people remains the same as in the results reported in Figure 4. This can be interpreted as changing the relative positioning of the individuals, which results in different amounts of droplets expelled directly
Figure 5: Effects of mitigation measures on total viral exposure of the susceptible individual for Case Study 1 (a-c) and Case Study 2 (d-f).

If large droplets can be captured at the source (e.g., by wearing a face covering) so that they are prevented from either landing on another person's mucous membranes or contaminating fomite surfaces, a very significant reduction in the viral exposure can be achieved even without hand and surface disinfection (Figure 5b), provided that the face covering captures the majority of large droplets. Together with hand and surface cleaning, a more significant reduction in viral exposure can be achieved by wearing efficient face coverings than in the case of reducing close contact only. This result, showing how the large droplet and fomite pathways are intricately linked in these settings, is an important observation from the model. Lastly, Figure 5c shows the combined effect of cleaning and ventilation rate on viral exposure. The rate of ventilation has little effect on the viral exposure except at very low values (<1 ACH), while most of the reduction in this quantity in Figure 5c is due to hand and surface cleaning. This finding is not surprising when considering the low relative contribution of the aerosol pathway to the total viral exposure observed in Figure 4d.

4.2 Case Study 2: Working alongside in a small office

The second case study uses the same environment as in Case Study 1 (section 4.1) but the two individuals are assumed to be working alongside each other (see Figure 3b) for 4 h, after which the infected person leaves the room and the susceptible one remains in the setting for a further 4 h. Unlike in Case Study 1, the individuals spend only 5% of the time they are together in the room in close proximity to each other and the other individual's desk, while each spending 50% of the time handling the document. See
Figure 6: Viral loads and exposure of the susceptible individual for Case Study 2 (Figure 3b). The subplots show the temporal evolution of viral loads (a) on fomites and (b) in the air, (c) total viral exposure and contributions due to the three transmission pathways, and (d) relative contributions of the three pathways into total exposure. The effect of surface and hand disinfection is shown with dashed lines for one cleaning/hand washing event at 4 h, and dash-dot lines for two cleaning/hand washing events at 2 h and 4 h.
Figure 5d-f. First we note that further decreasing the fraction of time spent in close proximity from an already low value of 5% (without changing the amount of large droplets contaminating fomites) leads to marginal improvements as compared to the enhanced hand and surface cleaning (Figure 5d). Capturing large droplets with a face covering of increasing efficacy has a more tangible effect on the viral exposure since this also reduces surface contamination (Figure 5e). The improvement compared to the results in Figure 5d is marginal, however, because of the relatively short amount of time the individuals spend in close proximity.
The two case studies, 1 and 2, illustrate the importance of tracking all known transmission pathways in enclosed settings in order to reveal which of them are dominant and over what time scales. It is also demonstrated how the model presented herein can be used to explore the effect of mitigation strategies on the viral exposure of susceptible individuals and thus pave the way to their optimal deployment in real life applications.
• Groups of friends, each containing 3 individuals, are considered, whereby the students in the group spend time in close contact with one another. It is assumed that students within each group do not come into close contact with each other's private belongings.

Students in the office have been organised into 3 sets based on their interactions with the infected individual. Set 1 includes individuals who are in the friend group of and share a cabinet with the infected individual (Students 2 and 3). Set 2 includes individuals who share a cabinet with the infected individual, but are not in the same friend group as the infected individual (Students 4-13). Set 3 includes individuals who are neither in the friend group of the infected individual nor share a cabinet with them (Students 14-39). Furthermore, to highlight the impacts of different pathways, it is assumed that the infected individual spends 4 hours in the office while all susceptible students spend 8 hours.
Figure 7 presents the results of this test case. It is evident from Figure 7a, which shows the relative exposures from each pathway, that the close contact pathway is the most dominant transmission pathway for students in Set 1, followed by the aerosol pathway. While the fomite pathway has the smallest contribution of approximately 7%, it cannot be neglected in this instance. These results are further emphasised in Figure 7b, which shows the number of viral particles Student 2 (representative of Set 1) is exposed to via the different pathways over a 24-hour period. Noting that both Set 1 and Set 2 share a cabinet and therefore touch 'Cabinet handle 1', not shared by Set 3, exposure via the fomite pathway is negligible for individuals in both Sets 2 and 3. This indicates that shared public surfaces do not play a significant role in the fomite pathway in this instance. Further investigations reveal that large droplets deposited directly by the infected individual onto susceptible individuals' hands in Set 1 act as the main contributor to the fomite pathway. As individuals in Set 2 do not come into close contact with the infected individual, this interaction between the close contact and fomite pathways is eliminated.
Figure 7c shows the viral concentrations on public surfaces over a 24-hour period. Cabinet handle 1, shared by the infected individual and susceptible individuals in Sets 1 and 2, has the highest viral concentration throughout the 24-hour period. This is followed by the water dispenser button and printer touch screen. Although the water dispenser button has the highest touch frequency out of all public surfaces, cabinet handle 1 has the smallest surface area, potentially resulting in the higher viral concentration seen in Figure 7c. Cabinet handles 2 and 3 are not touched by the infected individual and therefore have negligible viral concentrations. Note that the viral concentrations on all public surfaces start to decrease rapidly after 4 hours as the infected individual exits at this time. Following the exit of all susceptible individuals at 8 hours, the viral concentrations continue to decrease via natural inactivation. Figure 7d, which shows the viral concentrations on Desk 2 and Chair 2 belonging to Student 2, representing Set 1, follows a similar trend over the 24-hour period. Despite the desk having a higher touch frequency when compared to the chair, the latter retains a higher viral concentration until approximately 18 hours, potentially due to its smaller surface area compared to the desk.
Figure 7: Results for the larger Case Study 3. a) The relative viral exposure for each set of individuals in the office via the different transmission pathways; b) viral exposure of Student 2, representative of Set 1, via the different transmission pathways; c) the viral concentrations over a 24-hour period on public surfaces found in the office; d) viral concentrations on private objects belonging to Student 2, representative of Set 1, over a 24-hour period.

Overall, this case study demonstrates the flexibility and adaptability of the model, which accommodates both continuous and discrete events and includes all three transmission pathways. The results shed light on how different pathways can dominate depending on the extent of interaction between susceptible and infected individuals, highlighting the importance of including all pathways, capturing human behaviour, and considering the interactions between pathways. Furthermore, the impacts of discrete events, such as the infected individual leaving the enclosed space, can also be observed using this model.

5 Conclusions

We developed the model presented herein to quantitatively describe all the known transmission pathways of respiratory pathogens, such as SARS-CoV-2. The model formulation comprises, effectively, conservation equations tracking the virus with the level of detail sufficient to account for the expected (average) rates of transmission in a given setting: setting size and ventilation regime, objects present therein and their properties, the times of entry and exit of individuals, and their interactions with other people and fomites. Like any model, the one developed here is based on a number of assumptions. First, we assume that the rate of dispersion of respiratory aerosols is high, so that any emitted small drops of mucus are instantaneously mixed with the air in the setting. This assumption is justified in most small to medium settings such as offices and classrooms.
Second, we make assumptions about the fractions of large droplets that can be transmitted ballistically from one individual to mucous membranes of another, or onto fomite surfaces when those are in close proximity to the source. These parameters are difficult to pinpoint in most cases, since the distances between the source of large droplets and objects, as well as the orientation of the head of the infected individual, constantly change in most situations. Third, fomite transmission is described either using a continuous formulation based on touch frequencies or as a series of discrete events (the exact formulation can be chosen by the user depending on the context and available information). While more detailed representations of particular aspects such as exact airflow patterns or event-based simulations may provide more accurate predictions in some cases (particularly when scrutinising a transmission event post factum), they can be too constrained to yield statistically relevant conclusions. Therefore, we believe that the level of approximation adopted in our model is sufficient for simulating the expected (average) rates of transmission and risk of infection, as well as assessing the effect of mitigation measures, in general enclosed settings where less information is available. Our model offers new insights into the magnitude of exposure to the virus and the prevalent transmission routes under different conditions. The simulation results show that the nature of the enclosed setting and the intensity of person-to-person and person-fomite interactions therein play an important role in determining the relative contributions of the aerosol, close contact, and fomite transmission pathways. Thus, the results for Case Studies 1 and 2, focusing on two individuals sharing a small office, show that when individuals spend long periods of time in close proximity to each other and frequently touch shared objects, the fomite pathway can be the dominant transmission route.
The results also showed how transmission pathways can be related, particularly large droplet deposition and fomite pathways. In this case the contribution of the aerosol pathway may be significantly lower, and increasing the rate of ventilation (one of the most frequently recommended mitigation measures indoors) beyond a very modest 0.5 ACH has a negligible effect on the risk of infection. A much more effective approach to reducing infection risk under such circumstances involves frequent cleaning of often-touched surfaces that are also in close proximity to more than one individual as well as wearing effective face covering to reduce the shedding of large droplets of mucus. Our simulations also show that when close contact between individuals and between individuals and private objects of other people is minimised (as in Case Study 3), the overall exposure to the virus (and hence the risk of infection) are greatly reduced and transmission occurs predominantly through the aerosol pathway. This implies that the risk of infection under such conditions can be reduced through enhancing the rate of removal and/or inactivation of the pathogen in the air (e.g., through enhanced ventilation, air filtration or Ultraviolet germicidal irradiation). These findings highlight the fact that the mechanisms of respiratory infection transmission can be drastically different depending on the nature of an indoor space, its occupancy level and how the occupants interact with each other and their environment. The time scales involved (e.g., the total length of stay in the setting and duration of close encounters with other people) are also important factors affecting the relative contributions of the different transmission pathways into total exposure. 
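The diminishing return from extra ventilation noted above follows directly from the standard well-mixed room balance, dC/dt = E/V - (ACH + k_inact)*C: when inactivation and other removal mechanisms already clear virus from the air, raising ACH removes proportionally less. A sketch with illustrative, assumed numbers (not the paper's calibrated parameter set):

```python
def steady_state_aerosol(emission, volume_m3, ach, k_inact):
    """Steady state of the well-mixed balance dC/dt = E/V - (ACH + k_inact)*C,
    i.e. C_ss = E / (V * (ACH + k_inact)). Units: particles per m^3."""
    return emission / (volume_m3 * (ach + k_inact))

# Doubling ventilation from 0.5 to 1.0 ACH lowers the steady-state
# concentration by well under a factor of two when k_inact = 0.63 h^-1:
c_low = steady_state_aerosol(1.13e7, volume_m3=30.0, ach=0.5, k_inact=0.63)
c_high = steady_state_aerosol(1.13e7, volume_m3=30.0, ach=1.0, k_inact=0.63)
```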
Based on our results, we can conclude that, while UK government COVID-19 advice undoubtedly lists valid mitigating interventions, their use should be tailored to particular enclosed environments based on a quantitative assessment of the different transmission pathways. This would not only allow a significant reduction in the infection risk in a particular setting, but would also enable optimal deployment of mitigating interventions to minimise the associated monetary and non-monetary costs. The model presented here provides a versatile means of simulating viral exposure at the level of a single enclosed space. However, it can also be used as part of a larger simulation involving multiple settings of different types visited by an individual throughout the day or over longer periods of time. Thus, this type of local, but high-fidelity, transmission model could play an important role as a building block in larger-scale epidemiological models.
the transfer rate from surface to hand; R HS represents the transfer rate from hand to surface; A c is the area of the surface that made contact; and A H and A S represent the areas of the hand and surface, respectively. Through a detailed surface touch map, obtained via video surveillance, they found that both public surfaces and private surfaces play a part in spreading viruses via fomites 36. They modelled close contacts by observing the number and duration of contacts, and making assumptions regarding the transmission of small and large droplets between individuals. Their results suggest that, for influenza A, 54.3% of infection risk is attributed to the aerosol pathway, 44.5% to the droplet pathway, and 4.2% to the fomite pathway, demonstrating the importance of considering multiple pathways in viral transmission modelling. In subsequent work, Zhang et al. 37 obtained more accurate parameter values for close contacts. Their approach of combining data with different approximate models is promising and effective; however, up to now the different routes were modelled separately and the model has yet to be generalised to scenarios beyond those for which data is directly available. A recent review from Leung 38 highlights the lack of understanding and intense debate surrounding the importance of the different transmission mechanisms for various respiratory diseases, stressing the need for inter-disciplinary cooperation to quantify and understand these mechanisms holistically. Relatively few models attempt to quantify the effects of multiple transmission routes simultaneously, and fewer still can be used to simulate and understand the effects of different mitigation strategies.
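The transfer-rate terms described here can be collected into a single touch-event update: virus moves in both directions in proportion to the transfer efficiencies and the contacted area fractions, conserving the total number of particles. A hedged sketch of that bookkeeping (the function name and numeric values are illustrative, not the code of refs 36 and 37):

```python
def touch_exchange(c_hand, c_surf, r_sh, r_hs, a_c, a_hand, a_surf):
    """One hand-surface touch. Concentrations are particles per cm^2;
    r_sh / r_hs are the surface-to-hand / hand-to-surface transfer
    efficiencies; a_c is the contact area (cm^2)."""
    picked_up = r_sh * c_surf * a_c   # particles moved surface -> hand
    deposited = r_hs * c_hand * a_c   # particles moved hand -> surface
    c_hand_new = c_hand + (picked_up - deposited) / a_hand
    c_surf_new = c_surf + (deposited - picked_up) / a_surf
    return c_hand_new, c_surf_new

# A clean hand touching a contaminated desk picks up virus:
h, s = touch_exchange(0.0, 10.0, r_sh=0.08, r_hs=0.16,
                      a_c=36.0, a_hand=147.0, a_surf=6000.0)
```

Because the same number of particles leaves one compartment as enters the other, the total load c_hand*A_hand + c_surf*A_surf is unchanged by a touch.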
While many models with detailed spatio-temporal distributions exist, using computational fluid dynamics, multi-agent models, etc., these are limited in their applicability to general scenarios, are computationally costly to run, and require detailed mathematical and coding knowledge to test different scenarios. On the other hand, models that only consider population-level transmission can be difficult to assess for local decision-making and contain a number of highly inaccurate parameters and many simplifications. This study presents a novel, general and flexible model for the risk of viral infection in enclosed spaces. The model, which has been applied to the SARS-CoV-2 virus in our case studies, can be used to study any viral transmission by modifying the appropriate virus-specific parameters, and can model any enclosed environment (such as public transport, offices, and classrooms) by modifying the environmental parameters. It can be used to approximate the risk of infection of individuals, based on the natural decay of viruses on different surface types and in the air, the surface types, transfer efficiencies of the virus from the surfaces, ventilation rates, and human behaviour in the settings (interactions with objects, face-touching frequencies, etc.). The deterministic model is formulated as a set of first-order ODEs, with discrete inputs used to model events such as individuals entering or leaving the enclosed environment and cleaning events. The model can be used to quantify and understand the relative risk associated with the different transmission pathways and the effectiveness of different mitigation strategies in different environments.
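As a concrete illustration of the "first-order ODEs plus discrete inputs" structure, the airborne balance can be stepped with explicit Euler, with the infected individual's emission switched off at their exit time (all numbers below are assumed placeholders, not the model's calibrated values):

```python
def simulate_air(t_end_h, dt_h, emission, volume_m3, ach, k_inact, exit_h):
    """Integrate dC/dt = E(t)/V - (ACH + k_inact)*C, where the emission
    E(t) drops to zero at exit_h -- a discrete event in a continuous model."""
    c, t, history = 0.0, 0.0, []
    while t < t_end_h:
        e = emission if t < exit_h else 0.0   # infected individual leaves
        c += dt_h * (e / volume_m3 - (ach + k_inact) * c)
        t += dt_h
        history.append((t, c))
    return history

traj = simulate_air(24.0, 0.01, emission=1.13e7, volume_m3=30.0,
                    ach=0.5, k_inact=0.63, exit_h=4.0)
# Concentration rises until the 4-hour exit, then decays exponentially.
```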
Through this pathway, an infected individual can contaminate their hands by touching their mouth or nose before handling other objects, while all individuals can spread the virus by touching multiple objects. Susceptible individuals can then self-inoculate by transferring the virus from their contaminated hands to mucous membranes.
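The self-inoculation step can be approximated as a dose accumulated over repeated face touches. A sketch assuming a constant hand concentration (parameter values are illustrative, in the spirit of the touch frequencies and transfer efficiencies tabulated in Appendix A):

```python
def mucosal_dose(c_hand, f_touch_per_h, r_hand_muc, a_c, duration_h):
    """Cumulative viral dose delivered to mucous membranes, assuming the
    hand concentration c_hand (particles/cm^2) stays constant over the period."""
    per_touch = r_hand_muc * c_hand * a_c     # particles per face touch
    return per_touch * f_touch_per_h * duration_h

dose = mucosal_dose(c_hand=2.0, f_touch_per_h=16, r_hand_muc=0.5,
                    a_c=7.67, duration_h=8.0)
```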
Other parameter values, including reasonable assumptions about the values of some of the parameters described in the text below, are compiled in Table 1 (for Case Studies 1 and 2), Table 2 (for Case Study 3), and Table 3 of Appendix A. The models and related data have been made openly available on GitHub: https://github.com/Ishanki/VIRAS.
Table 1: Parameters used in Case Studies 1 and 2. The following subscripts have been used: doc - document, dsk - desk, dh - door handle, inf - infected individual, muc - mucosa, resp - respiration, susc - susceptible individual, vent - ventilation. Table 2 contains those specific to Case Study 3, where infected and susceptible individuals in a large graduate student office are modelled.

Parameter        Value    Unit    Notes and Reference
A_doc            623.7    cm^2    Assumed
A_dsk            6000     cm^2    Assumed
A_dh             65       cm^2    Assumed
A_muc            391.7    cm^2    Value calculated using data 43-45
A_hand           147.02   cm^2    Area of both palms 46
A^c_hand,doc     36.8     cm^2    Assumed
A^c_hand,dsk     73.5     cm^2    Assumed
A^c_hand,dh      36.0     cm^2    Assumed
A^c_hand,muc     7.67     cm^2    Value assumed for two fingertips 47
f_hand,doc       5.0      h^-1    Case Study 1, assumed
f_hand,doc       2.5      h^-1    Case Study 2, assumed
f_hand,dsk       20.0     h^-1    Average value from 17
f_hand,dh        1        h^-1    Assumed
f_hand,muc       16       h^-1    Average value from 17
Table 2: Parameters used in the large test case, based on the original test case by Zhang and Li. 17 The following subscripts have been used: ch - cabinet handle, chr - chair, dsk - desk, dh - door handle, ps - printer touch screen, wdb - water dispenser button, inf - infected individual, group - specific to interactions within a group, muc - mucosa, resp - respiration, susc - susceptible individual, vent - ventilation.

Parameter    Value    Unit    Notes and Reference
A_dsk        6000     cm^2    Desktop 17
A_chr        4260     cm^2    Total chair area 17
A_ch         10       cm^2    Handle area only 17
A_ps         35       cm^2    Touchscreen area only 17
Table 3: Parameters associated with viral transmission and other related physical properties. The following subscripts have been used: inf - infected individual, muc - mucosa, resp - respiration, susc - susceptible individual, por - porous, npor - nonporous, ss - stainless steel.

SARS-CoV-2 half-life on swine skin at 22°C 14

Parameter      Value          Unit                    Notes and Reference
r^resp_S_j     0.39           m^3 h^-1                Average value for respiration 48
k_susc         3.95 × 10^5    viral particles^-1      As deduced by 49
LRV_i          2              -                       Assumed based on other viruses 50
LRV_hand       1.1            -                       Assumed using data for Norwalk virus 51
LRV_muc        0              -                       Assumed
r^a_inf        1.13015 × 10^7 viral particles h^-1    Taken from 41
R_por,hand     0.03           -                       Value assumed from influenza data 17
R_npor,hand    0.07           -                       Value assumed from influenza data 17
R_ss,hand      0.08           -                       Value assumed from influenza data 17
R_hand,por     0.8            -                       Value assumed from influenza data 17
R_hand,npor    0.12           -                       Value assumed from influenza data 17
R_hand,ss      0.16           -                       Value assumed from influenza data 17
R_hand,muc     0.5            -                       Value assumed from influenza data 17
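A cumulative exposure can be converted into an infection probability with an exponential dose-response relation; the form P = 1 - exp(-N/k_susc) is an assumption here, motivated by the cited dose-response work 49, and not necessarily the paper's exact expression:

```python
import math

def infection_probability(dose, k_susc=3.95e5):
    """Exponential dose-response: infection probability for a cumulative
    dose (viral particles). k_susc value from Table 3; functional form assumed."""
    return 1.0 - math.exp(-dose / k_susc)

# A dose equal to k_susc gives a probability of 1 - e^-1, about 0.63.
p = infection_probability(3.95e5)
```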
The ventilation of buildings and other mitigating measures for COVID-19: a focus on wintertime. H C Burridge, Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 20200855Burridge, H. C. et al. The ventilation of buildings and other mitigating measures for COVID-19: a focus on wintertime. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 2021, 477, 20200855.
The flow physics of COVID-19. R Mittal, R Ni, J.-H Seo, Journal of Fluid Mechanics. 8942Mittal, R.; Ni, R.; Seo, J.-H. The flow physics of COVID-19. Journal of Fluid Mechanics 2020, 894, F2.
Size distribution and sites of origin of droplets expelled from the human respiratory tract during expiratory activities. L Morawska, G Johnson, Z Ristovsk, M Hargreaves, K Mengersen, S Corbett, C Chao, D Katoshevski, Journal of Aerosol Science. 40Morawska, L.; Johnson, G.; Ristovsk, Z.; Hargreaves, M.; Mengersen, K.; Corbett, S.; Chao, C.; Katoshevski, D. Size distribution and sites of origin of droplets expelled from the human respiratory tract during expiratory activities. Journal of Aerosol Science 2009, 40, 256-269.
Efficacy of face masks depends on spatial relation between host and recipient and who is being protected. C Grover, BMJ. 2020369Grover, C. Efficacy of face masks depends on spatial relation between host and recipient and who is being protected. BMJ 2020, 369 .
Airborne transmission of coronavirus cannot be ruled out, WHO says. M Mathers, Last accessed onMathers, M. Airborne transmission of coronavirus cannot be ruled out, WHO says. https://www.independent.co.uk/news/health/ coronavirus-airborne-who-cases-deaths-a9607206.html., 2020; Last accessed on: 30th March 2021.
Aerosol transmission of influenza A virus: a review of new studies. R Tellier, Journal of The Royal Society Interface. 6Tellier, R. Aerosol transmission of influenza A virus: a review of new studies. Journal of The Royal Society Interface 2009, 6, S783-S790.
Evidence for probable aerosol transmission of SARS-CoV-2 in a poorly ventilated restaurant. Y Li, H Qian, J Hang, X Chen, L Hong, P Liang, J Li, S Xiao, J Wei, L Liu, M Kang, 2020Li, Y.; Qian, H.; Hang, J.; Chen, X.; Hong, L.; Liang, P.; Li, J.; Xiao, S.; Wei, J.; Liu, L.; Kang, M. Evidence for probable aerosol transmission of SARS-CoV-2 in a poorly ventilated restaurant. medRxiv 2020,
Mechanistic transmission modeling of COVID-19 on the Diamond Princess cruise ship demonstrates the importance of aerosol transmission. P Azimi, Z Keshavarz, J G Laurent, B Stephens, J G Allen, Proceedings of the National Academy of Sciences 2021, 118. Azimi, P.; Keshavarz, Z.; Cedeno Laurent, J. G.; Stephens, B.; Allen, J. G. Mechanistic transmission modeling of COVID-19 on the Diamond Princess cruise ship demonstrates the importance of aerosol transmission. Proceedings of the National Academy of Sciences 2021, 118.
Transmission of SARS-CoV-2 by inhalation of respiratory aerosol in the Skagit Valley Chorale superspreading event. S L Miller, W W Nazaroff, J L Jimenez, A Boerstra, G Buonanno, S J Dancer, J Kurnitski, L C Marr, L Morawska, C Noakes, Indoor Air. 31Miller, S. L.; Nazaroff, W. W.; Jimenez, J. L.; Boerstra, A.; Buonanno, G.; Dancer, S. J.; Kurnitski, J.; Marr, L. C.; Morawska, L.; Noakes, C. Transmission of SARS-CoV-2 by inhalation of respiratory aerosol in the Skagit Valley Chorale superspreading event. Indoor Air 2021, 31, 314-323.
How can airborne transmission of COVID-19 indoors be minimised? Environmental International. L Morawska, 142Morawska, L. et al. How can airborne transmission of COVID-19 indoors be minimised? Environmental International 2020, 142 .
Respiratory virus shedding in exhaled breath and efficacy of face masks. N H L Leung, D K W Chu, E Y C Shiu, K H Chan, J J Mcdevitt, B J P Hau, H L Yen, Y Li, D K M Ip, J S M Peiris, Nat. Med. 2020Leung, N. H. L.; Chu, D. K. W.; Shiu, E. Y. C.; Chan, K. H.; McDevitt, J. J.; Hau, B. J. P.; Yen, H. L.; Li, Y.; Ip, D. K. M.; Peiris, J. S. M., et al. Respiratory virus shedding in exhaled breath and efficacy of face masks. Nat. Med. 2020, 1-5.
Effectiveness of Adding a Mask Recommendation to Other Public Health Measures to Prevent SARS-CoV-2 Infection in Danish Mask Wearers. H Bundgaard, Annals of Internal Medicine. 174Bundgaard, H. et al. Effectiveness of Adding a Mask Recommendation to Other Public Health Measures to Prevent SARS-CoV-2 Infection in Danish Mask Wearers. Annals of Internal Medicine 2020, 174, 335-343.
Experimental investigation of indoor aerosol dispersion and accumulation in the context of COVID-19: Effects of masks and ventilation. Y Shah, J W Kurelek, S D Peterson, S Yarusevych, Physics of Fluids. 3373315Shah, Y.; Kurelek, J. W.; Peterson, S. D.; Yarusevych, S. Experimental investigation of indoor aerosol dispersion and accumulation in the context of COVID-19: Effects of masks and ventilation. Physics of Fluids 2021, 33, 073315.
Role of fomites in SARS transmission during the largest hospital outbreak in Hong Kong. S Xiao, Y Li, T Wong, D S Hui, PLOS ONE. 12Xiao, S.; Li, Y.; Wong, T.-w.; Hui, D. S. C. Role of fomites in SARS transmission during the largest hospital outbreak in Hong Kong. PLOS ONE 2017, 12, 1-13.
Fomite-mediated transmission as a sufficient pathway: a comparative analysis across three viral pathogens. A N Kraay, M A Hayashi, N Hernandez-Ceron, I H Spicknall, M C Eisenberg, R Meza, J N Eisenberg, BMC Infectious Diseases. 18Kraay, A. N.; Hayashi, M. A.; Hernandez-Ceron, N.; Spicknall, I. H.; Eisenberg, M. C.; Meza, R.; Eisenberg, J. N. Fomite-mediated transmission as a sufficient pathway: a comparative analysis across three viral pathogens. BMC Infectious Diseases 2018, 18 .
Report into a nosocomial outbreak of coronavirus disease 2019 (COVID-19) at Netcare St. Augustine's Hospital. R Lessells, Y Moosa, T De Oliveira, KRISP -University of KwaZulu-NatalLessells, R.; Moosa, Y.; de Oliveira, T. Report into a nosocomial outbreak of coronavirus disease 2019 (COVID-19) at Netcare St. Augustine's Hospital. KRISP -University of KwaZulu-Natal 2020,
Transmission of influenza A in a student office based on realistic person-to-person contact and surface touch behaviour. N Zhang, Y Li, International journal of environmental research. 151699Zhang, N.; Li, Y. Transmission of influenza A in a student office based on realistic person-to-person contact and surface touch behaviour. International journal of envi- ronmental research and public health 2018, 15, 1699.
Mathematical Modeling of Business Reopening When Facing SARS-CoV-2 Pandemic: Protection, Cost, and Risk. H Miao, Q Gao, H Feng, C Zhong, P Zhu, L Wu, M D Swartz, X Luo, S M Desantis, D Lai, C Bauer, A Pérez, L Rong, D Lairson, Frontiers in Applied Mathematics and Statistics. 635Miao, H.; Gao, Q.; Feng, H.; Zhong, C.; Zhu, P.; Wu, L.; Swartz, M. D.; Luo, X.; DeSantis, S. M.; Lai, D.; Bauer, C.; Pérez, A.; Rong, L.; Lairson, D. Mathematical Modeling of Business Reopening When Facing SARS-CoV-2 Pandemic: Protection, Cost, and Risk. Frontiers in Applied Mathematics and Statistics 2020, 6, 35.
Heidrich, P. A COVID-19 epidemic model integrating direct and fomite transmission as well as household structure. K P Wijaya, N Ganegoda, Y Jayatunga, T Goetz, W Bock, M Schaefer, 2020Wijaya, K. P.; Ganegoda, N.; Jayatunga, Y.; Goetz, T.; Bock, W.; Schaefer, M.; Hei- drich, P. A COVID-19 epidemic model integrating direct and fomite transmission as well as household structure. medRxiv 2020,
Identifying airborne transmission as the dominant route for the spread of COVID-19. R Zhang, Y Li, A L Zhang, Y Wang, M J Molina, Proceedings of the National Academy of Sciences 2020, 117. Zhang, R.; Li, Y.; Zhang, A. L.; Wang, Y.; Molina, M. J. Identifying airborne transmission as the dominant route for the spread of COVID-19. Proceedings of the National Academy of Sciences 2020, 117, 14857-14863.
Quantifying SARS-CoV-2 transmission suggests epidemic control with digital contact tracing. L Ferretti, C Wymant, M Kendall, L Zhao, A Nurtay, L Abeler-Dörner, M Parker, D Bonsall, C Fraser, Science. 368Ferretti, L.; Wymant, C.; Kendall, M.; Zhao, L.; Nurtay, A.; Abeler-Dörner, L.; Parker, M.; Bonsall, D.; Fraser, C. Quantifying SARS-CoV-2 transmission suggests epidemic control with digital contact tracing. Science 2020, 368 .
It is Time to Address Airborne Transmission of COVID-19. L Morawska, D K Milton, Clinical Infectious Diseases. 939Morawska, L.; Milton, D. K. It is Time to Address Airborne Transmission of COVID-19. Clinical Infectious Diseases 2020, ciaa939.
Airborne infection R-numbers for regularly attended spaces: COVID-19 a case-study. H C Burridge, C J Noakes, P Linden, Burridge, H. C.; Noakes, C. J.; Linden, P. F. Airborne infection R-numbers for regularly attended spaces: COVID-19 a case-study. 2020.
Evaluating the commercial airliner cabin environment with different air distribution systems. R You, C.-H Lin, D Wei, Q Chen, Indoor Air. 29. You, R.; Lin, C.-H.; Wei, D.; Chen, Q. Evaluating the commercial airliner cabin environment with different air distribution systems. Indoor Air 2019, 29, 840-853.
Human behavior during close contact in a graduate student office. N Zhang, J W Tang, Y Li, Indoor Air. 29Zhang, N.; Tang, J. W.; Li, Y. Human behavior during close contact in a graduate student office. Indoor Air 2019, 29, 577-590.
Close contact behavior in indoor environment and transmission of respiratory infection. N Zhang, W Chen, P.-T Chan, H.-L Yen, J W Tang, .-T Li, Y , Indoor Air. 30Zhang, N.; Chen, W.; Chan, P.-T.; Yen, H.-L.; Tang, J. W.-T.; Li, Y. Close contact behavior in indoor environment and transmission of respiratory infection. Indoor Air 2020, 30, 645-661.
Risk of fomite-mediated transmission of SARS-CoV-2 in child daycares, schools, and offices: a modeling study. A N M Kraay, M A L Hayashi, D M Berendes, J S Sobolik, J S Leon, B A Lopman, 2020Kraay, A. N. M.; Hayashi, M. A. L.; Berendes, D. M.; Sobolik, J. S.; Leon, J. S.; Lopman, B. A. Risk of fomite-mediated transmission of SARS-CoV-2 in child daycares, schools, and offices: a modeling study. medRxiv 2020,
Bacterial transfer to fingertips during sequential surface contacts with and without gloves. M.-F King, M López-García, K P Atedoghu, N Zhang, A M Wilson, M Weterings, W Hiwar, S J Dancer, C J Noakes, L A Fletcher, Indoor Air. 30. King, M.-F.; López-García, M.; Atedoghu, K. P.; Zhang, N.; Wilson, A. M.; Weterings, M.; Hiwar, W.; Dancer, S. J.; Noakes, C. J.; Fletcher, L. A. Bacterial transfer to fingertips during sequential surface contacts with and without gloves. Indoor Air 2020, 30, 993-1004.
Model Analysis of Fomite Mediated Influenza Transmission. J Zhao, J E Eisenberg, I H Spicknall, S Li, J S Koopman, PLOS ONE. 7Zhao, J.; Eisenberg, J. E.; Spicknall, I. H.; Li, S.; Koopman, J. S. Model Analysis of Fomite Mediated Influenza Transmission. PLOS ONE 2012, 7, 1-11.
Modeling of Human Viruses on Hands and Risk of Infection in an Office Workplace Using Micro-Activity Data. P I Beamer, K R Plotkin, C P Gerba, L Y Sifuentes, D W Koenig, K A Reynolds, Journal of Occupational and Environmental Hygiene. 12. Beamer, P. I.; Plotkin, K. R.; Gerba, C. P.; Sifuentes, L. Y.; Koenig, D. W.; Reynolds, K. A. Modeling of Human Viruses on Hands and Risk of Infection in an Office Workplace Using Micro-Activity Data. Journal of Occupational and Environmental Hygiene 2015, 12, 266-275, PMID: 25436665.
The Potential Impact of Intensified Community Hand Hygiene Interventions on Respiratory tract Infections: A Modelling Study. T M Pham, Y Mo, B Cooper, 2020Pham, T. M.; Mo, Y.; Cooper, B. The Potential Impact of Intensified Community Hand Hygiene Interventions on Respiratory tract Infections: A Modelling Study. medRxiv 2020,
Probable transmission routes of the influenza virus in a nosocomial outbreak. S Xiao, J W Tang, D S Hui, H Lei, H Yu, Y Li, Epidemiology and Infection. 146Xiao, S.; Tang, J. W.; Hui, D. S.; Lei, H.; Yu, H.; Li, Y. Probable transmission routes of the influenza virus in a nosocomial outbreak. Epidemiology and Infection 2018, 146, 1114-1122.
Airborne or Fomite Transmission for Norovirus? A Case Study Revisited. S Xiao, J W Tang, Y Li, International Journal of Environmental Research. 14Xiao, S.; Tang, J. W.; Li, Y. Airborne or Fomite Transmission for Norovirus? A Case Study Revisited. International Journal of Environmental Research and Public Health 2017, 14 .
Routes of transmission of influenza A H1N1, SARS CoV, and norovirus in air cabin: Comparative analyses. H Lei, Y Li, S Xiao, C.-H Lin, S L Norris, D Wei, Z Hu, S Ji, Indoor Air. 28Lei, H.; Li, Y.; Xiao, S.; Lin, C.-H.; Norris, S. L.; Wei, D.; Hu, Z.; Ji, S. Routes of transmission of influenza A H1N1, SARS CoV, and norovirus in air cabin: Comparative analyses. Indoor Air 2018, 28, 394-403.
Relative contributions of transmission routes for COVID-19 among healthcare personnel providing patient care. R M Jones, Journal of Occupational and Environmental Hygiene 2020. Jones, R. M. Relative contributions of transmission routes for COVID-19 among healthcare personnel providing patient care. Journal of Occupational and Environmental Hygiene 2020, 17, 408-415, PMID: 32643585.
Surface touch and its network growth in a graduate student office. N Zhang, Y Li, H Huang, 28Zhang, N.; Li, Y.; Huang, H. Surface touch and its network growth in a graduate student office. Indoor air 2018, 28, 963-972.
Infection Spread and High-Resolution Detection of Close Contact Behaviors. N Zhang, B Su, P.-T Chan, T Miao, P Wang, Y Li, International Journal of Environmental Research and Public Health. 171445Zhang, N.; Su, B.; Chan, P.-T.; Miao, T.; Wang, P.; Li, Y. Infection Spread and High-Resolution Detection of Close Contact Behaviors. International Journal of Envi- ronmental Research and Public Health 2020, 17, 1445.
Transmissibility and transmission of respiratory viruses. N H Leung, Nature Reviews Microbiology. 19Leung, N. H. Transmissibility and transmission of respiratory viruses. Nature Reviews Microbiology 2021, 19, 528-545.
Aerosol and surface stability of SARS-CoV-2 as compared with SARS-CoV-1. N Van Doremalen, T Bushmaker, D H Morris, M G Holbrook, A Gamble, B N Williamson, A Tamin, J L Harcourt, N J Thornburg, S I Gerber, J O Lloyd-Smith, E De Wit, New England Journal of Medicine. 382Van Doremalen, N.; Bushmaker, T.; Morris, D. H.; Holbrook, M. G.; Gamble, A.; Williamson, B. N.; Tamin, A.; Harcourt, J. L.; Thornburg, N. J.; Gerber, S. I.; Lloyd- Smith, J. O.; de Wit, E. Aerosol and surface stability of SARS-CoV-2 as compared with SARS-CoV-1. New England Journal of Medicine 2020, 382, 1564-1567.
Transmissibility of COVID-19 depends on the viral load around onset in adult and symptomatic patients. H Kawasuji, Y Takegoshi, M Kaneda, A Ueno, Y Miyajima, K Kawago, Y Fukui, Y Yoshida, M Kimura, H Yamada, I Sakamaki, H Tani, Y Morinaga, Y Yamamoto, PLOS. 2020Kawasuji, H.; Takegoshi, Y.; Kaneda, M.; Ueno, A.; Miyajima, Y.; Kawago, K.; Fukui, Y.; Yoshida, Y.; Kimura, M.; Yamada, H.; Sakamaki, I.; Tani, H.; Morinaga, Y.; Yamamoto, Y. Transmissibility of COVID-19 depends on the viral load around onset in adult and symptomatic patients. PLOS ONE 2020, 15, 1-8.
Coronavirus Disease 2019 Patients in Earlier Stages Exhaled Millions of Severe Acute Respiratory Syndrome Coronavirus 2 Per Hour. J Ma, X Qi, H Chen, X Li, Z Zhang, H Wang, L Sun, L Zhang, J Guo, L Morawska, S A Grinshpun, P Biswas, R C Flagan, M Yao, Clinical Infectious. 2020Ma, J.; Qi, X.; Chen, H.; Li, X.; Zhang, Z.; Wang, H.; Sun, L.; Zhang, L.; Guo, J.; Morawska, L.; Grinshpun, S. A.; Biswas, P.; Flagan, R. C.; Yao, M. Coronavirus Disease 2019 Patients in Earlier Stages Exhaled Millions of Severe Acute Respiratory Syndrome Coronavirus 2 Per Hour. Clinical Infectious Diseases 2020, ciaa1283.
A predictive model of the temperature-dependent inactivation of coronaviruses. T F Yap, Z Liu, R A Shveda, D J Preston, Applied Physics Letters. 117, 060601. Yap, T. F.; Liu, Z.; Shveda, R. A.; Preston, D. J. A predictive model of the temperature-dependent inactivation of coronaviruses. Applied Physics Letters 2020, 117, 060601.
Anatomical and histological factors affecting intranasal drug and vaccine delivery. S Gizurarson, Current drug delivery. 9Gizurarson, S. Anatomical and histological factors affecting intranasal drug and vaccine delivery. Current drug delivery 2012, 9, 566-582.
The oral mucosal surface and blood vessels. E A Naumova, T Dierkes, J Sprang, W H Arnold, Head & face medicine. 9Naumova, E. A.; Dierkes, T.; Sprang, J.; Arnold, W. H. The oral mucosal surface and blood vessels. Head & face medicine 2013, 9, 1-5.
Comparison of conjunctival and corneal surface areas in rabbit and human. M A Watsky, M M Jablonski, H F Edelhauser, Current eye research. 7Watsky, M. A.; Jablonski, M. M.; Edelhauser, H. F. Comparison of conjunctival and corneal surface areas in rabbit and human. Current eye research 1988, 7, 483-486.
Determination Of Hand And Palm Surface Areas As A Percentage Of Body Surface Area In Turkish Young Adults. Trauma and Emergency Care. P Göker, M G Bozkir, 2. Göker, P.; Bozkir, M. G. Determination Of Hand And Palm Surface Areas As A Percentage Of Body Surface Area In Turkish Young Adults. Trauma and Emergency Care 2017, 2, 1-4.
Estimation of hand-to-mouth transfer efficiency of lead. J Sahmel, E I Hsu, H J Avens, E M Beckett, K D Devlin, Annals of Occupational Hygiene. 59Sahmel, J.; Hsu, E. I.; Avens, H. J.; Beckett, E. M.; Devlin, K. D. Estimation of hand-to-mouth transfer efficiency of lead. Annals of Occupational Hygiene 2015, 59, 210-220.
Avoid Airway Catastrophes on the Extremes of Minute Ventilation. R M Levitan, Last accessed onLevitan, R. M. Avoid Airway Catastrophes on the Extremes of Minute Ventilation. 2020; Last accessed on: 30th March 2021.
Deducing the Dose-response Relation for Coronaviruses from COVID-19, SARS and MERS Meta-analysis Results. X Zhang, J Wang, 2020Zhang, X.; Wang, J. Deducing the Dose-response Relation for Coronaviruses from COVID-19, SARS and MERS Meta-analysis Results. MedRxiv 2020,
Residual viral and bacterial contamination of surfaces after cleaning and disinfection. E Tuladhar, W C Hazeleger, M Koopmans, M H Zwietering, R R Beumer, E Duizer, Applied and Environmental Microbiology. 78Tuladhar, E.; Hazeleger, W. C.; Koopmans, M.; Zwietering, M. H.; Beumer, R. R.; Duizer, E. Residual viral and bacterial contamination of surfaces after cleaning and disinfection. Applied and Environmental Microbiology 2012, 78, 7769-7775.
Effectiveness of liquid soap and hand sanitizer against Norwalk virus on contaminated hands. Applied and Environmental Microbiology. P Liu, Y Yuen, H M Hsiao, L A Jaykus, C Moe, 76. Liu, P.; Yuen, Y.; Hsiao, H. M.; Jaykus, L. A.; Moe, C. Effectiveness of liquid soap and hand sanitizer against Norwalk virus on contaminated hands. Applied and Environmental Microbiology 2010, 76, 394-399.
|
[
"https://github.com/Ishanki/VIRAS.including"
] |
Cross-scale neutral ecology and the maintenance of biodiversity: Methods and Derivations

James P O'Dwyer
Department of Plant Biology, University of Illinois, Urbana, USA

Stephen J Cornell
Institute of Integrative Biology, University of Liverpool, Liverpool L69 7ZB, UK

DOI: 10.1038/s41598-018-27712-7
arXiv: 1705.07856
Backward Equation Derivation
We consider sessile individuals in d-dimensional space that give birth to conspecifics at a rate (b − ν) and die at rate b. Offspring are distributed relative to their parents according to a dispersal kernel B, i.e. the probability density that an offspring of an individual at position r is located at a position r′ is B(r − r′). We assume that there are no density-dependent interactions, so that there is no zero-sum rule and that species (and, more generally, lineages) do not interact. The difference between the birth and death rates represents the production of offspring that speciated (to a novel species) at birth.

We define P(k, A, r, t) as the probability that a single individual at position r at time t_0 has exactly k conspecific descendants in a region A at time t + t_0 (the dynamics are translationally invariant in time, so this probability does not depend on t_0). We consider a single individual at time 0, and enumerate the possibilities for its lineage during a small ensuing interval ∆t, and the consequent value of P(k, A, r, t):
Process                   Probability                 Probability of k descendants in A at time t
Nothing                   1 − (2b − ν)∆t              P(k, A, r, t − ∆t)
Death                     b∆t                         δ_{k,0}
Birth, offspring at r′    (b − ν)∆t B(r − r′)dr′      Σ_{m=0}^{k} P(m, A, r′, t − ∆t) P(k − m, A, r, t − ∆t)

If the individual dies, then there will be zero descendants (δ_{ij} [= 1 if i = j, 0 otherwise] is the Kronecker delta). If the individual gives birth, then there are two statistically independent lineages, and the total number of descendants in A is the sum of the numbers of descendants from the two lineages.
Combining these three mutually exclusive possibilities gives

P(k, A, r, t) = (1 − (2b − ν)∆t) P(k, A, r, t − ∆t) + b∆t δ_{k,0} + (b − ν)∆t Σ_{m=0}^{k} ∫ P(m, A, r′, t − ∆t) P(k − m, A, r, t − ∆t) B(r − r′) dr′.   (1)
Taking the limit of ∆t going to zero gives:

∂P(k, A, r, t)/∂t = −(2b − ν) P(k, A, r, t) + b δ_{k,0} + (b − ν) Σ_{m=0}^{k} ∫ P(m, A, r′, t) P(k − m, A, r, t) B(r − r′) dr′,   (2)

where the integral runs over all d-dimensional space.
We assume that the dispersal kernel B decays to zero rapidly enough that P(k − m, A, r, t) varies much more slowly in space, and perform a Taylor expansion for P(m, A, r′, t), so that

∫ P(m, A, r′, t) B(r − r′) dr′ = ∫ B(r′ − r) [1 + (r′ − r)·∇ + ((r′ − r)·∇)²/2 + …] P(m, A, r, t) dr′ ≈ P(m, A, r, t) + K∇²P(m, A, r, t),

where K = (1/2) ∫ (r′ − r)² B(r′ − r) dr′ and we have assumed that B is normalised so that ∫ B(r′ − r) dr′ = 1, and has rotational symmetry so that ∫ (r′ − r) B(r′ − r) dr′ = 0. The backward equation can then be written as

∂P(k, A, r, t)/∂t = −(2b − ν) P(k, A, r, t) + b δ_{k,0} + (b − ν) Σ_{m=0}^{k} P(k − m, A, r, t) [P(m, A, r, t) + K∇²P(m, A, r, t)]   (3)
This equation is to be solved for t > 0, with initial condition P(k, A, r, 0) = δ_{k,1} if r is inside the region A, and P(k, A, r, 0) = δ_{k,0} otherwise, representing the fact that at t = 0 the lineage consists of the founding individual only.
Finally, we define the generating function of P as:

G(z, r, t, A) = Σ_{k=0}^{∞} z^k P(k, A, r, t)   (4)
so that

∂G/∂t = (−2b + ν) G + (b − ν) G² + b + D G ∇²G,   (5)

where D = K(b − ν). If ν ≪ b, then D ≈ bσ², where σ² is twice the variance of the dispersal kernel. The boundary condition is that G(z, r, 0, A) = z if r is within the sampling region defined by A (e.g. a line segment of length L in the one-dimensional problem, and an area of a particular geometry in the two-dimensional case), and G(z, r, 0, A) = 1 otherwise.
A similar backward equation can be derived for species that, rather than being sessile and dispersing at birth, performed random walks. This equation turns out to be identical to eqn. (5) except that the factor G in front of the Laplacian is replaced by 1. As discussed later, this factor does not affect the Species Area Curve or Species Abundance Distribution in the biologically relevant limit ν b.
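In the well-mixed limit the spatial term drops out of Eq. (5), leaving the ODE dG/dt = (−2b + ν)G + (b − ν)G² + b, whose right-hand side factors as ((b − ν)G − b)(G − 1): the stable fixed point is G = 1, so every lineage eventually goes extinct whenever ν > 0. A minimal numerical sketch of this limit (parameter values are illustrative, not taken from the paper):

```python
# Well-mixed (spatially homogeneous) limit of Eq. (5): drop the Laplacian and
# integrate dG/dt = -(2b - nu) G + (b - nu) G^2 + b with a fixed-step RK4 scheme.
def rhs(G, b, nu):
    """Right-hand side of the well-mixed ODE; vanishes exactly at G = 1."""
    return -(2.0 * b - nu) * G + (b - nu) * G * G + b

def integrate_G(z, b, nu, t_max, dt=0.01):
    """RK4 integration of the well-mixed generating function G(t), G(0) = z."""
    G = z
    for _ in range(int(t_max / dt)):
        k1 = rhs(G, b, nu)
        k2 = rhs(G + 0.5 * dt * k1, b, nu)
        k3 = rhs(G + 0.5 * dt * k2, b, nu)
        k4 = rhs(G + dt * k3, b, nu)
        G += (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return G

G_late = integrate_G(z=0.5, b=1.0, nu=0.1, t_max=200.0)
```

Linearizing about G = 1 gives a relaxation rate of exactly ν, so t_max ≫ 1/ν is needed to see the convergence to extinction.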
Solutions
Species Area Curve
We now consider an approximation method to find solutions of Eq. (5). Considering first spatial dimension 1, we define:

φ(x, t, L) = 1 − G(z = 0, x, t, L)   (6)

so that φ satisfies:

∂φ/∂t = −νφ − (b − ν)φ² + D(1 − φ) ∂²φ/∂x².   (7)
with an initial condition φ(x, t = 0, L) = R(x, L), where R(x, L) is a rectangular function, equal to zero for x < −L/2 and x > L/2, and equal to one for −L/2 < x < L/2.
The function φ(x, t) is the probability that an individual appearing in a speciation event at a time t in the past, and at location x, will have one or more descendants in the focal region between −L/2 and +L/2 in the present day. In order to derive the Species-Area relationship from this probability distribution (where 'area' indicates the one-dimensional length of the focal region, L), we need to integrate over all speciation events, which in the neutral model occur at a rate νρ per unit time per unit area, where ρ is the equilibrium density of individuals in space. Hence, our goal is to derive a solution for:
S(L) = νρ ∫_{−∞}^{∞} dx ∫_{0}^{∞} dt φ(x, t, L).   (8)
Due to the nonlinearity in Eq. (7), solving for φ(x, t, L) exactly does not seem tractable. But the initial and final conditions for φ suggest a linear approximation, if we treat φ²(x, t, L) ≈ φ(x, t, L) R(x, L). While true at t = 0, and true at late times when φ → 0, this does not hold for general t, and with this approximation for S(L) we would underestimate the number of species at large values of L. The problem is clear: at intermediate times, φ(x, t, L) will be non-zero outside of the focal region, and will interpolate between one and zero inside the focal region.
We therefore handle this discrepancy by approximating φ 2 (x, t, L) as φ 2 (x, t, L) = φ(x, t, L) while x is within the focal region, and outside of the focal region we set φ 2 (x, t, L) ∝ φ(x, t, L) with a (we expect small) constant of proportionality to be determined. In addition, we replace the factor (1 − φ) in front of the Laplacian by 1, because when ν b the dominant contribution to the integral that determined S comes from large values of t, by which time φ 1. This leads to an equation of the form:
∂φ/∂t = −bν_eff φ − b(1 − ν_eff) φ R(x, L) + D ∂²φ/∂x².   (9)
So in fact, we have two equations to solve, as the approximation we have used leads to different equations inside and outside of the focal region defined by L:

∂φ_in/∂t = −b φ_in + D ∂²φ_in/∂x²
∂φ_out/∂t = −bν_eff φ_out + D ∂²φ_out/∂x²   (10)
We can now integrate over time, before solving these equations as a function of space, to obtain:

−bν_eff ρ = −b Φ_in + D ∂²Φ_in/∂x²
0 = −bν_eff Φ_out + D ∂²Φ_out/∂x²   (11)

where Φ_in(x, L) = bν_eff ρ ∫_{0}^{∞} dt φ_in(x, t, L) and Φ_out(x, L) = bν_eff ρ ∫_{0}^{∞} dt φ_out(x, t, L), respectively. Note that we have also introduced a new, effective rate of introduction of new species per unit space and time, bν_eff ρ, instead of νρ, for consistency at small values of L with the term ν_eff Φ_out in Eq. (11). It may seem like we have introduced a free parameter by allowing for ν_eff, but in fact this effective rate is fixed by the large-scale behavior, i.e. as L → ∞. In this limit,
−ν_eff ρ = −Φ_in   (12)

which leads to a solution

S(L → ∞) = ρLν_eff.   (13)
So in order to match the standard neutral result for a well-mixed community, at large scales we have that

ν_eff(ν) = −(ν/(b − ν)) log(ν/b)   (14)
with no free parameters.
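As a quick numerical check of Eq. (14): ν_eff interpolates between ν_eff → 1 as ν → b and ν_eff ≈ (ν/b) log(b/ν) → 0 as ν → 0. A minimal sketch (the function name is ours; b = 1 is an illustrative value):

```python
from math import log

def nu_eff(nu, b):
    """Effective speciation parameter of Eq. (14): -nu/(b - nu) * log(nu/b)."""
    return -nu / (b - nu) * log(nu / b)

# e.g. with b = 1 and nu = 0.01, nu_eff is roughly 0.0465
example = nu_eff(0.01, 1.0)
```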
We can solve the pair of equations (11) by imposing that there is no singular behavior, that Φ_out asymptotes to zero for large x, and that at the boundaries ±L/2 both Φ_out and Φ_in and their first derivatives match. The result is:

Φ_in(x, L) = ν_eff ρ [1 − cosh(mx) / ((1/√ν_eff) sinh(mL/2) + cosh(mL/2))]
Φ_out(x, L) = √ν_eff ρ e^{−√ν_eff m|x−L/2|} sinh(mL/2) / ((1/√ν_eff) sinh(mL/2) + cosh(mL/2))   (15)
where we have defined the inverse length-scale m = √(b/D) = 1/σ, which is related to the standard deviation of the dispersal distance. We now integrate over space to obtain our approximate prediction for the one-dimensional Species-Area relationship.
S = ν_eff ρ [L + (2(1/ν_eff − 1)/m) tanh(mL/2) / ((1/√ν_eff) tanh(mL/2) + 1)]   (16)
In the following subsections we will show that despite our approximation, this provides an extremely accurate prediction of the relationship across a broad range of areas.
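Eq. (16) is straightforward to evaluate; the sketch below (our function names; illustrative parameters) also makes its two limits explicit: S → ρL as L → 0 (at vanishing scale every sampled individual counts as its own species) and S → ν_eff ρL as L → ∞.

```python
from math import log, sqrt, tanh

def nu_eff(nu, b):
    """Eq. (14)."""
    return -nu / (b - nu) * log(nu / b)

def S_1d(L, rho, b, nu, sigma):
    """One-dimensional species-area curve of Eq. (16), with m = 1/sigma."""
    ne = nu_eff(nu, b)
    m = 1.0 / sigma
    th = tanh(m * L / 2.0)
    return ne * rho * (L + (2.0 * (1.0 / ne - 1.0) / m) * th / (th / sqrt(ne) + 1.0))
```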
We now provide the corresponding result in two spatial dimensions. We apply exactly the same approximation, but where now we interpret R(x, y, L) as the 'top-hat' function, which is equal to one inside a circular region of radius L, and is equal to zero outside. We then solve for functions inside and outside of this circular region:
−bν_eff ρ = −b Φ_in + D∇²Φ_in
0 = −bν_eff Φ_out + D∇²Φ_out   (17)
This leads to solutions which depend only on a radial coordinate, r (distance from the origin), and not on the corresponding polar coordinate:
Φ_in(r, L) = ν_eff ρ [1 − I_0(mr) / ((1/√ν_eff) I_1(mL) K_0(√ν_eff mL)/K_1(√ν_eff mL) + I_0(mL))]
Φ_out(r, L) = √ν_eff ρ [I_1(mL) K_0(√ν_eff mr)/K_1(√ν_eff mL)] / ((1/√ν_eff) I_1(mL) K_0(√ν_eff mL)/K_1(√ν_eff mL) + I_0(mL))   (18)
and integrating over all space we find:

S(radius = L) = ν_eff ρ [πL² + (2πL(1/ν_eff − 1)/m) I_1(mL) / ((1/√ν_eff) I_1(mL) K_0(√ν_eff mL)/K_1(√ν_eff mL) + I_0(mL))]   (19)
where again ν_eff = −(ν/(b − ν)) log(ν/b). In terms of sample area A = πL² we can rewrite this as:

S(A) = ν_eff ρ [A + (2√(πA)(1/ν_eff − 1)/m) I_1(m√(A/π)) / ((1/√ν_eff) I_1(m√(A/π)) K_0(m√(Aν_eff/π))/K_1(m√(Aν_eff/π)) + I_0(m√(A/π)))]   (20)
In the main text we replaced the inverse length-scale m with a length-scale σ = 1/m, but both can be related directly to the parameter D we introduced in the formulation of this problem.
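Eq. (20) involves modified Bessel functions; the sketch below uses scipy.special (assuming that library is available; the names S_2d etc. are ours). The same limits hold as in one dimension: S → ρA as A → 0 and S → ν_eff ρA as A → ∞.

```python
from math import log, pi, sqrt
from scipy.special import iv, kv   # modified Bessel functions I_n(x), K_n(x)

def nu_eff(nu, b):
    """Eq. (14)."""
    return -nu / (b - nu) * log(nu / b)

def S_2d(A, rho, b, nu, sigma):
    """Two-dimensional species-area curve of Eq. (20), with m = 1/sigma."""
    ne = nu_eff(nu, b)
    m = 1.0 / sigma
    x = m * sqrt(A / pi)        # argument of I0, I1
    y = m * sqrt(A * ne / pi)   # argument of K0, K1
    denom = iv(1, x) * kv(0, y) / (sqrt(ne) * kv(1, y)) + iv(0, x)
    return ne * rho * (A + (2.0 * sqrt(pi * A) * (1.0 / ne - 1.0) / m) * iv(1, x) / denom)
```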
Species Abundance Distribution
We now consider the same kind of approximation method to find solutions of Eq. (5), but instead of considering just total species richness, we define (again first considering the one-dimensional case):

φ(x, z, t, L) = 1 − G(z, x, t, L)   (21)
so that φ again satisfies:

∂φ/∂t = −νφ − (b − ν)φ² + D(1 − φ) ∂²φ/∂x².   (22)
with an initial condition φ(x, z, t = 0, L) = (1 − z) R(x, L), where R(x, L) is the same rectangular function as above. To obtain the Species Abundance Distribution, we need to integrate over all speciation events, which in the neutral model occur at a rate νρ per unit time per unit area, where ρ is the equilibrium density of individuals in space. Hence, our goal is to derive a solution for:

Ψ(z, L) = S(L) − νρ ∫_{−∞}^{∞} dx ∫_{0}^{∞} dt φ(x, z, t, L).   (23)
where using this definition, Ψ(z, L) is related to the Species Abundance Distribution in a sample taken from the region of size L by
Ψ(z, L) = Σ_{k=1}^{∞} S(k, L) z^k.   (24)
We now extend our previous approximation for the species area curve. We follow the same principle to set φ²(x, z, t, L) ≈ (1 − z) φ(x, z, t, L) when x is within the focal region, while outside of the focal region, we set φ²(x, z, t, L) = g(z) φ(x, z, t, L) for a function g(z) to be determined by the requirement that we match the known behavior at large values of L. We again replace the factor (1 − φ) in front of the Laplacian by 1, as this makes a vanishing difference when ν → 0. This leads to an equation of the form:

∂φ/∂t = −bg(z) φ − (b(1 − z) + νz − bg(z)) φ R(x, L) + D ∂²φ/∂x².   (25)
Again, we have two equations to solve, as this approximation leads to different equations inside and outside of the focal region defined by L:

∂φ_in/∂t = −(b(1 − z) + νz) φ_in + D ∂²φ_in/∂x²
∂φ_out/∂t = −bg(z) φ_out + D ∂²φ_out/∂x²   (26)
We can now integrate over time, before solving these equations as a function of space, to obtain:

−bg(z)(1 − z)ρ = −(b(1 − z) + νz) Φ_in + D ∂²Φ_in/∂x²
0 = −bg(z) Φ_out + D ∂²Φ_out/∂x²   (27)

where Φ_in(x, z, L) = bg(z)ρ ∫_{0}^{∞} dt φ_in(x, z, t, L) and Φ_out(x, z, L) = bg(z)ρ ∫_{0}^{∞} dt φ_out(x, z, t, L), respectively. Note that we have also introduced a new, effective rate of introduction of new species per unit space and time, bg(z)ρ, instead of νρ, for consistency at small values of L with the term bg(z)Φ_out in Eq. (27). The function g(z) is then fixed by the large-scale behavior. In this limit,
bg(z)(1 − z)ρ = (b(1 − z) + νz) Φ_in   (28)

which leads to a large-scale solution

Ψ(z, L → ∞) = ρL (ν/(b − ν)) log(b/ν) − ρL g(z)(1 − z)/(1 − z + νz/b).   (29)
So in order to match the standard neutral result for a well-mixed, non-zero-sum community [1], which is

Ψ(z, L) = Σ_{k=1}^{∞} S_nzs(k) z^k = Σ_{k=1}^{∞} (νρL/(b − ν)) (1/k) ((b − ν)/b)^k z^k = −(νρL/(b − ν)) log(1 − ((b − ν)/b) z)   (30)

we need to set

g(z) = ((1 − z + νz/b)/(1 − z)) (ν/(b − ν)) log((b − (b − ν)z)/ν).   (31)
We note also that g(0) = ν_eff, and so this approximation simply reduces to the approximation we used to derive the Species-Area Curve above; our solution above for φ(x, t, L) is equal to φ(x, z = 0, t, L) here, so that our approximation for the Species-Area curve satisfies these same equations but with z = 0 (as it should do).
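The three functions involved here, and the consistency checks just mentioned (g(0) = f(0) = ν_eff, and h interpolating from h(0) = 1 to h(1) = ν/b), can be sketched as follows (our function names; Eq. numbers refer to (31) and the definitions of h and f used below it):

```python
from math import log

def h(z, b, nu):
    """h(z) = (1 - z) + (nu/b) z."""
    return (1.0 - z) + nu * z / b

def f(z, b, nu):
    """f(z) = nu/(b - nu) * log((b - (b - nu) z)/nu)."""
    return nu / (b - nu) * log((b - (b - nu) * z) / nu)

def g(z, b, nu):
    """g(z) = h(z) f(z)/(1 - z), the combination fixed by Eq. (31)."""
    return h(z, b, nu) * f(z, b, nu) / (1.0 - z)

def nu_eff(nu, b):
    """Eq. (14)."""
    return -nu / (b - nu) * log(nu / b)
```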
The result of solving this pair of equations is essentially the same as above. The dependence on z of the parameters does not affect the form of the solution as a function of x, it just changes the parametrization of that solution:

Φ_in(x, z, L) = f(z)ρ [1 − cosh(√h(z) mx) / (√((1 − z)/f(z)) sinh(√h(z) mL/2) + cosh(√h(z) mL/2))]
Φ_out(x, z, L) = √(f(z)(1 − z)) ρ e^{−√g(z) m|x−L/2|} sinh(√h(z) mL/2) / (√((1 − z)/f(z)) sinh(√h(z) mL/2) + cosh(√h(z) mL/2))   (32)

where again m = √(b/D) = 1/σ in the main text, and for ease of notation we introduce

h(z) = (1 − z) + (ν/b) z
f(z) = (ν/(b − ν)) log((b − (b − ν)z)/ν)   (33)
as in the main text, and such that g(z), f (z) and h(z) are related by g(z) = h(z)f (z)/(1 − z). We now integrate over space to obtain our approximate prediction for the one-dimensional Species Abundance Distribution.
Ψ_1d(z, L) = S(L) − ρ f(z) L + (2ρ/(m√h(z))) (f(z) − (1 − z)) sinh(√h(z) mL/2) / (√((1 − z)/f(z)) sinh(√h(z) mL/2) + cosh(√h(z) mL/2))   (34)
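A sketch of Eq. (34) (our notation, with the √h(z) and √((1 − z)/f(z)) factors written out explicitly), together with two internal consistency checks: Ψ_1d(0, L) = 0, since Ψ is a power series starting at k = 1, and Ψ_1d(z → 1, L) → S(L), recovering total richness.

```python
from math import cosh, log, sinh, sqrt, tanh

def nu_eff(nu, b):
    return -nu / (b - nu) * log(nu / b)

def h(z, b, nu):
    return (1.0 - z) + nu * z / b

def f(z, b, nu):
    return nu / (b - nu) * log((b - (b - nu) * z) / nu)

def S_1d(L, rho, b, nu, sigma):
    """Species-area curve, Eq. (16)."""
    ne, m = nu_eff(nu, b), 1.0 / sigma
    th = tanh(m * L / 2.0)
    return ne * rho * (L + 2.0 * (1.0 / ne - 1.0) / m * th / (th / sqrt(ne) + 1.0))

def Psi_1d(z, L, rho, b, nu, sigma):
    """Generating function of the 1d sample SAD, Eq. (34)."""
    m = 1.0 / sigma
    hz, fz = h(z, b, nu), f(z, b, nu)
    sh = sinh(sqrt(hz) * m * L / 2.0)
    ch = cosh(sqrt(hz) * m * L / 2.0)
    denom = sqrt((1.0 - z) / fz) * sh + ch
    return (S_1d(L, rho, b, nu, sigma) - rho * fz * L
            + 2.0 * rho / (m * sqrt(hz)) * (fz - (1.0 - z)) * sh / denom)
```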
The same approach in two spatial dimensions, for a circular region of radius L and area A = πL², leads to:

Φ_in(r, z, L) = f(z)ρ [1 − I_0(√h(z) mr) / (√((1 − z)/f(z)) I_1(√h(z) mL) K_0(√g(z) mL)/K_1(√g(z) mL) + I_0(√h(z) mL))]
Φ_out(r, z, L) = √(f(z)(1 − z)) ρ [I_1(√h(z) mL) K_0(√g(z) mr)/K_1(√g(z) mL)] / (√((1 − z)/f(z)) I_1(√h(z) mL) K_0(√g(z) mL)/K_1(√g(z) mL) + I_0(√h(z) mL))   (35)
and

Ψ_2d(z, A) = S(A) − ρ f(z) A + [2(f(z) − (1 − z))/√h(z)] ρ √(Aπσ²) I_1(√(h(z)A/(πσ²))) / [I_0(√(h(z)A/(πσ²))) + √((1 − z)/f(z)) (K_0(√(g(z)A/(πσ²)))/K_1(√(g(z)A/(πσ²)))) I_1(√(h(z)A/(πσ²)))].   (36)
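As in one dimension, Eq. (36) satisfies Ψ_2d(0, A) = 0 by construction: the z-independent S(A) of Eq. (20) cancels against the z → 0 limit of the remaining terms, using g(0) = f(0) = ν_eff. A sketch (our names; scipy assumed for the Bessel functions; illustrative parameters):

```python
from math import log, pi, sqrt
from scipy.special import iv, kv   # modified Bessel functions I_n(x), K_n(x)

def nu_eff(nu, b):
    return -nu / (b - nu) * log(nu / b)

def h(z, b, nu):
    return (1.0 - z) + nu * z / b

def f(z, b, nu):
    return nu / (b - nu) * log((b - (b - nu) * z) / nu)

def g(z, b, nu):
    return h(z, b, nu) * f(z, b, nu) / (1.0 - z)

def S_2d(A, rho, b, nu, sigma):
    """Species-area curve, Eq. (20)."""
    ne, m = nu_eff(nu, b), 1.0 / sigma
    x, y = m * sqrt(A / pi), m * sqrt(A * ne / pi)
    denom = iv(1, x) * kv(0, y) / (sqrt(ne) * kv(1, y)) + iv(0, x)
    return ne * rho * (A + 2.0 * sqrt(pi * A) * (1.0 / ne - 1.0) / m * iv(1, x) / denom)

def Psi_2d(z, A, rho, b, nu, sigma):
    """Generating function of the 2d sample SAD, Eq. (36)."""
    hz, fz, gz = h(z, b, nu), f(z, b, nu), g(z, b, nu)
    x = sqrt(hz * A / (pi * sigma**2))
    y = sqrt(gz * A / (pi * sigma**2))
    denom = iv(0, x) + sqrt((1.0 - z) / fz) * kv(0, y) / kv(1, y) * iv(1, x)
    return (S_2d(A, rho, b, nu, sigma) - rho * fz * A
            + 2.0 * (fz - (1.0 - z)) / sqrt(hz) * rho * sqrt(A * pi * sigma**2) * iv(1, x) / denom)
```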
Universal behaviour as ν → 0
Here, we show that the species abundance distributions given by our model exhibit the same universality property found in simulations by Rosindell and Cornell [2], i.e. that the species abundance distributions form a family of curves, parametrised by the single parameter Aν/(bσ²).
First, we show that the exact solution to the backward equation for φ has this scaling property. If we define

Q = ((b − ν)/ν) φ,  T = νt,  X = x √(ν/D),

then eqn. (7) becomes

∂Q/∂T = −Q − Q² + (1 − (ν/(b − ν)) Q) ∂²Q/∂X² → −Q − Q² + ∂²Q/∂X² as ν → 0

and the initial condition becomes Q(X, T = 0) = ((b − ν)(1 − z)/ν) R(X, L√(ν/D)). Therefore, Q only depends on the parameters through the combinations

Z = (b − ν)(1 − z)/ν,  Y = Aν/D = Aν/(bσ²)
i.e. Q = Q̃(X, T, Z, Y) for some function Q̃. The generating function for the abundance distribution in 2D is then given by (see eqn. (23))

Ψ = S(A) − νρ ∫ d²x ∫_{0}^{∞} dt φ(x, t)
  = S(A) − (ρbσ²/(b − ν)) ∫ d²X ∫_{0}^{∞} dT Q̃(X, T, Z, Y)
  = S(A) − (ρbσ²/(b − ν)) Ψ̃(Z, Y)
  → S(A) − ρσ² Ψ̃(Z, Y) + O(ν)   (37)

for some function Ψ̃.
While we do not have an expression for the exact solution Ψ, we can verify that our approximate solution Ψ_2d has the same scaling behaviour.

Substituting 1 − z = Zν/(b − ν) and A = Y bσ²/ν, we get

h(z) = (ν/b)(1 + Z) + O(ν²)
f(z) = (ν/b) log(1 + Z) + O(ν²)
g(z) = (ν/b)(1 + Z) log(1 + Z)/Z + O(ν²)
[(f(z) − (1 − z))/√h(z)] √(Aπσ²) = (log(1 + Z) − Z) σ² √(πY/(1 + Z)) + O(ν)
h(z)A/(πσ²) = Y(1 + Z)/π + O(ν)
g(z)A/(πσ²) = Y(1 + Z) log(1 + Z)/(πZ) + O(ν)

so eqn. (36) becomes

Ψ_2d = S(A) − ρσ² Y log(1 + Z) + 2ρσ² (log(1 + Z) − Z) √(πY/(1 + Z)) I_1(√(Y(1 + Z)/π)) / [I_0(√(Y(1 + Z)/π)) + √(Z/log(1 + Z)) (K_0(√(Y(1 + Z) log(1 + Z)/(πZ)))/K_1(√(Y(1 + Z) log(1 + Z)/(πZ)))) I_1(√(Y(1 + Z)/π))] + O(ν)

which is of the same form as eqn. (37).
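This collapse can be probed numerically: holding (Z, Y) fixed while shrinking ν, the scaled quantity (S(A) − Ψ_2d(z, A))/(ρσ²) — the z-dependent part of Eq. (36) — should approach a ν-independent value, with O(ν) corrections. A sketch (our names; scipy assumed for the Bessel functions):

```python
from math import log, pi, sqrt
from scipy.special import iv, kv

def h(z, b, nu):
    return (1.0 - z) + nu * z / b

def f(z, b, nu):
    return nu / (b - nu) * log((b - (b - nu) * z) / nu)

def g(z, b, nu):
    return h(z, b, nu) * f(z, b, nu) / (1.0 - z)

def scaled_sad(nu, Z, Y, b=1.0, sigma=1.0):
    """(S(A) - Psi_2d(z, A))/(rho sigma^2), the z-dependent part of Eq. (36),
    evaluated at z = 1 - Z nu/(b - nu) and A = Y b sigma^2/nu."""
    z = 1.0 - Z * nu / (b - nu)
    A = Y * b * sigma**2 / nu
    hz, fz, gz = h(z, b, nu), f(z, b, nu), g(z, b, nu)
    x = sqrt(hz * A / (pi * sigma**2))
    y = sqrt(gz * A / (pi * sigma**2))
    denom = iv(0, x) + sqrt((1.0 - z) / fz) * kv(0, y) / kv(1, y) * iv(1, x)
    return (fz * A - 2.0 * (fz - (1.0 - z)) / sqrt(hz)
            * sqrt(A * pi * sigma**2) * iv(1, x) / denom) / sigma**2
```

For example, at (Z, Y) = (5, 2), the values for ν = 10⁻⁵ and ν = 10⁻⁶ should agree up to relative corrections of order ν.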
We have thus shown that the generating function of the abundance distribution is a function of two parameter combinations only. We will now show that this is equivalent to the observation that the species abundance distribution is a one-parameter family of curves [2]:
S(k, A) = ν S̃(νk, Aν/(bσ²)).

(Note that the expression in ref. [2] describes the scaling of logarithmic Preston classes of abundance, and hence is missing the prefactor ν.) To see how this is related to our scaling expression for Ψ(z, A), we write

Ψ(z, A) = Σ_{k=1}^{∞} z^k S(k, A) ≈ ∫_{1}^{∞} e^{k log z} ν S̃(νk, Aν/(bσ²)) dk = ∫_{ν}^{∞} e^{(m/ν) log z} S̃(m, Y) dm,

where m = νk. We need to proceed with caution in case S̃ has a non-integrable singularity in its first argument. Abundance in Preston classes of low order appears from Fig. 2 in the paper to approach a finite limit, so we assume that S̃ ∼ s(Y)/(νk) at small (νk). Without loss of generality, we write

Ψ(z, A) = ∫_{ν}^{∞} [ (1/m) s(Y) e^{−m} + ( e^{(m/ν) log z} S̃(m, Y) − (1/m) s(Y) e^{−m} ) ] dm
 = s(Y) Ei(ν) + f(log z/ν, Y) + O(ν)
 = s(Y) Ei(ν) + f(log(1 − Zν/(b − ν))/ν, Y) + O(ν)
 = s(Y) Ei(ν) + f(Z, Y) + O(ν),

where Ei(x) = ∫_{x}^{∞} (e^{−y}/y) dy is the exponential integral and f is a (finite) function of two arguments. The component of this expression that depends on Z takes the same scaling form as found above for the backward equation model. The term that is independent of Z does not contribute to any particular S(k, A), but does contribute to the total species richness, and indeed we can identify

S(A) = Ψ(1, A) = s(Y) Ei(ν) + f(0, Y)
Biological Interpretation of the Approximation Method
By approximating the non-linear term in the defining backward Equation (7) by a heterogeneous linear term, we found the pair of equations (11) to solve for the species area relationship:
−bν_eff ρ = −b Φ_in + D ∂²Φ_in/∂x²
0 = −bν_eff Φ_out + D ∂²Φ_out/∂x²
(For simplicity we work in one spatial dimension, but the interpretations are identical in 2d.) We could equally well interpret these not just as an approximation to Eq. (7), but as a biological model in their own right. If we do so, can we reinterpret these equations and understand biologically why this linear approximation works? Eqs. (11) constitute a system where there is only mortality (driving loss of species from the focal, sample area), dispersal, and input from speciation. This might be expected, since species are only removed from the focal region when there is a mortality event, and only added when there is a speciation event landing in the focal region, or dispersal in from outside. However, in these equations the effective rates of species loss are different for species which originated outside the focal region (rate ν_eff), versus those that originated inside the focal region (rate b), and that is what we must explain.

Remembering that in the original derivation above of Eq. (7), the per capita birth rate was b − ν and the mortality rate was b, this interpretation of the rate of species loss in the equation for Φ_in becomes clear: species are lost from the focal region at the rate at which a single individual dies. I.e. we are approximating that a species which originates within −L/2 < x < L/2 will fail to be found at time t in this sample region only if it goes extinct, and this rate of loss is approximated by the rate of loss b of a single individual. For species outside the sample region, the rate of loss from mortality is bν_eff = (bν/(b − ν)) log(b/ν). What is this number? In fact, it is equal to b/⟨n⟩, where ⟨n⟩ is the expected population size of an extant species (i.e. total number of individuals divided by total number of extant species). On average, a species originating outside the focal region at any point in the past is lost from the focal region at an effective rate b/⟨n⟩.
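The identity bν_eff = b/⟨n⟩ can be checked against the well-mixed log-series SAD of Eq. (30), for which S(k) ∝ (1/k)((b − ν)/b)^k. A numerical sketch (illustrative values; variable names are ours):

```python
from math import log

b, nu = 1.0, 0.01
x = (b - nu) / b                  # log-series parameter of Eq. (30)
kmax = 5000                       # truncation; x**kmax is negligibly small here

# S(k) ∝ x**k / k, so <n> = sum_k k S(k) / sum_k S(k)
sk = [x**k / k for k in range(1, kmax + 1)]
mean_n = sum(k * s for k, s in zip(range(1, kmax + 1), sk)) / sum(sk)

nu_eff = -nu / (b - nu) * log(nu / b)
per_species_loss = b / mean_n     # should equal b * nu_eff
```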
Comparison with Field Theory/Forward-in-time Equations
In an earlier paper, one of the authors derived a forward-in-time approach to these same spatial neutral models [3]. In that approach, we also began with the case of a spatially-discrete landscape. Here we will recap the basic features and approximation we made in that paper, and where they break down relative to our current approach. We work here with individuals that diffuse across the landscape, rather than sessile individuals, but as discussed above this does not change the patterns of abundance and occupancy when the speciation rate is small.
We first describe the state of the discrete system using the probability distribution P(…, n_i, …, t) that there are n_i individuals at each spatial location i at time t, belonging to a focal species. Individuals in this spatially-discrete model die with a per capita mortality rate d, produce new offspring at a per capita birth rate b, and may transfer to nearest-neighbour cells at a rate D̃. In addition, there is a speciation process, modeled as immigration from outside the system at a rate k̃ from 0 to 1 individuals:

∂P({n_i}, t)/∂t = d Σ_i (n_i + 1) P(…, n_i + 1, …, t) − d Σ_i n_i P(…, n_i, …, t)
 + b Σ_i (n_i − 1) P(…, n_i − 1, …, t) − b Σ_i n_i P(…, n_i, …, t)
 + D̃ Σ_i Σ_{e} [(n_i + 1) P(…, n_i + 1, n_e − 1, …, t) − n_i P(…, n_i, n_e, …, t)]
 + k̃ Σ_i δ_{n_i,1} Π_{j≠i} δ_{n_j,0} − k̃ Σ_i Π_j δ_{n_j,0}   (38)
We could also remove this last term, introducing speciation, and thus allow each species to reach permanent extinction. We would then sum the contributions to the present-day state from all species that originated at some point in the past, assuming a uniform speciation rate across time and space. Before taking the limit of continuous space, we rewrite the dynamics of our discrete community in terms of a moment generating function. This generating function is defined by a sum over all spatial configurations of individuals:
Z(…, h_i, …, t) = Σ_{{n_k}} P(…, n_i, …, t) e^{Σ_j h_j n_j}.   (39)
Rewriting Eq. (38) in terms of this generating function, we find a new defining equation:

∂Z/∂t = d Σ_i (∂Z/∂h_i)(e^{−h_i} − 1) + b Σ_i (∂Z/∂h_i)(e^{h_i} − 1) + D̃ Σ_i Σ_{e} (∂Z/∂h_i)(e^{h_e − h_i} − 1) + k̃ Σ_i (e^{h_i} − 1).   (40)
Taking a Continuum Limit
We denote the lattice spacing by ∆, and define the continuum limit as follows:

Σ_i → ∆^{−d} ∫ d^d x,  h_i → H(x),  ∂/∂h_i → ∆^d δ/δH(x),  D̃ → D/∆²,  k̃ → k∆^d   (41)

Finally, to define the continuum limit for the sum over nearest neighbours, we consider a square, d-dimensional lattice:

Σ_{e} (e^{h_e − h_i} − 1) → Σ_{k=1}^{d} [ exp(∆ ∂H/∂x_k + (∆²/2) ∂²H/∂x_k² + …) + exp(−∆ ∂H/∂x_k + (∆²/2) ∂²H/∂x_k² + …) − 2 ] = ∆² [∇²H(x) + (∇H(x))²] + O(∆³)   (42)
With these identifications, the multivariate generating function Eq. (39) becomes a functional of the source H(x):

Z[H(x), t] = ⟨e^{∫ dx H(x) n(x)}⟩   (43)

where ⟨n(x)⟩, ⟨n(x_1)n(x_2)⟩, etc. are expectation values of the number densities and correlations of individuals as a function of spatial location. This generating functional satisfies the continuum limit of Eq. (40), the following functional differential equation:

∂Z/∂t = d ∫ d^d x (δZ/δH(x)) (e^{−H(x)} − 1) + b ∫ d^d x (δZ/δH(x)) (e^{H(x)} − 1) + D ∫ d^d x (δZ/δH(x)) [∇²H(x) + (∇H(x))²] + k ∫ d^d x (e^{H(x)} − 1)
 = ∫ d^d x (e^{H(x)} − 1) [ −d e^{−H(x)} δZ/δH(x) + b δZ/δH(x) + D∇²(e^{−H(x)} δZ/δH(x)) + k ].   (44)
where we have performed an integration by parts and assumed that the source H(x) vanishes at infinity. Finally, we make a change of variables for the source,

J(x) = e^{H(x)} − 1   (45)

so that, regarded as a functional of the new source J (i.e. Z[J(x), t] ≡ Z[H(x) = log(J(x) + 1), t]), Z satisfies

∂Z/∂t = ∫ d^d x J(x) [ (b − d) δZ/δJ(x) + bJ(x) δZ/δJ(x) + D∇² δZ/δJ(x) + k ].   (46)
Equal Time Correlation Functions
The n-point spatial correlation functions for this model, taken at equal times, t, satisfy a set of partial differential equations. These equations are obtained by expanding Z[J(x), t] as a functional Taylor series:
Z[J(x), t] = 1 + ∫ d^d x c_1(x, t) J(x) + (1/2) ∫ d^d x_1 d^d x_2 c_2(x_1, x_2, t) J(x_1) J(x_2) + …   (47)
The coefficients of this Taylor series are obtained by taking functional derivatives of Eq. (46) with respect to J, and then setting J = 0. For the first two orders we have:

∂c_1/∂t = (b − d) c_1 + D∇²c_1 + k
∂c_2/∂t = 2b c_1 δ(x_1 − x_2) + [ D(∇_1² + ∇_2²) + 2(b − d) ] c_2(x_1, x_2, t)   (48)
Each successive order relies only on solutions for correlation functions of lower order, and so the system of linear partial differential equations can be solved exactly, given a set of initial data.
Species Area Relationship
We now consider the time-independent probability P(N, L) that at late times there are N individuals in a given sample region extending from −L to +L. The generating function of this probability is:

ψ(j, L) = Σ_{N=0}^{∞} P(N, L) (1 + j)^N = ⟨e^{log(1+j) ∫_L dx n(x)}⟩.   (49)
To underline the interpretation: P(N, L) is the (assumed time-independent) solution for the probability distribution that we will find N individuals in the region between −L and +L at late times. The second equality arises because this generating function can be obtained by setting J(x) = j Rect(x, L) in the late-time, time-independent solution for Z[J, t], where Rect(x, L) is the rectangular function in 1d; i.e. we set J(x) to zero outside the sample region and equal to j inside. The expected number of species in the sampling region defined by L is proportional to 1 − P(0, L), and so this is the quantity we are aiming to solve for. If we can solve for this generating function then we have P(0, L) = ψ(−1, L). This is also known as the empty interval function, e.g. [4]. To find ψ(−1, L), we next define the modified moments, which have insertions of e^{log(1+j) ∫_L n(x) dx} compared with the usual moments:
f_1(x, j, L) = δZ[J]/δJ(x) |_{J = jRect(x,L)} = (1/(1 + jRect(x, L))) ⟨n(x) e^{log(1+j) ∫_L n(y) dy}⟩   (50)
f_2(x, y, j, L) = δ²Z[J]/δJ(x)δJ(y) |_{J = jRect(x,L)} = …   (51)
We note that the reason for using these functions is that:

∂ψ/∂j = (1/(1 + j)) ⟨∫_L dx n(x) e^{log(1+j) ∫_L n(y) dy}⟩ = ∫_L dx f_1(x, j, L),   (52)

and hence if we can solve for f_1(x, j, L) we will have the empty interval function, P(0, L).
We can obtain differential equations for these modified moments by taking successive functional derivatives of Eq. (46):

∂/∂t (δZ[J, t]/δJ(x)) = D∇² δZ/δJ(x) + (bJ(x) + b − d) δZ/δJ(x) + k + ∫ dy J(y) (δ/δJ(x)) [ D∇² δZ/δJ(y) + (bJ(y) + b − d) δZ/δJ(y) + k ]
∂/∂t (δ²Z[J, t]/δJ(x)δJ(y)) = …   (53)

etc. So setting J(x) = jRect(x, L) and time derivatives equal to zero in these equations, we have a kind of moment hierarchy:

0 = D∇²_x f_1(x, j, L) + (bjRect(x, L) + b − d) f_1(x, j, L) + k + j ∫_L dy [ D∇²_y f_2(x, y, j, L) + (bj + b − d) f_2(x, y, j, L) + b δ(x − y) f_1(y, j, L) ]   (54)
0 = D∇²_x f_2(x, y, j, L) + …   (55)
So far there is no approximation. In our earlier paper we truncated and solved the first equation in this hierarchy:
0 = D∇² f_1(x, j, L) + (bjRect(x, L) + b − d) f_1(x, j, L) + k   (56)
While giving a qualitatively accurate description of the shape of the SAR, this earlier approximation becomes quantitatively inaccurate as the speciation rate becomes small. It also fails to give a good description of the Species Abundance Distribution.
Numerical Inversion of the SAD Generating Function
We implemented a method described in [5] for numerical contour integration using Cauchy's theorem. Our annotated and documented code will be freely available on the O'Dwyer lab GitHub repository.
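The method in [5] recovers coefficients of a generating function via Cauchy's integral formula, c_k = (1/(2πi)) ∮ Ψ(z) z^{−(k+1)} dz, with the contour integral discretised by the trapezoidal rule on a circle inside the disc of analyticity. The paper's own implementation lives in the O'Dwyer lab repository; the sketch below is a minimal stand-alone version of the idea, exercised on a toy log-series generating function (not the paper's Ψ):

```python
import cmath

def series_coeff(psi, k, n_pts=256, radius=0.5):
    """k-th Taylor coefficient of psi, via the trapezoidal rule for the
    Cauchy integral of psi(z) z^(-(k+1)) dz / (2 pi i) on |z| = radius."""
    acc = 0.0 + 0.0j
    for j in range(n_pts):
        theta = 2.0 * cmath.pi * j / n_pts
        z = radius * cmath.exp(1j * theta)
        acc += psi(z) * cmath.exp(-1j * k * theta)
    return (acc / (n_pts * radius**k)).real

# toy check: psi(z) = -log(1 - a z) has coefficients a^k / k (a log-series SAD)
a = 0.9
coeff_3 = series_coeff(lambda z: -cmath.log(1.0 - a * z), 3)   # expect a**3 / 3
```

The trapezoidal rule is spectrally accurate here: for a function analytic beyond the contour, the error in c_k is an aliasing term of order c_{k+n_pts} × radius^{n_pts}, which is negligible for moderate n_pts.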
Uncertainty in Estimating Speciation Rates
In our analysis of empirical data shown in Fig. 3 of the main text, we draw on earlier work [6] fitting the speciation rate ν and dispersal length-scale σ using multiple 1 ha plots distributed across Panama. One other important conclusion from this work was that the best-fit parameter for ν was highly variable across different locations (e.g. in comparison with Panama, two South American locations were fitted with speciation rates that were orders of magnitude smaller). Here, we repeat the analysis of Figure 3, but using one of these lower fitted speciation rates (and the corresponding value of σ, alongside the local density of individuals in Ecuador). Clearly, this cannot be a complete analysis, as we are no longer providing the best fit to the large-scale Panamanian data. But it is interesting to ask whether these much lower speciation rates can significantly change either the local prediction for overall species richness, or whether the shape of the local SAD becomes more realistic. We show this analysis in Fig. S1. The result is that we do find a larger expected species richness locally, but still not close to the observed number of species at BCI. At the same time, the shape of the species abundance distribution is even more skewed away from rare species.

Figure S1 (Comparison of Observed and Predicted Abundances): Neutral predictions at BCI, using Yasuni fitted parameters. Neutral predictions are generated by fitting our spatial neutral model using large-scale data reported and analyzed in [6], but now using the lower reported speciation rate fitted using data surrounding the Yasuni CTFS plot in Ecuador (ν = 1.7 × 10⁻¹¹, rather than ν = 5 × 10⁻⁸), alongside the other parameters σ and density ρ fitted at Yasuni (which were of the same order of magnitude as those in Panama).

The results show that these large-scale fits still produce a local-scale prediction for species abundances that both underestimates local species richness, compared with observed data, and still skews abundances from rare to more abundant species.
[1] I. Volkov, J. R. Banavar, S. P. Hubbell, and A. Maritan. Neutral theory and relative species abundance in ecology. Nature, 424:1035-1037, 2003.
[2] J. Rosindell and S. J. Cornell. Universal scaling of species-abundance distributions across multiple scales. Oikos, 122(7):1101-1111, 2013.
[3] J. P. O'Dwyer and J. L. Green. Field theory for biogeography: a spatially-explicit model for predicting patterns of biodiversity. Ecology Letters, 13:87-95, 2010.
[4] C. Doering and D. Ben-Avraham. Diffusion-limited coagulation in the presence of particle input: exact results in one dimension. Phys. Rev. Lett., 62:2563-2566, 1989.
[5] F. Bornemann. Accuracy and stability of computing high-order derivatives of analytic functions by Cauchy integrals. Foundations of Computational Mathematics, 11(1):1-63, 2011.
[6] R. Condit, N. Pitman, E. G. Leigh, J. Chave, J. Terborgh, R. B. Foster, P. Nunez, S. Aguilar, S. Valencia, G. Villa, H. C. Muller-Landau, E. Losos, and S. P. Hubbell. Beta-diversity in tropical forest trees. Science, 295(5555):666-669, 2002.
DOI: 10.1007/978-3-642-45005-1_18
arXiv: 1304.1697
Verification of Artifact-Centric Systems: Decidability and Modeling Issues
Dmitry Solomakhin
Free University of Bozen-Bolzano
Piazza Domenicani 3, 39100 Bolzano, Italy
Marco Montali
Free University of Bozen-Bolzano
Piazza Domenicani 3, 39100 Bolzano, Italy
Sergio Tessaris
Free University of Bozen-Bolzano
Piazza Domenicani 3, 39100 Bolzano, Italy
Riccardo De Masellis [email protected]
Sapienza Università di Roma
Via Ariosto 25, 00185 Rome, Italy
Keywords: artifact-centric systems, guard-stage-milestone, formal verification
Artifact-centric business processes have recently emerged as an approach in which processes are centred around the evolution of business entities, called artifacts, giving equal importance to control-flow and data. The recent Guard-Stage-Milestone (GSM) approach provides means for specifying business artifact lifecycles in a declarative manner, using constructs that match how executive-level stakeholders think about their business. However, it turns out that formal verification of GSM is undecidable even for very simple propositional temporal properties. We attack this challenging problem by translating GSM into a well-studied formal framework. We exploit this translation to isolate an interesting class of "state-bounded" GSM models for which verification of sophisticated temporal properties is decidable. We then introduce some guidelines to turn an arbitrary GSM model into a state-bounded, verifiable model.
Introduction
In the last decade, a plethora of graphical notations (such as BPMN and EPCs) have been proposed to capture business processes. Independently from the specific notation at hand, formal verification has been generally considered a fundamental tool in the process design phase, supporting the modeler in building correct and trustworthy process models [16]. Intuitively, formal verification amounts to checking whether the possible executions of a business process model satisfy some desired properties, like generic correctness criteria (such as deadlock freedom or executability of activities) or domain-dependent constraints. To enable formal verification and other forms of reasoning support, the business process language is translated into a corresponding formal representation, which typically relies on variants of Petri nets [1], transition systems [2], or process algebras [18]. Properties are then formalized using temporal logics, and model checking techniques are used to actually carry out the verification tasks [8].
A common drawback of classical process modeling approaches is that they are activity-centric: they mainly focus on the control-flow perspective, lacking the connection between the process and the data manipulated during its executions. This is also reflected in the corresponding verification techniques, which often abstract away from the data component. This "data and process engineering divide" affects many contemporary process-aware information systems, increasing the amount of redundancies and potential errors in the development phase [12]. To tackle this problem, the artifact-centric paradigm has recently emerged as an approach in which processes are guided by the evolution of business data objects, called artifacts [17,9]. A key aspect of artifacts is coupling the representation of the data of interest, called information model, with lifecycle constraints, which specify the acceptable evolutions of the data maintained by the information model. On the one hand, new modeling notations are being proposed to tackle artifact-centric processes. A notable example is the Guard-Stage-Milestone (GSM) graphical notation [10], which corresponds to the way executive-level stakeholders conceptualize their processes [7]. On the other hand, formal foundations of the artifact-centric paradigm are being investigated in order to capture the relationship between processes and data and support formal verification [11,5,15]. Two important issues arise in this setting. First, verification formalisms must go beyond propositional temporal logics, and incorporate first-order formulae to express constraints about the evolution of data and to query the information model of artifacts. Second, formal verification becomes much more difficult than for classical activity-centric approaches, even undecidable in the general case.
In this work, we tackle the problem of automated verification of GSM models. First of all, we show that verifying GSM models is indeed a very challenging issue, being undecidable in general even for simple propositional reachability properties. We then provide a sound and complete encoding of GSM into Data-Centric Dynamic Systems (DCDSs), a recently developed formal framework for data- and artifact-centric processes [15]. This encoding allows us to reproduce in the GSM context the decidability and complexity results recently established for DCDSs with bounded information models (state-bounded DCDSs). These are DCDSs where the number of tuples does not exceed a given maximum value. This does not mean that the system must contain an overall bounded number of data: along a run, infinitely many data can be encountered and stored into the information model, provided that they do not accumulate in the same state. We lift this property to the context of GSM, and show that verification of state-bounded GSM models is decidable for a powerful temporal logic, namely a variant of first-order µ-calculus supporting a restricted form of quantification [13]. We then isolate an interesting class of GSM models for which state-boundedness is guaranteed, and introduce guidelines that can be employed to turn any GSM model into a state-bounded, verifiable model. The rest of the paper is organized as follows. Section 2 gives an overview of GSM and provides a first undecidability result. Section 3 introduces DCDSs and presents the GSM-DCDS translation. Section 4 introduces "state-bounded" GSM models and provides key decidability results. Discussion and conclusion follow.
GSM modeling of Artifact-Centric Systems
The foundational character of artifact-centric business processes is the combination of the static properties, i.e., the data of interest, and the dynamic properties of a business process, i.e., how it evolves. Artifacts, the key business entities of a given domain, are characterized by (i) an information model that captures business-relevant data, and (ii) a lifecycle model that specifies how the artifact progresses through the business. In this work, we focus on the Guard-Stage-Milestone (GSM) approach for artifact-centric modeling, recently proposed by IBM [10]. GSM is a declarative modeling framework that has been designed with the goal of being executable and at the same time high-level enough to be intuitive to executive-level stakeholders. The GSM information model uses (possibly nested) attribute/value pairs to capture the domain of interest.
The key elements of a lifecycle model are stages, milestones and guards. Stages are (hierarchical) clusters of activities (tasks), intended to update and extend the data of the information model. They are associated to milestones, business operational objectives to be achieved when the stage is under execution. Guards control the activation of stages and, like milestones, are described in terms of data-aware expressions, called sentries, involving events and conditions over the artifact information model. Sentries have the form on e if cond, where e is an event and cond is an (OCL-based) condition over data. Both parts are optional, supporting pure event-based or condition-based sentries. Tasks represent the atomic units of work. Basic tasks are used to update the information model of some artifact instance (e.g., by using the data payload associated to an incoming event). Other tasks are used to add/remove a nested tuple. A specific create-artifact-instance task is instead used to create a new instance of a given artifact type; this is done by means of a two-way service call, where the result is used to create a new tuple for the artifact instance, assign a new identifier to it, and fill it with the result's payload. Obviously, another task exists to remove a given artifact instance. In the following, we use model for the intensional level of a specific business process described in GSM, and instance to denote a GSM model with specific data for its information model.
The execution of a business process may involve several instances of artifact types described by a GSM model. At any instant, the state of an artifact instance (snapshot) is stored in its information model, and is fully characterised by: (i) values of attributes in the data model, (ii) status of its stages (open or closed) and (iii) status of its milestones (achieved or invalidated). Artifact instances may interact with the external world by exchanging typed events. In fact, tasks are considered to be performed by an external agent, and their corresponding execution is captured with two event types: a service call, whose instances are populated by the data from information model and then sent to the environment; and a service call return, whose instances represent the corresponding answer from the environment and are used to incorporate the obtained result back into the artifact information model. The environment can also send unsolicited (one-way) events, to trigger specific guards or milestones. Additionally, any change of a status attribute, such as opening a stage or achieving a milestone, triggers an internal event, which can be further used to govern the artifact lifecycle.
Example 1. Figure 1 shows a simple order management process modeled in GSM. The process centers around an order artifact, whose information model is characterized by a set of status attributes (tracking the status of stages and milestones), and by an extendible set of ordered items, each constituted by a code and a quantity. The order lifecycle contains three top-level atomic stages (rounded rectangles), respectively used to manage the manipulation of the order, its payment, and the delivery of a payment receipt. The order management stage contains a task (rectangle) to add items to the order. It opens every time an itemRequest event is received, provided that the order has not yet been paid. This is represented using a logical condition associated to a guard (diamond). The stage closes when the task is executed, by achieving an "item added" milestone (circle). A payment can be executed once a payRequest event is issued, provided that the order contains at least one item (verified by the OCL condition order.items → exists). As soon as the order is paid, and the corresponding milestone achieved, the receipt delivery stage is opened. This direct dependency is represented using a dashed arrow, which is a shortcut for the condition on "Order paid", representing the internal event of achieving the "Order paid" milestone.

Fig. 1: GSM model of a simple order management process
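To make the sentry semantics of this example concrete, here is a minimal, hypothetical Python encoding of a sentry "on e if cond"; the names (make_sentry, pay_guard) are ours, not part of the GSM framework:

```python
# Hypothetical mini-encoding (names are ours) of the sentries in this
# example: a sentry "on e if cond" fires when the incoming event matches
# e and cond holds over the artifact's information model.

def make_sentry(event_name=None, condition=None):
    def fires(event, snapshot):
        if event_name is not None and event != event_name:
            return False
        if condition is not None and not condition(snapshot):
            return False
        return True
    return fires

# Guard of the payment stage: on payRequest if order.items->exists
pay_guard = make_sentry("payRequest",
                        lambda order: len(order["items"]) > 0)

empty_order = {"items": [], "paid": False}
filled_order = {"items": [("A13", 2)], "paid": False}
```

Both the event part and the condition part are optional, mirroring the pure event-based and pure condition-based sentries described above.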
Operational semantics of GSM
GSM is associated to three well-defined, equivalent execution semantics, which discipline the actual enactment of a GSM model [10]. Among these, the GSM incremental semantics is based on a form of Event-Condition-Action (ECA) rules, called Prerequisite-Antecedent-Consequent (PAC) rules, and is centered around the notion of GSM Business steps (B-steps). An artifact instance remains idle until it receives an incoming event from the environment. It is assumed that such events arrive in a sequence and get processed by artifact instances one at a time. A B-step then describes what happens to an artifact snapshot Σ when a single incoming event e is incorporated into it, i.e., how it evolves into a new snapshot Σ′ (see Figure 5 in [10]). Σ′ is constructed by building a sequence of pre-snapshots Σ_i, where Σ_1 results from incorporating e into Σ by updating its attributes, one at a time, according to the event payload (i.e., its carried data). Each subsequent pre-snapshot Σ_i is obtained by applying one of the PAC rules to the previous pre-snapshot Σ_{i-1}. Each of such transitions is called a micro-step. During a micro-step, some outgoing events directed to the environment may be generated. When no more PAC rules can be applied, the last pre-snapshot is returned as Σ′, and the entire set of generated events is sent to the environment. Each PAC rule is associated to one or more GSM constructs (e.g., stage, milestone) and has three components:
- Prerequisite: this component refers to the initial snapshot Σ and determines if a rule is relevant to the current B-step processing an incoming event e.
- Antecedent: this part refers to the current pre-snapshot Σ_i and determines whether the rule is eligible for execution, or executable, at the next micro-step.
- Consequent: this part describes the effect of firing a rule, which can be nondeterministically chosen in order to obtain the next pre-snapshot Σ_{i+1}.
Due to nondeterminism in the choice of the next firing rule, different orderings among the PAC rules can exist, leading to non-intuitive outcomes. This is avoided in the GSM operational semantics by using an approach reminiscent of stratification in logic programming. In particular, the approach (i) exploits implicit dependencies between the (structure of) PAC rules to fix an ordering on their execution, and (ii) applies the rules according to such ordering [10]. To guarantee B-step executability, avoiding situations in which the execution indefinitely loops without reaching a stable state, the GSM incremental semantics implements a so-called toggle-once principle. This guarantees that a sequence of micro-steps, triggered by an incoming event, is always finite, by ensuring that each status attribute can change its value at most once during a B-step. This requirement is implemented by an additional condition in the prerequisite part of each PAC rule, which prevents it from firing twice.
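The incremental semantics just described (event incorporation, ordered micro-steps, toggle-once termination) can be mimicked by a small interpreter. The following Python sketch is illustrative only: the rule set is a toy example of ours, not the actual GSM PAC rules.

```python
# Illustrative interpreter (ours) for the incremental semantics with a
# toy rule set; not the actual GSM PAC rule machinery.

def b_step(snapshot, event, pac_rules):
    """pac_rules: list of (prerequisite, antecedent, consequent) triples,
    assumed sorted according to the dependency-induced ordering."""
    pre = dict(snapshot)
    pre.update(event)                          # first micro-step: incorporate event e
    # prerequisites are evaluated against the *initial* snapshot
    relevant = [r for r in pac_rules if r[0](snapshot, event)]
    fired = set()                              # "toggle-once": each rule fires at most once
    progress = True
    while progress:                            # each iteration is one micro-step
        progress = False
        for i, (_, antecedent, consequent) in enumerate(relevant):
            if i not in fired and antecedent(pre):
                consequent(pre)                # mutates the current pre-snapshot
                fired.add(i)
                progress = True
                break                          # restart the scan to respect the ordering
    return pre                                 # stable snapshot

# Toy rules: open a stage on itemRequest, then achieve its milestone.
rules = [
    (lambda s, e: e.get("itemRequest", False),
     lambda p: not p["stage_open"],
     lambda p: p.update(stage_open=True)),
    (lambda s, e: True,
     lambda p: p["stage_open"] and not p["milestone"],
     lambda p: p.update(milestone=True, stage_open=False)),
]
snap = b_step({"stage_open": False, "milestone": False},
              {"itemRequest": True}, rules)
```

Because each rule fires at most once, the micro-step loop necessarily terminates, which is exactly what the toggle-once principle guarantees.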
The evolution of a GSM system composed by several artifacts can be described by defining the initial state (initial snapshot of all artifact instances) and the sequence of event instances generated by the environment, each of which triggers a particular B-step, producing a sequence of system snapshots. This perspective intuitively leads to the representation of a GSM model as an infinite-state transition system, depicting all possible sequences of snapshots supported by the model. The initial configuration of the information model represents the initial state of this transition system, and the incremental semantics provides the actual transition relation. The source of infinity lies in the payload of incoming events, used to populate the information model of artifacts with fresh values (taken from an infinite/arbitrary domain). Since such events are not under the control of the GSM model, the system must be prepared to process such events in every possible order, and with every acceptable configuration for the values carried in the payload. The analogy to transition systems opens the possibility of using a formal language, e.g., a (first-order variant of) temporal logic, to verify whether the GSM system satisfies certain desired properties and requirements. For example, one could test generic correctness properties, such as checking whether each milestone can be achieved (and each stage will be opened) in at least one of the possible system executions, or that whenever a stage is opened, it will always be possible to eventually achieve one of its milestones. Furthermore, the modeler could also be interested in verifying domain-specific properties, such as checking whether for the GSM model in Figure 1 it is possible to obtain a receipt before the payment is processed.
Undecidability in GSM
In this section, we show that verifying the infinite-state transition system representing the execution semantics of a given GSM model is an extremely challenging problem, undecidable even for a very simple propositional reachability property.
Theorem 1. There exists a GSM model for which verification of a propositional reachability property is undecidable.
Proof. To show undecidability of verification, we illustrate that a Turing machine can be easily captured in GSM, and that the halting problem can be stated in terms of a verification problem. In particular, we consider a deterministic, single-tape Turing machine M = ⟨Q, Σ, q_0, δ, q_f⟩, where Q is a finite set of (internal) states, Σ = {0, 1, ⊔} is the tape alphabet (with ⊔ the blank symbol), q_0 ∈ Q and q_f ∈ Q are the initial and final states, and δ ⊆ (Q \ {q_f}) × Σ × Q × Σ × {L, R} is a transition relation.
We assume, wlog, that δ consists of k right-shift transitions R_1, ..., R_k (those having R as last component), and n left-shift transitions L_1, ..., L_n (those having L as last component). Besides the status attributes, the GSM information model is constituted by: (i) a curState slot containing the current internal state q ∈ Q; (ii) a curCell slot pointing to the cell where the head of M is currently located; (iii) a collection of cells representing the current state of the tape. Each cell is a complex nested record constituted by a value v ∈ Σ, and two pointers prev and next used to link the cell to the previous and next cells. In this way, the tape is modeled as a linked list, which initially contains a single, blank cell, and which is dynamically extended as needed. To mark the initial (resp., last) cell of the tape, we assume that its prev (resp., next) cell is null.
On top of this information model, a GSM lifecycle that mimics M is shown in Figure 2, where, due to space constraints, only the right-shift transitions are depicted (the left-shift ones are symmetric). The schema consists of two top-level stages. The Init stage is used to initialize the tape. The Transition stage is instead used to mimic the execution of one of the transitions in δ. Each transition is decomposed into two sub-stages: state update and head shift. The state update is modeled by one among k + n atomic sub-stages, each handling the update that corresponds to one of the transitions in δ. These stages are mutually exclusive, M being deterministic. Consider for example a right-shift transition R_i = (q_{R_i}, v_{R_i}, q′_{R_i}, v′_{R_i}, R) ∈ δ (the treatment is similar for a left-shift transition). The corresponding state update stage is opened whenever the current state is q_{R_i}, and the value contained in the cell pointed by the head is v_{R_i} (this can be extracted from the information model using the query curCell.value). The incoming arrows from the two guards of the parent stage ensure that this condition is evaluated as soon as the parent stage is opened; hence, if the condition is true, the state update stage is immediately executed.
When the state update stage is closed, the achievement of the corresponding milestone triggers one of the guards of the Right shift stage that handles the head shift. It contains two sub-stages: the first one extends the tape if the head is currently pointing to the last cell, while the second one just performs the shift. Whenever a right or left shift stage achieves the corresponding milestone, the parent Transition stage is closed as well, achieving milestone "Transition done". This has the effect of re-opening the Transition stage again, so as to evaluate the next transition to be executed. An alternative way of immediately closing the Transition stage occurs when the current state corresponds to the final state q_f. In this case, milestone "Halt" is achieved, and the execution terminates (no further guards are triggered).
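The construction can be sanity-checked with a direct simulation that mirrors the information model: a linked-list tape, curState and curCell slots, and on-demand tape extension. The following Python sketch is ours, for illustration only; it runs a deterministic machine until the "Halt" condition is met or a step budget is exhausted.

```python
# A direct sketch (ours) of the proof's information model: the tape is a
# linked list of cells starting from a single blank cell, extended on
# demand exactly as the "extend tape" sub-stage does.

BLANK = " "

class Cell:
    def __init__(self, value=BLANK):
        self.value, self.prev, self.next = value, None, None

def run(delta, q0, qf, max_steps=10000):
    """delta: dict (state, symbol) -> (state', symbol', 'L' or 'R').
    Returns qf if the machine halts within max_steps, else None."""
    state, cell = q0, Cell()                  # curState / curCell slots
    for _ in range(max_steps):
        if state == qf:                       # milestone "Halt" achieved
            return state
        state, cell.value, move = delta[(state, cell.value)]
        if move == "R":
            if cell.next is None:             # head on last cell: extend tape
                cell.next = Cell()
                cell.next.prev = cell
            cell = cell.next
        else:
            if cell.prev is None:             # head on first cell: extend tape
                cell.prev = Cell()
                cell.prev.next = cell
            cell = cell.prev
    return None

# Toy machine: write two 1s moving right, then halt.
delta = {("q0", BLANK): ("q1", "1", "R"),
         ("q1", BLANK): ("qf", "1", "R")}
```

The step budget only exists so the sketch always terminates; the GSM encoding itself, like the machine, may of course run forever, which is precisely why reachability of "Halt" is undecidable.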
By considering this construction, the halting problem for M can be rephrased as the following verification problem: given the GSM model encoding M, and starting from an initial state where the information model is empty, is it possible to reach a state where the "Halt" milestone is achieved? Notice that, since M is deterministic, the B-steps of the corresponding GSM model constitute a linear computation, which could eventually reach the "Halt" milestone or continue indefinitely. Therefore, reaching a state where "Halt" is achieved can be equivalently formulated using propositional CTL or LTL.
Translation into Data-Centric Dynamic Systems
We discuss a translation procedure that faithfully rewrites a GSM model into a corresponding formal representation in terms of a Data-Centric Dynamic System (DCDS), for which interesting decidability results have been recently obtained.
DCDSs are a formal framework for the specification of data-aware business processes, i.e., systems where the connection between the process perspective and the manipulated data is explicitly tackled [3]. Technically, a DCDS is a pair S = ⟨D, P⟩, where D is a data layer and P is a process layer over D. D maintains all the relevant data in the form of a relational database with integrity constraints. In the artifact-centric context, the database is constituted by the union of all the artifacts' information models. The process layer P changes and evolves the data maintained by D. It is constituted by a tuple P = ⟨F, A, ϱ⟩. F is a finite set of functions representing interfaces to external services, used to import new, fresh data into the system. A is a set of actions of the form α(p_1, ..., p_n) : {e_1, ..., e_m}, where α is the action name, p_1, ..., p_n are input parameters, and the e_i are effect specifications. Each effect specification defines how a portion of the next database instance is constructed starting from the current one. Technically, its form is Q ⇝ E, where: (i) Q is a query over D that could involve action parameters, and is meant to extract tuples from the current database; (ii) E is a set of effects, specified in terms of facts over D that will be asserted in the next state; these facts can contain variables of Q (which are then replaced with actual values extracted from the current database), and also service calls, which are resolved by calling the service with actual input parameters and substituting them with the obtained result. Finally, ϱ is a declarative process specified in terms of Condition-Action (CA) rules that determine, at any moment, which actions are executable. Technically, each CA rule has the form Q → α, where Q is a query over D, and α is an action. Whenever Q has a positive answer over the current database, then α becomes executable, with actual values for its parameters given by the answer to Q.
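The interplay of queries, effect specifications and service calls can be illustrated with a toy interpreter for a single effect Q ⇝ E. All names below are ours, and the external service is stubbed with a fresh-value counter:

```python
# Toy interpreter (ours) for a single DCDS effect Q ~> E: the query
# extracts bindings from the current database instance, and the effect
# template builds the facts asserted in the next instance, resolving
# service calls through a stubbed external service.
import itertools

_fresh = itertools.count(1000)        # stands in for the external environment

def service_call(*args):
    return next(_fresh)               # the service returns an arbitrary fresh value

def apply_effect(db, query, effect):
    """db: set of tuples; query: db -> iterable of bindings (dicts);
    effect: binding -> set of facts for the next database instance."""
    next_db = set()
    for binding in query(db):
        next_db |= effect(binding)
    return next_db

# Example: for every Order(id), call a service and assert Priced(id, price).
db = {("Order", 1), ("Order", 2)}
query = lambda d: ({"id": t[1]} for t in d if t[0] == "Order")
effect = lambda b: {("Priced", b["id"], service_call(b["id"]))}
next_db = apply_effect(db, query, effect)
```

The stub makes the source of infinity tangible: each service call may return a value never seen before, so the set of reachable database instances is unbounded in general.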
The execution semantics of a DCDS S is defined by a possibly infinite-state transition system Υ_S, where states are instances of the database schema in D and each transition corresponds to the application of an executable action in P. Similarly to GSM, where the source of infinity comes from the fact that incoming events carry an arbitrary payload, in DCDSs the source of infinity lies in the service calls, which can inject arbitrary fresh values into the system.
We recall some key (un)decidability and complexity results related to DCDSs, which will then be used to study the formal verification of GSM.

Theorem 2 ([3]). Verification of propositional reachability properties over DCDSs is undecidable in general.

This result comes from the high expressiveness of DCDSs. In fact, we will see that DCDSs can encode GSM. However, alongside this undecidability result, [3] identifies an interesting class of state-bounded DCDSs, for which decidability of verification holds for a sophisticated (first-order) temporal logic called µL_P. Intuitively, state-boundedness requires the existence of an overall bound that limits, at every point in time, the size of the database instance of S (without posing any restriction on which values can appear in the database). Equivalently, the size of each state contained in Υ_S cannot exceed the pre-established bound. Hence, in the following we will indifferently talk about state-bounded DCDSs or state-bounded transition systems.
Theorem 3 ([3]). Verification of µL_P properties over state-bounded DCDSs is decidable, and can be reduced to finite-state model checking of propositional µ-calculus.

µL_P is a first-order variant of µ-calculus, a rich branching-time temporal logic that subsumes all well-known temporal logics such as PDL, CTL, LTL and CTL* [13]. µL_P employs first-order formulae to query the data maintained by the DCDS data layer, and supports a controlled form of first-order quantification across states (within and across runs). In particular, µL_P requires that the values in the scope of quantification continuously persist for the quantification to take effect. As soon as a value is not present in the current database anymore, a formula talking about it collapses to true or false. This restriction is in line with the artifact-centric setting, where a given artifact identifier points to the same artifact while such an artifact is live, but as soon as the artifact is destroyed, it can be recycled to identify a completely different artifact (and it would be incorrect to consider it the same as before).
Example 2. µL_P can express two variants of a correctness requirement for GSM:
- it is always true that, whenever an artifact id is present in the information model, the corresponding artifact will be destroyed (i.e., the id will disappear) or reach a state where all its stages are closed;
- it is always true that, whenever an artifact id is present in the information model, the corresponding artifact will persist until a state is reached where all its stages are closed.
Translating GSM into DCDS
For the sake of space, we only discuss the intuition behind the translation and provide the main results. For a full technical development, we refer the interested reader to a technical report [19]. As introduced in Section 2.1, the execution of a GSM instance is described by a sequence of B-steps. Each B-step consists of an initial micro-step which incorporates the incoming event into the current snapshot, a sequence of micro-steps executing all applicable PAC rules, and finally a micro-step sending the set of generated events at the termination of the B-step. The translation relies on the incremental semantics: given a GSM model G, we encode each possible micro-step as a separate condition-action rule in the process of a corresponding DCDS system S, such that the effect of the action on the data and process layers coincides with the effect of the corresponding micro-step in GSM. However, in order to guarantee that the transition system induced by the resulting DCDS mimics the one of the GSM model, the translation procedure should also ensure that all the semantic requirements described in Section 2.1 are modeled properly: (i) the "one-message-at-a-time" and "toggle-once" principles, (ii) the finiteness of micro-steps within a B-step, and (iii) their order imposed by the model. We sustain these requirements by introducing into the data layer of S a set of auxiliary relations, suitably recalling them in the CA-rules to reconstruct the desired behaviour.
Restricting S to process only one incoming message at a time is implemented by the introduction of a blocking mechanism, represented by an auxiliary relation R_block(id_R, blocked) for each artifact in the system, where id_R is the artifact instance identifier, and blocked is a boolean flag. This flag is set to true upon receiving an incoming message, and is then reset to false at the termination of the corresponding B-step, once the outgoing events accumulated in the B-step are sent to the environment. If an artifact instance has blocked = true, no further incoming event will be processed. This is enforced by checking the flag in the condition of each CA-rule associated to the artifact.
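The blocking mechanism can be sketched as follows; this is an illustrative Python rendering of the R_block flag, and the class and method names are ours:

```python
# Illustrative rendering (ours) of the per-artifact blocking flag: an
# incoming event is accepted only when no B-step is in progress, and the
# flag is released once the accumulated outgoing events are sent.

class ArtifactInstance:
    def __init__(self, art_id):
        self.art_id = art_id
        self.blocked = False              # mirrors R_block(id_R, blocked)
        self.outgoing = []

    def receive(self, event):
        """Accept an incoming event only if the instance is not blocked."""
        if self.blocked:                  # "one-message-at-a-time"
            return False
        self.blocked = True
        return True

    def emit(self, event):
        self.outgoing.append(event)       # events generated during the B-step

    def finish_b_step(self):
        """Send the accumulated events and release the flag."""
        sent, self.outgoing = self.outgoing, []
        self.blocked = False
        return sent
```

In the actual translation the same check is not a method call but a conjunct in the condition of every CA-rule associated to the artifact.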
In order to ensure the "toggle-once" principle and guarantee the finiteness of the sequence of micro-steps triggered by an incoming event, we introduce an eligibility tracking mechanism. This mechanism is represented by an auxiliary relation R_exec(id_R, x_1, ..., x_c), where c is the total number of PAC rules, and each x_i corresponds to a certain PAC rule of the GSM model. Each x_i encodes whether the corresponding PAC rule is eligible to fire at a given moment in time (i.e., a particular micro-step). The initial setup of the eligibility tracking flags is performed at the beginning of a B-step, based on the evaluation of the prerequisite condition of each PAC rule. More specifically, when x_i = 0, the corresponding CA-rule is eligible to apply and has not yet been considered for application. When instead x_i = 1, then either the rule has been fired, or its prerequisite turned out to be false. This flag-based approach is used to propagate in a compact way information related to the PAC rules that have already been processed, following a mechanism that resembles dead path elimination in BPEL. In fact, R_exec is also used to enforce a firing order of CA-rules that follows the one induced by G. This is achieved as follows. For each CA-rule Q → α corresponding to a given PAC rule r, condition Q is put in conjunction with a further formula, used to check whether all the PAC rules that precede r according to the ordering imposed by G have already been processed. The encoding of a PAC rule as a CA-rule is shown in Figure 4; for simplicity, multiple parameters are there compacted using an "array" notation (e.g., x_1, ..., x_n is denoted by x).
Only in this case can r be considered for application, consequently applying its effect α to the current artifact snapshot. More specifically, the corresponding CA-rule becomes Q ∧ exec(r) → α, where exec(r) = ∧_i x_i, with i ranging over the indexes of those rules that precede r.

Fig. 4: CA-rule encoding a milestone invalidation upon stage activation:

R_exec(id_R, x) ∧ x_k = 0 ∧ exec(k) ∧ R_block(id_R, true) →   (1)
a_exec^k(id_R, a', x) : {   (2)
  R_att(id_R, a, s, m) ∧ R_chg^Sj(id_R, true) ⇝ {R_att(id_R, a, s, m)[m_j/false]}   (3)
  R_att(id_R, a, s, m) ∧ R_chg^Sj(id_R, true) ⇝ {R_chg^mj(id_R, false)}   (4)
  R_exec(id_R, x) ∧ x_k = 0 ⇝ {R_exec(id_R, x)[x_k/1]}   (5)
  [CopyMessagePools]   (6)
  [CopyRest] }   (7)

[Fig. 4: Construction of the B-step transition system Υ_G and the unblocked-state transition system Υ_S for a GSM model G with initial snapshot s_0 and the corresponding DCDS S]
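The eligibility vector and the exec(r) guard can be sketched in Python (an illustrative encoding under assumed names, not the paper's exact DCDS rules): rule r fires at most once, and only after every rule preceding it with respect to the precedence graph G has been processed.

```python
# Sketch of the eligibility-tracking vector R_exec and the exec(r) guard:
# x[r] == 0 means rule r is still eligible, x[r] == 1 means it was processed
# (either fired or discarded because its prerequisite was false).

def exec_guard(x, preds, r):
    return x[r] == 0 and all(x[i] == 1 for i in preds.get(r, []))

def run_b_step(rules, preds):
    """Fire each PAC rule at most once ("toggle once"), respecting the order."""
    x = {r: 0 for r in rules}
    fired = []
    progress = True
    while progress:
        progress = False
        for r, (prerequisite, effect) in rules.items():
            if exec_guard(x, preds, r):
                if prerequisite():
                    effect()
                    fired.append(r)
                x[r] = 1      # processed either way (cf. dead path elimination)
                progress = True
    return fired, x            # all flags 1 => the B-step is about to finish

log = []
rules = {
    "r1": (lambda: True,  lambda: log.append("open stage")),
    "r2": (lambda: False, lambda: log.append("never")),   # prerequisite false
    "r3": (lambda: True,  lambda: log.append("invalidate milestone")),
}
preds = {"r3": ["r1", "r2"]}   # r3 may fire only after r1 and r2 are processed
fired, x = run_b_step(rules, preds)
print(fired)                           # ['r1', 'r3']
print(all(v == 1 for v in x.values())) # True: B-step can be closed
```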
Once all x_i flags are switched to 1, the B-step is about to finish: a dedicated CA-rule is enabled to send the outgoing events to the environment, and the artifact instance blocked flag is released.

Example 3. An example of the translation of a GSM PAC-rule (indexed by k) is presented in Figure 3. For simplicity, multiple parameters are compacted using an "array" notation (e.g., x_1, ..., x_n is denoted by x). In particular: (1) represents the condition part of a CA-rule, ensuring the "toggle-once" principle (x_k = 0), the compliant firing order (exec(k)) and the "one-message-at-a-time" principle (R_block(id_R, true)); (2) describes the action signature; (3) is an effect encoding the invalidation of a milestone once the stage has been activated; (4) propagates an internal event denoting the milestone invalidation, if needed; (5) flags the encoded micro-step corresponding to PAC rule k as processed; (6) transports the unaffected data into the next snapshot.

Given a GSM model G with initial snapshot S_0, we denote by Υ_G its B-step transition system, i.e., the infinite-state transition system obtained by iteratively applying the incremental GSM semantics starting from S_0 and nondeterministically considering each possible incoming event. The states of Υ_G correspond to stable snapshots of G, and each transition corresponds to a B-step. We abstract away from the single micro-steps constituting a B-step, because they represent temporary intermediate states that are not interesting for verification purposes. Similarly, given the DCDS S obtained from the translation of G, we denote by Υ_S its unblocked-state transition system, obtained by starting from S_0 and iteratively applying, nondeterministically and in all possible ways, the CA-rules of the process and the corresponding actions. As for states, we only consider those database instances in which no artifact instance is blocked; these correspond in fact to stable snapshots of G.
We then connect two such states provided that there is a sequence of (intermediate) states that leads from the first to the second one, and in which at least one artifact instance is blocked; this sequence corresponds in fact to a series of intermediate steps evolving the system from one stable state to another stable state. Finally, we project away all the auxiliary relations introduced by the translation mechanism, obtaining a filtered version of Υ_S, which we denote as Υ_S|_G. The intuition about the construction of these two transition systems is given in Figure 4. Notice that the intermediate micro-steps in the two transition systems can be safely abstracted away because: (i) thanks to the toggle-once principle, they do not contain any "internal" cycle; (ii) respecting the firing order imposed by G, they all lead to the same next stable/unblocked state. We can then establish the one-to-one correspondence between these two transition systems in the following theorem (refer to [19] for the complete proof):
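The construction of the filtered system Υ_S|_G can be sketched as a small graph collapse (an illustrative toy, with made-up state names and a plain successor relation): unblocked states are kept, and two of them are connected whenever a path of blocked intermediate states leads from one to the other.

```python
# Sketch: collapse micro-steps (blocked states) into single B-step transitions
# between stable (unblocked) states.

def collapse_micro_steps(transitions, blocked):
    """transitions: dict state -> list of successor states."""
    stable = [s for s in transitions if s not in blocked]
    collapsed = {s: set() for s in stable}
    for s in stable:
        frontier = list(transitions.get(s, []))
        seen = set()
        while frontier:
            t = frontier.pop()
            if t in seen:
                continue
            seen.add(t)
            if t in blocked:                     # micro-step: keep exploring
                frontier.extend(transitions.get(t, []))
            else:
                collapsed[s].add(t)              # next stable snapshot reached
    return collapsed

# s0 --(block)--> m1 --> m2 --(unblock)--> s1
transitions = {"s0": ["m1"], "m1": ["m2"], "m2": ["s1"], "s1": []}
blocked = {"m1", "m2"}
print(collapse_micro_steps(transitions, blocked))  # {'s0': {'s1'}, 's1': set()}
```

The absence of internal cycles among the micro-steps (toggle-once) is what guarantees that the exploration of blocked states terminates.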
Theorem 4. Given a GSM model G and its translation into a corresponding DCDS S, the corresponding B-step transition system Υ_G and filtered unblocked-state transition system Υ_S|_G are equivalent, i.e., Υ_G ≡ Υ_S|_G.

To complete the proof of Lemma 1, it remains to show that all auxiliary relations present in Υ_S are bounded. We discuss each auxiliary relation separately. The artifact blocking relation R_block keeps a boolean flag for each artifact instance, so its cardinality depends on the number of instances in the model. Since the model is state-bounded, the number of artifact instances is bounded and so is R_block. The eligibility tracking table R_exec stores, for each artifact instance, a boolean vector describing the applicability of each PAC rule. Since the number of instances is bounded, and so is the set of PAC rules, the relation R_exec is also bounded. Similarly, one can show the boundedness of R_chg^mi and R_chg^sj, due to the fact that the number of stages and milestones is fixed a priori. Let us now analyze the internal message pools. By construction, S may contain at most one tuple in R_data^msgk and R_data^srvp for each artifact instance. This is enforced by the blocking mechanism R_block, which blocks the artifact instance at the beginning of a B-step and prevents the instance from injecting further events into the internal pools. The outgoing message pool R_out^msgq may contain as many tuples per artifact instance as the number of atomic stages in the model, which is still bounded. Moreover, neither incoming nor outgoing messages are accumulated in the internal pools across B-steps, since the final micro-step of the B-step is designed not to propagate any of the internal message pools to the next snapshot. Therefore, Υ_S is state-bounded.
From the combination of Theorems 3 and 4 and Lemma 1, we directly obtain:
Theorem 5. Verification of µL P properties over state-bounded GSM models is decidable, and can be reduced to finite-state model checking of propositional µ-calculus.
Obviously, in order to guarantee verifiability of a given GSM model, we need to understand whether it is state-bounded or not. However, state-boundedness is a "semantic" condition, which is undecidable to check [15]. We mitigate this problem by isolating a class of GSM models that is guaranteed to be state-bounded. We show, however, that even very simple GSM models (such as the one in Fig. 1) are not state-bounded, and thus we provide some modelling strategies to make any GSM model state-bounded.
GSM Models without Artifact Creation. We investigate the case of GSM models that do not contain any create-artifact-instance task. Without loss of generality, we assimilate the creation of nested datatypes (such as those created by the "add item" task in Example 1) to the creation of new artifacts. From the formal point of view, we can in fact consider each nested datatype as a simple artifact with an empty lifecycle and its own information model, including a connection to its parent artifact.

Corollary 1. Verification of µL_P properties over GSM models without create-artifact-instance tasks is decidable.
Proof. Let G be a GSM model without create-artifact-instance tasks. At each stable snapshot Σ_k, G can either process an event representing an incoming one-way message, or the termination of a task. We claim that the only source of state-unboundedness are the service-call returns related to the termination of create-artifact-instance tasks. In fact, one-way incoming messages, as well as other service-call returns, do not increase the size of the data stored in the GSM information model, because the payload of such messages just substitutes the values of the corresponding data attributes, according to the signature of the message. Similarly, by an inspection of the proof of Lemma 1, we know that across the micro-steps of a B-step, status attributes are modified but their size does not change. Furthermore, a bounded number of outgoing events could be accumulated in the message pools, but this information is then flushed at the end of the B-step, thus bringing the size of the overall information model back to the size present at the beginning of the B-step. Therefore, without create-artifact-instance tasks, the size of the information model in each stable state is constant, and corresponds to the size of the initial information model. We can then apply Theorem 5 to get the result.

[Fig. 5: Unbounded execution of the GSM model in Fig. 1]
Arbitrary GSM Models. The types of models studied in the paragraph above are quite restrictive, because they forbid the possibility of extending the number of artifacts during the execution of the system. On the other hand, as soon as this is allowed, even very simple GSM models, such as the one shown in Fig. 1, may become state-unbounded. In that example, the source of state-unboundedness lies in the stage containing the "add item" task, which could be triggered an unbounded number of times due to continuous itemRequest incoming events, as pointed out in Fig. 5. This, in turn, is caused by the fact that the modeler left the GSM model underspecified, without providing any hint about the maximum number of items that can be included in an order. To overcome this issue, we require the modeler to supply such information (stating, e.g., that each order is associated with at most 10 items). Technically, the GSM model under study has to be parameterized by an arbitrary but finite number N_max, which denotes the maximum number of artifact instances that can coexist in the same execution state. We call this kind of GSM model instance-bounded. A possible policy to provide such a bound is to allocate available "slots" for each artifact type of the model, i.e., to specify a maximum number N_Ai for each artifact type A_i, then having N_max = Σ_i N_Ai. In order to incorporate the artifact bounds into the execution semantics, we proceed as follows. First, we pre-populate the initial snapshot of the considered GSM instance with N_max blank artifact instances (respecting the relative proportion given by the local maximum numbers for each artifact type). We refer to one such blank artifact instance as an artifact container. Along the system execution, each container may be: (i) filled with concrete data carried by an actual artifact instance of the corresponding type, or (ii) flushed to the initial, blank state.
To this end, each artifact container is equipped with an auxiliary flag fr_i, which reflects its current state: fr_i is false when the container stores a concrete artifact instance, and true otherwise. Then, the internal semantics of create-artifact-instance is changed so as to check the availability of a blank artifact container. In particular, when the corresponding service call is to be invoked with the new artifact instance data, the calling artifact instance selects the next available blank artifact container, sets its flag fr_i to false, and fills it with the payload of the service call. If all containers are occupied, the calling artifact instance waits until some container is released. Symmetrically to artifact creation, the deletion procedure for an artifact instance is managed by turning the corresponding container flag fr_i to true. Details on the DCDS CA-rules formalizing creation/deletion of artifact instances according to these principles can be found in [19].
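The container-based realization can be sketched as a fixed pool of slots (a hedged toy implementation; the class and method names are illustrative, not taken from [19]): creation occupies a free container by flipping its fr_i flag, and deletion flushes it back to blank.

```python
# Sketch of the container pool for instance-bounded GSM models:
# N_max containers are pre-allocated; the "free" field plays the role of fr_i.

class Container:
    def __init__(self):
        self.free = True               # fr_i flag: True = blank container
        self.payload = None

class ContainerPool:
    def __init__(self, n_max):
        self.containers = [Container() for _ in range(n_max)]

    def create_instance(self, payload):
        """Occupy the next free container; None means the caller must wait."""
        for i, c in enumerate(self.containers):
            if c.free:
                c.free, c.payload = False, payload
                return i
        return None                    # all slots occupied: creation delayed

    def delete_instance(self, i):
        """Flush a container back to the blank state (fr_i := true)."""
        c = self.containers[i]
        c.free, c.payload = True, None

pool = ContainerPool(2)                # N_max = 2
print(pool.create_instance({"order": 1}))   # 0
print(pool.create_instance({"order": 2}))   # 1
print(pool.create_instance({"order": 3}))   # None: must wait for a free slot
pool.delete_instance(0)
print(pool.create_instance({"order": 3}))   # 0: slot reused
```

Note how the pool never grows: infinitely many instances can flow through it over time, but at most N_max coexist in any state.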
We observe that, following this container-based realization strategy, the information model of an instance-bounded GSM model has a fixed size, which polynomially depends on the total maximum number N_max. The new implementation of create-artifact-instance does not really change the size of the information model, but just suitably changes its content. Therefore, Corollary 1 directly applies to instance-bounded GSM models, guaranteeing decidability of their verification. Finally, notice that infinitely many different artifact instances can be created and manipulated, provided that they do not accumulate in the same state (exceeding N_max).
Discussion and related work
In this work we have provided the foundations for the formal verification of the GSM artifact-centric paradigm. After having proven undecidability of verification in the general case, we have shown decidability of verification for a very rich first-order temporal logic, tailored to the artifact-centric setting, for an interesting class of state-bounded GSM models.

So far, only a few works have investigated verification of GSM models. The closest approach to ours is [6], where state-boundedness is also used as a key property towards decidability. The main difference between the two approaches is that decidability for state-bounded GSM models is proven for temporal logics of incomparable expressive power. In addition to [6], in this work we also study modeling strategies to make an arbitrary GSM model state-bounded, while they assume that the input model is guaranteed to be state-bounded. Hence, our strategies could be instrumental to [6] as well. In [14], another promising technique for the formal verification of GSM models is presented. However, the current implementation cannot be applied to general GSM models, because of assumptions over the data types and the fact that only one instance per artifact type is supported. Furthermore, a propositional branching-time logic is used for verification, restricted to the status attributes of the artifacts. The results presented in our paper can be used to generalize this approach towards more complex models (such as instance-bounded GSM models) and more expressive logics, given, e.g., the fact that "one-instance artifacts" fall inside the decidable cases we discussed in this paper.
It is worth noting that all the presented decidability results are actually even stronger: they state that verification can be reduced to standard model checking of propositional µ-calculus over finite-state transition systems (thanks to the abstraction techniques studied in [15]). This opens the possibility of actually implementing the discussed techniques, by relying on state-of-the-art model checkers. We also inherit from [15] the complexity boundaries: they state that verification is EXPTIME in the size of the GSM model which, in the case of instance-bounded GSM models, means in turn EXPTIME in the maximum number of artifact instances that can coexist in the same state.
Beside implementation-related issues, we also aim to reassess the results presented here in a setting where GSM relies on a rich knowledge base (a description logic ontology) for its information model, in the spirit of [4].
Fig. 2: GSM model of a Turing machine

Theorem 2 ([3]). There exists a DCDS for which verification of a propositional safety property expressible in LTL ∩ CTL is undecidable.
[Fig. 1 (residue): GSM model with stages "add item" (on itemRequest, if not Order paid; milestone Item added), "execute payment" (on payRequest, if order.items -> exists; milestone Order paid) and "send receipt" (milestone Receipt sent), over status attributes and nested items (code, qty)]

The idea of the translation into a GSM model is the following.
[Fig. 2 (residue): Turing-machine GSM model with an Init stage (if curCell == null: create the first blank cell, curState = q0; milestone Initialized), a Transition stage containing State update stages (if curState = qRi and curCell.value = vRi: set curCell.value = vRi', curState = qRi'; milestones Ri state updated), Right/Left shift stages (advance curCell, extending the tape with a blank cell when needed; milestones Head moved, Tape extended), and a Halt stage (curState == qf; milestone Transition done), over status attributes curState, curCell and a list of cells (value, prev, next)]
In [3], two semantics for services are introduced: deterministic and nondeterministic. Here we always assume nondeterministic services, which is in line with GSM.
State-bounded GSM models

We now take advantage of the key decidability result given in Theorem 3, and study verifiability of state-bounded GSM models. Observe that state-boundedness is not too restrictive a condition. It requires each state of the transition system to contain a bounded number of tuples. However, this does not mean that the system in general is restricted to encounter only a limited amount of data: infinitely many values may be distributed across the states (i.e., along an execution), provided that they do not accumulate in the same state. Furthermore, infinitely many executions are supported, reflecting that whenever an external event updates a slot of the information system maintained by a GSM artifact, infinitely many successor states in principle exist, each one corresponding to a specific new value for that slot. To exploit this, we first have to show that the GSM-DCDS translation preserves state-boundedness, which is in fact the case.

Lemma 1. Given a GSM model G and its DCDS translation S, G is state-bounded if and only if S is state-bounded.

Proof. Recall that S contains some auxiliary relations, used to restrict the applicability of CA-rules in order to enforce the execution assumptions of GSM: (i) the eligibility tracking table R_exec, (ii) the artifact instance blocking flags R_block, (iii) the internal message pools R_data^msgk, R_data^srvp, R_out^msgq, and (iv) the tables of status changes R_chg^mi, R_chg^sj.

(⇐) This is directly obtained by observing that, if Υ_S is state-bounded, then so is Υ_S|_G. From Theorem 4, we know that Υ_S|_G ≡ Υ_G, and therefore Υ_G is state-bounded as well.

(⇒) We have to show that state-boundedness of G implies that all auxiliary relations present in Υ_S are bounded as well.
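The distinction drawn above, bounded states versus unbounded data along a run, can be illustrated with a toy Python simulation (not related to the DCDS formalism; the function name is illustrative): each state holds at most one tuple, yet arbitrarily many distinct values flow through the run.

```python
# Illustration of state-boundedness: an external event overwrites a single
# slot at each step (substitution, not accumulation), so every state has at
# most one tuple while the run as a whole sees unboundedly many values.

def run(values):
    state = []
    max_size, seen = 0, set()
    for v in values:
        state = [("slot", v)]              # new value replaces the old one
        max_size = max(max_size, len(state))
        seen.add(v)
    return max_size, len(seen)

max_size, distinct = run(range(1000))
print(max_size)   # 1    -> each state is bounded
print(distinct)   # 1000 -> many values encountered across the run
```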
References

[1] van der Aalst, W.M.P., Stahl, C.: Modeling Business Processes - A Petri Net-Oriented Approach. Springer (2011)
[2] Armando, A., Ponta, S.E.: Model checking of security-sensitive business processes. In: Proc. of FAST. LNCS, vol. 5983, pp. 66-80. Springer (2009)
[3] Bagheri Hariri, B., Calvanese, D., De Giacomo, G., De Masellis, R.: Verification of conjunctive-query based semantic artifacts. In: Proc. of the 24th International Workshop on Description Logics (DL 2011). CEUR Workshop Proceedings, vol. 745. CEUR-WS.org (2011)
[4] Bagheri Hariri, B., Calvanese, D., De Giacomo, G., De Masellis, R., Felli, P., Montali, M.: Verification of description logic knowledge and action bases. In: Proc. of ECAI. vol. 242, pp. 103-108. IOS Press (2012)
[5] Belardinelli, F., Lomuscio, A., Patrizi, F.: An abstraction technique for the verification of artifact-centric systems. In: Proc. of KR. AAAI Press (2012)
[6] Belardinelli, F., Lomuscio, A., Patrizi, F.: Verification of GSM-based artifact-centric systems through finite abstraction. In: Proc. of ICSOC. LNCS, vol. 7636, pp. 17-31. Springer (2012)
[7] Bhattacharya, K., Caswell, N.S., Kumaran, S., Nigam, A., Wu, F.Y.: Artifact-centered operational modeling: Lessons from customer engagements. IBM Systems Journal 46(4), 703-721 (2007)
[8] Clarke, E.M., Grumberg, O., Peled, D.A.: Model Checking. The MIT Press (1999)
[9] Cohn, D., Hull, R.: Business artifacts: A data-centric approach to modeling business operations and processes. IEEE Data Eng. Bull. 32(3) (2009)
[10] Damaggio, E., Hull, R., Vaculin, R.: On the equivalence of incremental and fixpoint semantics for business artifacts with guard-stage-milestone lifecycles. Information Systems (2012)
[11] Deutsch, A., Hull, R., Patrizi, F., Vianu, V.: Automatic verification of data-centric business processes. In: Proc. of ICDT. pp. 252-267. ACM (2009)
[12] Dumas, M.: On the convergence of data and process engineering. In: Eder, J., Bielikova, M., Tjoa, A.M. (eds.) ADBIS. LNCS, vol. 6909, pp. 19-26. Springer (2011)
[13] Emerson, E.A.: Model checking and the mu-calculus. In: Descriptive Complexity and Finite Models (1996)
[14] Gonzalez, P., Griesmayer, A., Lomuscio, A.: Verifying GSM-based business artifacts. In: Proc. of ICWS. pp. 25-32. IEEE (2012)
[15] Hariri, B.B., Calvanese, D., De Giacomo, G., Deutsch, A., Montali, M.: Verification of relational data-centric dynamic systems with external services. CoRR abs/1203.0024 (2012)
[16] Morimoto, S.: A survey of formal verification for business process modeling. In: Computational Science (ICCS 2008). LNCS, vol. 5102, pp. 514-522. Springer (2008)
[17] Nigam, A., Caswell, N.S.: Business artifacts: An approach to operational specification. IBM Systems Journal 42(3) (2003)
[18] Puhlmann, F., Weske, M.: Using the pi-calculus for formalizing workflow patterns. In: Proc. of the 3rd International Conference on Business Process Management. LNCS, vol. 3649, pp. 153-168 (2005)
[19] Solomakhin, D., Montali, M., Tessaris, S.: Formalizing guard-stage-milestone meta-models as data-centric dynamic systems. Tech. Rep. KRDB12-4, KRDB Research Centre, Faculty of Computer Science, Free University of Bozen-Bolzano (2012)
Secure Logical Schema and Decomposition Algorithm for Proactive Context Dependent Attribute Based Access Control

Ugur Turan, İsmail Hakkı Toroslu
Department of Computer Engineering, Middle East Technical University, 06800 Ankara, Turkey

arXiv:1402.5742 (17 Jul 2014)

Abstract. Traditional database access control mechanisms use role based methods, with generally row based and attribute based constraints for granularity, and privacy is achieved mainly by using views. However, if only a set of views according to policy are made accessible to users, then this set should be checked against the policy for the whole probable query history. The aim of this work is to define a proactive decomposition algorithm according to the attribute based policy rules and build a secure logical schema in which relations are decomposed into several ones in order to inhibit joins or inferences that may violate predefined privacy constraints. The attributes whose association should not be inferred are defined as having a security dependency among them, and they form a new kind of context dependent attribute based policy rule, named a security dependent set. The decomposition algorithm works on a logical schema with given security dependent sets and aims to prohibit the inference of the association among the elements of these sets. It is also proven that the decomposition technique generates a secure logical schema that is in compliance with the given security dependent set constraints.
I. INTRODUCTION
The business technology era has increased the importance of secure logical data storage and retrieval, since many users and roles with different access privileges act in the same database environment. As an important aspect of security, granularity is also essential in database access control methods. Traditional database security approaches mainly use relation based action rules for users, such as allowing querying but disallowing updating a relation, and sometimes they also define policy rules on attributes to increase granularity [1]. These approaches have a very simple purpose; that is, to determine whether to grant or deny access, based on the predefined constraints related to the role of the user. Especially for attribute based access control, the attributes that would be related with each other by executing a query, also taking the query history into consideration, are the main factor in the grant or deny decision.
For example, let's consider a relation:

STUDENT = (id, email, name, surname, address, age, gender)
on which a survey about the characteristics of students is being carried out. As an example, the extraction of the relationship between email and gender should be forbidden in order to preserve the assumed privacy of a student. In addition, if the id and email fields are two candidate keys and id is selected as the primary key, probable join queries should also be checked in order to guarantee that the email and gender fields cannot be related with each other through the id attribute. For this case, decomposing the STUDENT relation into two views as:
STUDENT_1 = (id, name, surname, address, age, gender)
STUDENT_2 = (id, email, name, surname, address, age)
is an example of a faulty decomposition, since email and gender can still be related as follows.
SELECT s1.gender, s2.email
FROM STUDENT_1 s1, STUDENT_2 s2
WHERE s1.id = s2.id
To prevent this kind of query, a correct decomposition can be given as:

STUDENT_1 = (name, surname, address, age, gender)
STUDENT_2 = (email, name, surname, address, age)

Note that STUDENT_1 is a keyless relation, as is usual for views; this situation is discussed in the following sections of the paper. In addition, it is assumed that there exists no functional dependency other than the ones which make email and id candidate keys for the relation. With this decomposition, queries which relate email and gender cannot infer the association among the attributes, since equijoins on keys cannot be performed on the decomposed relations.
As in this example, a view based solution can be generated to satisfy a privacy policy, which is a very popular approach in enterprise database systems. There can be several policy rules, and views should be constructed in order to satisfy the constraints of all of these policies. The need for defining different external layers for different access control policies has increased with the web based data sharing trend [2]. Therefore, a formal approach is needed to build a secure external layer by decomposing the relations into sub-relations according to policy rules, in order to generate the relevant secure logical schema.
Most of the research on this kind of security (access control) is focused on dynamic mechanisms employing query investigation or modification methods, possibly also tracking the query history [1], [3]-[7]. In contrast, the strategy used in this paper is to decompose the relations into views in advance, avoiding the time spent on query modification or history tracking operations, which may be costly in highly utilized database systems [8]. To the best of our knowledge, this is the first attempt in the literature to handle privacy for context dependent attribute based access control with a proactive approach. Our method can easily be adapted for validating an existing external schema against given attribute based policy rules. The proactive decomposition method described in this paper can easily be combined with other constraints, such as row based policy rules, during implementation. In addition, the method can be used in a "Private Record Matching" engine when the required context dependent attribute sets are supplied [9].
In the rest of this paper, we first present a formal method to define a secure logical schema for preserving context dependent attribute based privacy, and then we define a decomposition algorithm that is guaranteed to produce a secure logical schema. A detailed real life example is also given to clearly show the steps of the algorithm and the use of the decomposed relations in sample applications. This paper is organized as follows: Section 2 describes related work in the field of security and privacy in databases. Section 3 gives the preliminaries and definitions used throughout the rest of the paper. Section 4 presents the decomposition algorithm that satisfies the access policies, together with its proof. Section 5 contains a real life example demonstrating the algorithm, and Section 6 briefly discusses future work. Finally, Section 7 concludes.
II. RELATED WORK
The field of database security is very active, and several works in this field have influenced the idea proposed in this paper [1], [3]-[7], [10]-[21]. The approach of updating the query dynamically depending on the context and the policy has been studied for a long time in the literature [4]. In this method, the query can be modified by adding predicates, and the main purpose is row based security: adding more predicates to the where clause can only restrict the rows extracted by the query [4]. The security mechanism in [4] gives the user a set of views which are permitted to be queried and then performs row based elimination by adding predicates; this idea could serve as an additional functionality on top of the work in this paper. However, another work, [5], states that the former algorithm is not maximal and blocks some permitted answers. In [5] some flexibility has been added, as the query may depend on any view, sub-view, or metarelation, which means extra work should be conducted in order to find which permitted views are involved. These two approaches may have performance problems, as modifying the query can be costly [8]; nevertheless, it should be noted that their query modification strategies target mainly row based access control, whereas this paper focuses on context dependent attribute based access control in a proactive manner.
In addition to this, Oracle introduced the Virtual Private Database concept [7] and enforces security entirely by query modification on the real relations. The modification can be row based, by adding predicates, or column based, by nulling out the unwanted attributes. Bertino [13] calls this type of query modification approach a Truman model [12], since such systems always return an answer, although the answer may not be maximal because of the restrictions. These models have simple attribute based policy rules, merely checking the existence of attributes in the query result. Besides this, data perturbation [17] is another run-time consuming method that may be used in Truman models. The k-anonymity notion [3] has been proposed to divide a relation into views from which "id"s cannot be extracted. Moreover, the security policy need not concern anonymity only; for instance, one can define a policy rule stating that gender and address should not be obtained together, even though both of them are not adequate for identification [16].
Furthermore, the Purpose Oriented Access Control scheme [10] offers a role-purpose-column mapping; however, two purposes may together serve another unwanted purpose. For instance, let a, b, c be attributes, and let purpose-1 need a, b; purpose-2 need a, c; and a non-existing and unwanted purpose-3 need b, c. In this example, the first two purposes together can serve the unwanted third purpose. This example illustrates the notion of query history [18], whose deep investigation makes the computation costly. To avoid this cost, the attribute mutability notion [11] provides a mechanism similar to the Chinese-Wall method [14] with historical data, but the performance requirements may again be critical.
Besides this, Non-Truman models have been proposed [12], which reject unauthorized queries according to the authorized views. Hippocratic databases [6] combine many of the security issues stated in this section; however, the problem addressed in this paper is somewhat different. The main goal in all these works is to maintain security and privacy; nevertheless, dynamic security modeling with query modification, attribute mutation, historical query tracking, or grant/reject mechanisms may suffer performance problems because of their run-time executions. This paper constructs a proactive security mechanism by building an external layer with a secure logical schema in which the user is free to query anything on the decomposed relations. The term attribute-based in this paper refers to the ability to define access control rules on the attributes of relations. The same term has been used differently in [21], to build access control with the help of dedicated attributes. It is important to note that modeling access control rules on attributes according to the application semantics is another important line of work, discussed in [22], which is not in the scope of this paper.
The most relevant study, which targets a problem similar to this work, is reducing inference control to access control [1]. However, that solution labels the normalized schema relations and is not proactive; it is only more efficient than query-time control.
III. PRELIMINARIES AND PROBLEM DEFINITION
In this section, we give the basic terms and concepts used in the paper. This paper has two main objectives, namely, formally defining a secure logical schema which is in compliance with the given security constraints (security dependent sets), and developing a decomposition algorithm which divides relations into sub-relations to be able to satisfy the security constraints. The main reason for decomposition is to prevent obtaining securely dependent attributes together directly in a relation or through a join.
Therefore, first, the definition of the logical schema is given in terms of two sets as relational schema and (nonreflexive and non-partial) functional dependencies. After that, the closures of relations and functional dependencies (again non-reflexive and non-partial) are defined. The closure of relation schema is very important, since it describes how new relations can be generated using only equijoins on foreign keys. Moreover, the closure of functional dependencies is used to define identifiers for attributes, how they can be inferred, and how two or more attributes can be associated with each other. Combining these definitions, we then define a secure logical schema, which simply prevents obtaining the attributes of each given security dependent set together by joins. We also prove that secure logical schema guarantees that it is not possible to obtain any association among the set of attributes of security dependent set.
Following these, we define a decomposition operation which decomposes a logical schema according to a given set of security dependent sets in order to form a secure logical schema. Afterwards, we prove that the new schema obtained by employing the decomposition operation is a secure logical schema, which means that it is not possible to associate attributes of security dependent sets by joins using the relations constructed after the decomposition. In this way, the inference of the association among the attributes of each security dependent set is prevented.
Definition 1 (Relational Schema).
A relation schema is defined as a set of attribute names, each concatenated with the relation name (using an underscore) in order to prevent the ambiguity caused by having the same attribute name in different relations. For the sake of simplicity, a relation schema is referred to as a relation, and the concatenation on attribute names will not be shown unless needed.
For example, a relation

USERS = {id_users, name_users, surname_users, email_users}

is defined as a set of concatenated attribute names. With this definition, it is guaranteed that all attribute names in a database are unique, owing to the relation names being unique by default.
Definition 2 (Logical Schema).
A logical schema for a database is defined as a tuple L = (R, F ) such as;
• R is defined as set of all relation schemas in a database. • F is defined as set of functional dependencies among attributes in all relation schemas in R excluding reflexive and partial functional dependencies.
Since a foreign key and the corresponding key in two different relations have different names according to the definition of relational schema, the functional dependency between them is not treated as reflexive and should be in F, as shown in the example below. It should be noted that this kind of functional dependency has special importance, since such dependencies are used to perform equijoins on keys during inference, which will be discussed below. It is important to point out that functional dependencies which were lost while building R are not in the scope of this paper, nor is key-like behavior of a non-key attribute observed statistically in the data of a schema.
F = { (userid_logs → id_users), (id_users → userid_logs),
      (id_users → name_users), (id_users → surname_users),
      (userid_logs → action_logs), (userid_logs → date_logs) }

The first two functional dependencies express the dependency between an original key and a foreign key.
Using the join operation, new relations can be obtained from the relations of a logical schema. Similarly, the properties of functional dependencies can be used to generate new functional dependencies from the existing ones. Below, we define two closures for these.

Definition 3 (F+: Closure of F). The closure of F is the set F+ of all functional dependencies that can be obtained from F by using the inference rules for functional dependencies [23], excluding reflexive and partial functional dependencies.
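Membership of a dependency in F+ can be tested without materializing the whole closure: the standard attribute-closure algorithm decides whether X → A follows from F. A minimal sketch (the paper's exclusion of reflexive and partial dependencies can be applied as a filter on top of this test; all names are ours):

```python
def attribute_closure(attrs, fds):
    """Closure of the attribute set `attrs` under the FDs in `fds`.
    `fds` is a list of (lhs, rhs) pairs: lhs a frozenset, rhs one attribute."""
    closure = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= closure and rhs not in closure:
                closure.add(rhs)
                changed = True
    return closure

def implies(fds, lhs, rhs_set):
    """True iff (lhs -> rhs_set) follows from the FDs, i.e. is in F+."""
    return set(rhs_set) <= attribute_closure(lhs, fds)

# F from the USERS/LOGS example, including the foreign-key pair.
F = [(frozenset({"userid_logs"}), "id_users"),
     (frozenset({"id_users"}), "userid_logs"),
     (frozenset({"id_users"}), "name_users"),
     (frozenset({"id_users"}), "surname_users"),
     (frozenset({"userid_logs"}), "action_logs"),
     (frozenset({"userid_logs"}), "date_logs")]

# userid_logs determines name_users transitively through id_users.
print(implies(F, {"userid_logs"}, {"name_users"}))  # True
```

The same test underlies every construction that follows: an FD is in F+ exactly when its right-hand side lies in the closure of its left-hand side.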
Definition 4 (R+: Closure of R). The closure of R is defined as R+, comprising all probable relation schemas obtainable by performing any database query over R in which the join operations are only equijoins on foreign keys (named meaningful joins hereafter), so as not to produce spurious tuples.
For example, using the sample decomposed relations given in the introduction:

STUDENT_1 = (name, surname, address, age, gender)
STUDENT_2 = (email, name, surname, address, age)

a query which produces spurious tuples can be given as:
SELECT *
FROM STUDENT_1 s1, STUDENT_2 s2
WHERE s1.name = s2.name
The join operation is an equijoin but not on keys, therefore spurious tuples are generated by associating different students having the same name.
Let U_R be the union of all attributes existing in all relations of R (i.e., U_R = ⋃_{R_i ∈ R} R_i). Rather than defining which relations can be constructed from R as subsets of U_R, we can specify the sets of attributes that cannot be obtained together in R+ as follows:
Property of R+: An attribute set A cannot be a subset of any derived relation schema in R+ if and only if A is not a subset of any existing relation schema and is not functionally dependent on any set of attributes in U_R, which makes it impossible to assemble A with equijoins on foreign keys by the definition of R+. Note that, according to Definition 2, if there is a foreign key relationship, then it is represented as a functional dependency ((A_i → A_j) ∈ F). Since all meaningful joins can only be executed using this kind of functional dependency, the following logical formula expresses that a set of attributes A cannot be a subset of any relation in R+ if and only if there is no functional dependency to A in F+ and there is no relation in R containing A.
∀R_k ∈ R+, ∀A ⊆ U_R [ A ⊄ R_k ⇔ ∀R_j ∈ R (A ⊄ R_j) ∧ ∀A_i ⊆ U_R ((A_i → A) ∉ F+) ]    (1)
As can be seen from the above definitions, there is a strong condition: in order for a subset of attributes not to be obtainable from a logical schema, it should not be possible to assemble it with meaningful joins. Meaningful joins use foreign keys, and foreign keys correspond to functional dependencies. Therefore, we need the following definitions to represent these relationships.
Definition 5 (Set of Identifier Sets). The set of identifier sets of an attribute α for a given F, denoted i^F_α, is defined as follows:

i^F_α = { x | x ⊉ {α} ∧ (x → α) ∈ F+ }    (2)

Each element of i^F_α is called an identifier set of attribute α.
For example,

α = name
F+ = { id → name, id → surname, id → age, id → email, email → name, email → surname }
i^F_α = { {id}, {email} }
The definition can be extended to an attribute set A as follows:

I^F_A = { x | x ⊉ A ∧ (x → A) ∈ F+ }    (3)
These two definitions are related: for an attribute set A, I^F_A contains the identifier sets shared by all attributes α ∈ A.

I^F_A = { x | ∀α ∈ A (x ∈ i^F_α) }    (4)
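The set of identifier sets in Definition 5 can be enumerated by brute force: for each subset x of U_R with α ∉ x, test whether x → α lies in F+ via attribute closure. A sketch over the running STUDENT example (exponential in |U_R|, in line with the paper's own exhaustive style; helper names are ours):

```python
from itertools import combinations

def attribute_closure(attrs, fds):
    """Standard attribute-set closure under (lhs, rhs) FDs."""
    closure = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= closure and rhs not in closure:
                closure.add(rhs)
                changed = True
    return closure

def identifier_sets(alpha, universe, fds):
    """i_F_alpha: every x with alpha not in x and (x -> alpha) in F+."""
    others = sorted(universe - {alpha})
    return [frozenset(c)
            for n in range(1, len(others) + 1)
            for c in combinations(others, n)
            if alpha in attribute_closure(c, fds)]

# The running example: id is the primary key, email a candidate key.
F = [(frozenset({"id"}), "name"), (frozenset({"id"}), "surname"),
     (frozenset({"id"}), "age"), (frozenset({"id"}), "email"),
     (frozenset({"email"}), "name"), (frozenset({"email"}), "surname")]
U = {"id", "name", "surname", "age", "email"}

# The minimal (singleton) identifier sets of `name` are {id} and {email};
# every larger identifier set is a superset of one of these.
singletons = sorted(next(iter(s))
                    for s in identifier_sets("name", U, F) if len(s) == 1)
print(singletons)  # ['email', 'id']
```

In practice only the minimal identifier sets matter for the elimination checks later in the paper, since any superset of an identifier set is itself an identifier set.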
Identifiable Property: Each attribute of an identifiable set (i.e., a set A with I^F_A ≠ ∅) should be in the same relation schema as at least one of its identifier sets. In other words, for L = (R, F):

∀A ⊆ U_R ( I^F_A ≠ ∅ ⇔ ∀α ∈ A, ∃R_i ∈ R, ∃D ∈ i^F_α (({α} ∪ D) ⊆ R_i) )    (5)
The same property can be stated as follows: it is impossible to identify an attribute in a logical schema if the attribute does not appear in any relation schema together with one of its identifier sets, which makes it impossible to discover other identifier sets by using meaningful joins. This issue is also a matter of database normalization; however, in this paper no assumption about the normal form of the database is made.
Definition 6 (Inferability). A set of attributes A_2 ⊆ U_R is inferable from a set of attributes A_1 ⊆ U_R for a given L = (R, F), denoted A_1 ⇒_F A_2, as defined below:

∀A_1 ∀A_2 ((A_1 ⇒_F A_2) ⇔ ((A_1 → A_2) ∈ F+))    (6)
The definition of inferability is given using the closure set of functional dependencies, since relation based key constraints may not be adequate: there may be functional dependencies among non-prime attributes in a schema which is not normalized.
Definition 7 (Inference of Association among a Set of Attributes). For a given L = (R, F), the inference of the association among a set of attributes A ⊆ U_R, denoted X_L(A), means that A is either inferable from a subset of U_R or a subset of an existing relation schema. More formally:

X_L(A) ⇔ ∃A_i ⊆ U_R (A_i ⇒_F A) ∨ ∃R_i ∈ R (A ⊆ R_i)    (7)
In this paper, a set of attributes is defined to have a security dependency among its elements if the inference of the association among them should be prevented. The purpose of this paper is to inhibit the inference of association among each given subset of U_R with at least two attributes (each named a security dependent set hereafter) for a logical schema L = (R, F), by building a secure logical schema.

Definition 8 (Secure Logical Schema). A secure logical schema is a logical schema L_S^sec = (R, F) such that, for a given set of security dependent sets S, there is no relation in R+ containing all the attributes of any set in S. Formally:

L_S^sec ⇔ ∀S_i ∈ S, ∄R_i ∈ R+ (S_i ⊆ R_i)    (8)
It should be emphasized that, by the definition of R+, only meaningful joins are taken into consideration, as queries may only use equijoins on foreign keys; in this way spurious tuples cannot be generated.
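Although R+ is defined over arbitrarily many joins, the property of R+ in Eq. (1) makes Definition 8 finitely checkable: a schema is insecure for S_i exactly when some relation already contains S_i, or some attribute set functionally determines all of S_i under the dependencies still realized inside single relations (this restriction of F mirrors property 2 of Definition 9). A hypothetical checker sketch, run on the STUDENT decompositions from the introduction:

```python
from itertools import combinations

def attribute_closure(attrs, fds):
    """Standard closure of an attribute set under (lhs, rhs) FDs."""
    closure = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= closure and rhs not in closure:
                closure.add(rhs)
                changed = True
    return closure

def is_secure(relations, fds, dependent_sets):
    """Definition 8 via Eq. (1): insecure for S_i when a relation contains
    S_i, or some attribute set implies S_i under the realized FDs."""
    universe = set().union(*relations)
    usable = [(l, r) for (l, r) in fds
              if any((l | {r}) <= rel for rel in relations)]
    for s in dependent_sets:
        if any(s <= rel for rel in relations):
            return False
        others = sorted(universe - s)   # skip reflexive/partial lhs choices
        for n in range(1, len(others) + 1):
            for combo in combinations(others, n):
                if s <= attribute_closure(combo, usable):
                    return False
    return True

# STUDENT example from the introduction; id and email are candidate keys.
ATTRS = ["id", "email", "name", "surname", "address", "age", "gender"]
F = ([(frozenset({"id"}), a) for a in ATTRS if a != "id"]
     + [(frozenset({"email"}), a) for a in ATTRS if a != "email"])
S = [frozenset({"email", "gender"})]

bad = [frozenset({"id", "name", "surname", "address", "age", "gender"}),
       frozenset({"id", "email", "name", "surname", "address", "age"})]
good = [frozenset({"name", "surname", "address", "age", "gender"}),
        frozenset({"email", "name", "surname", "address", "age"})]

print(is_secure(bad, F, S), is_secure(good, F, S))  # False True
```

The faulty decomposition fails because {id} still determines both email and gender through dependencies realized inside its relations, while in the correct decomposition no surviving dependency chain reaches both attributes.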
By the definition of secure logical schema, it can be stated that the inference of association among the attributes of each security dependent set is impossible within a secure logical schema, since the attributes of any security dependent set can neither be functionally dependent on any subset of attributes in the logical schema (excluding reflexive and partial dependencies, as given in Definition 2) nor appear together in the same relation, as stated in the theorem below.

Theorem 1. The inference of association among the attributes of each security dependent set cannot be performed in a secure logical schema; that is,

∀S_i ∈ S (¬X_{L_S^sec}(S_i))    (9)
Proof: The formal proof given in the Appendix briefly states that, in order to perform the disallowed inference, either the attributes should be in the same relation or a meaningful join should be performed for an inter-relation inference; both cases are impossible by the definition of secure logical schema. The next step is to define the transformation of a logical schema into a secure logical schema for given security dependent sets.
Definition 9 (Secure Decomposition).
A secure decomposition is a decomposition of L = (R, F) according to the set of security dependent sets S into a new logical schema L'_S = (R', F') having the following features:

1) No attribute is lost after decomposition. In other words:

U_R = U_{R'}    (10)
2) The new set of functional dependencies should be a subset of the existing set of functional dependencies: no new functional dependency can appear, and some loss of existing functional dependencies is expected, in order to inhibit the inference of associations among the elements of the security dependent sets.

(F' ⊆ F) ∧ (F'+ ⊆ F+)    (11)
3) None of the decomposed relations should be a superset of any security dependent set.

∀S_i ∈ S, ∄R_i ∈ R' (S_i ⊆ R_i)    (12)
4) No attribute of a security dependent set should coexist in the same decomposed relation with any of its identifier sets.

∀S_i ∈ S, ∀R_i ∈ R', ∀σ ∈ S_i, ∄τ ∈ i^F_σ (({σ} ∪ τ) ⊆ R_i)    (13)
The fourth property of secure decomposition is a strong requirement, since it makes all the attributes in every security dependent set uninferable after the decomposition. It should be noted that this property is required for a totally proactive solution. If a mechanism combines proactive and run-time components, this requirement can be relaxed.
The aim of secure decomposition is to transform a logical schema into a secure logical schema with the help of the security dependent sets, as stated in the theorem below.

Theorem 2. If L'_S = (R', F') is the logical schema obtained after performing secure decomposition on L = (R, F) with the set of security dependent sets S, then L'_S is an L_S^sec.

Proof: The formal proof given in the Appendix briefly states that, for the result not to be a secure logical schema, a security dependent set would have to be inferable or be part of an original relation. The former is impossible, as the attributes of security dependent sets cannot be in the same relation with any of their identifier sets, by the fourth property of the definition of secure decomposition. The latter is also impossible, due to the third property of secure decomposition.
IV. DECOMPOSITION ALGORITHM
The main purpose of the decomposition algorithm is to achieve a secure decomposition (Definition 9), which by definition results in a secure logical schema. To satisfy this goal, the elements of each security dependent set must not be in the same sub-relation obtained after the decomposition of the original relations. Furthermore, it must not be possible to meaningfully join two sub-relations that separately contain securely dependent attributes. Below we define an algorithm which exhaustively generates all subsets of the attributes of each relation and eliminates those that do not satisfy the conditions mentioned above; afterwards, it also eliminates redundant sub-relations. The secure decomposition algorithm for L = (R, F) with a given set of security dependent sets S is given in Algorithm 1.
Algorithm 1 Decomposition Algorithm
Input:
  L: logical schema as (R, F), S: set of security dependent sets for L
Output:
  P_R: set of maximal subsets of R according to S
 1: begin
 2: P_R = ∅
 3: for each R_x in R do
 4:   P_Rx = power set of R_x
 5:   for each S_i in S do
 6:     for each Z_i in P_Rx do
 7:       if S_i ⊆ Z_i then
 8:         remove Z_i from P_Rx
 9:       end if
10:       for each α in S_i do
11:         for each λ in i^F_α do
12:           if ({α} ∪ λ) ⊆ Z_i then
13:             remove Z_i from P_Rx
14:           end if
15:         end for
16:       end for
17:     end for
18:   end for
19:   for each V_i in P_Rx do
20:     for each W_i in P_Rx do
21:       if V_i ⊂ W_i then
22:         remove V_i from P_Rx
23:       end if
24:     end for
25:   end for
26:   add all remaining subsets in P_Rx to P_R
27: end for
28: end

The secure decomposition algorithm works as follows:
For each relation schema R_x in R:
1) First, the power set of the relation schema, called P_Rx in the algorithm, is generated (line (4)).
2) Then, for each security dependent set in S (line (5)), each element of P_Rx (line (6)) is processed. A subset is eliminated if:
   • it contains all attributes of that security dependent set together (lines (7-9)), or
   • it contains one of the attributes of the security dependent set together with any of that attribute's identifier sets (lines (10-16)).
3) After that, among the remaining subsets, the redundant ones (sub-relations contained in other sub-relations) are eliminated (lines (19-25)).
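The steps above can be transcribed directly, with the identifier sets i^F_α precomputed by brute force from F as in Definition 5 (a sketch; helper names are ours):

```python
from itertools import combinations

def attribute_closure(attrs, fds):
    """Standard attribute-set closure under a list of (lhs, rhs) FDs."""
    closure = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= closure and rhs not in closure:
                closure.add(rhs)
                changed = True
    return closure

def identifier_sets(alpha, universe, fds):
    """i_F_alpha of Definition 5, by brute force over subsets of U_R."""
    others = sorted(universe - {alpha})
    return [frozenset(c)
            for n in range(1, len(others) + 1)
            for c in combinations(others, n)
            if alpha in attribute_closure(c, fds)]

def decompose(relations, fds, dependent_sets):
    universe = set().union(*relations)
    ids = {a: identifier_sets(a, universe, fds) for a in universe}
    result = []
    for rel in relations:                                # line 3
        subsets = {frozenset(c)                          # line 4: power set
                   for n in range(1, len(rel) + 1)
                   for c in combinations(sorted(rel), n)}
        for s in dependent_sets:                         # lines 5-18
            for z in list(subsets):
                if s <= z:                               # lines 7-9
                    subsets.discard(z)
                elif any(({a} | lam) <= z                # lines 10-16
                         for a in s for lam in ids[a]):
                    subsets.discard(z)
        maximal = [v for v in subsets                    # lines 19-25
                   if not any(v < w for w in subsets)]
        result.extend(sorted(maximal, key=sorted))
    return result

# STUDENT relation from the introduction; id and email are candidate keys.
STUDENT = frozenset({"id", "email", "name", "surname", "address", "age", "gender"})
F = ([(frozenset({"id"}), a) for a in sorted(STUDENT) if a != "id"]
     + [(frozenset({"email"}), a) for a in sorted(STUDENT) if a != "email"])
S = [frozenset({"email", "gender"})]

subs = decompose([STUDENT], F, S)
for r in subs:
    print(sorted(r))
```

On this input the sketch keeps three maximal sub-relations: the two shown in the introduction plus (id, name, surname, address, age). Since id identifies both email and gender, it survives only alongside the neutral attributes.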
The elimination strategy is aimed at creating a secure logical schema. It is important to note that all of the work in this paper concentrates on security dependent sets. There may also be basic policy rules, such as a single attribute that should not be accessed in any context; these basic cases can easily be handled with simple extensions to the algorithm, but defining a complete mechanism is left as future work.

Theorem 3. The decomposition algorithm performs a secure decomposition of a given L = (R, F) for a given S.
Proof:
For the proof, we revisit the properties of secure decomposition given in Definition 9. Each item below corresponds to the same-numbered property in the definition.

1) No attribute can be lost by the decomposition algorithm, since the algorithm cannot remove the one-element subsets of any relation:
   • Security dependent sets have at least two elements by definition, and therefore cannot be contained in a one-element subset.
   • An attribute in a security dependent set cannot be its own identifier, as reflexive functional dependencies are excluded from the definition of F+ and thereby from the sets of identifier sets. So again, a set of at least two elements (an attribute and its identifier) cannot be a subset of a one-element subset.
2) No new functional dependency can be introduced, although some existing ones may be lost because some subsets of each relation schema are eliminated.
3) Subsets containing security dependent sets are eliminated (lines (7-9)).
4) All subsets containing an attribute from a security dependent set together with any of its identifier sets are eliminated (lines (10-16)).
The following parameters of L = (R, F) and S affect the performance of the decomposition algorithm:

• π : the number of relations in R
• ε : max_{R_i ∈ R} |R_i|
• η : max_{S_i ∈ S} |S_i|
• µ : max_{α ∈ S_i, S_i ∈ S} |i^F_α|

The algorithm works at a cost of O(π · 2^ε · η · µ). The problem is related to generating maximal independent sets in an undirected graph [24], in which attributes can be thought of as vertices and dependencies as edges. In [24] it is shown that generating all maximal independent sets is NP-hard.
It is important to note that, since this is a proactive solution, the exponential complexity of the decomposition algorithm is not a critical problem: it is executed only once, as a preprocessing phase.
Another point about the decomposition is that it may lead to relations with no key, and for that reason duplicate rows may occur in the views. The handling mechanism for duplicate rows depends on the implementation strategy, and keyless relations are not a problem, since they are a natural consequence of anonymity.
V. REAL LIFE EXAMPLE
Consider a retail store database with a logical schema L = (R, F) with the following three relations in R:

R = {CUSTOMER, PRODUCT, BUY}

The CUSTOMER table stores customer details, the PRODUCT table stores product information, and the BUY relation stores the purchase transactions of customers. It should be noted that many tables and attributes that could be useful for a retail store have been omitted so as not to overcomplicate the example.
The relation schemas are given as:

CUSTOMER = (customerId(cid), name, surname, phoneNumber(pNo), address, gender, age)
PRODUCT = (productId(pid), name, model, year, price)
BUY = (customerId, productId, date, quantity)

F = { cid → name, cid → surname, cid → pNo, cid → address, cid → gender, cid → age,
      pid → name, pid → model, pid → year, pid → price,
      (customerId, productId, date) → quantity,
      (customerId_buy → cid), (cid → customerId_buy),
      (productId_buy → pid), (pid → productId_buy) }

In addition to these, F is defined as the set of functional dependencies. As given in Definition 2, F includes dependencies for the foreign keys (the last four above). It is important to note that each foreign key based functional dependency always exists together with its symmetric pair, since the foreign key and its corresponding key denote the same attribute.
Sample tuples of these three relations are illustrated in Tables I, II, and III, respectively. The fields customerId and productId are the keys of the CUSTOMER and PRODUCT relations, respectively, and together with the date attribute they form a composite key in the BUY relation. It should be mentioned that a non-key attribute might be usable as a pseudo-key to recover the original relation after decomposition; however, that depends on the distribution of data values and is not in the scope of our work.
As an application, consider the business development department of the retail store, which examines the BUY relation and investigates the correlation between purchases and customer characteristics such as gender and age. However, there can be a malicious worker in the department who shares customer details, i.e., customer access information, with other stores. The attributes address and phoneNumber could be used by a different store to present promotions to the customer.
In addition, customerId need not be related with address and phoneNumber, since the department's main objective is to use the age and gender attributes. To prevent this situation, the following security dependent sets should be defined:
S = { {cid, pNo}, {cid, address}, {pNo, gender}, {pNo, age}, {address, gender}, {address, age} }
It is important to note that giving a single security dependent set such as

S_faulty = { {cid, pNo, address, gender, age} }

is an example of a faulty definition, since the decomposition would then only inhibit the inference of the association among all of these attributes together, not among any two of them.
To figure out the identifier sets, the dependencies in F+ should be derived. After the decomposition algorithm is applied to the CUSTOMER relation, the following sub-relations are obtained:

CUSTOMER_1 = (cid, name, surname)
CUSTOMER_2 = (name, surname, pNo, address)
CUSTOMER_3 = (name, surname, gender, age)
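These three sub-relations can be checked directly against properties 3 and 4 of Definition 9. A small self-contained sketch (S here is our reconstruction of the pairwise constraints implied by the scenario, and `ids` hardcodes cid as the only identifier inside CUSTOMER):

```python
# Pairwise constraints: contact data (pNo, address) must not be associated
# with demographics (gender, age), and cid must not meet contact data.
S = [frozenset(p) for p in [("cid", "pNo"), ("cid", "address"),
                            ("pNo", "gender"), ("pNo", "age"),
                            ("address", "gender"), ("address", "age")]]

# Inside CUSTOMER, cid identifies every other attribute; nothing identifies cid.
ids = {a: [frozenset({"cid"})]
       for a in ("name", "surname", "pNo", "address", "gender", "age")}
ids["cid"] = []

def violates(rel):
    """True if `rel` breaks property 3 or 4 of Definition 9."""
    if any(s <= rel for s in S):                              # property 3
        return True
    return any(({a} | t) <= rel                               # property 4
               for s in S for a in s for t in ids[a])

decomposed = [frozenset({"cid", "name", "surname"}),              # CUSTOMER_1
              frozenset({"name", "surname", "pNo", "address"}),   # CUSTOMER_2
              frozenset({"name", "surname", "gender", "age"})]    # CUSTOMER_3
original = frozenset({"cid", "name", "surname", "pNo", "address",
                      "gender", "age"})

print([violates(r) for r in decomposed])  # [False, False, False]
print(violates(original))                 # True
```

The undivided CUSTOMER relation violates the constraints, while each sub-relation passes: cid survives only with the neutral attributes name and surname, and the contact and demographic attributes end up in separate keyless sub-relations.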
It should be noted that the name and surname fields are not keys; if they exhibit key-like behavior, then S should be extended with new security dependent sets covering name and surname. However, identifying key-like behavior is not in the scope of this paper. The three decomposed relations are given in Tables IV, V and VI, respectively, and no decomposition has been made to the BUY and PRODUCT relations. In order to find the number of customers at each age, the CUSTOMER_3 relation can be used. The query is given below:

QUERY-1:

SELECT age, COUNT(*) as 'count'
FROM CUSTOMER_3
GROUP BY age
Moreover, the names and surnames of the customers who have purchased "PS" can be found as:

QUERY-2:

SELECT c1.name, c1.surname
FROM CUSTOMER_1 c1, BUY b, PRODUCT p
WHERE c1.customerId = b.customerId
  AND b.productId = p.productId
  AND p.name = 'PS'
The results of these two queries are given in Tables VII and VIII. It should be noted that any attempt to infer the association among the attributes of a security dependent set will fail, since the decomposed relations containing the securely dependent attributes cannot be joined on shared identifiers.
VI. FUTURE WORK
Being the first attempt in the literature to formalize proactive context dependent attribute based access control, this paper touches on many applicational and theoretical research directions. First, the use of this work within the access control mechanisms of database management systems should be investigated; the proposed solution could work as part of a trusted access control system of a database that has both proactive and run-time components. Moreover, security dependent sets are not the only kind of policy rule for database access control, and extending the model and algorithm to satisfy all attribute and row based policy rules is left as future work. With this generalization, performance and scalability should be investigated for the new rule types (attribute or row based) with respect to being proactive, as some modules may be designed to execute at run time. It should also be noted that the last property of secure decomposition given in Definition 9 should be revisited if the mechanism employs run-time components: the property may be relaxed with some run-time work, since not all attributes in a security dependent set need to be made unidentifiable in advance when this decision can be made depending on the query at run time. To sum up, the secure decomposition solution proposed in this paper operates in a totally proactive manner; if future research introduces run-time work into the mechanism, then this property of secure decomposition should be revisited for relaxation, also taking the performance impact into consideration.
Furthermore, the access control strategy should be extended to data insertion and update policies that respect the security dependencies. For this, implementation alternatives of the proposed solution should be defined, and a data modification strategy with proactive and run-time components should be devised accordingly. In addition, changes made to the secure logical schema at run time are another challenge to be addressed in the future: a modification may alter a relational schema, and the functional dependencies may change with it. Such changes are expected to modify the security dependent sets, and how the proposed model handles these modifications should be clearly examined.
Lastly, some practical applications of this work can be proposed. One is a module that takes the current external logical schema together with the security dependent sets of each user role and checks whether the external schema leaks information for any defined role; such an application would be useful in current database management systems as a verification of the access control mechanism. Another application could determine the allowed inferences of associations among attributes for each role, since an access control strategy should make explicit both what is allowed and what is inhibited; this can serve as a cross-check for the design and requirements of an application.
VII. CONCLUSION
The theorem, algorithm, and examples given in this paper aim to construct a proactive context-dependent attribute-based security schema for database users from given security dependent sets. The main objective is to prevent inference of association among the attributes of each security dependent set. This is accomplished by a secure decomposition that transforms the given logical schema into a secure logical schema, for which it is proven impossible to infer the association among any security dependent set; an algorithm that performs this secure decomposition is proposed and proven correct. Note that all of this work builds the external schema of the database from the given logical schema (relational schemas and functional dependencies) and security dependent sets, so it can be implemented independently of the conceptual and physical models. As a result, a different external schema is obtained for every user role, and each role accesses the database through a view that is safe for it. This work thus addresses the granularity problem of access control methods for databases and proposes a formal, context-dependent, proactive access control method for use in the access control mechanisms of database management systems.
APPENDICES
The step-by-step proofs are given below with a brief description of each step.
Proof For Theorem-1
Proof:
1) Assuming L is a secure logical schema, the formula given in (8) is satisfied:
∀S_i((S_i ∈ S) ⇒ ∀R_j(R_j ∈ R+ ⇒ S_i ⊈ R_j))
2) Let S_i be a security dependent set for L:
S_i ∈ S
3) Line (1) can be instantiated using S_i:
(S_i ∈ S) ⇒ ∀R_j(R_j ∈ R+ ⇒ S_i ⊈ R_j)
4) Modus ponens is applied using lines (2) and (3):
∀R_j(R_j ∈ R+ ⇒ S_i ⊈ R_j)
5) By formula (1), the property of R+, S_i is not contained in any existing relation of R and there is no attribute set on which S_i is functionally dependent, since by line (4) S_i is not part of any relation in R+ and no meaningful join can produce a new relation containing S_i:
∀R_k(R_k ∈ R ⇒ S_i ⊈ R_k) ∧ ∀A_l(A_l ⊆ U_R ⇒ (A_l → S_i) ∉ F+)
6) Assume that the inference of association among the attributes in S_i can be done. This assumption is the negation of Theorem 1, so the proof by contradiction starts here:
X_L(S_i)
7) By line (6) and formula (7), S_i is either inferable or a subset of an existing relation in R:
∃A_n(A_n ⇒_F S_i ∧ A_n ⊆ U_R) ∨ ∃R_o(R_o ∈ R ∧ S_i ⊆ R_o)
8) To contradict the ∨ expression in line (7), both disjuncts must be contradicted. The first disjunct states that S_i is inferable:
∃A_n(A_n ⇒_F S_i ∧ A_n ⊆ U_R)
9) Line (8) is instantiated with the bound variable A_n, denoting an attribute set that infers S_i:
A_n ⇒_F S_i ∧ A_n ⊆ U_R
10) First ∧-elimination on line (9):
A_n ⇒_F S_i
11) Second ∧-elimination on line (5):
∀A_l(A_l ⊆ U_R ⇒ (A_l → S_i) ∉ F+)
12) The universal quantifier in line (11) is instantiated using A_n:
A_n ⊆ U_R ⇒ (A_n → S_i) ∉ F+
13) Second ∧-elimination on line (9):
A_n ⊆ U_R
14) Modus ponens is applied using lines (12) and (13):
(A_n → S_i) ∉ F+
15) Line (14) enables modus tollens on formula (6):
¬(A_n ⇒_F S_i)
16) Lines (10) and (15) lead to a contradiction:
⊥
17) The first disjunct of line (7) has been contradicted. Next, the second disjunct states that S_i is part of an existing relation:
∃R_o(R_o ∈ R ∧ S_i ⊆ R_o)
18) First ∧-elimination on line (5):
∀R_k(R_k ∈ R ⇒ S_i ⊈ R_k)
19) The existential quantifier in line (17) is instantiated using the bound variable R_o:
R_o ∈ R ∧ S_i ⊆ R_o
20) Line (18) is instantiated using R_o:
R_o ∈ R ⇒ S_i ⊈ R_o
21) First ∧-elimination on line (19):
R_o ∈ R
22) Second ∧-elimination on line (19):
S_i ⊆ R_o
23) Modus ponens is applied using lines (20) and (21):
S_i ⊈ R_o
24) Lines (22) and (23) lead to a contradiction for line (17):
⊥
25) Lines (16) and (24) lead to a contradiction for line (7), which means that it is impossible to infer the association among the attributes in S_i.
26) The end of the proof by contradiction is reached, hence the theorem holds:
¬X_L(S_i)

Proof For Theorem-2
Proof:
1) Assume that L′ is a secure logical schema; then it satisfies the property given in formula (9):
∀S_i((S_i ∈ S) ⇒ ¬X_L′(S_i))
2) Let S_i be a security dependent set for L′:
S_i ∈ S
3) Line (1) can be instantiated using S_i:
(S_i ∈ S) ⇒ ¬X_L′(S_i)
4) Modus ponens is applied using lines (2) and (3):
¬X_L′(S_i)
5) The proof by contradiction begins by assuming the negation of line (4): if L′ is not a secure logical schema, then inference of association among the attributes of a security dependent set S_i is possible according to Theorem 1:
X_L′(S_i)
6) By line (5) and formula (7), S_i is either inferable or a subset of an existing relation in R′:
∃A_n(A_n ⇒_F′ S_i ∧ A_n ⊆ U_R′) ∨ ∃R_o(R_o ∈ R′ ∧ S_i ⊆ R_o)
7) Line (6) is instantiated with the bound variables A_n, denoting an attribute set that infers S_i, and R_o, denoting a relation that contains S_i:
(A_n ⇒_F′ S_i ∧ A_n ⊆ U_R′) ∨ (R_o ∈ R′ ∧ S_i ⊆ R_o)
8) To contradict the ∨ expression in line (7), both disjuncts must be contradicted. The first disjunct:
(A_n ⇒_F′ S_i ∧ A_n ⊆ U_R′)
9) First ∧-elimination on line (8):
A_n ⇒_F′ S_i
10) Formula (6) states that S_i is functionally dependent on an attribute set when the statement in line (9) holds:
(A_n → S_i) ∈ F′+
11) Using formula (11), the dependency in line (10) can be transformed as follows:
(A_n → S_i) ∈ F+
12) S_i in line (11) cannot be contained in A_n, as partial functional dependencies are excluded by the definition of F+ in Definition (3):
A_n ⊉ S_i
13) Using lines (11) and (12), A_n is an identifier set for S_i according to formula (3):
A_n ∈ I^F_Si
14) If L′ is a secure logical schema, then formula (13) is satisfied:
∀S_i ∈ S, ∀R_i ∈ R′, ∀σ ∈ S_i, ∄τ ∈ i^F_σ (({σ} ∪ τ) ⊆ R_i)
15) By the contrapositive of the Identifiable Property's formula (5) and line (14), there is no identifier for S_i, since no attribute of S_i can appear in the same relation as one of its identifiers:
I^F_Si = ∅
16) A_n cannot be an identifier according to line (15):
A_n ∉ I^F_Si
17) Lines (13) and (16) lead to a contradiction:
⊥
18) The first disjunct of line (7) has been contradicted. Next, the second disjunct states that S_i is part of an existing relation:
(R_o ∈ R′ ∧ S_i ⊆ R_o)
19) Second ∧-elimination on line (18):
S_i ⊆ R_o
20) First ∧-elimination on line (18):
R_o ∈ R′
21) S_i cannot be part of any relation according to formula (12):
S_i ⊈ R_o
22) Lines (19) and (21) lead to a contradiction:
⊥
23) Lines (17) and (22) lead to a contradiction for line (5), which means that it is impossible to infer the association among the attributes in S_i.
24) The end of the proof by contradiction is reached, hence the theorem holds:
¬X_L′(S_i)
For example, R = {USERS, LOGS}, where USERS = {id_users, name_users, surname_users}, LOGS = {userid_logs, action_logs, date_logs}, and userid_logs is a foreign key referencing USERS(id_users).
F = {
cid customer → name customer,
cid customer → surname customer,
cid customer → pNo customer,
cid customer → address customer,
cid customer → age customer,
cid customer → gender customer,
pid product → name product,
pid product → model product,
pid product → year product,
pid product → price product,
cid customer → cid buy,
cid buy → cid customer,
pid product → pid buy,
pid buy → pid product,
{cid buy, pid buy, date buy} → quantity buy
}
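The members of F+ used later (e.g. cid buy → age customer, obtained by transitivity) can be derived mechanically with the standard attribute-closure algorithm. The sketch below is illustrative, not the paper's listing:

```python
# Standard attribute-closure algorithm over the functional dependency set F above,
# showing how transitive members of F+ such as "cid buy -> age customer" arise.
FDS = [
    ({"cid customer"}, {"name customer", "surname customer", "pNo customer",
                        "address customer", "age customer", "gender customer",
                        "cid buy"}),
    ({"cid buy"}, {"cid customer"}),
    ({"pid product"}, {"name product", "model product", "year product",
                       "price product", "pid buy"}),
    ({"pid buy"}, {"pid product"}),
    ({"cid buy", "pid buy", "date buy"}, {"quantity buy"}),
]

def closure(attrs, fds):
    """Return the set of attributes functionally determined by `attrs`."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# cid buy determines cid customer and hence, transitively, the whole profile:
print(sorted(closure({"cid buy"}, FDS) & {"age customer", "pNo customer"}))
# → ['age customer', 'pNo customer']
```

Any attribute set whose closure covers a securely dependent attribute is a candidate identifier for it, which is exactly how the identifier sets i^F below are populated.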
TABLE I
CUSTOMER RELATION SAMPLE DATA
cid | name | surname | pNo     | address | age | gender
1   | John | Doe     | 5555555 | NYC     | 21  | M
2   | Mary | Doe     | 6666666 | NYC     | 28  | F
3   | Mary | White   | 7777777 | York    | 28  | F
TABLE II
PRODUCT RELATION SAMPLE DATA
pid | name | model | year | price
1   | PS   | 3     | 2012 | 599,00
2   | XBOX | 360   | 2013 | 799,00
3   | PS   | 4     | 2014 | 899,00
TABLE III
BUY RELATION SAMPLE DATA
cid | pid | date              | quantity
1   | 2   | 20140701-16:28:47 | 1
1   | 3   | 20140702-19:07:11 | 2
3   | 3   | 20140703-12:30:05 | 2
1) cid buy → cid customer ∈ F
2) cid customer → {age customer, gender customer, address customer, pNo customer} ∈ F
3) cid buy → {age customer, gender customer, address customer, pNo customer} ∈ F+ (transitive property of (1) and (2))
As a result of the dependencies in F+, the identifier sets are constructed as follows:
• i^F_{age customer} = {{cid buy}, {cid customer}}
• i^F_{gender customer} = {{cid buy}, {cid customer}}
• i^F_{address customer} = {{cid buy}, {cid customer}}
• i^F_{pNo customer} = {{cid buy}, {cid customer}}
• i^F_{cid buy} = {{cid customer}}
The Decomposition Algorithm works as follows for the CUSTOMER relation:
1) All subsets of the attributes in the CUSTOMER relation are generated (2^7 = 128 subsets).
2) The subsets removed first are the ones containing securely dependent attributes, such as:
• {cid buy, address customer}
• {cid buy, pNo customer}
• {address customer, age customer}
• {address customer, gender customer}
• {pNo customer, age customer}
• {pNo customer, gender customer}
3) The subsets removed next are the ones containing an attribute of a security dependent set together with its identifier, such as:
• {cid buy, cid customer}
• {cid buy, address customer}
• {cid buy, pNo customer}
• {cid buy, age customer}
• {cid buy, gender customer}
• {cid customer, address customer}
• {cid customer, pNo customer}
• {cid customer, age customer}
• {cid customer, gender customer}
4) Unnecessary subsets that are contained in other remaining subsets are also removed.
TABLE IV
CUSTOMER1 RELATION SAMPLE DATA
cid | name | surname
1   | John | Doe
2   | Mary | Doe
3   | Mary | White
TABLE V
CUSTOMER2 RELATION SAMPLE DATA
name | surname | pNo     | address
John | Doe     | 5555555 | NYC
Mary | Doe     | 6666666 | NYC
Mary | White   | 7777777 | York
TABLE VI
CUSTOMER3 RELATION SAMPLE DATA
name | surname | age | gender
John | Doe     | 21  | M
Mary | Doe     | 28  | F
Mary | White   | 28  | F
TABLE VII
QUERY 1 RESULTING RELATION
age | count
21  | 1
28  | 2
TABLE VIII
QUERY 2 RESULTING RELATION
name | surname
John | Doe
Mary | White
OL-DN: Online Learning Based Dual-Domain Network for HEVC Intra Frame Quality Enhancement

Renwei Yang, Shuyuan Zhu, Member, IEEE, Xiaozhen Zheng, and Bing Zeng, Fellow, IEEE

arXiv:2208.04661, doi: 10.48550/arxiv.2208.04661

Index Terms—Chrominance, compression, quality enhancement, HEVC, CNN

Abstract—Convolution neural network (CNN) based methods offer effective solutions for enhancing the quality of compressed image and video. However, these methods ignore using the raw data to enhance the quality. In this paper, we adopt the raw data in the quality enhancement for the HEVC intra-coded image by proposing an online learning-based method. When quality enhancement is demanded, we online train our proposed model at the encoder side and then use the parameters to update the model at the decoder side. This method not only improves model performance, but also makes one model adaptable to multiple coding scenarios. Besides, quantization error in discrete cosine transform (DCT) coefficients is the root cause of various HEVC compression artifacts. Thus, we combine frequency-domain priors to assist image reconstruction. We design a DCT-based convolution layer to produce DCT coefficients that are suitable for CNN learning. Experimental results show that our proposed online learning based dual-domain network (OL-DN) achieves superior performance compared with the state-of-the-art methods.
I. INTRODUCTION
Over the past few years, deep learning based approaches have become popular in video coding research [1], [2], [3], among which convolution neural network (CNN) based methods have demonstrated impressive performance on enhancing the quality of HEVC coded frames. A variable-filter-size CNN with residual learning (VR-CNN) is proposed in [4] to remove HEVC compression artifacts. The frame-enhancement CNN (FE-CNN) proposed in [5] employs long and short skip connections to improve frame quality. The multi-frame guided attention network (MGANet) [6] enhances video quality based on long-short-term frame dependency and coding unit boundaries. The content-aware CNN (CA-CNN) [7] contains multiple models and selects the most appropriate one to enhance each coding tree unit. The adaptive-switching network (ASN) [8] switches between different enhancement models according to frame content. A network adopting recursive design and residual learning (RR-CNN) is proposed in [9] as an HEVC intra frame post-processing filter. Video coding prediction modes and the unit partition map are utilized as side information in the frame-wise quality enhancement CNN (FQE-CNN) [10] to boost quality enhancement of HEVC coded frames.
Different from other image restoration tasks, the raw data is available at the encoder side in the image and video coding task. However, the existing CNN-based methods neglect employing it to enhance coded frame quality. In this work, we use the raw frame at the encoder as the ground truth to online train our proposed model, and we introduce a lightweight adaptive layer (AL) whose online-updated parameters realize a channel re-calibration similar to the channel attention of [11], but with less computation complexity. In OL-DN, we combine wide activation [12] and the channel attention mechanism to design the wide block (WB) as an effective feature extraction block. Furthermore, we use the AL to implement channel attention in the WB, obtaining the online learning based wide block (OL-WB).
Besides, since the quantization error in discrete cosine transform (DCT) coefficients is the root cause of various quality degradations in HEVC, Guo et al. [13] and Zhang et al. [14] introduce the priors of DCT coefficients as side information to remove JPEG artifacts. Nevertheless, the strength of a CNN lies in learning the dependencies between adjacent elements, while DCT coefficients are weakly correlated with their neighbors. The frequency coefficients are therefore suboptimal for CNN learning, and the frequency priors are not efficiently extracted in those methods. We thus propose a DCT-based convolution layer (DCT-conv), which clusters DCT coefficients of the same frequency spectrum into one channel and keeps their relative positional relationship in the spatial domain consistent with that in the frequency domain. DCT-conv strengthens the correlation among frequency coefficients and makes them more compatible with CNN learning. Consequently, our method can effectively extract frequency priors to improve quality enhancement performance.
Coding information has also been adopted in CNN-based quality enhancement solutions. A coding unit mask was defined in [10] as prior information for CNN-based post-processing, and the transform unit partition map was employed in [6] to improve the quality of video frames. Besides coding information, the channel correlation between the luma image (Y) and the chroma images (U and V) can also be utilized to improve the quality of compressed images. However, most existing CNN-based methods process each channel independently, which ignores this channel correlation and cannot achieve high efficiency. In this work, we design a deep network in which the luma component guides the restoration of the chroma component in both the spatial and frequency domains.
II. PROPOSED METHOD
A. Online Learning
Our proposed online learning method updates the OL-DN to gain performance improvement, and its procedure is summarized in Fig. 2. Specifically, an OL-DN model is offline trained as the baseline model and deployed at both the encoder and decoder sides. During practical coding, the OL-DN is online trained at the encoder side with the raw data as the ground-truth image. Then, the online-trained model parameters are coded via Huffman coding and transmitted to the decoder side. Finally, the received parameters update the baseline model at the decoder side. Note that, to limit the extra encoder complexity and the bit-rate overhead of online learning, only the parameters of the designed adaptive layer (AL) are updated during online learning, and only the residual of the parameters between the baseline model and the online-trained model is transmitted.
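The update protocol above can be sketched with plain Python. This is an illustrative toy (hypothetical layer names and weight values, entropy coding omitted), not the paper's implementation:

```python
# Sketch of the Fig. 2 protocol: only the AL weights are online trained at the
# encoder, and only their residual against the baseline is (Huffman-coded and)
# sent; the decoder adds the residual back onto its own baseline copy.
baseline_al = {"al1": [1.00, 1.00, 1.00], "al2": [1.00, 1.00, 1.00]}

# Encoder side: pretend online training nudged the AL weights.
online_al = {"al1": [1.03, 0.97, 1.00], "al2": [0.99, 1.02, 1.01]}
residual = {k: [o - b for o, b in zip(online_al[k], baseline_al[k])]
            for k in baseline_al}            # this is what gets entropy-coded

# Decoder side: update the baseline model with the received residual.
decoder_al = {k: [b + r for b, r in zip(baseline_al[k], residual[k])]
              for k in baseline_al}
print(decoder_al["al1"])  # matches the encoder's online-trained AL weights
```

Transmitting only the small AL residual, rather than full model weights, is what keeps the bit-rate and encoder-side training cost low.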
Fig. 1. The architecture of the proposed OL-DN, built from wide blocks (WB), online learning-based wide blocks (OL-WB), DCT-conv/IDCT-conv layers, pixel shuffle/unshuffle layers, channel attention blocks (CAB), and adaptive layers (AL); it takes the compressed luma and chroma components as inputs and produces the enhanced chroma component.
Online learning enables the model to learn the mapping from the input to the desired output directly, bringing considerable performance improvement to the OL-DN. Therefore, the OL-DN requires neither an extremely deep network nor multiple offline-trained models to outperform previous methods.
B. Network Architecture of OL-DN
Overview: The architecture of the OL-DN is illustrated in Fig. 1. It takes luma and chroma inputs and outputs the enhanced chroma image. The OL-DN mainly consists of the wide block (WB), the online learning-based wide block (OL-WB), and the DCT-based convolution layer (DCT-conv). We employ WBs to extract input features and to reconstruct the output image. In the reconstruction process, we insert the adaptive layer (AL) into the WB to obtain the OL-WB, making the network online learnable. The DCT-conv converts the image from the pixel domain into the frequency domain, and the IDCT-conv performs the inverse process. In the OL-DN, pixel shuffle and unshuffle layers are utilized to reduce computation complexity [16]. Moreover, to avoid exploding and vanishing gradients, we apply skip connections [17] between the inputs and outputs of the OL-DN, the WB, and the OL-WB. Wide block: The WB contains two wide-activated convolution layers and a channel attention block (CAB). We expand the WB input channels before the ReLU activation to achieve better model performance for a given computation complexity [12]. Then, the CAB implements the channel attention mechanism to boost the learning ability of the network [11]. A CAB input ∈ R^(N×H×W) is globally averaged to a 1-D vector, passed through two fully connected layers and a sigmoid activation function, and then used to re-calibrate the channel weights of the CAB input via multiplication, producing the CAB output.
Let conv^k_n denote a convolutional layer with k × k kernels and n output channels. With input x, the output of the WB is obtained by

y = x + CAB(conv^3_64(ReLU(conv^3_256(x))))    (1)
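The CAB step inside Eq. (1) can be sketched in pure Python with toy sizes. This is illustrative only: the wide-activated convolutions are omitted, and the fully connected weights below are arbitrary values, not trained parameters.

```python
import math

# Channel attention block: global average per channel -> two FC layers with a
# ReLU bottleneck -> sigmoid gates in (0, 1) -> rescale each input channel.
def cab(x, w1, w2):
    pooled = [sum(map(sum, ch)) / (len(ch) * len(ch[0])) for ch in x]
    hidden = [max(0.0, sum(p * w for p, w in zip(pooled, row))) for row in w1]
    scores = [sum(h * w for h, w in zip(hidden, row)) for row in w2]
    gates = [1.0 / (1.0 + math.exp(-s)) for s in scores]
    return [[[v * g for v in row] for row in ch] for ch, g in zip(x, gates)]

# Two 2x2 channels; bottleneck of size 1 (the paper uses N -> N/16 -> N).
x = [[[4.0, 4.0], [4.0, 4.0]], [[2.0, 0.0], [0.0, 2.0]]]
w1 = [[0.5, 0.5]]           # FC1: 2 -> 1
w2 = [[1.0], [-1.0]]        # FC2: 1 -> 2
y = cab(x, w1, w2)
# Each output channel is the input channel scaled by its gate, so the channel
# shapes are preserved and the values within a channel stay proportional.
```

The AL used in the OL-WB replaces this whole computation by a single directly learned weight per channel, which is why it is cheap enough to retrain and transmit online.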
Online learning-based wide block: We substitute the AL for the CAB in the WB to obtain the OL-WB. The proposed AL consists of N weighting parameters {w_1, ..., w_N} corresponding to the N channels of the input X, where X = {x_1, ..., x_N} and x_i is the i-th channel of X. The output of the AL is obtained by AL(X) = {x_1 · w_1, ..., x_N · w_N}. Its lightweight structure guarantees low time complexity for online learning and a low bit-rate for transmitting the parameter difference. Since the channel weights of a neural network may be similar at nearby depths, we employ the OL-WB at intervals to maximize the efficiency of online learning. Compared to the CAB, the AL also implements channel attention via channel-wise multiplication, but its re-calibration parameters are directly online learned from the current input. Thus, it achieves more accurate channel re-calibration to improve the learning ability of the network, and it also reduces the computational complexity of the model. DCT-based convolution layer: The DCT-conv converts an image from the pixel domain to the frequency domain, and the IDCT-conv accomplishes the inverse process. More specifically, the DCT-conv consists of N^2 kernels sliding over the input image in a non-overlapped manner, with each kernel corresponding to one frequency spectrum. To obtain a DCT coefficient y_(u,v), the corresponding DCT-conv kernel F_(u,v) contains N^2 weights
    w_(i,j) = c(u) c(v) cos((i+0.5)πu/N) cos((j+0.5)πv/N)    (2)
where u, v, i, j ∈ {0, ..., N−1}, c(u) = √(1/N) if u = 0 and c(u) = √(2/N) otherwise. Moreover, N = 8 in this work. The process of DCT-conv is illustrated in Fig. 3. For a given image x ∈ R^(1×H×W), DCT-conv outputs y ∈ R^(N²×(H/N)×(W/N)). Each N × N block in x is converted into N² DCT coefficients, stored in the output channels from y_(0,0) to y_(N−1,N−1) and representing frequencies from low to high. The corresponding pixel blocks and frequency coefficients are marked with the same color in Fig. 3. In conventional block-wise DCT, adjacent coefficients represent amplitudes of different frequency spectra, so they are weakly correlated, especially at block boundaries. In contrast, our proposed DCT-conv clusters the coefficients of the same frequency spectrum into one channel, and adjacent coefficients then also correspond to neighboring blocks in the spatial domain. Therefore, we strengthen the correlation between elements to match the CNN's strength in learning local relationships, allowing the model to effectively extract frequency priors and improve quality-enhancement performance.

It is observed that although the luma and chroma components are obtained via different weighting factors, they are still highly correlated. Moreover, luma contains clearer textures than the down-sampled chroma component, so luma can provide high-quality structural information to improve chroma quality. Consequently, we extract luma features to guide chroma quality enhancement.
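The kernel construction in (2) can be sketched and sanity-checked in pure Python, assuming the orthonormal DCT-II normalization, i.e. c(0) = √(1/N) and c(u) = √(2/N) otherwise:

```python
import math

def dct_kernel(u, v, n=8):
    """The n*n weights of the DCT-conv kernel F_(u,v) from Eq. (2)."""
    def c(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    return [[c(u) * c(v)
             * math.cos((i + 0.5) * math.pi * u / n)
             * math.cos((j + 0.5) * math.pi * v / n)
             for j in range(n)]
            for i in range(n)]

def apply_kernel(block, kern):
    """One DCT coefficient: elementwise product summed over an n*n block,
    exactly what a stride-n convolution with kernel F_(u,v) computes."""
    return sum(b * k for brow, krow in zip(block, kern)
                     for b, k in zip(brow, krow))
```

Because the kernels form an orthonormal basis, stacking all N² of them as a stride-N convolution is an exact, invertible block DCT, so IDCT-conv can be realized by the transposed operation.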
In both the spatial and the frequency domain, we employ 3×3 convolution layers and WBs to extract luma features, and we aggregate them with chroma features via element-wise addition. The aggregated spatial-domain and frequency-domain features are then concatenated channel-wise to produce the dual-domain Y-guided chroma feature, which is used to reconstruct the final output.
III. EXPERIMENTAL RESULTS
A. Implementation Details
To form the training set, we compress Flickr2K [18] images with HM-16.7 in YUV 4:2:0 format under the all-intra (AI) configuration, with quantization parameter (QP) 27. Then, we crop the Y and U components into 64×64 and 32×32 patches, respectively. Moreover, 15 videos from the HEVC common test conditions [19] are compressed under the AI configuration with QP ∈ {22, 27, 32, 37} to generate the test set. We train only one baseline model and apply it to both the U and V components and all four QPs. Our method is implemented with PyTorch on an NVIDIA GTX 2080Ti GPU. The baseline network is offline-trained for 20 epochs with a batch size of 64 and a learning rate of 1e-4.
B. Comparison of R-D performance
The R-D performance of our method and of the compared methods, in terms of BD-rate (%), is given in Table I. According to Table I, our method outperforms the other methods for every video class, and achieves -32.2% and -35.7% on average for the U and V components, respectively. Moreover, OL-DN brings larger coding gains on the high-resolution classes: high-resolution frames offer more pixels for online training, which improves model performance, and they are coded at larger bit rates, so OL-DN's extra bits occupy a smaller portion.

C. Visual Results Comparison

Fig. 4 shows visual results of our method on HEVC-compressed frames; it can be seen that our method recovers authentic textures after quality enhancement.

D. Ablation Study
In this ablation study, we verify the effectiveness of the proposed components by removing the corresponding modules from OL-DN: online learning (w/o O), frequency-domain priors (w/o F), and Y-component guidance (w/o Y). The results are listed in Table II, and they demonstrate that each proposed component is effective in improving quality-enhancement performance.

E. Comparison of Complexity

Table III and Table IV show the running time at the encoder side and decoder side, respectively. The relative time is defined as ∆t = t'/t, where t' is the neural-network running time and t is the HEVC running time. The results indicate that OL-DN has lower time complexity than the compared methods.
IV. CONCLUSION
In this paper we propose a quality-enhancement network for the HEVC chroma components, termed OL-DN. It updates the model parameters online to gain performance improvement. Also, we design a DCT-conv to efficiently utilize frequency priors to assist quality enhancement. Furthermore, we extract Y-component features and aggregate them with chroma features to guide chroma reconstruction.

Experimental results demonstrate that this network achieves superior performance compared with other state-of-the-art methods. Moreover, the running-time analysis verifies that OL-DN has acceptable complexity at both the encoder and decoder sides.
Fig. 1. The framework of our proposed OL-DN. A green layer indicates a 3×3 convolution layer with n output channels.
Fig. 2. The flow chart of our online learning method.
Fig. 3. The process of DCT-conv converting an image into the frequency domain.
Fig. 4. Visual results of Cactus_1920x1080_50 at QP=37. The first and second rows show the Cb and Cr components, respectively. (a) raw frame; (b) compressed frame; (c) our results.
. Yang, S. Zhu and B. Zeng are with the School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China. X. Zheng is with SZ DJI Technology Co., Ltd., Shenzhen, China.

... train OL-DN to overfit to this frame. Then, we transmit the new parameters to update the OL-DN at the decoder. Therefore, the model gains high enhancement performance, while remaining adaptable to various coding scenarios. Moreover, we design a lightweight adaptive layer (AL) as the only online-trainable part, to reduce online-training complexity. AL implements channel attention similarly to the SE block ...
TABLE I
BD-rate (%) savings for HEVC intra chrominance frames (U/V)

Resolution | Sequence           | VR-CNN [4]   | FE-CNN [5]    | RR-CNN [9]    | FQE-CNN [10]  | OL-DN (Ours)
2560×1600  | A1 Traffic         | -3.5 / -4.1  | -4.2 / -5.8   | -11.4 / -15.4 | -14.1 / -17.1 | -28.3 / -38.6
2560×1600  | A2 PeopleOnStreet  | -5.9 / -5.7  | -8.2 / -8.6   | -33.3 / -29.8 | -30.3 / -29.5 | -46.2 / -38.5
1920×1080  | B1 Kimono          | -1.5 / -1.4  | -5.0 / -5.2   | -20.2 / -8.0  | -24.0 / -11.6 | -32.5 / -38.5
1920×1080  | B2 ParkScene       | -3.3 / -2.5  | -4.4 / -4.1   | -30.9 / -6.9  | -27.2 / -11.1 | -40.1 / -46.1
1920×1080  | B3 Cactus          | -3.9 / -6.3  | -5.5 / -10.7  | -9.2 / -11.7  | -14.6 / -22.5 | -27.4 / -37.1
1920×1080  | B4 BQTerrace       | -3.7 / -3.0  | -5.3 / -6.4   | -15.9 / -16.4 | -21.6 / -33.0 | -34.2 / -41.5
1920×1080  | B5 BasketballDrive | -3.3 / -5.3  | -10.8 / -12.6 | -23.3 / -26.7 | -25.8 / -30.4 | -38.4 / -38.3
832×480    | C1 RaceHorses      | -6.7 / -11.0 | -8.4 / -12.5  | -16.5 / -23.2 | -15.2 / -23.4 | -21.1 / -31.3
832×480    | C2 BQMall          | -5.3 / -5.3  | -6.9 / -7.6   | -29.3 / -32.3 | -28.8 / -33.0 | -27.7 / -27.7
832×480    | C3 PartyScene      | -4.4 / -4.4  | -5.4 / -5.7   | -17.8 / -21.4 | -18.9 / -23.2 | -26.4 / -24.1
832×480    | C4 BasketballDrill | -5.8 / -6.8  | -12.2 / -14.9 | -26.7 / -31.8 | -34.6 / -41.4 | -41.4 / -44.6
416×240    | D1 RaceHorses      | -8.5 / -11.5 | -9.8 / -12.8  | -28.9 / -33.7 | -26.9 / -32.9 | -27.0 / -31.0
416×240    | D2 BQSquare        | -4.2 / -6.4  | -3.8 / -6.8   | -26.6 / -26.5 | -22.1 / -29.2 | -26.9 / -37.7
416×240    | D3 BlowingBubbles  | -8.4 / -7.9  | -8.5 / -9.0   | -19.7 / -23.6 | -25.2 / -27.7 | -25.5 / -30.5
416×240    | D4 BasketballPass  | -4.4 / -6.5  | -8.2 / -10.3  | -27.1 / -29.7 | -31.9 / -30.1 | -31.1 / -28.3
2560×1600  | class A            | -4.7 / -4.9  | -6.2 / -7.2   | -22.3 / -22.6 | -22.2 / -23.3 | -37.3 / -38.6
1920×1080  | class B            | -3.1 / -3.7  | -6.2 / -7.8   | -19.9 / -13.9 | -22.6 / -21.7 | -34.7 / -40.3
832×480    | class C            | -5.6 / -6.9  | -8.2 / -10.2  | -22.6 / -27.2 | -24.4 / -30.2 | -29.1 / -31.9
416×240    | class D            | -6.4 / -8.1  | -7.6 / -9.7   | -25.6 / -28.4 | -26.5 / -30.0 | -27.7 / -31.9
           | Overall Average    | -5.0 / -5.9  | -7.1 / -8.7   | -22.6 / -23.0 | -23.9 / -26.3 | -32.2 / -35.7
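The coding gains above are reported as Bjøntegaard-delta bit-rate (BD-rate). For reference, here is a self-contained pure-Python sketch of the common four-point BD-rate computation (cubic interpolation of log10 bit-rate over the overlapping PSNR range); the rate/PSNR numbers in any example are illustrative, not measurements from this paper:

```python
import math

def solve4(A, b):
    """Gaussian elimination with partial pivoting for a 4x4 linear system."""
    n = 4
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def cubic_through(psnr, log_rate):
    """Coefficients a0..a3 of the cubic interpolating four (psnr, log_rate) points."""
    return solve4([[1.0, p, p * p, p ** 3] for p in psnr], log_rate)

def integral(coef, lo, hi):
    """Integral of a0 + a1 x + a2 x^2 + a3 x^3 over [lo, hi]."""
    F = lambda x: sum(c * x ** (k + 1) / (k + 1) for k, c in enumerate(coef))
    return F(hi) - F(lo)

def bd_rate(anchor, test):
    """BD-rate in percent; anchor/test are four (bitrate, psnr) pairs each."""
    pa, ra = [p for _, p in anchor], [math.log10(r) for r, _ in anchor]
    pb, rb = [p for _, p in test], [math.log10(r) for r, _ in test]
    lo, hi = max(min(pa), min(pb)), min(max(pa), max(pb))
    avg = (integral(cubic_through(pb, rb), lo, hi)
           - integral(cubic_through(pa, ra), lo, hi)) / (hi - lo)
    return (10.0 ** avg - 1.0) * 100.0
```

A negative BD-rate means the test codec needs fewer bits than the anchor for the same quality, which is the sense in which the table entries are savings.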
TABLE II
BD-rate (%) results of ablation study on U/V components

Class   | OL-DN (w/o O) | OL-DN (w/o F) | OL-DN (w/o Y) | OL-DN
A       | -21.5 / -25.0 | -34.2 / -34.7 | -11.8 / -11.9 | -37.3 / -38.6
B       | -16.9 / -16.2 | -32.0 / -34.2 | -11.0 / -16.3 | -34.7 / -40.3
C       | -20.4 / -19.3 | -27.8 / -30.0 | -13.1 / -16.3 | -29.1 / -31.9
D       | -18.7 / -17.9 | -26.6 / -30.9 | -13.0 / -15.0 | -27.7 / -31.9
Average | -19.4 / -19.6 | -30.2 / -32.5 | -12.2 / -14.9 | -32.2 / -35.7
TABLE III
Comparison of increased time ∆t at encoder side

Class   | RR-CNN [9] | FQE-CNN [10] | OL-DN
A       | 18.00%     | 22.57%       | 5.21%
B       | 19.00%     | 23.48%       | 3.60%
C       | 18.00%     | 19.77%       | 2.80%
D       | 39.00%     | 19.91%       | 4.90%
Average | 23.50%     | 21.43%       | 4.13%
TABLE IV
Comparison of complexity at decoder side

Class   | Decoding time (ms), FQE-CNN [10] | Decoding time (ms), OL-DN | Increased decoding-time ratio ∆t, RR-CNN [9] | Increased decoding-time ratio ∆t, OL-DN
A       | 565 | 337 | 2947% | 24%
B       | 285 | 177 | 2947% | 29%
C       | 55  | 42  | 1667% | 22%
D       | 15  | 28  | 1478% | 56%
Average | 230 | 146 | 1825% | 33%
REFERENCES

[1] J. Lei, Y. Shi, Z. Pan, D. Liu, D. Jin, Y. Chen, and N. Ling, "Deep multi-domain prediction for 3D video coding," IEEE Trans. Broadcast., vol. 67, no. 4, pp. 813-823, 2021.
[2] N. Li, Y. Zhang, and C.-C. J. Kuo, "High efficiency intra video coding based on data-driven transform," IEEE Trans. Broadcast., pp. 1-14, 2021.
[3] C. Liu, K. Jia, and P. Liu, "Fast depth intra coding based on depth edge classification network in 3D-HEVC," IEEE Trans. Broadcast., pp. 1-13, 2021.
[4] Y. Dai, D. Liu, and F. Wu, "A convolutional neural network approach for post-processing in HEVC intra coding," in Proc. Int. Conf. Multimedia Modeling (MMM), vol. 10132, Jan. 2017, pp. 28-39.
[5] F. Li, W. Tan, and B. Yan, "Deep residual network for enhancing quality of the decoded intra frames of HEVC," in Proc. IEEE Int. Conf. Image Process. (ICIP), Oct. 2018, pp. 3918-3922.
[6] X. Meng, X. Deng, S. Zhu, X. Zhang, and B. Zeng, "A robust quality enhancement method based on joint spatial-temporal priors for video coding," IEEE Trans. Circuits Syst. Video Technol., vol. 31, no. 6, pp. 2401-2414, June 2021.
[7] C. Jia, S. Wang, X. Zhang, S. Wang, J. Liu, S. Pu, and S. Ma, "Content-aware convolutional neural network for in-loop filtering in high efficiency video coding," IEEE Trans. Circuits Syst. Video Technol., vol. 28, no. 7, pp. 3343-3356, July 2019.
[8] W. Lin, X. He, X. Han, D. Liu, J. See, J. Zou, H. Xiong, and F. Wu, "Partition-aware adaptive switching neural networks for post-processing in HEVC," IEEE Trans. Multimedia, vol. 22, no. 11, pp. 2749-2763, Nov. 2020.
[9] S. Zhang, Z. Fan, N. Ling, and M. Jiang, "Recursive residual convolutional neural network-based in-loop filtering for intra frames," IEEE Trans. Circuits Syst. Video Technol., vol. 30, no. 7, pp. 1888-1900, July 2020.
[10] H. Huang, I. Schiopu, and A. Munteanu, "Frame-wise CNN-based filtering for intra-frame quality enhancement of HEVC videos," IEEE Trans. Circuits Syst. Video Technol., vol. 31, no. 6, pp. 2100-2113, June 2021.
[11] J. Hu, L. Shen, S. Albanie, G. Sun, and E. Wu, "Squeeze-and-excitation networks," IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 8, pp. 2011-2023, Apr. 2019.
[12] Y. Fan, J. Yu, and T. S. Huang, "Wide-activated deep residual networks based restoration for BPG-compressed images," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. Workshops (CVPRW), 2018.
[13] J. Guo and H. Chao, "Building dual-domain representations for compression artifacts reduction," in Proc. Eur. Conf. Comput. Vis. (ECCV), Oct. 2016.
[14] X. Zhang, W. Yang, Y. Hu, and J. Liu, "DMCNN: Dual-domain multi-scale convolutional neural network for compression artifacts removal," in Proc. IEEE Int. Conf. Image Process. (ICIP), Oct. 2018, pp. 390-394.
[15] J. Li, F. Fang, K. Mei, and G. Zhang, "Multi-scale residual network for image super-resolution," in Proc. Eur. Conf. Comput. Vis. (ECCV), 2018.
[16] W. Shi et al., "Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), June 2016, pp. 1874-1883.
[17] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 770-778.
[18] R. Timofte et al., "NTIRE 2017 challenge on single image super-resolution: Methods and results," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. Workshops (CVPRW), 2017, pp. 1110-1121.
[19] F. Bossen, "Common test conditions and software reference configurations," JCTVC-L1100, Jan. 2013, pp. 1-4.
arXiv:1612.01091
A new rule for almost-certain termination of probabilistic- and demonic programs

Annabelle McIver, Macquarie University, Australia
Carroll Morgan, University of New South Wales and Data61, Australia

Abstract. Extending our own and others' earlier approaches to reasoning about termination of probabilistic programs, we propose and prove a new rule for termination with probability one, also known as "almost-certain termination". The rule uses both (non-strict) super martingales and guarantees of progress, together, and it seems to cover significant cases that earlier methods do not. In particular, it suffices for termination of the unbounded symmetric random walk in both one and two dimensions: for the first, we give a proof; for the second, we use a theorem of Foster to argue that a proof exists. Non-determinism (i.e. demonic choice) is supported; but we do currently restrict to discrete distributions.
1 Introduction
This paper concerns proof of almost-sure termination for probabilistic-and demonic programs, ones that move from one state to another by first choosing a (discrete) distribution demonically from a set of distributions, and then choosing a new state probabilistically according to that distribution.
Thus we view a program abstractly as a (probabilistic/demonic) transition system; and we are interested in proving the eventual reachability with probability one of a given set of target states (when it is indeed the case). Our strategic aim is to express our techniques in a form that can be applied to probabilistic program code in situ, i.e. without the need to construct the programs' underlying transition systems explicitly (although of course we rely on their existence). That is, we seek (and find) proof-rules that require no more than local reasoning in the source code.
In probabilistic programming over a finite state-space S, say, a typical rule is one that generalises the "variant rule" for standard (non-probabilistic but still demonic) programs: to each state is assigned a non-negative integer, a variant bounded above and below, with all states inside some target set S0 ⊆ S assigned variant 0. One then shows by local reasoning, typically over the source code of a loop body, that from each state in S* = S−S0 there is a non-zero probability of transiting to a (different) state with strictly smaller (integer) variant. (1)

If that can be shown, then the rule guarantees that almost all paths in the transition system eventually lead to S0, where "almost all" means that the paths not included have probability zero even if taken all together. This probabilistic rule's soundness follows from an appeal to a zero-one law [12,23,20] which roughly says:
If there is some ε>0 such that the probability of eventually reaching a target set of states is everywhere at least ε, then that probability is one.
For infinite-state systems however, although such zero-one laws are still valid, their ε-conditions are not so easily met by local reasoning. In particular the actual values of the probabilities attached to the transitions, which in fact are irrelevant in finite-state transition systems [24,21], now can make the difference between almost-certain termination or not. A typical response to this issue is to replace "non-zero probability" in (1) above with "probability bounded away from zero". And that bound can depend intricately on the transitions' actual probabilities.
That challenge notwithstanding, recent important work [4,8,3] has shown how local reasoning with super-martingales can be applied to solve the termination problem in a wide class of infinite-state probabilistic programs.
In this paper we combine those successes with some of our own earlier work, showing how to use super-martingale reasoning together with a progress rule to reason about an important class of transition systems whose termination seems to be beyond the state of the art, for source-level reasoning at least. Our key insight is the observation that the combination of a super-martingale with a local but parametrised progress-condition (in a sense we explain below) implies the conditions of the zero-one rule.
Our specific contributions are:
1. A new rule that generalises a number of currently known rules (including our own) for establishing almost-certain termination; 2. A demonstration of a general zero-one proof technique which can be applied in arbitrary infinite state systems; 3. A thorough analysis of the applicability of the new rule together with a suite of representative examples; and 4. A limited survey of some pre-computer-science mathematical results that contribute to this endeavour [16,9,1] Finally we note that our strategic goal, to translate this and other rules to ones that can be applied directly to program code, lies in the seminal work of Kozen [18] for probabilistic semantics, later generalised by us to include demonic non-determinism and abstract transition systems [25,20] and even more recently expanded to include explicit Markov-chain models [10].
2 Informal description of the new rule for termination

2.1 Setting of the new rule, and its purpose

Let S be a state space, possibly infinite, and let S be divided into two disjoint subsets: one is S0, the states where termination is deemed to have occurred; and the other is S* for the rest. A transition function T is given, taking any state s in S* to a set of discrete distributions on all of S; a transition from some s in S* occurs by first selecting arbitrarily a distribution δ from T(s) and then choosing probabilistically a next state s' according to the probabilities given by δ. (We discuss this treatment of demonic nondeterminism more thoroughly in §3 below.) Our purpose is to give a method for proving that, from any s in S*, repeated transitions according to T will eventually reach S0 with probability one. That property is conventionally called almost-certain termination. For brevity, from now on we will write AC for "almost certain(ly)" and ACT for "almost-certain(ly) terminat(ion/ing)".
2.2 Informal description of the rule
By analogy with existing approaches to proof of termination, we base our technique on a "variant" function over the states and require it to have certain properties. Informally described, they are as follows:
Define a variant: A non-negative variant function V from (all of) S into the non-negative reals is such that V is 0 on all of S0 and V is strictly positive on S*. It can be unbounded (above), but not infinite. Note that V need not be integer-valued.

Impose a super-martingale property: Variant V is a super-martingale wrt. transitions T, i.e. for any s in S* and any distribution δ in T(s), the expected value of V on δ, i.e. on the states reached in one δ-mediated transition of T from s, is no more than the value V(s) that V had at s itself. That is,

    For all s in S* and δ in T(s) we have E_δ V ≤ V(s),

where we write E_δ V for the expected value, over discrete distribution δ on S, of real-valued function V on S. Note that we do not require a strict decrease of the expected value, and although V is defined on S0, we do not require that T be defined there.
Impose a progress property: The transitions T make progress towards S0. We require two fixed strictly positive functions p (for "probability") and d (for "decrease"), defined for all positive reals, such that in a state s of S* with V(s) equal to some v, any transition δ in T(s) is guaranteed, with probability at least p(v), to decrease the variant by at least d(v). Furthermore d(v) and p(v) must be non-increasing as v itself increases. That is,

    There are fixed functions p, d on positive reals v, with 0 < p(v) ≤ 1 and 0 < d(v), such that whenever v = V(s) for some s in S* and δ in T(s), we have

        δ({s' | V(s') ≤ v − d(v)}) ≥ p(v),

    where for S' ⊆ S we write δ(S') for the sum of δ(s') over s' in S', and for any 0 < v < v' we have p(v') ≤ p(v) and d(v') ≤ d(v).

Note that p, d in progress are functions of the variant, defined over all positive reals, and that even for v not in the V-image of S the non-increasing conditions for p(v), d(v) must still be satisfied. (See §4.2's "What happens when V is bounded".)
The rule is proved in §4.1 below.
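As an illustration of how the three conditions can be checked mechanically on a (finite fragment of a) probabilistic/demonic transition system, here is a small pure-Python sketch; the particular states, T, V, p and d below are our own illustrative choices, with T given as a map from states to lists of distributions (dicts from next-state to probability):

```python
def check_rule(states, in_s0, T, V, p, d):
    """Check the variant, super-martingale and progress conditions
    on the given (finite) collection of states."""
    for s in states:
        if in_s0(s):
            assert V(s) == 0                           # variant is 0 on S0
            continue
        v = V(s)
        assert v > 0                                   # strictly positive on S*
        for delta in T(s):                             # every demonic choice
            exp_v = sum(pr * V(t) for t, pr in delta.items())
            assert exp_v <= v + 1e-12                  # super-martingale
            drop = sum(pr for t, pr in delta.items()   # progress: mass that
                       if V(t) <= v - d(v))            #  drops by at least d(v)
            assert drop >= p(v) - 1e-12
    return True

# Symmetric random walk on the non-negative integers, terminating at 0,
# with a demonic choice between the fair step and a step biased towards 0.
T = lambda s: [{s - 1: 0.5, s + 1: 0.5}, {s - 1: 2 / 3, s + 1: 1 / 3}]
ok = check_rule(range(0, 50), lambda s: s == 0, T,
                V=lambda s: s, p=lambda v: 0.5, d=lambda v: 1.0)
```

The check only enumerates a finite window of states, but for this walk the conditions are uniform in s, so the finite check is representative of the infinite system.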
2.3 Discussion and comparison
Our main innovation in §2.2 is, in our progress condition, to impose the usual "bounded away from zero" criterion not on S* as a whole but instead only on successively larger subsets of it. That is, we apply it with respect to certain functions p and d, and the effect of their non-increasing criteria is to ensure that, as the subsets {s: S* | V(s) ≤ v} of S* grow larger, the progress conditions imposed on them grow weaker but never decrease to "none". This avoids the treacherous Zeno-effects that can occur when some progress is always made but only with ever-smaller steps: the V-decrease condition ("as far as d with probability at least p") can only be strengthened as V(s) moves towards 0. But it also avoids the need to set a uniform ε-progress condition for all of S*. Although the generality of p, d might seem complicated, in many special cases it is very simple. One such is the "distance from 0" variant on the one-dimensional symmetric random walk, where p, d can be constant functions: we take S to be the integers, both positive and negative, with S0 = {0} and V(s) = |s|, and we define p, d to be everywhere 1/2, 1 respectively; with probability at least 1/2 the variant decreases by at least 1. That is all that is needed to establish ACT for the symmetric random walk (§7.1).
2.4 Other approaches
In our own, earlier probabilistic-variant rule [23, Sec. 6], [20, Sec. 2.7], we effectively made p, d constants and imposed no super-martingale condition, but instead bounded V above over S*, making that rule insufficient for the random walk. Later, however, we did prove the random walk to be ACT using a rule more like the current one [20, Sec. 3.3].
Chakarov and Sankaranarayanan [4] consider the use of martingales for the analysis of infinite-state probabilistic programs, and Chakarov has done more extensive work [3].
In [4] it's shown that a ranking super-martingale implies ACT, and a key property of their definition for ranking super-martingale is that there is some constant ε>0 such that the average decrease of the super-martingale is everywhere (except for the termination states) at least ε. Their program model is assumed to operate over discrete state spaces, without nondeterminism.
That work is an important step towards applying results from probability theory to the verification of infinite-state probabilistic programs.

Fioriti and Hermanns [8] also use ranking super-martingales, with results that provide a significant extension to Chakarov and Sankaranarayanan's work [4]. Their program model includes both non-determinism and continuous probability distributions over transitions. They also show completeness for the class of programs whose expected time to termination is finite. That excludes the random walk however; but they do demonstrate by example that the method can still apply to some systems which do not have finite termination time.
More recently still, Chatterjee, Novotný and Žikelić [5] study techniques for proving that programs terminate with some probability (not necessarily one). Their innovation is to introduce the concept of "repulsing super-martingales" -- these are also super-martingales with values that decrease outside of some defined set. Repulsing super-martingales can be used to show lower bounds on termination probabilities, and as certificates to refute almost-sure termination and finite expected times to termination. (See also §9.2, §9.5.)
There are a number of other works that demonstrate tool support based on the above and similar techniques. All the authors above [4,8,5] have developed and implemented algorithms to support verification based on supermartingales. Esparza, Gaiser and Kiefer [7] develop algorithmic support for ACT of "weakly finite" programs, where a program is weakly finite if the set of states reachable from any initial state is finite. Kaminski et al. [15] have studied the analysis of expected termination times of infinite state systems using probabilistic invariant-style reasoning, with some applications to ACT. In even earlier work Celiku and McIver [2] explore the mechanisation of upper bounds on expected termination times, taking probabilistic weakest pre-conditions [20] for their model of probability and non-determinism.
3 Our treatment of demonic nondeterminism
Before proving the rule of §2.2, we explain our treatment of demonic and probabilistic choice together.
Our transition function T is of type S*→PDS, where P is the powerset constructor and D is the discrete-distribution constructor: thus for a state s in S*, its possible transitions T(s) comprise a set (P) of discrete distributions (D) of states (S). It simultaneously extends (1) the conventional model S→PS of demonic (non-probabilistic) programs and (2) e.g. Kozen's model S→DS [18], and later the Plotkin and Jones model [14], of probabilistic (non-demonic, i.e. deterministic) programs. For (1) the embedding PS ↪ PDS is as sets of point distributions, and for (2) the embedding DS ↪ PDS is as singleton sets of distributions.
The full probabilistic/demonic model has been thoroughly explored in earlier work [25,13,20] and has an associated simple programming language pGCL, for which it provides a denotational semantics. Using pGCL semantics, we can model our system as a while-loop of the form

    while s ∉ S0 do "choose s' according to T(s)"; s := s' end ,

where "choose s' according to T(s)" is simply a pGCL probabilistic/demonic assignment statement and the semantics of while is given as usual by a least fixed-point. An alternative, more recent approach is concerned with expected time to termination, where while-loops' semantics are given equivalently as limits of sequences of distributions [15]. Either way, the resulting set of final distributions (non-singleton, if there is nondeterminism) comprises sub-distributions, summing to no more than one (rather than to one exactly), where the "one deficit" is the probability of never escaping the loop. Proving ACT then amounts to showing that all those sub-distributions are in fact full distributions, summing to one.
Our relying on well-established semantics for demonic choice and probability together is the reason we do not have to construct a scheduler explicitly, as some approaches do: the scheduler's actions are "built in" to the set-of-distributions semantics.

Proof. Recall that the state space is S, that the termination subset is S0 ⊆ S, and that S* = S−S0 is the rest. The transition function T is of type S*→PDS and the variant V is of type S→R≥0, with V(S0) = {0}.
Fix some non-negative real number H (for "high"), and consider the subset S_H of S* whose variants are no more than H, that is {s: S* | V(s) ≤ H}. By the non-increasing constraint on p, d, we have that for every s in S_H any transition decreases V(s) by at least d(v) ≥ d(H) = d_H say, with probability at least p(v) ≥ p(H) = p_H. Note that there does not have to be an actual s in S* with V(s) = H for this condition to apply. Now fix s in S_H, with therefore V(s) ≤ H. The probability that V will eventually become 0 via transitions from that s is no less than (p_H)^(H/d_H), since taking the probability-at-least-p_H option to decrease V by at least d_H, on every transition, suffices if that option is taken at least H/d_H times in a row.
Since the above paragraph applies for all s in S_H, the probability of transitions' escaping S_H eventually is bounded away from zero by (p_H)^(H/d_H) uniformly for all of S_H. We can therefore appeal to the zero-one law [12], [23, Sec. 6], [20, Sec. 2.6], which reads informally:

Let process P be defined over a (possibly infinite) state space S, and suppose that from every state in some subset S_H of S the probability of P's eventual escape from S_H is at least ε, for some fixed ε > 0. Then P's escape from S_H is AC: it occurs with probability one.
Note that the zero-one law applies even if S_H is infinite.
It is possible however that the escape occurs from S H not by setting V to 0 but rather by setting V to some value greater than H, i.e. occurs "at the other end". Because of possible nondeterminism, there might be many distributions describing the escape from S H ; but because we know escape is AC, they will all be full distributions, i.e. summing to one. Let δ be any one of them.
Set z = δ(S0), i.e. so that the probability of indeed escaping to V=0 is z. Then the probability of escaping to V>H instead is the complementary 1−z for that δ, and the expected value of V over δ is at least z×0 + (1−z)×H, since the actual value of V in the latter case is at least H. But by super-martingale, we know that the expected value of V when escape occurs from S_H, having started from s, cannot be more than V(s) itself. So we have (1−z)H ≤ V(s), whence z ≥ 1 − V(s)/H.
Now we simply note that the inequality z ≥ 1 − V(s)/H holds for any choice of s, H and, in particular, having fixed our s we can make H arbitrarily large.(†) Thus z, the probability of escape to V=0, i.e. to S0, must be 1 for all s. □
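As a numerical sanity check of that final bound z ≥ 1 − V(s)/H (our sketch, separate from the proof): for the symmetric random walk with V(s) = s, the exact escape-low probabilities satisfy z_s = (z_{s−1} + z_{s+1})/2 with z_0 = 1 and z_{H+1} = 0, and can be computed by fixed-point iteration.

```python
def escape_low_probs(H, iters=100_000):
    """z[s] = probability of reaching 0 before H+1 for the symmetric
    random walk on {0,...,H+1}, both ends absorbing: Jacobi iteration
    on z_s = (z_{s-1} + z_{s+1})/2 with z_0 = 1 and z_{H+1} = 0."""
    z = [1.0] + [0.0] * (H + 1)
    for _ in range(iters):
        z = [1.0] + [(z[s - 1] + z[s + 1]) / 2 for s in range(1, H + 1)] + [0.0]
    return z

H = 20
z = escape_low_probs(H)
# Theorem 1's bound with V(s) = s: escape to 0 with probability >= 1 - s/H.
bound_ok = all(z[s] >= 1 - s / H - 1e-9 for s in range(1, H + 1))
```

The exact solution here is z_s = 1 − s/(H+1), which indeed dominates 1 − s/H.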
4.2 Discussion of the rule and the necessity of its conditions
What happens if V is not a super-martingale? Then ACT could be (unsoundly) proved e.g. for a biased random walker biased away from 0, say 1/3 probability of stepping closer to zero and 2/3 of stepping away. Setting its variant equal to its distance from zero satisfies progress, but not super-martingale.
What happens without progress? Then a stationary walker would be compliant, satisfying super-martingale but not progress. (Remember our super-martingale does not require strict decrease: a stationary walker would satisfy it.)
Why not allow V to go below 0? In the proof we argued the expected value of V on exit from S_H would be at least z×0 + (1−z)×H; but it could be much lower if an exit in the zero direction could set V to a negative value.
In fact V can be boundedly negative: we would just shift the whole argument up. But V must be bounded below, otherwise the rule is unsound. Consider the "captured spline" example (in Fig. 7 of §7.7 below), and replace the 0-variants for escape by variants −2(n+1)^2. The ∇ rule (defined in §5) would now apply with ∇ the constant function −1. For the current p, d rule we could use the large negative escape-variants to increase the (positive) along-the-spline variants so that they became unbounded.
What happens when V is bounded? Consider again the 2/3-1/3 biased random walk. We can synthesise a (super-)martingale by setting V(n)=0 when n=0 and solving V(n−1)/3 + 2V(n+1)/3 = V(n) otherwise: it gives the definition V(n) = (2^n − 1)/2^{n−1}, with which super-martingale is satisfied by construction. Then, since V is injective, we can go on to define p(v), d(v) to be the probability and decrease, respectively, actually realised by the process whenever its variant is v, appearing at first sight to satisfy progress trivially: set p to be the constant function 1/3 and d(v) to be 2−v in this example. Both p, d are non-increasing and strictly positive over variant values taken by the process.
But progress is not satisfied, because the functions d, p must be defined and non-increasing over all positive values v and, in particular, not only over variant values actually taken by the process: that is, they must be defined even for values v for which there is no s with v = V(s). In this example d(v) decreases to 0 as v approaches but never reaches 2, and so we cannot set a non-zero and non-increasing value for d(2) itself. (In §7.7 a similar example is given where instead it is p(2) that cannot be defined.)
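The bounded synthesised variant can be checked with exact rationals (a sketch; the walk and V are as in the text):

```python
from fractions import Fraction

def V(n):
    """Synthesised martingale for the 1/3-towards-0 walk:
    V(n) = (2^n - 1)/2^(n-1), i.e. 2 - 1/2^(n-1), with V(0) = 0."""
    return Fraction(0) if n == 0 else Fraction(2 ** n - 1, 2 ** (n - 1))

# V(n) = V(n-1)/3 + 2 V(n+1)/3 exactly, so super-martingale holds
# by construction ...
is_martingale = all(V(n) == V(n - 1) / 3 + 2 * V(n + 1) / 3
                    for n in range(1, 60))
# ... but V is bounded above by 2, so a non-increasing strictly positive d
# would impossibly need d(2) = 0: progress fails.
is_bounded = all(V(n) < 2 for n in range(0, 60))
```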
The point in the proof at which this "any v whatsoever" is used is marked by the marginal dagger (†), where we let H increase without bound. That H does not have to be V(s) for any "actual" s.
In summary: if V is bounded but the values of "actual" d (or p) are not bounded away from zero, then for any H greater than all V(s) there can be no non-zero value for d(H) (or p(H)), and the proof fails.
Why are p, d functions of the variant rather than of the state? Indeed they could have been defined as functions of the state (simply by composing them with V). In that case the non-increase conditions would become: if states s, s′ are such that V(s) < V(s′) then p(s′) ≤ p(s) and d(s′) ≤ d(s).
But we would have to add that V over S * must either take only finitely many values or be unbounded, because we would then no longer be considering the v's that correspond to no s. That conflicts with our "purely local reasoning" goal.
Why not simply require V to be unbounded? For a finite state space V cannot be unbounded; yet for finite state spaces a termination argument is (usually) easy. As our rule stands, termination for finite state-spaces is handled as part of the general argument, not as a special case.
Are there alternative formulations of progress? Yes: there are several alternatives.
The rule (§2.2) uses progress in its proof (§4.1) only to show bounded-away-from-zero escape from an arbitrary but bounded V-region (0, H] that we called S_H. That is, starting from any s in S_H the probability of reaching eventually an s′ with either V(s′)=0 or V(s′)>H is bounded away from 0, where the bound can depend on H. (It is super-martingale that then converts that to AC escape to S0 alone, that is V(s′)=0, by letting H increase without bound.) Any other condition with the same force would do, and a significant programming-oriented example is given in §5.
Another alternative, more suited to the situation where S, T are laid out as a transition system or as a Markov process (but not so suitable for systems expressed as programs), is simply to require that the V-image of S* have no accumulation points. (An example of this kind of condition is found in [16, Item (i)] and [9, Condition (2), proper divergence].) In that case the size of the set V(S_H), i.e. the "number of V's" in any region (0, H], is required to be finite for any H. If the system is deterministic (or at least only boundedly nondeterministic), then if in every transition V must decrease, by no matter how small an amount and with no matter how small a probability, escape from (0, H] is assured because Zeno-effects cannot occur in a finite set: the p(v), d(v) required in our rule (§2.2) can be synthesised by taking minima over the whole (finite) set (0, v], i.e. with H=v.
Defining p, d everywhere, rather than only on "actual" v's, is not a burden if the v's are unbounded: define for example p̂(v′) to be the infimum of p(v) over all actual v's with v ≤ v′. Those extra values p̂(v′) are never used, since there are no states with V(s) = v′: just the existence of the extra values is enough.
The only time this trick does not work is precisely the case we are discussing, where v is bounded but p(v) tends to zero. (Both [16,9] deal only with deterministic systems, i.e. stationary Markov processes.)
5 An equivalent rule based on parametrised strict super-martingales
Pursuing the theme of equivalent formulations of progress (mentioned just above), we give here an equivalent rule in which progress is removed altogether, and replaced by parametrically strict super-martingale as follows:
There must be a non-increasing strictly positive function ∇ on the positive reals such that whenever we have V(s) = v for some s, v and some δ in T(s), then E_δ V ≤ v − ∇(v). (2)
Call this formulation the "∇ rule", and the original the "p, d rule". Although the ∇ rule is simpler to state than the p, d rule, in practice the definition of ∇ can be complicated; often the definitions of p, d are more straightforward. The similarity of this rule with other strict super-martingale rules is clear: our condition is weaker (the rule stronger) because we do not impose a uniform ε across all of S*. We show first that the ∇ rule implies the p, d rule.
Lemma 1. (Technical)
Let f be a non-negative function over the non-negative reals, and let y, y′ be non-negative reals; let δ be a discrete distribution on the non-negative reals. Then

E_δ f ≤ y implies δ{x | f(x) < y′} ≥ 1 − y/y′. (3)

That is, if δ guarantees that the expected value of f is no more than some y, then for any y′ we have that δ is guaranteed with probability at least 1 − y/y′ to set f to a value below y′.

Proof Let p be the aggregate probability that δ assigns to {x | f(x) ≥ y′}. Then, since δ is fixed, the smallest possible value of E_δ f is p·y′, found by making f itself as small as possible: that occurs when f(x)=0 for all x with f(x) < y′ and f(x)=y′ for all x with f(x) ≥ y′. Thus p·y′ ≤ E_δ f ≤ y, whence p ≤ y/y′ and so the complementary δ{x | f(x) < y′} is 1−p ≥ 1 − y/y′. □
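Lemma 1 is a complement of Markov's inequality, and is easy to spot-check over random finite distributions (a sketch; all names are ours):

```python
import random

def lemma1_holds(delta, f_vals, y_prime):
    """Check (3) for a finite distribution delta over points with
    non-negative f-values f_vals, taking y = E_delta f at the bound."""
    y = sum(p * v for p, v in zip(delta, f_vals))        # E_delta f <= y
    mass_below = sum(p for p, v in zip(delta, f_vals) if v < y_prime)
    return mass_below >= 1 - y / y_prime - 1e-12

rng = random.Random(1)

def random_dist(n):
    w = [rng.random() + 1e-9 for _ in range(n)]
    t = sum(w)
    return [x / t for x in w]

checks = all(
    lemma1_holds(random_dist(10),
                 [rng.uniform(0, 100) for _ in range(10)],   # f >= 0, as required
                 rng.uniform(1, 200))                        # an arbitrary y'
    for _ in range(500)
)
```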
Lemma 2. Guaranteed decrease of variant
Let V, S, T etc. be as above. Suppose for some state s in S* we have that any T-transition is guaranteed to decrease the expected value of V by at least some ε > 0. Then any T-transition is guaranteed with probability p to decrease the actual value of V by at least d, where d := ε/2 and p := d/(V(s) − d).
Proof Let δ in T(s) be a T-transition from s, and for Lem. 1 set y = V(s) − ε and y′ = V(s) − ε/2 and f = V. Then δ is guaranteed with probability at least

1 − y/y′ = 1 − (V(s) − ε)/(V(s) − ε/2) = (ε/2)/(V(s) − ε/2)

to decrease V by at least V(s) − y′ = ε/2. So we let d be ε/2 and p be d/(V(s) − d). □

(Footnotes: ∇ must be defined on all the positive reals, not just on the variant values the process can actually take; and in Lem. 1, if y′ ≤ y then the guarantee there is of course vacuous.)
We can now conclude that the ∇ rule implies the p, d rule because if the ε in Lem. 2 is a non-increasing but never-zero function of V(s), then the p, d values synthesised there are also non-increasing never-zero functions of V(s). Non-increase of d follows from the assumed non-increase of ε, and the non-increase of p follows from increase of V and non-increase of d.
For the opposite direction, that the p, d rule implies the ∇ rule, we again let V, S, T etc. be as above. We will replace variant V by V′ = f∘V, where f is a real-valued function that is non-decreasing, strictly concave and of non-increasing curvature. That would be equivalently f′ ≥ 0 and f″ < 0 and f‴ ≥ 0, for which an example is logarithm.

Now for any state s and δ in T(s), we know that with probability at least p = p(V(s)) the δ-transition decreases V(s) by at least d = d(V(s)), and from super-martingale we know that E_δ V ≤ V(s). Because of f's concavity, the smallest possible value of V′(s) − E_δ V′ will be non-zero; and because the curvature is non-increasing, and p, d are non-increasing functions of V(s), it will be non-increasing wrt. increasing values of V(s); because f is non-decreasing, that is equivalently non-increasing wrt. increasing values of V′.
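To make this concrete for the symmetric random walk (p = 1/2, d = 1, V(s) = s) with f the shifted logarithm ln(1 + ·), a choice satisfying the three conditions above, the transformed variant's expected one-step decrease is ½ ln((s+1)²/(s(s+2))): strictly positive and non-increasing, hence a legitimate ∇ on the values the walk takes. A small check (our sketch):

```python
import math

def decrease(s):
    """Expected one-step decrease of V' = ln(1+s) for the symmetric walk
    at state s >= 1:  V'(s) - (V'(s-1) + V'(s+1)) / 2."""
    return math.log(1 + s) - (math.log(s) + math.log(2 + s)) / 2

# Strictly positive (strict concavity of ln) ...
positive = all(decrease(s) > 0 for s in range(1, 1000))
# ... and non-increasing in s (non-increasing curvature of ln).
monotone = all(decrease(s) >= decrease(s + 1) for s in range(1, 1000))
```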
6 Relation to the rule of Fioriti and Hermanns
Fioriti and Hermanns' rule [8] does not have our progress condition; instead they require uniform bounded-away-from-zero decrease of the expected value of the variant, that is with the same bound for the whole of S * .
But in §5 we showed that our rule is equivalent to one without progress, i.e. where super-martingale has been strengthened to the ∇ rule at (2) above.
Fioriti and Hermanns' rule is then the special case of (2) where ∇ is the everywhere-ε constant function. Furthermore, since that rule is complete for systems with finite expected time to termination, the result above means our proposed rule is also complete for that class. But -as observed in §2.2-our rule also applies to the unbounded random walk, where the termination time is infinite.
For further discussions of completeness, see §9.
7 Examples of termination and non-termination

7.1 Symmetric unbounded random walk (terminates)
We mentioned in §2.2 that with variant the "distance from 0" and p, d the constant functions 1/2, 1 respectively, the ACT of the one-dimensional symmetric random walker is immediate. We also stressed our concern with source-level reasoning. Here we illustrate such reasoning for a random-walk program:

s:= 1
while s ≠ 0 do s:= s+1 1/2⊕ s-1 end
Reasoning in Kozen's style [18] (here written in pGCL [25,20]) would generate just these two elementary verification conditions for the proof-rule of §2.2:

· The expected value of the variant does not increase:
  super-martingale   s ≥ wp.(s:= s+1 1/2⊕ s-1).s
· With probability at least 1/2 the variant decreases by at least 1:
  progress   1/2·[s=N] ≤ wp.(s:= s+1 1/2⊕ s-1).[s ≤ N−1]
The wp is the probabilistic generalisation of Dijkstra's weakest precondition [6,18,25,20].
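Those verification conditions can be discharged mechanically; here is a small Python rendering of the wp computation for this one loop body (our encoding, with expectations represented as functions of the state):

```python
def wp_body(post, s):
    """wp of the body  s:= s+1 1/2(+) s-1  applied to expectation 'post',
    evaluated at state s: the 1/2-1/2 average of post at the two outcomes."""
    return (post(s + 1) + post(s - 1)) / 2

# super-martingale:  s >= wp.(body).s   (in fact with equality)
supermartingale = all(s >= wp_body(lambda t: t, s) for s in range(1, 200))

# progress:  (1/2)[s=N] <= wp.(body).[s <= N-1]   for all s, N checked
progress = all(
    0.5 * (1 if s == N else 0) <= wp_body(lambda t: 1 if t <= N - 1 else 0, s)
    for N in range(1, 30) for s in range(1, 30)
)
```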
To allay suspicions that might be raised by the simplicity of the above, we "unpack" the reasoning used in the proof of Thm. 1, showing in particular how the zero-one law contributes in this particular example. Without loss of generality we take the state-space to be the non-negative integers, start at position s=1 and show that eventually we will reach s=0.
Consider say the segment 1≤s≤100 of the line, and the bounded random walk within it, beginning (as we said above) at s=1. Since s is decreased by d=1 with probability p=1/2 at every step, i.e. the progress property, the walker's chance of moving to s=0 is at least 1/2^100 from every 1≤s≤100. Thus its escape from [1,100] is AC, whether that escape is high or low, and the expected value of s when that happens will be at least z×0 + (1−z)×100, that is 100(1−z), where z as before is the probability of escaping to V=0.
But the expected value of s is constant at 1 (the super-martingale property), no matter how many steps are taken, so that in fact z ≥ 99/100. That is, the probability that escape occurs to s=0 rather than to s=101 is at least 99/100, establishing in any case that s=0 is reached from s=1 with at least that probability. Now replay the argument within the segment 1≤s≤10^6 instead. The walker's behaviour is not affected by the segment within which we reason (it does not "know" we are looking at [1, 10^6]) and it moves just as it did in the 100 case. But because we are thinking about 10^6 this time, our conclusions are strengthened to "escape from s=1 to s=0 with probability at least 1 − 1/10^6".

The p, d version of our rule (§2.2) establishes ACT. The variant is the distance from 0 which, everywhere except 0 itself, is a (super-)martingale that decreases by at least d=1 with probability at least p=1/2.
Fig. 1. The symmetric random walk example

Here the walker has constant bias away from 0, and indeed termination is not AC.
Although super-martingale is satisfied and we can define p(v)=1/3, it is impossible to define a non-increasing function d that gives a lower bound on the amount by which the variant decreases: the variant at State s is 2 − 1/2^{s−1}, bounded above by 2 and forcing the non-increasing but strictly positive d impossibly to satisfy d(2)=0.
Fig. 2. The constant-bias random walk example
7.2 Constant-bias random walk (does not terminate)

In Fig. 2 we have a one-dimensional random walk that does not terminate AC. If we synthesise a variant V that is an exact martingale, as shown, we satisfy super-martingale by construction. And its decrease occurs with probability (at least) 1/3 everywhere. But because the variant is bounded, we cannot define a d that satisfies progress, so our termination rule does not apply. (And §5 shows that the ∇ rule does not apply either.) In §§9.2, 9.3, 9.4 we see that in fact this walker does not terminate AC.

State s goes up with probability (s+1)/(2s+1). It is strictly biased away from 0 everywhere. Here we use the p, d version (§2.2) of the proof rule: the expected value of the variant after a transition is equal to its actual value before (except at State 0, where our rule does not require it to be). But still this walker is strictly biased away from 0 at all positions, with that bias however decreasing towards zero with increasing distance. In spite of that bias, still its termination is AC.
Fig. 3. The harmonic-bias random walk example
7.3 Harmonic-bias random walk (terminates)
In Fig. 3 we see a biased one-dimensional random walk that still terminates AC.
The key point is that the bias decreases as distance from 0 increases, tending to "symmetric" in the limit. Here the variant is unbounded. (Compare §7.2 just above, where the variant is bounded.) Condition super-martingale is satisfied by construction; define p(v)=1/3 everywhere; and define d(v) to be 1/s where s is the largest such that Harmonic Number H_s is no more than v. This d is non-increasing in v and strictly positive.
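The harmonic-number variant is in fact an exact martingale for this walk, which exact arithmetic confirms (a sketch; H below is the harmonic number and the up-probability (s+1)/(2s+1) is as in Fig. 3):

```python
from fractions import Fraction

def H(s):
    """Harmonic number H_s, the variant for the harmonic-bias walk."""
    return sum(Fraction(1, k) for k in range(1, s + 1))

# At state s >= 1 the walker moves up with probability (s+1)/(2s+1) and
# down with probability s/(2s+1); the variant is then an exact martingale:
exact = all(
    H(s) == Fraction(s + 1, 2 * s + 1) * H(s + 1)
            + Fraction(s, 2 * s + 1) * H(s - 1)
    for s in range(1, 80)
)
# Unlike the bounded variant of 7.2, this variant is unbounded (H_s ~ ln s).
grows = H(64) > 4
```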
An alternative proof of termination for this process is provided by the general techniques of §9.1.
The "tinsel" process (terminates)
Here we exhibit a process whose infinite stopping time is obvious from its construction. (The random-walk process ( §7.1) has the same infinitary property, but it is not so obvious.)
The root branches with probabilities 1/2, 1/4, ..., 1/2^n, ... to straight paths of length 2, 4, ..., 2^n, ... respectively, each of whose contributions to the expected stopping time is therefore (1/2^n)×2^n = 1. Since there are infinitely many children of the root, the expected stopping-time overall is infinite. See Fig. 4, where the variant function for ACT is shown. The super-martingale condition is satisfied trivially except at the root node, where the small calculation 1/2^1 + 2/2^2 + 3/2^3 + ··· = 2 ≤ 2 is needed to see that it is satisfied there too.
The expected stopping time however is Σ_{n≥1} 2^n/2^n = ∞. (We call it "tinsel" because it's like long ribbons hanging down from a tree.)
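Both root calculations can be reproduced exactly (a sketch, assuming as the text's calculation indicates that branch n is taken with probability 1/2^n, that its head has variant n, and that it contributes 2^n × 1/2^n = 1 to the expected stopping time):

```python
from fractions import Fraction

# Expected variant one step after the root: sum of n/2^n, which tends
# to 2, matching the root's own variant of 2 (super-martingale, with
# equality in the limit).  The series remainder past N is (N+2)/2^N.
N = 200
expected_after_root = sum(Fraction(n, 2 ** n) for n in range(1, N + 1))
tail = Fraction(N + 2, 2 ** N)

# Expected stopping time: each branch contributes exactly (1/2^n)*2^n = 1,
# so the partial sums grow without bound.
def stopping_partial(k):
    return sum(Fraction(1, 2 ** n) * 2 ** n for n in range(1, k + 1))
```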
The "curtain" process (terminates)
This variation on infinite stopping time begins with transitions that either move away from the root or "drop down" to ever longer straight runs. Again the stopping time is infinite but termination is still AC. See Fig. 5, where the variant function is shown. The super-martingale condition is satisfied trivially everywhere.
The expected stopping time however is again Σ_{n≥1} 2^n/2^n = ∞, as for Fig. 4. (We call it "curtain" because many short runs hang down from a single long run.)
7.6 The escaping spline (terminates)
Here we illustrate in Fig. 6 how our rule depends on the actual transition probabilities in an intuitive way, that a "spline" whose overall probability of being followed forever is zero gives a variant with which we can prove its termination. (Complementarily, if the probability of remaining in the spline is not zero then our rule does not apply, as we show in §7.7.)
The states are numbered from s=1 at the left, and V(s) is s itself. The function p(v) is 1/(v+1) and the function d(v) is 1 everywhere: at state s ≠ 0 the variant is s, and with probability at least 1/(s+1) the value of the variant will decrease by at least 1. In fact, for most s with V(s) ≠ 0 the variant decreases by much more than 1: the function d gives only a lower bound for the actual decrease in the variant. Each horizontal transition has probability of one minus the (vertically downwards) escape immediately before it. Each variant 2, 3, 4, ... turns out to be the previous variant divided by its incident probability, establishing super-martingale by construction. The successive probabilities of not having escaped are corresponding prefixes of the infinite product 1/2 × 2/3 × 3/4 × ··· which tend to zero. Hence the variant increases without bound, proving that eventual escape is AC.
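The telescoping of those prefix products, and hence the unboundedness of the variants, can be computed exactly (a sketch; the escape probability at state s is 1/(s+1), as above):

```python
from fractions import Fraction

def variant(n):
    """Reciprocal of the probability of staying on the spline up to
    position n: the prefix product of k/(k+1) for k = 1..n-1."""
    stay = Fraction(1)
    for k in range(1, n):
        stay *= Fraction(k, k + 1)     # 1 - 1/(k+1), the stay probability
    return 1 / stay

# The products telescope to 1/n, so the variants are 1, 2, 3, ...:
# unbounded, exactly as the termination proof needs.
variants = [variant(n) for n in range(1, 60)]
```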
In general, if the product of the "stay on spline" probabilities tends to zero, the variants (the reciprocals of those prefix probabilities) increase without bound.

7.7 The captured spline (does not terminate)

In the example of Fig. 7, based on [20, Sec. 2.9.1], the process does not escape with probability one. If we applied the strategy of the escaping spline (Fig. 6), we would choose variant V(s) = 2s/(s+1). It is a (super-)martingale because in general

  1/(s+1)^2 × 0 + (1 − 1/(s+1)^2) × V(s+1)
= 1/(s+1)^2 × 0 + (1 − 1/(s+1)^2) × (2(s+1)/(s+2))
= ((s^2+2s)/(s+1)^2) × (2(s+1)/(s+2))
= ((s^2+2s)/(s+2)) × (2(s+1)/(s+1)^2)
= 2s/(s+1)
= V(s) .
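The calculation above, and the claimed limit of the not-yet-escaped products, can both be replayed with exact rationals (a sketch):

```python
from fractions import Fraction

def V(s):
    """Speculative variant for the captured spline: V(s) = 2s/(s+1)."""
    return Fraction(2 * s, s + 1)

# The martingale identity: escape (to variant 0) with probability
# 1/(s+1)^2, otherwise move on to s+1.
martingale = all(
    (1 - Fraction(1, (s + 1) ** 2)) * V(s + 1) == V(s)
    for s in range(1, 100)
)

def not_escaped(n):
    """Prefix product (1 - 1/2^2)(1 - 1/3^2)...(1 - 1/n^2); its closed
    form is (n+1)/(2n), which converges to 1/2 rather than to 0."""
    prod = Fraction(1)
    for k in range(2, n + 1):
        prod *= 1 - Fraction(1, k * k)
    return prod
```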
The decrease function d is trivial: we can set it to the constant 1, since the potential decrease is always at least 1, with probability 1/(s+1)^2.
But for p(v) we choose (2−v)^2/4, i.e. a value that is no more than 1/(s+1)^2 when v = 2s/(s+1). Whatever that value is, it is clear that it approaches 0 as v approaches 2, and so we will not be able to select a non-zero value for p(2). As for §7.2, the results of §§9.2, 9.3, 9.4 show that this process does not terminate AC.

As before, each horizontal transition has probability (this time not shown) of one minus the escape immediately before it; and each (speculative) variant is the previous variant divided by its incident probability. The successive probabilities of not having escaped are now corresponding prefixes of an infinite product (1 − 1/2^2)×(1 − 1/3^2)×(1 − 1/4^2)× ··· which, unlike the earlier one of Fig. 6, does not diverge: rather it converges to 1/2. Hence eventual escape is with probability only 1 − 1/2 = 1/2.

Making the variants the reciprocals of those cumulative escape probabilities, as in Fig. 6, results in increasing variants bounded above by 2, which does not satisfy progress for p(v) when for example v=2.

In general, the strategy of Figs. 6, 7 works just when the successive "not yet escaped" probabilities tend to zero, since that is exactly when the variants, their reciprocals, increase without bound.
Fig. 7. The "captured spline" process

7.8 The two-dimensional random walk (terminating but not proved)
In Fig. 8 we recall the one-dimensional random walk, but this time using a variant equal to the logarithm of (one plus) the walker's distance from the origin, and a ∇-style progress condition. (Compare Fig. 1 in §7.1.) For better comparison with the two-dimensional version, we have made the walk unbounded in both directions. It suggests that the two-dimensional walker could be treated with the variant being based on the logarithm of the walker's Euclidean distance from the origin. Again using the ∇ rule, we would have at least to show (something like) that for all integers x, y we have

log((x+1)^2 + y^2) + log((x−1)^2 + y^2) + log(x^2 + (y+1)^2) + log(x^2 + (y−1)^2) < 4 log(x^2 + y^2) .
Unfortunately, numerical calculations show that this inequality fails near the |x|=|y| lines. It seems that the log function bends too much, is "too concave".

(Fig. 8 treats the one-dimensional walk with the ∇-version (§5) of the proof rule: the expected value of the variant decreases by at least some fixed positive and non-increasing function of its current value; the expected decrease here is (1/2) log(n^2/(n^2−1)), a non-increasing function of log n.)

We therefore "flatten things out a bit" by trying a double-log lgg = log(log(·)) instead, a function still concave but less so, and we have indeed shown by similar numerical calculations that the corresponding inequality

lgg((x+1)^2 + y^2) + lgg((x−1)^2 + y^2) + lgg(x^2 + (y+1)^2) + lgg(x^2 + (y−1)^2) < 4 lgg(x^2 + y^2)   (4)

is satisfied for all integers x, y with |x|, |y| ≤ 10,000. Our conjecture is that (4) holds for all integers x, y and, if it does, it would establish termination for the two-dimensional symmetric random walk using a single variant function. See §9.3 for evidence that there is a suitable variant function, even if it turns out not to be lgg.
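A version of those numerical checks, over a much smaller range and skipping the few points near the origin where lgg is undefined, can be sketched as follows (our code; it also exhibits the failure of the single-log inequality on the diagonal):

```python
import math

def lgg(a):
    return math.log(math.log(a))

def neighbour_args(x, y):
    """The four neighbouring squared distances from the origin."""
    return [(x + 1) ** 2 + y * y, (x - 1) ** 2 + y * y,
            x * x + (y + 1) ** 2, x * x + (y - 1) ** 2]

def ineq4(x, y):
    """Inequality (4) at (x, y); meaningful only when every argument > 1."""
    return sum(lgg(a) for a in neighbour_args(x, y)) < 4 * lgg(x * x + y * y)

# (4) holds at every point sampled here, skipping undefined points ...
holds = all(
    ineq4(x, y)
    for x in range(2, 41) for y in range(0, x + 1)
    if min(neighbour_args(x, y) + [x * x + y * y]) > 1
)

# ... whereas the single-log version fails on the diagonal, e.g. at (5,5):
# (6^2+5^2)(4^2+5^2) = 61*41 = 2501 > 2500 = (5^2+5^2)^2.
log_fails_at_5_5 = (
    sum(math.log(a) for a in neighbour_args(5, 5)) >= 4 * math.log(50)
)
```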
8 Compositionality
Following [8], by "compositionality" we mean the synthesis of an ACT -proof for a system that is composed of smaller systems for each of which we have an ACT -proof already. For now, we study this only briefly.
Suppose we have a "master" system M and a number of component systems C 1..N . System M has at least N termination states, at which its variant V M is therefore zero; and each component system has a designated start state s n where its variant function V n takes some value v n . The composite system is then made by "plugging in" each component system's starting state s n to some termination state of M .
The systems in Fig. 4 (Tinsel) and Fig. 5 (Curtain) are examples of this, except that for them the number of component systems is infinite.
In Tinsel, the master M is a single infinite branch leading with ever-decreasing probability 1/2^n to termination in exactly one step. Its component systems C_{1..} are straight-line processes each with stopping time 2^n − 1. The overall stopping time of the combination is infinite.
In Curtain the master M is a straight-line system with a probability of 1/2 of termination at each step; its expected stopping time is 2. The component systems C_{1..} this time have termination times of 2^n − n. Again the overall stopping time of the combination is infinite.
Although we did give termination proofs for these two systems, we cannot (i.e. at the moment we do not know how to) synthesise such proofs from the master's and the components' proofs when the number of components is infinite. But here is what we can do when the number of components is finite:
($) Define v_C to be the maximum over all n of v_n, that is a number at least as great as the starting variant of any of the finitely many subsystems. If the individual systems satisfied the ∇ rule with their separate ∇ functions, then the composite system will satisfy the rule with a single combined ∇ function.

The use of finiteness was in two places:

($) That the v_C added to the master's variant was finite. (It is a sup taken over all subsystems.)
- That the ∇ is nowhere zero.

In Tinsel (Fig. 4) and Curtain (Fig. 5), having infinitely many components, the failure of synthesis occurs at the two points marked ($), because v_C is infinite. In spite of that, as the examples show, we were able to find proofs "by hand" (i.e. not synthesised). Note however that if a proof method were complete only for finite stopping-time systems, there could be no synthesis in these two cases: although all the component systems have finite stopping times, the composite systems do not.
9 Related historical results on Markov chains
9.1 The work of Blackwell: random walks and radially symmetric trees
Blackwell [1] gives a general technique for proving termination of a certain subclass of Markov processes, those moving both down and up so-called "radially symmetric trees". (It also provides an independent proof of termination for our example §7.3, the harmonic random walk.)
Definition 1. Radially symmetric tree
A radially symmetric tree is finitely branching, having the property that each node at depth d has exactly c_d children, where the root has depth zero and all the c_0, c_1, ... are integers at least 1. A radially symmetric tree is infinite, and has no leaves.
Definition 2. Random tree-walk
A random walk on a radially symmetric tree starts at any node and chooses uniformly to move either to its parent or to one of its children, thus with probability 1/(c_d + 1) for a node at any positive depth d along any of its connecting arcs. At the root, where there is no parent, the probability is instead 1/c_0. (Only the root has no parent.)

Termination occurs when the root is reached. □
Radially symmetric trees are determined uniquely by their c_d's, and examples include the following:
(i) Each node has exactly one child, thus a single path starting at the root.
(ii) Each node has exactly two children, thus an infinite binary tree.
(iii) Each node has either one or two children, according to the scheme that c_d is two just when d is a power of two (and thus is one otherwise).
The first example generates the one-sided symmetric random walk in one dimension, and it terminates with probability one (§7.1). The second example is (effectively) a collection of 2/3-1/3 asymmetric random walks in one dimension, down the branches of the tree: its termination is not AC (§4.2). The third, more complicated example is a binary tree with only infrequent splittings: we show in Fig. 9 below that it terminates, and quote a completeness result for such trees.
The Blackwell-style proof of (iii)'s termination (a sanity check of our claim) follows [19], and is ultimately based on Blackwell's Thm. 2 below. Note that it is an if-and-only-if:
Theorem 2. Blackwell [1]
Let p_n for n ≥ 0 be a sequence of probabilities, with p_0 = 1, and consider the Markov matrix on the non-negative integers defined by M_{n,n+1} = p_n and M_{n,n−1} = q_n = 1 − p_n. Then any Markov chain with this matrix will eventually reach 0 almost surely if and only if the equation f(n) = q_n f(n−1) + p_n f(n+1) has no non-constant bounded solution. □
A corollary of Thm. 2 gives us a classification of ACT for radially symmetric trees:

Corollary 1. Lessa [19]
Given a radially symmetric tree, define variant function

V(d) := Σ_{0≤i<d} 1/(c_0 × ··· × c_i)   (5)

where d is a depth of some node in the tree and c_d is the number of children for nodes at that depth. Every node at depth d has variant V(d), and this V is indeed a super-martingale. The random walk on this tree, defined as in Def. 2, terminates everywhere if and only if V(d) is unbounded as d→∞.
Proof See [19], via Thm. 2. □
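The classification can be replayed on the three example trees (our code; c plays the role of the child-count function c_d of Def. 1):

```python
from fractions import Fraction

def V(d, c):
    """Variant (5): V(d) = sum over 0 <= i < d of 1/(c(0) x ... x c(i))."""
    total, prod = Fraction(0), 1
    for i in range(d):
        prod *= c(i)
        total += Fraction(1, prod)
    return total

single_path = lambda i: 1                                        # example (i)
binary_tree = lambda i: 2                                        # example (ii)
infrequent  = lambda i: 2 if i > 0 and i & (i - 1) == 0 else 1   # example (iii)

# (i):   V(d) = d, unbounded            -> the walk terminates AC.
# (ii):  V(d) = 1 - 1/2^d < 1, bounded  -> termination is not AC.
# (iii): V(2^k) = 1 + k/2, unbounded    -> terminates AC (cf. Fig. 9).
```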
Lessa's proof of Cor. 1 uses Blackwell's Thm. 2; but our variant rule here provides an independent proof for Fig. 9, i.e. without Blackwell's theorem. More significantly however, Blackwell's theorem provides us with a completeness result for using our variant rule, at least for one-dimensional random walks.
9.2 Blackwell's completeness result
Blackwell's work [1] on classifying recurrence in Markov processes suggests how we might understand the coverage of our new rule. He considers Markov processes with countable state spaces and stationary (i.e. fixed) transition probabilities, and shows that such processes have essentially unique structures of recurrent and transient sets. We now give a summary, using (partly) Blackwell's terminology as well as what we have used elsewhere in the paper.
Let C be a subset of the state space, and fix some initial state ŝ. Say that C is almost closed (wrt. that ŝ) iff the following conditions hold:

1. The probability that C is entered infinitely often, as the process takes transitions starting from ŝ, is strictly greater than zero; and
2. If C is indeed visited infinitely often, starting from ŝ, then the process eventually remains within C permanently.
Say further that a set C is atomic iff C does not contain two disjoint almost-closed subsets.
Finally, call a Markov process simple atomic if it has a single almost-closed atomic set such that once started from ŝ it eventually with probability one is trapped in that set. We then have

Theorem 3. (Corollary of Blackwell's Thm. 2 on p. 656) [1]
A Markov process is simple atomic (as above) just when the only bounded solution of the equation E_δ V = V(s), for all s in S* and δ in T(s) (that is, Blackwell's Equation (6), stating in our notation that V is exact, neither super- nor sub-martingale), is constant. □
We adapt the above to our current situation as follows. As above, we fix a starting state ŝ, and we collapse our termination set S0 to a single state s_0, adjusting T accordingly and in addition making T take s_0 to itself. We then assume that the probability of ŝ's reaching s_0 is one. We now note:
(1) Our termination set {s_0} is almost-closed and atomic, because
(i) almost closed: our process reaches s_0 with non-zero probability (in fact we assumed with probability one) and, once at s_0, it remains there;
(ii) atomic: our set {s_0} contains no two disjoint non-empty subsets.
(2) We now recall that in fact s_0 is reached with probability one, so that the whole process is simple atomic.
(3) From our Thm. 3 we conclude that the only possible non-trivial variant is unbounded.
Thus, in summary, we have specialised Blackwell's result to show that if we have a non-trivial exact variant that is bounded, then the process does not terminate AC. This is a result in the style of Chatterjee et al. [5].

Every node at depth 2^n has two children; all others have one child. Transitions from a node are uniformly distributed over its arcs, thus 1/3 for each of two children and 1/3 up, and otherwise 1/2 up and 1/2 down.
The variant function generated according to the scheme of (5) is

  depth d        0    1      2            3            4            5            6            7            8
  c_d            1    2      2            1            2            1            1            1            2
  V(d) from (5)  0    1/1    1+1/(1×2)    3/2+1/(2×2)  7/4+1/(4×1)  2+1/(4×2)    17/8+1/(8×1) 9/4+1/(8×1)  19/8+1/(8×1)
  simplified     0    1      3/2          7/4          2            17/8         9/4          19/8         5/2
At depth 4, for example, we have 1/3 × 7/4 + 2/3 × 17/8 = 2, thus satisfying super-martingale at that position. Because the variant at depth 2^d is 1 + d/2, i.e. increases without bound, it is straightforward to construct functions p, d showing that this process terminates.

Foster [9] gives a characterisation of Markov processes for which a technique like ours is guaranteed to work. A significant example is the two-dimensional symmetric random walk, supporting our conjecture in §7.8. Because Foster's paper seems quite technical, we give here a "translation" into our own terms. His equations will be referred to as (F1) etc. and his sections as §F1 etc.
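The table can be checked mechanically. The sketch below is ours: the increments are read off the table itself rather than taken from the statement of scheme (5). It rebuilds V with exact rationals and confirms the super-martingale inequality at every interior depth shown.

```python
from fractions import Fraction as F

def branching(d):
    # depths 1, 2, 4, 8, ... (the powers of two) have two children
    return d >= 1 and (d & (d - 1)) == 0

def V(d):
    # read off the table: the step from depth k to k+1 adds
    # 1 / 2^(number of branching depths <= k)
    total = F(0)
    for k in range(d):
        b = sum(1 for j in range(1, k + 1) if branching(j))
        total += F(1, 2**b)
    return total

# super-martingale check at the interior depths of the table
for d in range(1, 8):
    if branching(d):   # 1/3 up, 1/3 to each of two children
        exp = F(1, 3) * V(d - 1) + F(2, 3) * V(d + 1)
    else:              # 1/2 up, 1/2 down
        exp = F(1, 2) * V(d - 1) + F(1, 2) * V(d + 1)
    assert exp <= V(d), d

print([str(V(d)) for d in range(9)])
# -> ['0', '1', '3/2', '7/4', '2', '17/8', '9/4', '19/8', '5/2']
```

In fact the inequality holds with equality at every depth checked here, matching the "exact unless adjacent to termination" pattern discussed later.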
- §F1 We assume that our state space S is countable, enumerated s_0, s_1, ···, with the termination subset S_0 being just a single point {s_0}, and we extend our transition function T to all of S, i.e. not just over S*, by making it take s_0 to itself. The enumeration should correspond roughly to "being further from s_0", which is made precise in conditions (F6) and (F8).
Foster is concerned with conditions for the existence of a super-martingale (F1) variant function V from S to the non-negative reals (F3), where V is unbounded without accumulation points (F2).
We assume that T is deterministic, and thus specialise it to be of type S to DS rather than to PDS.
- §F2 The "limit" of taking transitions forever is defined to be T * , say, using the "Cesaro average" 18 that avoids the problem of recurrence when considering simply T 0 , T 1 etc. composed in the Markov style.
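A tiny illustration (ours) of why the averaging is needed: for a two-state chain that alternates deterministically, the distributions after t steps oscillate forever, but their running average converges.

```python
# Two-state "flip-flop": all mass at state 0 moves to 1 and vice versa,
# so the distribution after t steps never converges; the Cesaro average
# (1/N) * sum over t < N of the distributions does.

def step(dist):
    return [dist[1], dist[0]]

def cesaro(start, n):
    dist, acc = list(start), [0.0, 0.0]
    for _ in range(n):
        acc = [a + d for a, d in zip(acc, dist)]
        dist = step(dist)
    return [a / n for a in acc]

print(cesaro([1.0, 0.0], 1000))  # -> [0.5, 0.5]
```

For finite chains the Cesàro average always converges, which is the point of using it.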
But (F4) is not as simple as it looks. It seems to imply that there is no infinite chain of transient states (such as in the "spline" examples). For if there were, the mass travelling down the chain would be "lost" in the Cesaro average, and the sum would not be one. This turns out to be important in the discussion of (4 ) below.
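To make the "lost mass" concrete, here is a toy chain of our own in the spirit of the spline examples: from s_i the process is absorbed into s_0 with probability 2^-i and otherwise moves on to s_{i+1}, so a strictly positive amount of probability escapes down the chain forever. Summing the Cesàro averages over any fixed finite window of states then recovers only the absorption probability, strictly less than one.

```python
def cesaro_window(steps, window):
    # dist[j] = probability of being at s_j; index 0 is the absorbing state
    size = steps + 2
    dist = [0.0] * size
    dist[1] = 1.0                       # start at s_1
    acc = [0.0] * size
    for _ in range(steps):
        for j in range(size):
            acc[j] += dist[j]
        new = [0.0] * size
        new[0] = dist[0]                # absorbed mass stays put
        for j in range(1, size - 1):
            new[0] += dist[j] * 2.0**-j             # absorbed now
            new[j + 1] += dist[j] * (1 - 2.0**-j)   # escapes onwards
        dist = new
    return sum(acc[j] / steps for j in range(window + 1))

prod = 1.0
for i in range(1, 60):
    prod *= 1 - 2.0**-i
absorbed = 1 - prod                     # probability of ever being absorbed
w = cesaro_window(2000, 10)
print(round(absorbed, 3), round(w, 3))  # the two agree, well below 1
```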
Kendall's [16] earlier result is:

  If there is a variant as in §9.3, then T* takes any starting state to a full distribution on S (i.e. not partial), and there is a finite subset C of S from which T does not escape.
Foster then explains that the current paper's purpose is to explore the opposite implication to Kendall's [16], i.e. that, under "certain weak additional assumptions" on T, if there is such a (finite) subset C as above then there is a V satisfying the conditions of §9.3. His additional assumptions include (by implication) that C is reached with probability one from anywhere in S, his (F4′), because he argues that (F4,F5,F6) together imply (F4′). That looks at first like the zero-one law. But note that (F6) does not bound the probability of escape away from zero: it merely says that it is not zero, and that is not usually enough. Together with (F4) though it suffices, because (F4) seems to say that any transient state (even if there are infinitely many) must be visited infinitely often (since otherwise the mass moving among the transient states would be "lost", as suggested above).
His additional assumptions are then:
(F6) From any state in S* there is a non-zero probability of reaching S_0 = {s_0} eventually.
(F7) From any state s_i in S* there is, for any j>i, a non-zero probability of reaching s_j eventually. Note that s_j is in S* also.
(F8) There is a single probability δ<1 for the whole system such that for any N there is an i such that for all j≥i the state s_j cannot reach C within N steps with probability at least δ. As he says, it's a "remoteness" condition, intuitively mandating that the higher the i the longer it takes s_i to reach C with some fixed-beforehand probability δ.
He notes that because of (F4′) the N (which depends on i) is finite: from s_i you must get to C eventually with probability at least δ because, in fact, you get there with probability 1.
-Statement of Theorem F2, and its application to the two-dimensional symmetric random walk
Recall that S is assumed to be countably infinite. Foster's Theorem F2 reads
If T satisfies conditions (F4-8), then there is a variant function V on S that satisfies (F1,F3), i.e. that it is a non-negative super-martingale and (F2) that it tends to infinity as the state-index tends to infinity.
We note that condition (F2) implies that V is without accumulation points. The implications of this theorem seem to be e.g. that there must be a variant in our style for the two-dimensional symmetric random walk, even if it has not (yet, as far as we know) been given in closed form. We check the conditions one-by-one:
(F4) This is (apparently) replaced by (F4′).
(F4′) The probability of reaching S_0, that is the origin, is one everywhere.
(F5) Once you are at the origin, you do not leave.
(F6) Every state in S* can reach S_0 with non-zero probability.
(F7) Here we need an enumeration of the states: Foster uses the Manhattan distance, which makes nested "diamonds". But since every state in S* can reach every other state, in fact we do not need the enumeration yet. 19
(F8) Any state at Manhattan distance N cannot reach the origin at all until step N, and so δ=0 should do for this provided we assign higher indices to higher-Manhattan-distance states, which is what Foster does in §F3.
Applying Theorem F2 then gives us a super-martingale V such that V (s i ) tends to infinity as i tends to infinity, which means that V (s) tends to infinity as s gets Manhattan-further from the origin, given the indexing that we (i.e. Foster) have chosen.
To show that our rule applies, we need however to establish a progress condition. (See our earlier remarks about alternative progress conditions, in §4.2.) First define p(v) to be 1/4 for all v. Then for d, first consider the subset S_≤v of S comprising all those s with V(s)≤v. Because of (F2) the V-image V(S_≤v) of S_≤v must be finite; so set d(v) to be the minimum non-zero distance between any two of its elements, that is (min V(s′)−V(s) | V(s), V(s′) ∈ V(S_≤v) ∧ V(s′)>V(s)).
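That definition of d can be transcribed directly. The value set below is hypothetical (Theorem F2 guarantees a suitable V exists but gives it in no closed form), so this only illustrates the construction:

```python
def progress_d(v, V_image):
    # minimum non-zero gap between V-values that are at most v
    below = sorted(x for x in V_image if x <= v)
    gaps = [b - a for a, b in zip(below, below[1:]) if b > a]
    return min(gaps) if gaps else None

# a hypothetical finite image of V on the states with V(s) <= 2.0
sample = [0.0, 0.5, 1.25, 1.25, 2.0, 3.5]
print(progress_d(2.0, sample))  # -> 0.5
```

By (F2) the set of values below any bound v is finite, so the minimum is well defined.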
Thus there is guaranteed to be a V satisfying super-martingale and progress that establishes termination for the two-dimensional symmetric random walk ( §7.8) -even if we don't know what it is in closed form. Foster's general proof is by construction, and we sketch it in App. A.
- Why Theorem F2 does not synthesise a variant for the three-dimensional symmetric random walk

Foster remarks [9, p. 590] that synthesis cannot succeed for the three-dimensional random walk (since it is known that it is not ACT); but he does not say which of his Theorem F2's conditions are not satisfied.
Clearly his (F4′) is not satisfied (that the process is ACT); but that is a derived condition, a consequence of his original (F4-8), and so it is fair to ask which of those original conditions fails in three dimensions. Furthermore, his synthesis procedure is well defined whether the process satisfies ACT or not, and so we can also ask what is wrong with the V it synthesises for the three-dimensional random walk, in our terms.
For the first, it is condition (F4) that must fail. The process satisfies (F5), that the process is trapped at the origin, and (F6), that the origin is accessible from every point, and (F7), that every point is accessible from every other (except the origin). And it seems likely that it satisfies (F8), since by numbering the states in rings it can be arranged that higher-numbered states have arbitrarily high first-arrival times at the origin.
Failure of (F4) means that there is some probabilistic mass, bounded away from zero, that follows an infinite (non-looping) path through the state space: that is the only way in which the Cesàro limit can "lose mass", making (in Foster's notation) the sum Σ_{j=0}^{∞} π_ij strictly less than one.

For the second, the problem with the synthesised V is that it is bounded, a failure of condition (F2). Item 4 in §9.4 below shows that in that case our condition progress cannot be satisfied for that V.
A modern alternative to Blackwell's completeness argument
The result of §9.2 can be obtained much more directly using the program semantics of this paper, and an argument in the style of Thm. 1.
In [20, Lem. 7.3.1] we show (using the terminology here) that if V is a sub-martingale and is bounded, 20 21 then if escape from S* is AC (i.e. the loop terminates with probability one), the expected value of V on S_0 (i.e. on the states where the loop-guard is false) is at least the actual value of V in the initial state (of the loop). 22 Now if the initial state is in S* then the value of V there is strictly positive; yet if escape occurs with probability one, the expected value of V on termination will be 0, since it is confined to S_0. That contradiction means that we cannot have sub-martingale V be bounded and still terminate almost surely.
Here are some remarks on the relation between our argument and Blackwell's Thm. 3. Blackwell states that a process is ACT just when its only bounded exact martingale is constant; our result just above states that an ACT process cannot have a bounded sub-martingale.
1. What role does the "or is constant" criterion, from Blackwell's theorem, play in our argument? Because we require V to be zero on S 0 , a constant V for us would be zero on all of S, meaning that S * was empty (since V must be strictly positive there). So we should add to our result "unless S * is empty."
2. Where do we use in our argument that V is bounded? How does our argument fail if we don't?
We use it in our appeal to [20,Lem. 7.3.1], where in fact V is assumed to be between 0 and 1. If V is simply bounded above (but not by one, necessarily), then it can easily be brought within range by scaling. If V is unbounded, however, it cannot be brought within range that way. An easy counter-example is the symmetric random walk starting at 1 and aiming to reach 0. The variant "distance from 0" is an exact martingale, and has value 1 on initialisation. But it is unbounded, and so the conclusion "its (expected) value on termination is at least its starting value" is false. Indeed on termination its (actual) value has decreased from 1 to 0.
3. What's an intuitive (and easy to understand now, in retrospect) reason that our conclusion is "obvious", without appealing either to Blackwell or to [20]? Think of the variant value V, initially concentrated at s, as being gradually "dissipated" whenever some probabilistic weight escapes to S_0. Since V is zero there, the sub-martingale property requires that V increase, to compensate, on the remaining probabilistic weight still within S*. But because V is bounded, that increase cannot go far enough: it eventually must stop. And that means that some probabilistic weight remains trapped within S*.

20 Note we say sub-martingale, i.e. that the expected value of V can increase.
21 In that part of [20] we are working with an invariant, here our V, that is bounded by 1. That loses no generality, since any bounded V can be divided by its least upper bound without disturbing its sub-martingale property.
22 See §B for a précis of this loop rule.
4. Our rule §2.2 proves ACT given super-martingale and progress for some V. Yet above we show that if V is a sub-martingale and bounded, then ACT cannot hold. Does that mean that progress cannot hold for any bounded, non-zero exact martingale? Yes, it does mean that. If V is bounded, then when the process is at (or sufficiently near) V's upper bound either (a) the function d must be arbitrarily small (tending to 0), and so it cannot be non-increasing with respect to a non-zero d-value taken above V's upper bound (for which see Fig. 2), or (b) the function d is bounded away from zero, in which case the expected value of V must strictly decrease, thus not realising an exact martingale.
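The random-walk example from item 2 above is easy to check numerically: with V(x) = x and absorption at 0, the expected value of the stopped walk is exactly 1 at every finite time, even though by then almost all of the probability sits at 0 where V is 0; the expectation is carried by ever-larger, ever-rarer excursions. A minimal sketch:

```python
def walk_stats(steps):
    # symmetric +/-1 walk started at 1, absorbed at 0; exact distribution
    dist = {1: 1.0}
    for _ in range(steps):
        new = {0: dist.get(0, 0.0)}     # mass at 0 is trapped there
        for x, p in dist.items():
            if x > 0:
                new[x - 1] = new.get(x - 1, 0.0) + p / 2
                new[x + 1] = new.get(x + 1, 0.0) + p / 2
        dist = new
    ev = sum(x * p for x, p in dist.items())
    return ev, dist.get(0, 0.0)

ev, at_zero = walk_stats(200)
print(round(ev, 9), round(at_zero, 3))  # -> 1.0 0.944
```

So the martingale is exact at every finite horizon, while the actual value on the (almost-sure) terminating runs has dropped from 1 to 0: optional stopping fails precisely because V is unbounded.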
General comparison with refutation methods
Blackwell's result Thm. 3 says that a Markov process is atomic and simple if and only if all exact martingales are constant or unbounded. We showed (in our terms) that when a program terminates with probability 1, the termination set implies the program is atomic and simple (as a Markov process). Then, using Blackwell's result, we are able to conclude that all exact martingales are constant or unbounded. In an independent proof (the one-liner §9.4) we can show this directly without going through Blackwell, namely that if a program has a bounded non-constant exact martingale then it can't terminate with probability 1.

Chatterjee et al. also look at repelling super-martingales to refute almost-sure termination. Their Theorem 6 uses an ε-repulsing super-martingale with ε>0 to refute almost-sure termination. Their Theorem 7 uses an ε-repulsing super-martingale with ε≥0 to refute finite expected time to termination: i.e. to refute finite expected time to termination only a martingale is required.
Our result in §9.4 implies a new refutation certificate for programs: if the martingale is bounded and non-constant it actually refutes termination with probability 1, not just finite expected time to termination.
For example, the one-dimensional random walker of §7.1 has an exact unbounded martingale, and therefore our rule shows that it terminates with probability 1. The walker in §7.2 has an exact bounded martingale, and so we can conclude that it does not terminate with probability 1. In both cases Chatterjee's Theorem 7 would deduce that neither has finite expected time to terminate.
Conclusion
Our overall aim is (has always been) to allow and promote rigorous reasoning at the level of program source-code. In this paper we have proposed a new rule, combining earlier ideas of our own with important innovations of others, and have attempted to formulate it in a way that will indeed turn out to be suitable for source code.
That is, we hope that as an extension of what's here we will be able to formulate these rules in the program logic pGCL, or similar; and if the techniques are further extended subsequently, we would hope to do the same for those too.
Program logic also provides a rigorous setting not only for use of the rules but also for their proof in the first place. Although we did not use program logic here, for our proofs, we believe it would be possible e.g. in the style of [20].
Finally, we have left an intriguing open question: is there an elementary variant for the two-dimensional random walk? Foster [9] shows that there is such a variant, but he does not give it in closed form. We conjecture that lgg suffices, but have only verified that experimentally. Will we ever be able to set as a student exercise Assuming the properties [ . . . ] of the function [. . . ], use probabilistic assertions in the source code of the following program to prove that it terminates with probability one for any initial integers X,Y:
    x,y := X,Y
    while x≠0 ∨ y≠0 do
        x,y := x+1,y ⊕ x-1,y ⊕ x,y+1 ⊕ x,y-1
    end ,

where iterated ⊕ is shorthand for uniform choice (in this case 1/4 each).
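Purely as an experiment (not a proof, and with an arbitrary step cap and run count of our choosing), the program can be simulated directly:

```python
import random

def run(x, y, cap):
    # one execution of the loop, up to `cap` uniform single-step moves
    for t in range(cap):
        if x == 0 and y == 0:
            return t                    # reached the origin
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
    return None                         # still running when capped

random.seed(0)
done = sum(run(1, 1, 5_000) is not None for _ in range(100))
print(done, "of 100 runs reached the origin within 5,000 steps")
```

Termination is almost certain, but the walk is only null-recurrent, so no finite step cap captures all runs: some simulations will always be cut off.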
A  Sketch of Foster's proof [9]

We use the notation and definitions from §9.3 to present Foster's Theorem 2.
Recall that we have assumed that S 0 ={s 0 }, i.e. that termination occurs in a single state, and that we have adjusted (the assumed deterministic) T so that it takes s 0 to itself.
Write f_i^{(t)} for the probability that T started from s_i reaches s_0 for the first time in the t-th step, and (as Foster does) write p_ij for T(i)(j), the probability that T takes one step from s_i to s_j; more generally write p_ij^{(t)} for the probability that T takes t steps to do that. Foster remarks just after (F9) that a simple special case is where time-to-termination is bounded, but notes that such an assumption excludes the symmetric random walk and moves immediately to the more general case. 23

For the more general case we note first that for i>0 we have f_i^{(t+1)} = Σ_j p_ij f_j^{(t)}. So if we were hopefully to proceed simply by setting V(s_0)=0 and V(s_i) = Σ_{1≤t} f_i^{(t)} for i>0, then in the latter case we would check the super-martingale property (F1) by calculating

    Σ_j p_ij V(s_j)
  = Σ_j p_ij Σ_{1≤t} f_j^{(t)}
  = Σ_{1≤t} Σ_j p_ij f_j^{(t)}
  = Σ_{1≤t} f_i^{(t+1)}          "above and i>0"
  ≤ Σ_{1≤t} f_i^{(t)}            "(actually equal unless f_i^{(1)} > 0)"
  = V(s_i) ,

so that V would in fact be an exact martingale in most of S*. 24 But this looks too good to be true, and indeed it is: in fact Σ_{1≤t} f_i^{(t)} = 1 by assumption, so this is just the special case where V is 1 everywhere except at s_0; and the martingale property is exact everywhere, except at states one step away from s_0. And this trivial V does not satisfy progress. 25

Still, the above is the seed of a good idea. Using "a theorem of Dini" [17, Foster's citation (4)], 26 that

  If c_n is a sequence of positive terms with Σ_n c_n < ∞, then also Σ_n c_n/(c_n + c_{n+1} + ···)^α < ∞ when α<1,

23 [8] also treats the bounded-termination case explicitly.
24 Think of the symmetric random walk, where everywhere-1 is an exact martingale except when |x|=1, where it is a proper super-martingale.
25 It is trivial in Blackwell's sense [1], a constant solution.
26 There seems to be a typographical error here in Foster's paper, where he writes
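The "hopeful" definition can be watched converging on a small finite example (ours, purely illustrative): the symmetric walk on {0,...,4}, reflecting at 4 and absorbing at 0. Driving the recurrence f_i^{(t+1)} = Σ_j p_ij f_j^{(t)} numerically shows Σ_t f_i^{(t)} approaching 1 for every i ≥ 1, i.e. exactly the trivial all-ones V:

```python
# s_0 is absorbing; p below is only ever queried for i >= 1
N, T = 4, 400

def p(i, j):
    if i == N:                          # reflecting barrier at s_4
        return 1.0 if j == N - 1 else 0.0
    return 0.5 if abs(i - j) == 1 else 0.0

# f[t][i] = P(first reach s_0 at step t, starting from s_i)
f = [[0.0] * (N + 1) for _ in range(T + 1)]
for i in range(1, N + 1):
    f[1][i] = p(i, 0)                   # one step straight into s_0
for t in range(1, T):
    for i in range(1, N + 1):
        f[t + 1][i] = sum(p(i, j) * f[t][j] for j in range(1, N + 1))

V = [sum(f[t][i] for t in range(1, T + 1)) for i in range(N + 1)]
print([round(v, 6) for v in V])  # -> [0.0, 1.0, 1.0, 1.0, 1.0]
```

The sum over j starts at 1 because a first passage at step t+1 cannot pass through s_0 earlier.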
- G is the loop guard, for us a predicate characterising S*.
- I is the loop invariant, for us (confusingly) the variant V.
- ⊑ is the ≤ relation on functions, defined pointwise (as usual).
- [G]*I ⊑ wlp.body.I says that the expected value of I (our V) after one transition is at least as great as its actual value at the state from which the transition was taken. (The [G]* means that the relation is mandated only when G holds, i.e. within S*.) Thus it is this condition that states that V is a sub-martingale on S*.
- In general A&B is (A + B − 1) max 0 when 0≤A,B≤1. Thus when T=1 we have that I&T = I.
- Ḡ is G's negation, so that [Ḡ]*I is our V set to zero on S*, equivalently our V restricted to S_0.
- The lemma's inequality thus states: If termination occurs with probability one (T=1 everywhere), then the value of V on the starting state (I&T) is no more than the expected value of V on S_0 when escape from S* has occurred (wp.loop.([Ḡ]*I)).
4  Proof of the new rule for almost-certain termination

4.1  Proof of soundness

Theorem 1 (Soundness of §2.2). The technique described in §2.2 is sound.
(s). Then because of the concavity of f, the smallest value of V(s)−E_δV occurs when δ sends exactly weight p to (possibly several) s′ with V(s′) = V(s)−d and exactly weight p̄ to s′ with V(s′) = V(s)+d̄, where p̄ is 1−p and −pd + p̄d̄ = 0. (The construction of p̄, d̄ is to make d̄ as big as possible while satisfying super-martingale wrt. V.)
Fig. 1. The unbounded symmetric random walk example

7.2 Constant-bias random walk (non-terminating)
decrease by at least 1 with probability at least 1. In (1,2] the (smaller) lower bound is 1/2; in (2,3] it's 1/4; in (3,4] it's 1/8 ...
Fig. 4. The "tinsel" process (rotated 90°)
Fig. 5. The "curtain" process (again rotated 90°)
Fig. 6. The "escaping spline" process

7.7 The captured spline (non-terminating)
Fig. 8. The unbounded symmetric random walk example
- Modify System M by adding v_C to its variant function V_H.
- Paste the starting node s_n of each System C_n into the appropriate termination state of M. (These are therefore no longer terminating states.)
- Set ∇ for the new, single system to be the pointwise minimum over all C_n and M of their individual ∇_n and ∇_M functions.
Fig. 9. Blackwell
To be clear, we interpret here our Lem. 7.3.1 from [20], which includes demonic nondeterminism. The lemma reads:

  Let invariant I satisfy [G]*I ⊑ wlp.body.I. Then I&T ⊑ wp.loop.([Ḡ]*I), where T is the termination probability wp.loop.[true] of the loop.

We note that
- G is a predicate over the variables of the program.
- Square brackets [·] convert Boolean true, false to numeric 1, 0 respectively.
- I, T are real-valued expressions over the variables of the program, in the interval [0, 1].
- wp and wlp are the probabilistic generalisations of Dijkstra's weakest- and weakest-liberal precondition functions respectively [6,18,25,20].
This revises an earlier version [22] by correcting typographical errors and adding an extra section §9 on historical background.
This is the probabilistic generalisation: in the traditional, non-probabilistic rule the decrease must be certain. On the other hand, in the traditional case the variant need not be bounded above. In both cases, it must be bounded below.
Although in general there is a question of definedness of E δ V when δ has infinite support and V is unbounded, that does not arise here.
This simplicity shows that the difficulty of finding an ACT rule lies in part in making sure it does not allow too much: what prevents our rule's proving that a biased random walk is ACT ? See §7.2.
This approach is also similar to the work of Segala[26], whose construction based on I/O automata appeared at about the same time as the workshop version of[13]; and it has numerous connections with probabilistic/demonic process algebras as labelled transition systems that alternate between demonic-and probabilistic branching.
A subtle issue here is that there might be V =0 states that s can reach via all of S * but from which it is blocked because it must terminate when V >H -and our z above does not take those into account. That is, the inequality wrt. z might apply only to a subset of the V=0 states that s can reach in the full system S * . But the "actual z", i.e. for the full system, can only be greater still -and so the result holds regardless.
See Thm. 2 in §9.1 for a place where an unbounded variant seems to be required.
In fact they are both equalities; but in general the inequalities shown are what must be verified.
Although we constructed this example ourselves, we later found it in[9, Sec. 3(b)].
We write lgg for that function. Very close to the origin it is undefined: but those cases can be adjusted manually.
16 As hoped, lgg fails in the three-dimensional case.
Note however that our rule does not require V to be unbounded. See §4.2.
http://www.sciencedirect.com/science/article/pii/0304414977900321 .
Foster's (weaker) condition requires only that each state can reach every higher-enumerated state.
Acknowledgements

We are grateful to David Basin and the Information Security Group at ETH Zürich for hosting the six-month stay in Switzerland, during part of which we did this work. And thanks particularly to Andreas Lochbihler, who shared with us the probabilistic termination problem that led to it.

Foster increases the f_i^{(t)} terms above by dividing them by √(f_1^{(t)} + f_1^{(t+1)} + ···), which is non-zero but no more than one, 27 and still (as we will see) the new, larger terms have a finite sum. (A minor detail is that he must show that the sum f_1^{(t)} + f_1^{(t+1)} + ··· does not become zero at some large t and make terms from then on infinite: his assumption (F7) prevents that by ensuring that from no state in S* does a single T-step go entirely into S_0.) With the revised V replacing the earlier "hopeful" definition, the calculation above becomes instead

  Σ_j p_ij V(s_j) = Σ_{1≤t} f_i^{(t+1)}/√(f_1^{(t)} + f_1^{(t+1)} + ···)
                  ≤ Σ_{1≤t} f_i^{(t+1)}/√(f_1^{(t+1)} + f_1^{(t+2)} + ···)
                  ≤ V(s_i) .

This is encouraging: but we still must prove (F3) for our revised definition, 28 i.e. that it is finite for all i and not only for the i=1 that Dini gave us; and we must show that it satisfies (F2), i.e. that it tends to infinity as i does.

For the first, Foster proves that V(s_i) ≤ V(s_1)/p_1i^{(t′)} for any i>1 and some t′>0 with p_1i^{(t′)} > 0, which is one place he uses (F7), in particular that every s_i is accessible from s_1. Specifically, he reasons as follows:

(i) For that t′ and any t we have f_1^{(t′+t)} ≥ p_1i^{(t′)} f_i^{(t)}, because we know that s_1's journey to s_0 can go via s_i.
(ii) The numerator f_i^{(t)} in (6) can therefore be replaced by f_1^{(t′+t)}/p_1i^{(t′)} provided (≤) replaces the equality.
(iii) The sum in the denominator of (6) can be adjusted to start at t′+t rather than t, still preserving the inequality.
(iv) The overall sum in (6) of non-negative terms for V(s_i) is now the "drop the first t′ terms" suffix of that same sum for V(s_1), which we already know to be finite (from Dini), but divided by p_1i^{(t′)}, which we know to be non-zero.

For the second, Foster uses the δ from (F8), showing that V(s_i) is at least (1−δ)/√(f_1^{(t_i)} + f_1^{(t_i+1)} + ···), where t_i is the number of steps after which s_i reaches s_0 with probability at least δ for the first time. By (F8) that t_i tends to infinity as i does, and thus so does V(s_i).

His detailed reasoning is as follows:

(i) Since t_i's tending to infinity is all that is required, any at-most-finite number of i's where t_i=0 can be ignored. Thus pick t_i ≥ 1.
(ii) Then V(s_i) is at least f_i^{(t_i)}/√(f_1^{(t_i)} + f_1^{(t_i+1)} + ···) + ···, a suffix of its defining series (6).

That completes the proof sketch.

27 It is the square-root of the probability that s_1 does not reach s_0 in fewer than t steps.
28 Note the f's in the denominator are subscripted "1", not "i".
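Foster's reweighting can be imitated numerically on a small chain (ours, purely illustrative; in his infinite setting it is Dini's theorem that guarantees finiteness). We build the revised V on the symmetric walk over {0,...,4}, reflecting at s_4 and absorbing at s_0, and check the super-martingale property directly:

```python
import math

# V(s_i) = sum_t f_i(t) / sqrt(f_1(t) + f_1(t+1) + ...), truncated at T
N, T = 4, 401

def p(i, j):
    if i == N:                          # reflecting barrier at s_4
        return 1.0 if j == N - 1 else 0.0
    return 0.5 if abs(i - j) == 1 else 0.0

f = [[0.0] * (N + 1) for _ in range(T + 2)]
for i in range(1, N + 1):
    f[1][i] = p(i, 0)
for t in range(1, T + 1):
    for i in range(1, N + 1):
        f[t + 1][i] = sum(p(i, j) * f[t][j] for j in range(1, N + 1))

r = [0.0] * (T + 2)                     # tail sums of the f_1 series
for t in range(T, 0, -1):
    r[t] = r[t + 1] + f[t][1]

V = [0.0] * (N + 1)
for i in range(1, N + 1):
    V[i] = sum(f[t][i] / math.sqrt(r[t])
               for t in range(1, T + 1) if r[t] > 0)

for i in range(1, N + 1):               # expected next V is at most V itself
    exp = sum(p(i, j) * V[j] for j in range(N + 1))
    assert exp <= V[i] + 1e-6, i
print([round(v, 3) for v in V])         # non-constant, unlike the trivial V
```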
David Blackwell. On transient Markov processes with a countable number of states and stationary transition probabilities. Ann. Math. Statist., 26:654-658, 1955.

Orieta Celiku and Annabelle McIver. Compositional specification and analysis of cost-based properties in probabilistic programs. In Proceedings of Formal Methods, 2005.

Aleksandar Chakarov. Deductive Verification of Infinite-State Stochastic Systems using Martingales. PhD thesis, University of Colorado at Boulder, 2016.

Aleksandar Chakarov and Sriram Sankaranarayanan. Probabilistic program analysis with martingales. In International Conference on Computer Aided Verification, 2013.

Krishnendu Chatterjee, Petr Novotný, and Dorde Žikelić. Stochastic invariants for probabilistic termination. arXiv:1611.01063.

E.W. Dijkstra. A Discipline of Programming. Prentice-Hall, 1976.

Javier Esparza, Andreas Gaiser, and Stefan Kiefer. Proving termination of probabilistic programs using patterns. In International Conference on Computer Aided Verification, 2012.

Luis Fioriti and Holger Hermanns. Probabilistic termination: Soundness, completeness, and compositionality. In Proceedings of the 42nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, 2015.

F.G. Foster. On Markov chains with an enumerable infinity of states. Mathematical Proceedings of the Cambridge Philosophical Society, 48(4):587-591, Oct 1952.

Friedrich Gretz, Joost-Pieter Katoen, and Annabelle McIver. Operational versus weakest pre-expectation semantics for the probabilistic guarded command language. Perform. Eval., 73:110-132, 2014.

S. Hart, M. Sharir, and A. Pnueli. Termination of probabilistic concurrent programs. ACM Trans Prog Lang Sys, 5:356-80, 1983.

Jifeng He, Karen Seidel, and A.K. McIver. Probabilistic models for the guarded command language. Science of Computer Programming, 28:171-92, 1997. First presented at FMTA '95, Warsaw.

C. Jones and G. Plotkin. A probabilistic powerdomain of evaluations. In Proceedings of the IEEE 4th Annual Symposium on Logic in Computer Science, pages 186-95, Los Alamitos, Calif., 1989. Computer Society Press.

Benjamin Lucien Kaminski, Joost-Pieter Katoen, Christoph Matheja, and Federico Olmedo. Weakest precondition reasoning for expected run-times of probabilistic programs. In Proceedings of ESOP, 2016.

David G. Kendall. On non-dissipative Markoff chains with an enumerable infinity of states. Mathematical Proceedings of the Cambridge Philosophical Society, 47(3):633-634, 1951.

Konrad Knopp. Theory and Application of Infinite Series. London, 1928.

D. Kozen. A probabilistic PDL. Jnl Comp Sys Sci, 30(2):162-78, 1985.

Pablo Lessa. Recurrence vs transience: An introduction to random walks.

A.K. McIver and C.C. Morgan. Abstraction, Refinement and Proof for Probabilistic Systems. Tech Mono Comp Sci. Springer, New York, 2005.

A.K. McIver, C.C. Morgan, and T.S. Hoang. Probabilistic termination in B. In D. Bert, J.P. Bowen, S. King, and M. Walden, editors, Proc ZB '03, volume 2651 of LNCS, pages 206-239. Springer, 2003.

Annabelle McIver and Carroll Morgan. A new rule for almost-certain termination of probabilistic- and demonic programs. https://arxiv.org/abs/1612.01091v1, December 2016.

C.C. Morgan. Proof rules for probabilistic loops. In He Jifeng, John Cooke, and Peter Wallis, editors, Proc BCS-FACS 7th Refinement Workshop, Workshops in Computing. Springer, July 1996. ewic.bcs.org/conferences/1996/refinement/papers/paper10.htm.

C.C. Morgan and A.K. McIver. Almost-certain eventualities and abstract probabilities in the quantitative temporal logic qTL. Theo Comp Sci, 293(3):507-34, 2003. Available at [11, key PROB-1]; earlier version appeared in CATS '01.

C.C. Morgan, A.K. McIver, and K. Seidel. Probabilistic predicate transformers. ACM Trans Prog Lang Sys, 18(3):325-53, May 1996. doi.acm.org/10.1145/229542.229547.

Roberto Segala. Modeling and Verification of Randomized Distributed Real-Time Systems. PhD thesis, MIT, 1995.
|
[] |
[
"The Tradeoff Analysis in RF-Powered Backscatter Cognitive Radio Networks",
"The Tradeoff Analysis in RF-Powered Backscatter Cognitive Radio Networks"
] |
[
"Dinh Thai Hoang \nSchool of Computer Engineering\nNanyang Technological University (NTU)\nSingapore\n",
"Dusit Niyato \nSchool of Computer Engineering\nNanyang Technological University (NTU)\nSingapore\n",
"Ping Wang \nSchool of Computer Engineering\nNanyang Technological University (NTU)\nSingapore\n",
"Dong In Kim \nSchool of Information and Communication Engineering\nSungkyunkwan University (SKKU)\nKorea\n",
"Zhu Han \nDepartment of Electrical and Computer Engineering\nUniversity of Houston\nUSA\n"
] |
[
"School of Computer Engineering\nNanyang Technological University (NTU)\nSingapore",
"School of Computer Engineering\nNanyang Technological University (NTU)\nSingapore",
"School of Computer Engineering\nNanyang Technological University (NTU)\nSingapore",
"School of Information and Communication Engineering\nSungkyunkwan University (SKKU)\nKorea",
"Department of Electrical and Computer Engineering\nUniversity of Houston\nUSA"
] |
[] |
In this paper, we introduce a new model for RF-powered cognitive radio networks with the aim of improving the performance of secondary systems. In our proposed model, when the primary channel is busy, the secondary transmitter is able either to backscatter the primary signals to transmit data to the secondary receiver or to harvest RF energy from the channel. The harvested energy is then used to transmit data to the receiver when the channel becomes idle. We first analyze the tradeoff between backscatter communication and the harvest-then-transmit protocol in the network. To maximize the overall transmission rate of the secondary network, we formulate an optimization problem to find the time ratio between the backscatter and harvest-then-transmit modes. Through numerical results, we show that the proposed model can achieve an overall transmission rate higher than that obtained by using either backscatter communication or the harvest-then-transmit protocol alone.
|
10.1109/glocom.2016.7842321
|
[
"https://arxiv.org/pdf/1608.01789v1.pdf"
] | 18,888,033 |
1608.01789
|
8bc1161aca56b2f1c64c7c983b027ddbff71d43c
|
The Tradeoff Analysis in RF-Powered Backscatter Cognitive Radio Networks
Dinh Thai Hoang
School of Computer Engineering
Nanyang Technological University (NTU)
Singapore
Dusit Niyato
School of Computer Engineering
Nanyang Technological University (NTU)
Singapore
Ping Wang
School of Computer Engineering
Nanyang Technological University (NTU)
Singapore
Dong In Kim
School of Information and Communication Engineering
Sungkyunkwan University (SKKU)
Korea
Zhu Han
Department of Electrical and Computer Engineering
University of Houston
USA
The Tradeoff Analysis in RF-Powered Backscatter Cognitive Radio Networks
Cognitive radio networks, ambient backscattering, RF energy harvesting, convex optimization
In this paper, we introduce a new model for RF-powered cognitive radio networks with the aim of improving the performance of secondary systems. In our proposed model, when the primary channel is busy, the secondary transmitter is able either to backscatter the primary signals to transmit data to the secondary receiver or to harvest RF energy from the channel. The harvested energy is then used to transmit data to the receiver when the channel becomes idle. We first analyze the tradeoff between backscatter communication and the harvest-then-transmit protocol in the network. To maximize the overall transmission rate of the secondary network, we formulate an optimization problem to find the time ratio between the backscatter and harvest-then-transmit modes. Through numerical results, we show that the proposed model can achieve an overall transmission rate higher than that obtained by using either backscatter communication or the harvest-then-transmit protocol alone.
I. INTRODUCTION
Recently, RF energy harvesting technique has been integrated and implemented in cognitive radio networks (CRNs). This leads to a new type of networks, called RF-powered CRNs. In these networks, secondary users can harvest RF energy when a primary channel is busy, and use the energy to transmit data when the primary channel is idle [1], [2]. This is referred to as the harvest-then-transmit protocol/mode. There are many advantages of RF energy harvesting in CRNs as discussed in [3]. However, for RF-powered CRNs, when the channel idle probability is low, i.e., the channel is mostly occupied by primary users, the secondary transmitters have less opportunity to transmit data, resulting in a low overall transmission rate for secondary networks. Therefore, there is a need to overcome this shortcoming.
Ambient backscatter communication [4], [5] has been introduced as a new communication method. The technique allows wireless data transmission between two wireless nodes using ambient signals without needing a standard form of energy supply and storage. In ambient backscatter communication, when a transmitter wants to communicate with a receiver, the transmitter will backscatter signals received from a signal source, e.g., a TV tower, to its receiver. The receiver then can decode and obtain useful information from the transmitter. However, similar to traditional RF-powered CRNs, the performance of backscatter communication greatly depends on the ambient signals. Specifically, when the idle channel probability is high, the performance of the ambient backscatter communication is low due to limited time to backscatter. Therefore, in this paper, we propose a novel model which utilizes the advantages of both backscatter communication and harvest-then-transmit protocol in RF-powered CRNs.
In particular, we consider an RF-powered CRN with the backscatter communication capability. In the network, the secondary transmitter (ST) is able not only to harvest energy from radio signals, but also to backscatter these signals to its receiver for data transmission. As highlighted in [6], backscatter communication and energy harvesting cannot practically be performed at the same time. If the ST performs backscatter communication, the RF carrier wave is being modulated which can significantly reduce the amount of harvested energy, and mostly it is not sufficient to transmit data. Clearly, when the channel is mostly busy, the ST should use backscatter mode to transmit data. By contrast, when the channel is less busy, the ST should use the harvest-then-transmit mode. This leads to an important question of how to tradeoff the time for using backscatter and harvest-then-transmit modes such that the overall transmission rate of the secondary user is maximized. Here, the overall transmission rate refers to the total rate from both backscatter and harvest-then-transmit modes.
The time tradeoff problem for wireless powered communication networks has been studied in a few works in the literature. For example, in [7], the authors studied the tradeoff between wireless energy transfer and wireless information transmission for wireless powered communication networks by introducing the harvest-then-transmit protocol with the aim of maximizing the transmission rate of the network. In [8], the optimal tradeoff time between the energy harvesting phase and the data transmission phase for an underlay CRN was investigated by adopting convex optimization techniques. Extending [8], the authors in [9] considered a cooperation scenario between primary users (PUs) and secondary users (SUs). The SUs need to determine not only how much time to allocate for energy harvesting, but also how much power to allocate for relaying the PUs' data or transmitting their own data.
In this paper, we analyze the tradeoff between energy harvesting and backscatter communication for an overlay RF-powered CRN. The main aim is to improve the overall transmission rate of the secondary network. We formulate an optimization problem to obtain the optimal time for performing energy harvesting and backscatter communication when the channel is busy. We show that the problem is convex, and hence any tool from convex optimization can be used to obtain a globally optimal solution. Through numerical results, we demonstrate that our proposed solution can significantly improve the performance of the secondary network compared with baseline methods. To the best of our knowledge, this is the first work that proposes the idea of integrating ambient backscatter communication with wireless powered CRNs. Moreover, we are the first to introduce the tradeoff analysis in RF-powered backscatter CRNs.
II. SYSTEM MODEL

A. Network Setting
We study an RF-powered backscatter CRN composed of a primary transmitter (PT), and a secondary transmitter (ST) communicating with a secondary receiver (SR). The ST is equipped with an RF energy harvesting module and a backscatter circuit in order to harvest RF energy and backscatter radio signals, respectively. The ST can also transmit data as normal wireless transmission. When the PT, e.g., an amplitude modulated (AM) broadcasting base station (BS) or a TV tower, transmits RF signal to its primary receiver (PR), the primary channel is busy. At the same time, the ST can either harvest energy and store it in the energy storage or backscatter the signal for data transmission [4]. The harvested energy is used for direct wireless data transmission to the SR when the primary channel is idle. This is referred to as the harvest-then-transmit mode while the other is referred to as the backscatter mode. We assume that the SR perfectly knows the mode of the ST and applies corresponding demodulators to extract useful information.
B. Tradeoff in RF-Powered Backscatter Cognitive Radio Network
In the proposed system, when the PT transmits signals, i.e., the primary channel is busy, the ST can transmit data to the SR using backscatter communication (Fig. 1(a)) or harvest energy (Fig. 1(b)). Let β denote the normalized channel idle period and (1 − β) the normalized channel busy period (as shown in Fig. 1). When the channel is busy, α denotes the time ratio for energy harvesting, and (1 − α) denotes the time ratio for backscatter communication. The energy harvested during the time ratio α is used for direct data transmission during the idle channel period. We observe that there is a tradeoff between the time ratios for backscatter communication and energy harvesting. As shown in Fig. 2 (see Footnote 1), when α is small, i.e., the ST spends much time on backscatter communication, the overall transmission rate is small. This is because the ST cannot fully utilize the channel idle period for direct data transmission due to the small amount of energy harvested. As α increases, the overall transmission rate increases, since more harvested energy can be used to transmit more data. However, when the ST spends much time on energy harvesting, i.e., α is high, the overall transmission rate decreases, since the channel idle period is limited while the backscatter communication is not efficiently used during the channel busy period.
Clearly, the ST can achieve the optimal overall transmission rate by balancing between backscatter communication and energy harvesting during the busy channel period. In particular, there is an optimal value α*, which we aim to find by formulating and solving an optimization problem in the following sections.
III. PROBLEM FORMULATION AND PROPOSED SOLUTION

A. Problem Formulation
We aim at maximizing the overall transmission rate of the secondary network which is the number of information bits transmitted by the ST per time unit. We denote R as the overall transmission rate which is obtained as follows:
R = R_b + R_h, (1)

where R_b and R_h are the numbers of bits transmitted per time unit using the backscatter mode and the harvest-then-transmit mode, respectively. 1) Backscatter mode: In the following, we explain how the SR can receive information using ambient backscatter and how to control the backscatter transmission rate between the ST and the SR.
a) Extracting backscatter information from ambient signals: We briefly describe the method used by the SR to extract information from the ST through ambient backscatter communication; for more details, the reader is referred to [4]. The core idea of backscatter communication is that the ST backscatters information at a lower rate than that of the ambient signals, e.g., signals from the PT. Thus, the SR is able to distinguish the two signals by using averaging mechanisms. In particular, the authors in [4] presented a simple circuit diagram to demodulate the information at the SR. There are two stages, i.e., an averaging stage and a compute-threshold stage. In the first stage, the SR smooths and averages out the natural variations in the PT signals. The output of the averaging stage yields two signal levels, corresponding to the voltage V_0 (bit '0') and the voltage V_1 (bit '1'), with V_1 > V_0. Then, in the second stage, the SR computes the threshold between these two levels, which is their average, i.e., (V_0 + V_1)/2. If the received signal is greater than the threshold, the SR concludes that the received signal is V_1, and V_0 otherwise. Finally, the comparator takes the two voltages as inputs and generates a bit '0' or '1' accordingly.
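As a toy illustration of this two-stage demodulator, the Python sketch below averages the received samples over fixed symbol windows and then thresholds against the midpoint of the two resulting levels. The signal values and window size are invented for illustration and are not taken from [4].

```python
import statistics

def demodulate(samples, window):
    """Two-stage demodulation sketch: averaging, then midpoint threshold.

    Stage 1 smooths each symbol window into one level; Stage 2 compares
    each level against (V0 + V1) / 2, the average of the two extreme
    levels observed.  Illustrative only.
    """
    # Stage 1: averaging -- one smoothed level per symbol window.
    levels = [statistics.mean(samples[i:i + window])
              for i in range(0, len(samples), window)]
    v0, v1 = min(levels), max(levels)
    threshold = (v0 + v1) / 2   # Stage 2: midpoint threshold.
    return [1 if lv > threshold else 0 for lv in levels]

# Two symbols: a low level (bit '0') then a high level (bit '1'), with jitter.
rx = [0.9, 1.1, 1.0, 1.05, 2.0, 1.9, 2.1, 2.05]
print(demodulate(rx, window=4))  # -> [0, 1]
```

The averaging window trades data rate against noise robustness, mirroring the rate/RC-element tradeoff discussed for the real circuit below.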
b) Transmission rate of the backscatter mode: It is shown in [4] that the transmission rate of ambient backscatter communication depends on the setting of the RC circuit elements. For example, to transmit data at rates of 1 kbps and 10 kbps, the circuit element values (R_1, R_2, C_1, C_2) are set at (150 kΩ, 10 MΩ, 4.7 nF, 10 nF) and (150 kΩ, 10 MΩ, 680 pF, 1 µF), respectively. Therefore, letting B_b denote the transmission rate of ambient backscatter communication, the total number of bits transmitted using the backscatter mode in the RF-powered backscatter CRN is expressed as follows:

R_b = (1 − β)(1 − α)B_b. (2)
Here, we note that, as shown by the real implementations in [4], when the ST backscatters signals to the SR, the ST can still harvest energy from the RF signals. Although the amount of harvested energy is not enough for data transmission (when the channel is idle), it is sufficient to sustain the backscatter operations of the ST. Therefore, in (2), there is no need to account for the circuit energy consumption of the backscatter mode.
2) Harvest-then-transmit mode: This mode includes two phases. First, the ST harvests energy from the PT in the energy harvesting period. Then, the ST will use the harvested energy to transmit data in the data transmission period. In the following, we show the amount of energy that the ST can harvest in the first phase and the number of bits transmitted in the second phase.
a) Harvesting energy: From the Friis equation [10], we can determine the RF power harvested from the PT by the ST in free space as follows:

P_R = δ P_T G_T G_R λ² / (4πd)², (3)

where P_R is the ST's harvested power, P_T is the PT transmission power, δ is the energy harvesting efficiency, G_T is the PT antenna gain, G_R is the ST antenna gain, λ is the emitted wavelength, and d is the distance between the PT and the ST. We then derive the total amount of harvested energy over the energy harvesting period α(1 − β) as follows:

E_h = α(1 − β)P_R = α(1 − β) δ P_T G_T G_R λ² / (4πd)². (4)
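The Friis relation in (3) is straightforward to evaluate numerically. The Python sketch below plugs in the Section-IV figures (a 10 kW FM station at 100 MHz, 6 dBi antennas, roughly 6.7 miles away, 60% harvesting efficiency); the dBi-to-linear conversion and the miles-to-meters factor are our additions, not stated in the paper.

```python
import math

def harvested_power(P_T, G_T_dBi, G_R_dBi, freq_hz, d, delta):
    """Free-space harvested power per eq. (3): delta*P_T*G_T*G_R*lam^2/(4*pi*d)^2."""
    lam = 3e8 / freq_hz               # emitted wavelength (m)
    G_T = 10 ** (G_T_dBi / 10)        # dBi -> linear gain (our conversion)
    G_R = 10 ** (G_R_dBi / 10)
    return delta * P_T * G_T * G_R * lam ** 2 / (4 * math.pi * d) ** 2

# Section-IV-style parameters: 10 kW at 100 MHz, 6 dBi gains,
# ~6.7 miles (converted to meters), harvesting efficiency 0.6.
P_R = harvested_power(1e4, 6, 6, 100e6, 6.7 * 1609.34, 0.6)
print(f"P_R = {P_R * 1e6:.1f} uW")   # prints on the order of tens of uW
```

With these numbers the harvested power comes out to a few tens of microwatts, which is why the circuit consumption E_c matters in (5).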
b) Transmitting data: After harvesting energy in the first phase, the ST uses all of the harvested energy, minus the circuit energy consumption, to transmit data over the data transmission period µ when the channel is idle. Let P_tr denote the transmission power of the ST in the data transmission period µ (µ ∈ [0, β], as shown in Fig. 1(c)) when the channel is idle. Thus, P_tr can be obtained from

P_tr = (E_h − E_c)/µ, (5)
where E_h is the total harvested energy and E_c is the circuit energy consumption. From [11], given the transmit power P_tr, the transmit data rate can be determined as follows:

r_h = κW log₂(1 + P_tr/P_0), (6)

where κ ∈ [0, 1] is the transmission efficiency, W is the bandwidth of the primary channel, and P_0 is the ratio between the noise power N_0 and the channel gain coefficient h, i.e., P_0 = N_0/h. Then, the number of bits transmitted per time unit using the harvest-then-transmit mode is given by

R_h = µκW log₂(1 + P_tr/P_0). (7)
Here, since R_h in (7) must be non-negative, P_tr in (5) must also be non-negative. Consequently, from (5), we have the following condition:

E_h = α(1 − β)P_R ≥ E_c, (8)

which means

α ≥ E_c / ((1 − β)P_R). (9)
We denote α† = E_c / ((1 − β)P_R) as the minimum energy harvesting time needed to obtain enough energy to supply the circuit of the ST in the harvest-then-transmit mode. Then, we have α ≥ α†. Note that α ≤ 1; therefore, if α† ≤ 1, then R_h can be greater than zero. We denote m = (1 − β)P_R / (P_0 µ) and n = 1 − E_c / (P_0 µ); then from (7) we have

R_h = µκW log₂(n + mα),  if α† ≤ 1 and α† ≤ α,
R_h = 0,                 otherwise. (10)

Here, we note that m > 0 and (n + mα) > 0, ∀α ∈ [α†, 1].
Then, the optimization problem can be formulated as in (11).
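The pieces above can be collected into a single rate function. The Python sketch below evaluates R(α, µ) from (1), (2), (5) and (7) and sweeps α to expose the tradeoff of Fig. 2; the parameter values in the sweep are illustrative placeholders, not the paper's exact setup.

```python
import math

def overall_rate(alpha, mu, beta, B_b, kappa, W, P_R, E_c, P0):
    """Overall secondary rate R(alpha, mu) combining eqs. (1), (2), (5), (7).

    Returns only the backscatter bits when condition (8) fails, i.e. the
    harvested energy does not cover the circuit consumption.
    """
    R_b = (1 - beta) * (1 - alpha) * B_b        # eq. (2): backscatter bits
    E_h = alpha * (1 - beta) * P_R              # eq. (4): harvested energy
    if mu <= 0 or E_h <= E_c:                   # condition (8) violated
        return R_b
    P_tr = (E_h - E_c) / mu                     # eq. (5): transmit power
    return R_b + mu * kappa * W * math.log2(1 + P_tr / P0)   # eqs. (1), (7)

# Sweep alpha to see the tradeoff of Fig. 2 (illustrative parameters).
for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    r = overall_rate(alpha, mu=0.3, beta=0.3, B_b=33e3, kappa=0.6,
                     W=100e3, P_R=4.6e-5, E_c=3.16e-7, P0=1e-6)
    print(f"alpha = {alpha:.2f}: R = {r / 1e3:.1f} kbps")
```

The sweep reproduces the qualitative shape described above: low rate near α = 0, a rise as harvested energy grows, and diminishing returns as backscatter time is squeezed out.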
B. Proposed Solution
First, from (11), when R(α, µ) = (1 − β)(1 − α)B_b, it is easy to show that

max_{α,µ} R(α, µ) = R(α = 0) = (1 − β)B_b, ∀α ∈ [0, 1]. (12)
Second, through Theorem 1, we prove that when α† ≤ 1 and α† ≤ α, the optimal overall transmission rate is achieved when the ST transmits data over the entire channel idle period, i.e., max_{α,µ} R(α, µ) = R(α, β).

THEOREM 1. When α† ≤ 1 and α† ≤ α, if we consider R_h from (10) as a function of µ, then R_h reaches its highest value if and only if µ = β. In other words,

max_µ R_h(µ) = R_h(β), ∀µ ∈ [0, β]. (13)
The proof of Theorem 1 is given in Appendix A. From Theorem 1, the optimization problem in (11) can be rewritten with only one variable α as in (14). After that, we give the following theorem. The proof of Theorem 2 is similar to the proof of Theorem 1 in Appendix A and is omitted here due to page limits.
THEOREM 3. For α ∈ [α†, 1] and α† ≤ 1, if B_b ≥ βκWm / ((mα† + n)(1 − β) ln 2), then α* = α†. Moreover, when B_b ≤ βκWm / ((m + n)(1 − β) ln 2), then α* = 1.
The proof of Theorem 3 is similar to the proof of Theorem 1 in Appendix A and is omitted here due to page limits.
From Theorem 2 and Theorem 3, we show graphically the optimal solution α* ∈ [α†, 1] under the variation of B_b in Fig. 3. Note that R in (14) is concave in α on [α†, 1], since its second derivative −βκWm² / ((n + mα)² ln 2) is negative; this is validated in Fig. 2.
Finally, we can derive the maximum value of R as in (15).
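Because the objective in (14) is concave in α, its maximizer has a closed form: setting dR/dα = 0 and clipping to [α†, 1] reproduces the boundary cases of Theorem 3. The Python sketch below implements this; it is our reconstruction, not code from the paper, and assumes α† ≤ 1 with µ = β per Theorem 1. The parameter values in the usage line are illustrative.

```python
import math

def optimal_alpha(beta, B_b, kappa, W, P_R, E_c, P0):
    """Stationary point of the concave objective in (14), clipped to [alpha_dag, 1].

    R(alpha) = (1-beta)(1-alpha)*B_b + beta*kappa*W*log2(n + m*alpha);
    dR/dalpha = -(1-beta)*B_b + beta*kappa*W*m / ((n + m*alpha) * ln 2) = 0
    yields the interior optimum below.  A sketch, assuming alpha_dag <= 1.
    """
    mu = beta                                  # Theorem 1: mu* = beta
    m = (1 - beta) * P_R / (P0 * mu)
    n = 1 - E_c / (P0 * mu)
    alpha_dag = E_c / ((1 - beta) * P_R)       # minimum harvesting time
    interior = (beta * kappa * W * m / ((1 - beta) * B_b * math.log(2)) - n) / m
    return min(max(interior, alpha_dag), 1.0)  # clipping gives Theorem 3's cases

alpha_star = optimal_alpha(beta=0.3, B_b=33e3, kappa=0.6, W=100e3,
                           P_R=4.6e-5, E_c=3.16e-7, P0=1e-6)
print(f"alpha* = {alpha_star:.3f}")
```

When B_b is below the Theorem 3 threshold the interior point exceeds 1 and is clipped, i.e., the ST harvests for the whole busy period, matching the behavior reported in Fig. 4(a).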
IV. PERFORMANCE EVALUATION

A. Experiment Setup
In the RF-powered backscatter CRN under consideration, the PT is an FM radio station. The bandwidth and the frequency of the FM signals are set at 100 kHz and 100 MHz, respectively. The idle channel probability is 0.3. Unless otherwise stated, the transmission power of the PT that broadcasts the FM signals is set at 10 kW, and the backscatter transmission rate is set at 33 kbps. The PT antenna gain and the ST antenna gain are set at 6 dBi as in [12], and the circuit power consumption is set at -35 dBm. Similar to [4], the distance from the PT to the ST is assumed to be around 6.7 miles, while the distance between the ST and the SR is within 1 meter. The energy harvesting efficiency and the data transmission efficiency are both set at 0.6.
B. Numerical Results
In Fig. 2, we show the variation of the objective function and the optimal value of α. As shown in Fig. 2, when α ∈ [α†, 1], the objective function R(α) is concave, and it achieves its highest value at α* = 0.41125. Then, in Figs. 4(a) and (b), we show the optimal value of α and the overall transmission rate of the secondary network when β is varied. As shown in Fig. 4(a), as the channel idle ratio increases from 0.1 to 0.6, the optimal value of α gradually increases from 0.05 to 1. It remains at 1 when the channel idle ratio is higher than 0.6. This means that as the channel idle ratio increases, the ST spends more time harvesting energy instead of performing backscatter communication. Then, when the channel idle ratio is greater than or equal to 0.6, the ST always uses the harvest-then-transmit mode. The reason is that the harvest-then-transmit mode can provide a higher transmission rate than the backscatter mode. Thus, when the channel idle ratio is high, i.e., the busy channel period is small, the ST spends the whole busy period harvesting energy. Consequently, more bits can be transmitted during the channel idle period.
max_{α,µ} R(α, µ) = (1 − β)(1 − α)B_b + µκW log₂(n + mα),  if α† ≤ 1 and α† ≤ α,
                    (1 − β)(1 − α)B_b,                      otherwise. (11)

max_α R(α) = (1 − β)(1 − α)B_b + βκW log₂(n + mα),  if α† ≤ 1 and α† ≤ α,
             (1 − β)B_b,                             otherwise. (14)

R_max = max{ (1 − β)B_b, (1 − β)(1 − α*)B_b + βκW log₂(n + mα*) },  if α† ≤ 1 and α† ≤ α*,
        (1 − β)B_b,                                                  otherwise. (15)

In Fig. 4(b), we show the overall transmission rate obtained by the proposed solution. We compare the optimal results with two baseline strategies, i.e., the backscatter mode (BM) and the harvest-then-transmit mode (HM). Note that in the BM or the HM, the ST only performs backscatter communication or energy harvesting, respectively. As shown in Fig. 4(b), the proposed solution always achieves the highest transmission rate compared with the BM and the HM. In particular, when the channel idle ratio is 0.1, the overall transmission rate obtained by the proposed solution is approximately 2 times greater than that of the HM. When the channel idle ratio is 0.6, the overall transmission rate obtained by the proposed solution is equal to that of the HM and almost 1.3 times greater than that of the BM. Note that when the channel idle ratio increases from 0.4 to 0.9, there is a decrease in the transmission rate obtained by the HM. The reason is that when the channel idle ratio is too high, there is less energy for the ST to harvest, and thus the transmission rate is reduced. We then vary the transmission power of the PT (Fig. 5) and the transmission rate of the BM (Fig. 6) to evaluate the performance as well as the tradeoff between the BM and HM modes. In Fig. 5(a), as the transmission power of the PT increases, the optimal value of α remains at zero (i.e., the BM) when the transmission power is lower than 13 kW, and it increases gradually to 0.9 as the transmission power increases from 13 kW to 50 kW. In Fig. 6(a), as the transmission rate of the backscatter mode increases, the optimal value of α remains at one (i.e., the HM) when the transmission rate of the BM is lower than 21 kbps, and it is then reduced gradually, i.e., the ST tends to spend more time in the backscatter mode, reaching zero when the transmission rate of the BM is greater than 45 kbps. Furthermore, as shown in both Fig. 5(b) and Fig. 6(b), the overall transmission rate obtained by the proposed solution always achieves the best performance compared with that of the BM and the HM.
V. SUMMARY
In this work, we have proposed a new concept of integrating the ambient backscatter communication with RF-powered CRN. We then have introduced an optimization problem to obtain an optimal solution for the secondary transmitter to backscatter signals or to harvest energy for data transmission.
The objective is to maximize the overall transmission rate for the secondary network. Numerical results have shown that our proposed solution can achieve significantly better performance compared with using either backscatter communication or harvest-then-transmit protocol.
APPENDIX A: THE PROOF OF THEOREM 1
Since α† ≤ 1 and α† ≤ α, from (10) we have

R_h = µκW log₂( 1 + (α(1 − β)P_R − E_c) / (P_0 µ) ). (16)
To prove Theorem 1, we denote

a = κW,  b = (1/P_0)(α(1 − β)P_R − E_c), (17)

where a and b are positive constants, since we now consider R_h as a function of µ. Then, (16) becomes R_h(µ) = aµ log₂(1 + b/µ).
We then derive the first and second derivatives of R_h with respect to µ as follows:

R′_h(µ) = a log₂(1 + b/µ) − ab / ((µ + b) ln 2), (20)

R″_h(µ) = − ab² / (µ(µ + b)² ln 2). (21)
From (21), we see that R″_h(µ) < 0, since a, b, and µ are all greater than 0. Hence, R′_h(µ) is a decreasing function of µ. Moreover, from (20), we derive the following result:

lim_{µ→+∞} R′_h(µ) = 0. (22)

Since R′_h(µ) decreases to 0 as µ → +∞, this implies that R′_h(µ) > 0, ∀µ ∈ (0, β]. As a result, R_h(µ) is an increasing function over µ ∈ [0, β], and thus max_µ R_h(µ) = R_h(β), ∀µ ∈ [0, β].
The proof now is completed.
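Theorem 1 can also be spot-checked numerically: R_h(µ) = aµ log₂(1 + b/µ) should be strictly increasing in µ for any positive a and b. The Python snippet below verifies this on a grid; the constants a and b are arbitrary illustrative values, not from the paper.

```python
import math

# Numerical spot-check of Theorem 1: R_h(mu) = a*mu*log2(1 + b/mu)
# is strictly increasing on (0, 1] for positive a, b, so transmitting
# over the whole idle period (mu = beta) is optimal.
a, b = 2.0, 0.5                                   # illustrative constants
R_h = lambda mu: a * mu * math.log2(1 + b / mu)
grid = [0.01 * k for k in range(1, 101)]           # mu in (0, 1]
values = [R_h(mu) for mu in grid]
assert all(x < y for x, y in zip(values, values[1:]))  # monotone increasing
print("R_h increasing on (0, 1]: OK")
```

This mirrors the analytic argument: since ln(1 + x) > x/(1 + x) for x > 0, the derivative (20) is positive everywhere on the grid.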
Fig. 1. RF-powered cognitive radio network with ambient backscatter communication.

Fig. 2. The optimal value of α.
THEOREM 2. When α ∈ [α†, 1], α† ≤ 1, and the backscatter transmission rate satisfies B_b ∈ [βκWm / ((m + n)(1 − β) ln 2), βκWm / ((mα† + n)(1 − β) ln 2)], there exists a globally optimal solution α* ∈ [α†, 1] which maximizes R.
Fig. 3. Optimal value of α under the variation of B_b when R_h ≥ 0.

Fig. 4. The performance of the system under the variation of the channel idle ratio.

Fig. 5. The performance of the system under the variation of the transmission power of the base station.

Fig. 6. The performance of the system under the variation of the transmission rate of the backscatter mode.
Footnote 1: The parameter setting for obtaining the result in Fig. 2 is provided in Section IV.
[1] S. Park and D. Hong, "Optimal spectrum access for energy harvesting cognitive radio network," IEEE Transactions on Wireless Communications, vol. 12, no. 12, pp. 6166-6179, Dec. 2013.
[2] D. T. Hoang, D. Niyato, P. Wang, and D. I. Kim, "Opportunistic channel access and RF energy harvesting in cognitive radio networks," IEEE Journal on Selected Areas in Communications, vol. 32, no. 11, pp. 2039-2052, Nov. 2014.
[3] D. T. Hoang, D. Niyato, P. Wang, and D. I. Kim, "Performance optimization for cooperative multiuser cognitive radio networks with RF energy harvesting capability," IEEE Transactions on Wireless Communications, vol. 14, no. 7, pp. 3614-3629, Mar. 2015.
[4] V. Liu, A. Parks, V. Talla, S. Gollakota, D. Wetherall, and J. R. Smith, "Ambient backscatter: Wireless communication out of thin air," in Proceedings of the ACM SIGCOMM, pp. 39-50, Hong Kong, Aug. 2013.
[5] B. Kellogg, A. Parks, S. Gollakota, J. R. Smith, and D. Wetherall, "Wi-Fi backscatter: Internet connectivity for RF-powered devices," in Proceedings of the ACM SIGCOMM, pp. 607-618, Chicago, Aug. 2014.
[6] P. Zhang and D. Ganesan, "Enabling bit-by-bit backscatter communication in severe energy harvesting environments," in Proceedings of the 11th USENIX Conference on Networked Systems Design and Implementation, pp. 345-357, Washington, Apr. 2014.
[7] H. Ju and R. Zhang, "Throughput maximization in wireless powered communication networks," IEEE Transactions on Wireless Communications, vol. 13, no. 1, pp. 418-428, Jan. 2014.
[8] V. Rakovic, D. Denkovski, Z. H. Velkov, and L. Gavrilovska, "Optimal time sharing in underlay cognitive radio systems with RF energy harvesting," in IEEE International Conference on Communications, pp. 7689-7697, London, Jun. 2015.
[9] S. Yin, E. Zhang, Z. Qu, L. Yin, and S. Li, "Optimal cooperation strategy in cognitive radio systems with energy harvesting," IEEE Transactions on Wireless Communications, vol. 13, no. 9, pp. 4693-4707, Jan. 2014.
[10] C. A. Balanis, Antenna Theory: Analysis and Design. New York, NY, USA: Wiley, 2012.
[11] H. Huang and V. K. N. Lau, "Decentralized delay optimal control for interference networks with limited renewable energy storage," IEEE Transactions on Signal Processing, vol. 60, no. 5, pp. 2552-2561, May 2012.
[12] D. Y. Kim and D. I. Kim, "Reverse-link interrogation range of a UHF MIMO-RFID system in Nakagami-m fading channels," IEEE Transactions on Industrial Electronics, vol. 57, no. 4, pp. 1468-1477, Apr. 2010.
|
[] |
[
"Nonlinear Biomedical Physics Estimating the distribution of dynamic invariants: illustrated with an application to human photo-plethysmographic time series",
"Nonlinear Biomedical Physics Estimating the distribution of dynamic invariants: illustrated with an application to human photo-plethysmographic time series"
] |
[
"Michael Small [email protected] \nDepartment of Electronic and Information Engineering\nHong Kong Polytechnic University\nHung HomKowloon, Hong Kong\n"
] |
[
"Department of Electronic and Information Engineering\nHong Kong Polytechnic University\nHung HomKowloon, Hong Kong"
] |
[
"Nonlinear Biomedical Physics"
] |
Dynamic invariants are often estimated from experimental time series with the aim of differentiating between different physical states in the underlying system. The most popular schemes for estimating dynamic invariants are capable of estimating confidence intervals, however, such confidence intervals do not reflect variability in the underlying dynamics. We propose a surrogate based method to estimate the expected distribution of values under the null hypothesis that the underlying deterministic dynamics are stationary. We demonstrate the application of this method by considering four recordings of human pulse waveforms in differing physiological states and show that correlation dimension and entropy are insufficient to differentiate between these states. In contrast, algorithmic complexity can clearly differentiate between all four rhythms.
|
10.1186/1753-4631-1-8
| null | 1,840,597 |
nlin/0308032
|
ff6be558289b0869f18c5112a4a534d5e482cfdb
|
Nonlinear Biomedical Physics Estimating the distribution of dynamic invariants: illustrated with an application to human photo-plethysmographic time series
2007
Michael Small [email protected]
Department of Electronic and Information Engineering
Hong Kong Polytechnic University
Hung HomKowloon, Hong Kong
Nonlinear Biomedical Physics Estimating the distribution of dynamic invariants: illustrated with an application to human photo-plethysmographic time series
Nonlinear Biomedical Physics
Nonlinear Biomedical Physics 2007, 1:8. doi:10.1186/1753-4631-1-8. Received: 22 May 2007; Accepted: 23 July 2007. Published by BioMed Central. This article is available from: http://www.nonlinearbiomedphys.com/content/1/1/8
Dynamic invariants are often estimated from experimental time series with the aim of differentiating between different physical states in the underlying system. The most popular schemes for estimating dynamic invariants are capable of estimating confidence intervals, however, such confidence intervals do not reflect variability in the underlying dynamics. We propose a surrogate based method to estimate the expected distribution of values under the null hypothesis that the underlying deterministic dynamics are stationary. We demonstrate the application of this method by considering four recordings of human pulse waveforms in differing physiological states and show that correlation dimension and entropy are insufficient to differentiate between these states. In contrast, algorithmic complexity can clearly differentiate between all four rhythms.
Background
Various dynamic invariants are often estimated from time series in a wide variety of scientific disciplines. It has long been known that these estimates (and in particular correlation dimension estimates) alone are not sufficient to differentiate between chaos and noise. Most notably, the method of surrogate data [1] was introduced in an attempt to reduce the rate of false positives during the hunt for physical examples of chaotic dynamics. Although it is not possible to find conclusive evidence of chaos through estimation of dynamic invariants, surrogate methods are often used to generate a distribution of statistic values (i.e. the estimates of the dynamic invariant) under the hypothesis of linear noise. In the most general form, the standard surrogate methods can generate the distribution of statistic values under the null hypothesis of a static monotonic nonlinear transformation of linearly filtered noise.
In this communication, we introduce a significant generalisation of a recent surrogate generation algorithm [2,3]. The pseudo-periodic surrogate (PPS) algorithm allows one to generate data consistent with the null hypothesis of a noise driven periodic orbit -provided the data exhibits pseudo-periodic dynamics. Previously, this algorithm has been applied to differentiate between a noisy limit cycle, and deterministic chaos. By modifying this algorithm and applying it to noisy time series data, we are able to generate surrogate time series that are independent trajectories of the same deterministic system, measured via the same imperfect observation function. That is, we assume that there is a deterministic dynamical system subject to additive independent and identically distributed (i.i.d.) observational noise. This ensemble of attractor trajectory surrogates (ATS) can then be used to estimate the distribution of statistic values for estimates of any statistic derived from these time series.
The statistics of greatest interest to us are dynamic invariants of the underlying attractor, and in particular the correlation dimension and entropy estimates provided by the Gaussian kernel algorithm (GKA) [4,5]. Our choice of the GKA is entirely arbitrary, but based on our familiarity with this particular algorithm. True estimation of dynamic invariants from noisy data is a process fraught with difficulty; in this paper we are only concerned with estimating the distribution of estimates. To emphasise this point further we repeat our analysis with another quantity, Lempel-Ziv complexity [6], which does not constitute a dynamic invariant. Nonetheless, our algorithm provides a reliable estimate of the distribution of statistic values for this statistic as well.
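The GKA itself fits dimension, entropy and noise level jointly; as a rough illustration of what a correlation dimension estimate measures, the following sketch computes the classic Grassberger-Procaccia correlation sum and its log-log slope. This is not the GKA, and the function names and radii are our own choices.

```python
import numpy as np

def correlation_sum(orbit, r):
    """Fraction of distinct point pairs closer than r: the correlation sum C(r)."""
    d = np.linalg.norm(orbit[:, None, :] - orbit[None, :, :], axis=-1)
    iu = np.triu_indices(len(orbit), k=1)      # each pair counted once
    return np.mean(d[iu] < r)

def correlation_dimension(orbit, radii):
    """Slope of log C(r) against log r: a crude stand-in for the GKA's D."""
    c = np.array([correlation_sum(orbit, r) for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
    return slope
```

For points filling a line the slope is near 1; for points filling a plane it is near 2, which is the scaling behaviour a dimension estimate is meant to capture.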
An important application for the ATS technique is to determine whether dynamic invariants estimated from distinct time series are significantly different. The question this technique can address is whether (for example) a correlation dimension of 2.3 measured during normal electrocardiogram activity is really distinct from the correlation dimension of 2.4 measured during an episode of ventricular tachycardia [7,8]. Estimates of dynamic invariants (including the GKA [4,5]) often do come with confidence intervals. But these confidence intervals are only based on uncertainty in the least-mean-square fit, not the underlying dynamics. Conversely, it is standard practice to obtain a large number of representative time series for each (supposedly distinct) physical state, and compare the distribution of statistic values derived from these. But, this approach is not always feasible: in [7,8] for example, the problem is not merely that these physiological states are difficult and dangerous to replicate, but that inter-patient variability makes doing so infeasible.
In the remainder of this paper we describe the new ATS algorithm and demonstrate that it can be used to estimate the distribution of dynamic invariant estimates from a single time series of a known dynamical system (we demonstrate this with the Hénon map and the chaotic Rössler system). We then apply this same method to four recordings of human pulse waveforms, measured via photo-plethysmography [9,10]. Each of the four recordings corresponds to a distinct physiological state. We compute correlation dimension and entropy using the GKA method and show that the expected distributions of correlation dimension and entropy estimates are insufficient to differentiate between these four physiological states. In contrast, we show that algorithmic complexity can clearly differentiate between all four rhythms.
In Section 2 we describe the algorithm we employ in this paper, and in Section 2.2 we demonstrate that, for suitable parameter values, this technique will preserve the deterministic dynamics of the underlying system. In Section 3 we present some numerical case studies and in Section 4 we finally present our conclusions.
Attractor trajectory surrogates
In the first part of this section we will review the PPS algorithm presented in [2] and describe the novel features of the ATS approach. In section 2.2 we examine the foundation of this technique's ability to preserve the underlying deterministic dynamics.
The algorithm
In what follows we assume that the measured scalar time series x t represents discretely sampled measurements of a deterministic dynamical system (possibly continuous) under the influence of observational noise. In other words, the dynamics are determined by a smooth manifold M and a deterministic evolution operator φ : M → M. The evolution of an initial condition m 0 ∈ M under φ (i.e. m t = φ t (m 0 )) is observed via the differentiable function h : M → R. Unfortunately, experimental measurement is not perfect and the observed time series {x t } is subject to observational noise; hence x t = h(φ t (m 0 )) + ε t , where ε t is drawn from some stationary noise distribution. For the case of dynamic noise, the situation is complicated further as the evolution of m t is governed by m t+1 = φ(m t ) + ξ t where ξ t is stochastic.
The ATS algorithm may now be described as follows. Embed the observed scalar time series {x t } to obtain a vector time series {z t }, z t ∈ R d , of N observations. The choice of embedding is arbitrary, but has been adequately discussed in the literature (there are numerous works in this field; [11], for example, provides references to several of them). We assume that the embedding is such that there exists a continuously differentiable map Ξ : M → R d between the underlying manifold M and the embedding space R d such that both Ξ and DΞ are one-to-one. Under these conditions, the dynamics of (φ, M) and the evolution of z t = Ξ(m t ) ∈ R d are considered to be equivalent. From the embedded time series, the surrogate is obtained as follows. Choose an initial condition, w 1 ∈ {z j | j = 1, ..., N}. Then, at each step i, choose the successor to w i with probability

P(w i+1 = z j+1 ) = P i,j / Σ k=1..N P i,k , where P i,j = exp(−||w i − z j ||/ρ), (1)

and the noise radius ρ is an as-yet unspecified constant. That is, the successor of w i , w i+1 , is chosen to be the point z j+1 whose antecedent z j is a randomly chosen near neighbour of w i . In other words, the successor to w i is the successor of a randomly chosen neighbour of w i . Finally, from the vector time series {w i } the ATS {s t } is obtained by projecting w t onto [1 0 0 ... 0] ∈ R d (the first coordinate). Hence

s t = w t · [1 0 0 ... 0] (2)
In [2,3] this algorithm was shown to be capable of differentiating between deterministic chaos and a noisy periodic orbit. In the context of the current communication we assume that {x t } is contaminated by additive (but possibly dynamic) noise and we choose the noise radius ρ such that the observed noise is replaced by an independent realisation of the same noise process. Furthermore, we assume that the deterministic dynamics are preserved by suitable choice of embedding parameters. Under these two assumptions, {z t } and {w t } have the same invariant density and {x t } and {s t } are therefore (noisy) realisations of the same dynamical system with (for suitable choice of ρ) the same noise distribution. We illustrate this more precisely in the following section.
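A minimal sketch of the procedure above: embed the series, then repeatedly jump to the successor of a kernel-weighted random neighbour, as in equation (1), and read off the first coordinate, as in equation (2). The function names and parameter defaults are ours.

```python
import numpy as np

def embed(x, dim, lag):
    """Time-delay embedding of a scalar series x into R^dim."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

def ats_surrogate(x, dim, lag, rho, length, seed=0):
    """Attractor trajectory surrogate: at each step the successor of w_i is
    z_{j+1}, where z_j is drawn with probability proportional to
    exp(-||w_i - z_j|| / rho), following equation (1)."""
    rng = np.random.default_rng(seed)
    z = embed(x, dim, lag)
    n = len(z) - 1                    # z[j] must have a successor z[j+1]
    w = z[rng.integers(n)]            # random initial condition w_1
    s = np.empty(length)
    for t in range(length):
        dist = np.linalg.norm(z[:n] - w, axis=1)
        p = np.exp(-dist / rho)
        j = rng.choice(n, p=p / p.sum())
        w = z[j + 1]                  # successor of a randomly chosen neighbour
        s[t] = w[0]                   # project onto the first coordinate, eq. (2)
    return s
```

Because every surrogate value is copied from the data, each s t is exactly some x t ; only the ordering (and hence the realised noise) is new.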
Invariance
As in [2,3] the problem remains the correct choice of ρ.
This is the major difference between the ATS described here and the PPS of [2,3]. However, since the null hypothesis we wish to address is different from (and more general than) that of the PPS, the choice of ρ for the ATS is less restrictive. For t = T given, one can compute P(w t+1 ≠ z i+1 ∧ ||w t − z i || = 0 | t = T) directly from the data by applying (1) to the embedded time series (we use the symbol ∧ here in the usual manner to denote logical conjunction). Assuming the process is ergodic (that is, ergodic with respect to the standard measure -this assumption is sufficient rather than necessary) one can then sum to get the probability of a temporal discontinuity in the surrogate at any time instant:

P(w t+1 ≠ z i+1 ∧ ||w t − z i || = 0) = (1/N) Σ T=1..N P(w t+1 ≠ z i+1 ∧ ||w t − z i || = 0 | t = T). (3)

By temporal discontinuity we mean that w i = z j but w i+1 ≠ z j+1 . That is, a point where the surrogate trajectory does not exactly follow the data.
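The probability in (3) can be estimated directly from the embedded data for any ρ, and ρ can then be chosen to hit a target transition probability p. A sketch, with function names and the bisection bracket being our own assumptions:

```python
import numpy as np

def discontinuity_prob(z, rho):
    """Empirical estimate of (3): the probability that the successor chosen
    by rule (1) is not the true successor, averaged over all embedded points."""
    n = len(z) - 1                     # z[i] must have a true successor z[i+1]
    total = 0.0
    for i in range(n):
        dist = np.linalg.norm(z[:n] - z[i], axis=1)
        p = np.exp(-dist / rho)
        p /= p.sum()
        total += 1.0 - p[i]            # p[i] is the weight on the true successor
    return total / n

def rho_for_p(z, target_p, lo=1e-6, hi=10.0, iters=40):
    """Bisect for the noise radius rho yielding transition probability
    target_p; a larger rho flattens the kernel and raises the probability."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if discontinuity_prob(z, mid) < target_p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

This exploits the one-to-one correspondence between p and ρ: the discontinuity probability increases monotonically from near 0 (as ρ → 0 all weight falls on the point itself) towards 1 − 1/N (as ρ → ∞ the kernel becomes uniform).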
There is a one-to-one correspondence between a value p = P(w t+1 ≠ z i+1 ∧ ||w t − z i || = 0) and ρ, and we choose to implement (1) for a particular value of p (i.e. a particular transition probability) rather than a specific noise level ρ. In what follows we find that studying intermediate values of p (p ~ 0.1) is sufficient. For p ∈ [0.1, 0.8] the qualitative behaviour over the corresponding narrow range of ρ is uniform. We choose to illustrate with p = 0.1, but the results for other choices are similar. Of course, for p → 1 or p → 0 the algorithm will not work well.

Now, suppose that the embedding parameters τ and d e have been selected correctly and the noise in the data is not too large; then the transformation x t ↦ z t dictated by these parameters is an embedding. That is, the operator Ξ : M → R d with Ξ(m t ) = z t (in the absence of noise) and its derivative DΞ are both one-to-one. Hence, the dynamic evolution of z j ∈ R d can be represented by

z j+1 = Φ(z j ) + e j (4)
where Φ(·) is diffeomorphic to the true evolution operator (i.e. Φ = Ξ ∘ φ ∘ Ξ −1 where φ : M → M is the underlying evolution operator, defined earlier) and e j are uncorrelated noise vectors (corresponding to the terms ε t and possibly ξ t described earlier). Now we consider the process of constructing a surrogate. Let {w i } i=1..N denote the surrogate vector time series of length N. Clearly, setting w 1 = z k for some randomly chosen k is simply some new initial condition. Now, w i+1 = z j+1 where j is chosen randomly from a distribution such that ||w i − z j || is small. Let ε i = z j − w i denote the small (random) perturbation introduced by selection according to (1); then
w i+1 = Φ(w i + ε i ) + e j .( 5 )
Note that ε i is the perturbation introduced in taking z j 's successor to be the successor of w i (it is a dynamic noise term, and it is a perturbation introduced by the ATS method). Conversely, e j is the dynamic error in applying Φ (this term is inherent to the data, and to our model of the data). By taking n-th iterates of (4) and (5) we see that the two noise terms e j and ε i+1 will combine. In other words, from (5) we get
w i+2 = Φ(Φ(w i + ε i ) + e j + ε i+1 ) + e j+1 ,( 6 )
and so on. Suppose that e j ~ where is some noise distribution. Then, for the surrogates {s t } to be a new realisation of the system that generated {x t } we require that e j
+ ε i+1 ~
. But this is equivalent to the condition that z jw i ~ k for sufficiently small k. Hence, the critical issue is the choice of ρ such that the these two noise terms are drawn from the same distribution and that therefore the surrogate dynamic (5) is equivalent to (4). This requires sufficient data, ergodicity, and ρ small enough. Note that, as ρ becomes smaller and the surrogate data become more like realisations of the same system, we also see less randomisation. This is a natural and unavoidable tradeoff.
Results
The following subsections present the application of this method for data generated from the Hénon map (section 3.1), the Rössler system (section 3.2) and experimental measurements of human pulse pressure waves (section 3.3).
The Hénon map
One potential difficulty of this method is that the stretching-and-folding characteristic of Smale horseshoe type chaos could easily destroy the dynamics of (5) and therefore produce surrogate trajectories that short-cut across the attractor. Although we can see from equations (5) and (6) that for sufficiently small perturbations this will never be the case, we would like to test this possibility in practice. For this purpose we apply the method described in the previous section to the extremely well studied Hénon map: one of the archetypes of Smale horseshoe chaos. Figure 1 illustrates typical ATS calculations for this data set. Using short (1000 point) sections of the Hénon system, with the addition of observational noise (the figures show 1% and 10% noise levels), we computed typical ATS data for different values of transition probability p. We find that in almost all cases (see Figure 1) the results for the ATS data agree qualitatively with the data. Comparison of estimated dynamic invariants (results omitted) confirms this. In all cases, for a moderate range of p (i.e. p approaching neither 0 nor 1) and moderate observational noise, we find data and surrogate agree closely. When this same computation was repeated for dynamic noise, we found data and surrogates to be similarly indistinguishable (see Figure 2), except for the case of large p and small noise (in this case, 1% dynamic noise and p = 0.8). Note that, for the Hénon map, larger values of dynamic noise will actually force the system into an unstable régime.
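For reference, noisy Hénon data of this kind can be generated as follows. The function name, noise placement, and the convention of scaling noise by the clean signal's standard deviation are our assumptions; the paper does not specify its implementation.

```python
import numpy as np

def henon_series(n, a=1.4, b=0.3, obs_noise=0.01, dyn_noise=0.0, seed=0):
    """x-component of the Henon map, x_{t+1} = 1 - a*x_t^2 + y_t,
    y_{t+1} = b*x_t, with optional dynamic noise added to the state and
    observational noise added at the end. Noise levels are fractions of
    the clean signal's standard deviation (roughly 0.72 for x)."""
    rng = np.random.default_rng(seed)
    x, y = 0.1, 0.1
    xs = np.empty(n)
    for t in range(n):
        x, y = 1.0 - a * x * x + y, b * x       # deterministic Henon step
        x += dyn_noise * 0.72 * rng.standard_normal()   # dynamic (state) noise
        xs[t] = x
    return xs + obs_noise * xs.std() * rng.standard_normal(n)
```

As the text notes, dyn_noise must be kept small (a few percent at most) or the perturbed orbit can escape the attractor's basin.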
The Rössler system
We now demonstrate the applicability of this method for a more realistic example: noisy time series data simulated from the Rössler differential equations (during "broadband" chaos). We integrated the Rössler equations (one thousand points with a time step of 0.2) both with and without multidimensional dynamic noise at 5% of the standard deviation of the data. As far as possible, we generated realisations of the Rössler system that superficially resemble the physiological data of Section 3.3. The purpose of this is to provide a more realistic test of our method. We then studied the x-component after the addition of 5% observational noise. We selected embedding parameters using the standard methods (yielding d e = 3 and τ = 8) and then computed ATS surrogates for various exchange probabilities p = 0.05, 0.1, 0.15, ..., 0.95. For the data set and each ensemble of surrogates we then estimated correlation dimension D, entropy K and noise level S using the GKA algorithm [4,5]. The GKA embedding used embedding dimension m = 2, 3, ..., 10 and an embedding lag of 1. It is important to note that a correlation dimension estimate is not the same thing as the actual correlation dimension. In particular, this algorithm estimates correlation dimension and noise level simultaneously (as well as entropy). A lower correlation dimension (associated with the presumed determinism in the system) is accompanied by an increase in the estimated noise level. That is, the estimated dimension can be lower because the algorithm is attributing more of the variation in the data to noise, and therefore estimating a higher noise level (and hence, in some cases, the correlation dimension falls below 1). Similarly, the fact that the entropy is negative in the first case is associated with the system noise. Nonetheless, we are using these numbers only as measures, that is, as test statistics. Figure 3 depicts the results when the GKA is applied with embedding dimension m = 4 and the exchange probability is p = 0.1. Other values of m gave equivalent results, as did various values of p in the range [0.1, 0.8].

For such moderate p we found that the estimate of noise S from the GKA algorithm coincided for data and surrogates, but this was often not the case for more extreme values of p. This estimate of signal noise content is therefore a strong test of the accuracy of the dynamics reproduced by the ATS time series. One expects this to be the case as noise level is precisely the parameter upon which the ATS method depends. Furthermore, to confirm the spread of the data we also estimated D, K, and S for 20 further realisations of the same Rössler system (with different initial conditions). In each case, as expected, the range of these values lies within the range predicted by the ATS scheme. We do see, for example, in Figure 3(c) that the range of noise level exhibited by the true Rössler system is not as expansive as that for the surrogates (to some extent, we can also observe the same problem with entropy in Figure 3(e)). This is due to the fact that the ATS method can be made to introduce more randomisation than absolutely necessary. By tuning down the randomisation we (obviously) will converge to the true data. By increasing the randomisation we cover an ever widening range, which will always include the true value. For large randomisation, and for statistics that are most sensitive to noise (in this case K and S), there may also be some bias: the observed difference in the means. Although it is desirable that both distributions coincide exactly, it is reassuring (and sufficient) that the ATS distribution contains the true distribution.
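A sketch of how comparable Rössler data can be simulated: fourth-order Runge-Kutta integration with internal substeps, sampled every dt = 0.2, with additive observational noise. The standard coefficients a = b = 0.2, c = 5.7 are an assumption, since the paper does not list the parameter values it used.

```python
import numpy as np

def rossler_x(n, dt=0.2, obs_noise=0.05, seed=0):
    """x-component of the Rossler system (a = b = 0.2, c = 5.7), sampled
    every dt, with observational noise as a fraction of the signal's std."""
    def f(s):
        x, y, z = s
        return np.array([-y - z, x + 0.2 * y, 0.2 + z * (x - 5.7)])

    def rk4(s, h):
        k1 = f(s)
        k2 = f(s + 0.5 * h * k1)
        k3 = f(s + 0.5 * h * k2)
        k4 = f(s + h * k3)
        return s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

    sub = 4                        # internal substeps per sample for accuracy
    h = dt / sub
    s = np.array([1.0, 1.0, 0.0])
    for _ in range(500 * sub):     # discard the transient
        s = rk4(s, h)
    xs = np.empty(n)
    for t in range(n):
        for _ in range(sub):
            s = rk4(s, h)
        xs[t] = s[0]
    rng = np.random.default_rng(seed)
    return xs + obs_noise * xs.std() * rng.standard_normal(n)
```

At this sampling rate one orbit spans roughly 30 samples, so a 1000 point series covers a few dozen oscillations, broadly matching the pulse recordings of Section 3.3.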
Photo-plethysmographic recordings
We now consider the application of this method to photo-plethysmographic recordings of human pulse dynamics over a short time period (about 16.3 seconds). We have access to only a limited amount of data representative of each of four different dynamic regimes. In any case, we would expect the system dynamics to change if measured over a significantly longer time frame. The data collection and processing with the methods of nonlinear time series analysis are described in [9,10]. Previously, we have studied nonlinear determinism in cardiac dynamics measured with the electrocardiogram (ECG) [7,8]. Although we do not consider ECG data here, this data would be another useful system to examine with these methods. Actually, the problem with ECG data is that we have too much data and it is therefore difficult to fairly select a "representative" small number of short time series. However, we intend to examine this data more carefully in the future. We do note in passing that both PPG and ECG are measures of cardiac activity and are therefore potentially equivalent [12,13]. The four data sets we examine in this communication are depicted in Figure 4.
For each data set we repeated the analysis described for the Rössler time series. Results for GKA embedding dimension m = 6 and p = 0.1 are depicted in Figure 5. As with the Rössler system, variation of the parameters m and p did not significantly change the results. We find that in every case (except for extreme values of p) the distribution of D, K and S estimated from the ATS data using the GKA included the true value. Most significantly, this indicates that the range of values of p is appropriate. Moreover, these results are consistent with the hypotheses that the noise is effectively additive and can be modelled with this simple scheme, and that the underlying deterministic dynamics can be approximated with a local constant modelling scheme.
We also estimated the statistics D, K and S for additional available data (subsequent, contiguous, but non-overlapping) from each of the four rhythms. This small amount of data afforded us two or three additional estimates of each statistic for each rhythm. For the unstable and quasi-stable rhythms we observed good agreement. For the stable (normal and post-operative) rhythms, this is not the case. On examination of the data we find that this result is to be expected. Both the stable rhythms undergo a change in amplitude and baseline subsequent to the end of the original 16 second recording, and this non-stationarity is reflected in the results. This same non-stationarity has also been observed independently by Bhattacharya and co-workers [9,10].
We now return to the question that the ATS test was designed to address: can we differentiate between these four rhythms based on the GKA? Figure 6 provides the answer. In Figure 6 we see the estimated distribution of statistic values (D, K and S) for each of the four rhythms shown in figure 4. Clearly (and not surprisingly), the correlation dimension estimate and noise level of the unstable rhythm is significantly different from the other three rhythms.
Our analysis indicates that, contrary to what one may expect from individual measurements, the stable or "quasi-stable" rhythms cannot be properly distinguished based on these nonlinear statistics derived from the GKA. Moreover, we find that entropy estimated with the GKA algorithm K is of no use in differentiating between any of these four rhythms. Although it is not the purpose of this paper to provide a discriminating statistic for this data, it would be nice to do so. Therefore, in Figure 7 we repeat the calculation of surrogates and statistic distribution for the same data, but using algorithmic complexity (see [11] and the references therein) with binary, ternary, and quaternary encodings with equal likelihood for each symbol. Using this scheme it can be seen from Figure 7 that it is possible to distinguish, with a high level of certainty between three of these rhythms. Distinguishing between all four is also possible, with a small likelihood of error (see Figure 7(a)).
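A sketch of the complexity computation: the series is partitioned into k equally likely symbols via empirical quantiles, and the number of distinct phrases is counted with the Kaspar-Schuster form of the Lempel-Ziv (1976) measure [6]. The function names are ours, and the details may differ from the implementation used for Figure 7.

```python
import numpy as np

def symbolise(x, k):
    """Map a series onto k symbols with (approximately) equal likelihood,
    using empirical quantiles as the partition boundaries."""
    edges = np.quantile(x, np.linspace(0, 1, k + 1)[1:-1])
    return ''.join(str(d) for d in np.digitize(x, edges))

def lz_complexity(s):
    """Number of distinct phrases in the Lempel-Ziv (1976) parsing of s,
    computed with the Kaspar-Schuster algorithm."""
    n = len(s)
    i, k, k_max, l, c = 0, 1, 1, 1, 1
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:          # matched through to the end of the string
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:             # no earlier match found: a new phrase
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c
```

Constant or periodic strings yield very low counts, while random strings yield counts growing like n/log n, which is why the measure separates regular from irregular rhythms.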
Conclusion
The results of this analysis are in general agreement with those presented in [9,10]. Independent linear surrogate analysis [1] has confirmed that each of these four rhythms is inconsistent with a monotonic nonlinear transformation of linearly filtered noise (these calculations are routine, and not presented in this paper). The only significant difference is that the correlation dimension estimates we present here are significantly lower than those in [9,10]. This is due to the different correlation dimension algorithm. Unlike the algorithm employed in [9,10], the GKA separates the data into purely deterministic and stochastic components, and hence estimates both D and S. The correlation dimension estimated in [9,10] reflects the combined effect of both components of the GKA.
Although we have considered the specific application to human pulse dynamics, the algorithm we have proposed may be applied to a wide variety of problems. We have shown that provided time delay embedding parameters can be estimated adequately, and an appropriate value of the exchange probability is chosen, the ATS algorithm generates independent trajectories from the same dynamical system. When applied to data from the Rössler system we confirm this result, and we demonstrate its application to experimental data.

When the ATS algorithm is applied to generate independent realisations of a hypothesis test, one is able to construct a test for non-stationarity. If two data sets do not fit the same distribution of ATS data then they cannot be said to be from the same deterministic dynamical system. Unfortunately, the converse is not always true and the power of the test depends on the choice of statistic. The utility of this technique as a test for stationarity remains uncertain.

Figure 6: Discriminating power of the statistics D, K and S for human pulse waveforms. The distribution (a binned histogram) of statistic values estimated via the ATS method (as described in Figure 5) for each of the four distinct physiological waveforms is shown. The four rhythms correspond to those in Figure 4. These figures show that correlation dimension alone is sufficient to differentiate between three of these four physiological states: on the left, "post-operative" and "quasi-stable" are indistinguishable, the correlation dimension for "normal" is bigger, and "unstable" is larger again. We see that these three statistics are insufficient to differentiate between the "quasi-stable" and "post-operative" states; moreover, there is considerable overlap with the "normal" group.

Figure 7: Discriminating power of complexity for human pulse waveforms. The distribution (a binned histogram) of statistic values estimated via the ATS method (as described in Figure 5) for each of the four distinct physiological waveforms is shown. The four rhythms correspond to those in Figure 4. These figures show that complexity with 2, 3, and 4 symbols (plots (a), (b), and (c), respectively) is sufficient to differentiate between at least three of these four physiological states. The lowest complexity corresponds to the "post-operative" state, the next highest to "quasi-stable", followed by "healthy" and finally "unstable".
As in Figure 6 there is considerable overlap between the "normal" and "quasi-stable" samples. However, for complexity with a binary partition (panel (a)) the four rhythms do appear to be distinct.
Figure 1: Sample reconstructed attractors for data and surrogates of the Hénon map. Panels (a) and (f) are embedded time series data from the x-component of the Hénon system with the addition of 1% and 10% observational noise (respectively). The remaining panels are representative ATS time series. Panels (b), (c), (d) and (e) are surrogates for panel (a), and panels (g), (h), (i) and (j) are for panel (f). Each surrogate is computed with a different transition probability p. In panels (b) and (g), p = 0.2; in panels (c) and (h), p = 0.4; in panels (d) and (i), p = 0.6; and in panels (e) and (j), p = 0.8. In each case the attractors reconstructed from the surrogates have the same qualitative features as that of the data, with the possible exception of panel (e). The likely reason for this noted exception is the relatively high transition probability (p = 0.8) and the relatively low noise level (1%). Of course, for smaller values of p (i.e. p = 0.1) the similarity is even more striking.
Figure 2: Sample reconstructed attractors for data and surrogates of the Hénon map. Panel (a) is embedded time series data from the x-component of the Hénon system with the addition of 1% dynamic noise. The remaining panels are representative ATS time series. Each surrogate is computed with a different transition probability p. In panel (b) p = 0.2; in panel (c) p = 0.4; in panel (d) p = 0.6; and in panel (e) p = 0.8. In each case the attractors reconstructed from the surrogates have the same qualitative features as that of the data.
Figure 3: Distribution of statistics D, K and S for short and noisy realisations of the Rössler system. The histogram shows the distribution of statistic estimates (D, K and S) for 500 ATS time series generated from a 1000 point realisation of the Rössler system. The tall vertical line on each plot is the comparable value for the data and the shorter vertical lines indicate 20 independent realisations of the same process. The top row of figures depicts results for the Rössler system with observational noise only; the bottom row has both observational and dynamic noise. Panels (a) and (d) show correlation dimension estimates, (b) and (e) entropy, and (c) and (f) noise level.
Figure 4: Human pulse waveform recorded with photo-plethysmography. Four recordings of human pulse waveform (61 Hz) in four different physiological conditions. The four time series correspond to: (a) normal, (b) quasi-stable, (c) unstable, and (d) post-operative (stable).
Figure 5: Distribution of statistics D, K and S for human pulse waveforms. The histogram shows the distribution of statistic estimates (D, K and S) for 500 ATS time series generated from each of the four time series depicted in Figure 4. The taller vertical line on each plot is the comparable value for the data; the shorter vertical lines are for the (limited) subsequent data recorded from each patient. In each case only two or three subsequent contiguous but non-overlapping time series were available. The figures are: (a) correlation dimension (D), (b) entropy (K), and (c) noise (S) for the normal rhythm; (d) D, (e) K, and (f) S for the quasi-stable rhythm; (g) D, (h) K, and (i) S for the unstable rhythm; and (j) D, (k) K, and (l) S for the post-operative stable rhythm.
Acknowledgements
This research was fully supported by a grant from the Research Grants Council of Hong Kong (Project No. PolyU 5269/06E). The author wishes to thank J. Bhattacharya for supplying the photo-plethysmographic time series.
|
[] |
[
"Dynamical model for spindown of solar-type stars",
"Dynamical model for spindown of solar-type stars"
] |
[
"Aditi Sood \nSchool of Mathematics and Statistics\nUniversity of Sheffield\nS3 7RHSheffieldUnited Kingdom\n",
"Eun-Jin Kim \nSchool of Mathematics and Statistics\nUniversity of Sheffield\nS3 7RHSheffieldUnited Kingdom\n",
"Rainer Hollerbach \nDepartment of Applied Mathematics\nUniversity of Leeds\nLS2 9JTLeedsUnited Kingdom\n"
] |
[
"School of Mathematics and Statistics\nUniversity of Sheffield\nS3 7RHSheffieldUnited Kingdom",
"School of Mathematics and Statistics\nUniversity of Sheffield\nS3 7RHSheffieldUnited Kingdom",
"Department of Applied Mathematics\nUniversity of Leeds\nLS2 9JTLeedsUnited Kingdom"
] |
[] |
Since their formation, stars slow down their rotation rates by the removal of angular momentum from their surfaces, e.g. via stellar winds. Explaining how this rotation of solar-type stars evolves in time is an interesting but difficult problem in astrophysics in present times. Despite the complexity of the processes involved, a traditional model, where the removal of angular momentum loss by magnetic fields is prescribed, has provided a useful framework to understand observational relations between stellar rotation and age and magnetic field strength. Here, for the first time, a spindown model is proposed where loss of angular momentum by magnetic fields is evolved dynamically, instead of being kinematically prescribed. To this end, we evolve the stellar rotation and magnetic field simultaneously over stellar evolution time by extending our previous work on a dynamo model which incorporates the nonlinear feedback mechanisms on rotation and magnetic fields. We show that our extended model reproduces key observations and is capable of explaining the presence of the two branches of (fast and slow rotating) stars which have different relations between rotation rate Ω vs. time (age), magnetic field strength |B| vs. rotation rate, and frequency of magnetic field ω_cyc vs. rotation rate. For fast rotating stars we find: (i) there is an exponential spindown Ω ∝ e^{−1.35t}, with t measured in Gyrs, (ii) magnetic activity saturates for higher rotation rate, (iii) ω_cyc ∝ Ω^{0.83}. For slow rotating stars we obtain: (i) a power law spindown Ω ∝ t^{−0.52}, (ii) magnetic activity scales roughly linearly with rotation rate, (iii) ω_cyc ∝ Ω^{1.16}. The results obtained from our investigations are in good agreement with observations. The Vaughan-Preston gap is consistently explained in our model by the shortest spindown timescale in this transition from fast to slow rotators.
Our results highlight the importance of self-regulation of magnetic fields and rotation by direct and indirect interactions involving nonlinear feedback in stellar evolution.
|
10.3847/0004-637x/832/2/97
|
[
"https://arxiv.org/pdf/1605.07125v2.pdf"
] | 54,203,847 |
1605.07125
|
04e8fac5f3542fee75b22e9ff2e41e922fbf9e29
|
Dynamical model for spindown of solar-type stars
18 Sep 2016
Aditi Sood
School of Mathematics and Statistics
University of Sheffield
S3 7RHSheffieldUnited Kingdom
Eun-Jin Kim
School of Mathematics and Statistics
University of Sheffield
S3 7RHSheffieldUnited Kingdom
Rainer Hollerbach
Department of Applied Mathematics
University of Leeds
LS2 9JTLeedsUnited Kingdom
Subject headings: Magnetic activity; Differential rotation; Stars; Dynamo
Introduction
Spindown of stars is one of the most debated and interesting issues in astrophysics. Stellar rotation rate is the key parameter which is believed to affect the spindown process. Spindown is not only influenced by stellar properties such as mass, radius and age, but also depends upon the evolution of stellar magnetic fields and their interaction with the stellar atmosphere (Scholz, 2008). Since their formation from interstellar clouds, which involves various internal changes, stars undergo rotational evolution in different stages (Keppens et al. 1995, Tassoul 2000), briefly summarized in the following. During early pre-main sequence evolution, the contraction that occurs in the star, along with various other internal structural changes, causes it to spin up. Also, owing to these internal changes, a radiative core develops which rotates faster than the convective envelope. The coupling between the radiative core and the convective envelope must be strong enough for angular momentum to be constantly transferred from core to envelope. This persistent supply of angular momentum from core to envelope reduces the amount of differential rotation produced in the star. By the time the star reaches the late pre-main sequence or early main sequence, its rotational evolution is modified by the stellar wind. Angular momentum loss via the stellar wind gradually decelerates and stops the spin-up of the convective envelope towards the end of the late pre-main-sequence phase and causes a fast spindown of the convective envelope on the main sequence. The timescale on which the decoupling of core and envelope occurs is observed to be very short (Keppens et al. 1995). With increasing rotation, the timescale for angular momentum loss through the stellar wind decreases, which affects the magnetic field strength. Consequently, for rapidly rotating stars, the magnetic field strength does not increase beyond a critical value at a certain rotation rate and instead becomes independent of rotation, no matter how fast the star is rotating.
When the convection zone spins down towards the end of the pre-main sequence, the magnetic field strength is believed to scale linearly with rotation rate in the case of slowly rotating stars.
Based upon the whole spindown process, stars are often classified into two groups: fast and slow rotating (Saar & Brandenburg 1998, Brandenburg et al. 1999, Barnes 2003, Pizzolato et al. 2003, Mamajek & Hillenbrand 2008, Wright et al. 2011, Vidotto et al. 2014). The existence of two branches of stars, exhibiting different dependence of the cyclic variation of stellar magnetic activity, known as the cycle period P_cyc, on the rotation period P_rot, was confirmed by Saar and Brandenburg (1998) and later by Brandenburg et al. (1999). We note that a relationship between cycle period and rotation period was first established by Noyes et al. (1984) as P_cyc ∝ P_rot^n with n = 1.25 ± 0.5. Brandenburg et al. showed that all young, active and fast rotating stars lie on one branch, the active branch (A), with scaling exponent n = 0.80, while all old, inactive and slow rotating stars lie on the other, the inactive branch (I), with scaling exponent n = 1.15 (Saar & Brandenburg 2001, Charbonneau & Saar 2001). Furthermore, stars on the A branch experience rapid spindown, for which the rotation rate Ω is related to time/age by an exponential law Ω ∝ e^{mt}, where m is a negative constant; in this case magnetic activity is found to be saturated, that is, magnetic activity becomes independent of rotation rate for rapidly rotating stars. Stars on the I branch undergo a very slow spindown with a power-law dependence Ω ∝ t^{−1/2}, known as the power-law spindown (Skumanich 1972); in this case magnetic activity is thought to scale linearly with rotation rate. The relationship between magnetic activity and rotation rate is important for understanding the physical process responsible for the spindown of a star and was first determined by Pallavicini et al. (1981), while Micela et al. (1985) observed that this relationship does not hold for rapidly rotating stars.
We note that the regime where magnetic activity increases linearly with rotation rate is termed the 'unsaturated (non-saturated) regime', while the regime where magnetic activity becomes independent of rotation rate is termed the 'saturated regime' in observational studies (e.g. Pizzolato et al. 2003, Mamajek & Hillenbrand 2008, Wright et al. 2011, Vidotto et al. 2014).
One of the challenging problems in explaining spindown is the existence of a gap between the two branches of stars. During the spindown, the star suddenly jumps from A to I branch, creating a gap between the two branches where stars are sparsely populated. This gap was first observed by Vaughan and Preston (1980) and is now known as the V-P gap. Various mechanisms have so far been proposed for this gap, but the underlying physics is still an open question. Some of the previous suggestions are as follows. Durney et al. (1981) advocated a change in magnetic field morphology from complex to simple at the time when rotation decreases to a certain value. Saar (2002) proposed that the existence of two distinct branches of stars could be due to the changes in differential rotation, α-effect and meridional flow speed (which is proportional to Ω in case of flux transport models, e.g. see Dikpati & Charbonneau 1999) with stellar rotation rate. Barnes (2003) studied period-color-diagrams of open clusters and mentioned that the transition from convective (fast rotators) to interface sequence (slow rotators) is due to the shear produced during decoupling of core and envelope. This shear gives rise to large-scale magnetic fields, and recoupling of the core and convection zone shifts the star from convective sequence to interface sequence. Structural changes in large-scale magnetic fields (Donati et al. 2006), change in dynamo action (Böhm-Vitense 2007) and manifestation of different dynamos for different stars (Wright et al. 2011) were also proposed as possible reasons for the V-P gap.
Given the complexity of the spindown problem, which depends upon various parameters such as rotation rate, evolution of magnetic fields and differential rotation, it is not possible to study a full magnetohydrodynamic model over the entire spindown timescales (e.g. from 10^7 to 10^9 yrs). Therefore, various simplified models have been utilized to understand stellar evolution (e.g. Weber & Davis 1967, Mestel 1968, Mestel & Spruit 1987, Kawaler 1988, Matt et al 2012, Johnson et al 2015, Cranmer & Saar 2011, Cohen et al 2009, Garraffo et al 2015). One such model is the double zone model (DZM), which is based upon the stellar wind torque law (Weber & Davis 1967; Mestel 1968; Belcher & MacGregor 1976; Kawaler 1988). The main feature of this model is the bifurcated expression for the torque acting on the star (depending on the critical rotation rate) due to its magnetised stellar wind. MacGregor and Brenner (1991) used this DZM for coupled (ordinary differential) equations for the rotation rates of the stellar envelope and radiative core, where the angular momentum loss is prescribed according to the relation between rotation and magnetic field strength. To understand the distribution of stellar rotation at different ages, Keppens et al. (1995) extended this parameterized model to describe the evolution of a single star by taking into account the angular momentum exchange, the evolution of the moment of inertia, and the torque exerted on core and envelope through which angular momentum changes. Since then, this model has been extended with different initial conditions and tested against various observations of the spindown process (Krishnamurthi et al. 1997; Irwin & Bouvier 2009; Denissenkov et al. 2010; Kim & Leprovost 2010; Epstein & Pinsonneault 2014; Reiners & Mohanty 2012; Spada et al. 2011; Gallet & Bouvier 2013). Apart from the DZM there are other models, such as the symmetrical empirical model (SEM) (Barnes 2010; Barnes & Kim 2010) and the metastable dynamo model (MDM) (Brown 2014).
Both SEM and MDM utilise observational data of two different sequences of stars to fine-tune their models and are thus descriptive rather than explanatory models. Specifically, SEM uses different period-evolution laws for the two sequences (for active and inactive stars), depending on whether the rotation rate is above or below the critical value, and fits the parameters from period-color diagrams by obtaining a best fit to the observational data. Unlike SEM, MDM uses one function for all rotation rates but two different coupling constants. By fine-tuning the values of these two coupling constants and the probability for the transition from small to large couplings, MDM improves the agreement with observations over SEM. Although it is still empirical, MDM is remarkable in introducing into a spindown model a threshold-like behaviour with different coupling constants and their probabilistic nature. Possible mechanisms for these different coupling constants were later provided, e.g. by invoking the change in magnetic complexity (Reville et al 2015, Garraffo et al 2015). Recently, Matt et al. (2015) proposed a stellar wind torque model (SWTM) which reproduces the shape of the upper envelope and the lower envelope corresponding to the transition region between saturated and unsaturated regimes by explaining the mass-dependence of stellar magnetic and wind properties.
In this paper, we for the first time propose a dynamical model of spindown where the loss of angular momentum by magnetic fields is treated dynamically, instead of being kinematically prescribed. To this end, we evolve the stellar rotation and magnetic field simultaneously over the stellar evolution time by extending our previous work (Sood & Kim 2013, 2014), which incorporates the nonlinear feedback mechanisms on rotation and magnetic fields via α-quenching and magnetic flux losses as well as mean and fluctuating rotation. We note that Sood & Kim (2013, 2014) have demonstrated that nonlinear feedback plays a vital role in the generation and destruction of magnetic fields as well as in the self-regulation of the dynamo. In particular, it was found that a dynamic balance is required not only in the generation and destruction of magnetic fields, but also in the fluctuating and mean differential rotation for the working of the dynamo near marginal stability; their results were consistent with observations such as the linear increase in cycle frequency of the magnetic field with moderate rotation rates, the levelling off of magnetic field strength at sufficiently large rotation rates, and the quenching of shear. We extend this model to simultaneously evolve rotation and magnetic fields over the spindown timescale of a star, since their dynamics are closely linked through angular momentum loss and the dynamo. That is, the angular momentum loss responsible for the spindown of a star depends upon magnetic fields, while magnetic fields are affected by rotation rates. We show that this model is capable of explaining the existence of the two branches of stars, the different rotation-rate dependence of the cycle frequency of magnetic fields for these two branches, and the gap between the two branches, reproducing the main observations.
By extending our previous work, our model is designed to contain the essential ingredients mentioned above to explain the complex process of spindown of solar-type stars, and it highlights the importance of nonlinear feedback in this process.
Model
We propose a dynamical model for the evolution of rotation rate and magnetic field in spindown by extending a previous nonlinear dynamo model (Sood & Kim 2013, 2014; Weiss et al. 1984). In particular, Sood & Kim (2013, 2014) incorporated various nonlinear transport coefficients such as α-quenching and flux losses and took the control parameter D, known as the dynamo number, to scale with rotation rate as D ∝ Ω². The model equations in dimensionless form are

$$\dot{A} = \frac{2DB}{1+\kappa |B|^2} - \left[1+\lambda_1 |B|^2\right] A, \qquad (1)$$

$$\dot{B} = i(1+w_0)A - \frac{1}{2} i A^{*} w - \left[1+\lambda_2 |B|^2\right] B, \qquad (2)$$

$$\dot{w}_0 = \frac{1}{2} i \left(A^{*}B - AB^{*}\right) - \nu_0 w_0, \qquad (3)$$

$$\dot{w} = -iAB - \nu w. \qquad (4)$$
Here, the poloidal magnetic field is represented by A, the toroidal magnetic field by B, w₀ is the mean differential rotation, and w is the fluctuating differential rotation; A, B and w are complex variables whereas w₀ is real. We note that w₀ and w have zero and twice the frequency of A and B, respectively. The complex conjugates of A and B are denoted by A* and B*, respectively. In this model, the poloidal magnetic field A is generated by the toroidal magnetic field B (e.g. the α-effect through helicity), which is assumed to be proportional to the rotation rate Ω (see Eq. 1). Equation 2 represents the generation of the toroidal magnetic field B by the poloidal magnetic field A, where the quenching of the Ω-effect is incorporated through the total shear 1 + w₀. The differential rotation is inhibited by the tension in the magnetic field lines via the Lorentz force, which causes the quenching of the Ω-effect. Due to this back-reaction, the total shear is reduced from 1 to 1 + w₀ < 1, as w₀ is always negative, and is given by 1 + w₀ = ∆Ω/Ω. The generation of the mean differential rotation w₀ and the fluctuating differential rotation w is represented by Eqs. 3 and 4, respectively. ν₀ and ν represent the viscosities of the mean and fluctuating differential rotation, respectively; κ, λ₁ and λ₂ are constant parameters which represent the strength of the nonlinear feedback due to the Lorentz force and enhanced magnetic dissipation (e.g. magnetic flux loss). In particular, κ represents the efficiency of the quenching of the α-effect, while λ₁ and λ₂ represent the efficiency of the poloidal and toroidal magnetic flux losses, respectively (see Sood & Kim 2013 for full details).
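As a concrete illustration, the right-hand sides of Eqs. (1)-(4) can be transcribed directly using complex arithmetic. This is a minimal sketch, not the authors' code; the function name is ours, and the default parameter values are those quoted in this section (ν = 0.5, ν₀ = 35.0, κ = 0.025, λ₁,₂ = 1.125):

```python
def dynamo_rhs(A, B, w0, w, D, kappa=0.025, lam1=1.125, lam2=1.125,
               nu0=35.0, nu=0.5):
    """Right-hand sides of Eqs. (1)-(4): A, B, w are complex, w0 is real."""
    B2 = abs(B) ** 2
    dA = 2.0 * D * B / (1.0 + kappa * B2) - (1.0 + lam1 * B2) * A       # Eq. (1)
    dB = (1j * (1.0 + w0) * A - 0.5j * A.conjugate() * w
          - (1.0 + lam2 * B2) * B)                                      # Eq. (2)
    dw0 = (0.5j * (A.conjugate() * B - A * B.conjugate())).real - nu0 * w0  # Eq. (3)
    dw = -1j * A * B - nu * w                                           # Eq. (4)
    return dA, dB, dw0, dw
```

Note that i(A*B − AB*) is purely real, so taking the real part in Eq. (3) only discards floating-point residue and w₀ stays real, as the model requires.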
To understand the evolution of rotation rate and magnetic field in the spindown of solar-type stars, we extend this model by upgrading Ω from a kinematically prescribed parameter to a dynamical variable. To this end, we first replace D by the square of the time-dependent rotation rate Ω(t) in Eq. 1:

$$\dot{A} = \frac{2\Omega^2 B}{1+\kappa |B|^2} - \left[1+\lambda_1 |B|^2\right] A, \qquad (5)$$
where Ω is real. Second, we need to include an additional equation for the evolution of Ω(t) to model the spindown of a star by the loss of angular momentum due to magnetic fields. While the latter depends on many factors, such as the mass flux and the geometry and complexity of magnetic fields (e.g. Garraffo et al 2016), the Alfvén radius over which it acts as a rotational brake, and the latitude at which the mass release happens, for simplicity we incorporate their overall effects in our dynamical model by the ansatz that the decay rate of Ω is proportional to the strength of the magnetic fields, as ε₁|B|² + ε₂(|A|Ω)², with the two tunable parameters ε₁ and ε₂. Here, |B| represents the strength of the toroidal magnetic field, and |A|Ω is the strength of the poloidal magnetic field in physical units, due to our non-dimensionalisation (see Sood & Kim 2013). Our empirical model is thus described by the following equation for Ω:

$$\dot{\Omega} = -\varepsilon_1 |B|^2 \Omega - \varepsilon_2 |A|^2 \Omega^2 \, \Omega. \qquad (6)$$
Eq. 6 represents the overall spindown of the star as a whole due to the loss of angular momentum through magnetic fields. The constant parameters ε₁ and ε₂ represent the efficiency of angular momentum loss via the toroidal and poloidal magnetic fields, respectively, which are taken to be independent in general, given the uncertainty in the precise roles of poloidal and toroidal magnetic fields in spindown. Eq. 6 is motivated to capture the key feature of previous models (e.g. the DZM), where the angular momentum loss scales roughly as Ω³ for slowly rotating stars (below the critical rotation rate) and as Ω for fast rotating stars (above the critical rotation rate). Specifically, for fast rotating stars with rotation rate above the critical value, |B| and |A| become independent of Ω, and Eq. 6 reduces to dΩ/dt ∝ −Ω, resulting in the exponential decay of Ω in time. On the other hand, for slow rotating stars, dΩ/dt ∼ −Ω³ would be reproduced should the magnetic field increase linearly with Ω, i.e. |B|, |A| ∼ Ω (see §3.2 for the scaling relation). To summarize, our extended model consists of Eqs. 2-4 and 5-6, where Eqs. 2-4 are the same as in our previous model, Eq. 5 is the modified form of Eq. 1, and Eq. 6 is a new equation to model the time-evolution of Ω.
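The two limiting spindown laws just described can be checked numerically. In the sketch below (our own illustration; the constants c and k are arbitrary), dΩ/dt = −cΩ gives exponential decay, while dΩ/dt = −kΩ³ has the exact solution Ω(t) = Ω₀/√(1 + 2kΩ₀²t), which approaches the Skumanich-type law Ω ∝ t^{−1/2} at late times:

```python
import math

def integrate(rate, omega0, dt, nsteps):
    """Forward-Euler integration of dOmega/dt = rate(Omega)."""
    om = omega0
    for _ in range(nsteps):
        om += dt * rate(om)
    return om

# Fast-rotator limit: |A|, |B| independent of Omega -> exponential spindown.
c = 1.35
om_exp = integrate(lambda om: -c * om, 30.0, 1e-4, 10000)      # evolve to t = 1

# Slow-rotator limit: |A|, |B| ~ Omega -> dOmega/dt ~ -Omega^3 (Skumanich).
k = 0.1
om_cub = integrate(lambda om: -k * om ** 3, 2.0, 1e-4, 10000)  # evolve to t = 1
```

With the small step used here, the Euler results agree with the closed-form solutions 30·e^{−1.35} and 2/√1.8 to better than one per cent.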
This system is investigated taking ν = 0.5, ν₀ = 35.0, κ = 0.025, λ₁,₂ = 1.125 and ε₁,₂ = 3.5 × 10⁻⁵. The parameters ν, ν₀, κ and λ₁,₂ are much the same as in our previous work (Sood & Kim 2013, 2014). As can be seen from Eq. 6, the two new parameters ε₁,₂ control the rate of the spindown process. The value 3.5 × 10⁻⁵ was chosen as it yields an overall spindown timescale of several Gyrs; larger (smaller) values of ε₁,₂ were also investigated, and yielded qualitatively the same dynamics, simply occurring on shorter (longer) timescales. In particular, we have checked that qualitatively similar results are obtained in the limiting cases where ε₁ = 0 or ε₂ = 0. Correspondingly, the dimensionless time scales such that the largely completed spindown process translates to the present-day age of the Sun of 4.5 Gyrs.
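Putting Eqs. (2)-(6) together, a single forward-Euler step of the extended system could be sketched as follows. This is our own illustrative transcription, not the integrator used in the paper; the default parameters are the values quoted above, and in practice a higher-order, stiffness-aware scheme would be preferable:

```python
def spindown_step(state, dt, kappa=0.025, lam1=1.125, lam2=1.125,
                  nu0=35.0, nu=0.5, eps1=3.5e-5, eps2=3.5e-5):
    """One forward-Euler step of Eqs. (2)-(6): state = (A, B, w0, w, Omega)."""
    A, B, w0, w, Om = state
    B2 = abs(B) ** 2
    dA = 2.0 * Om ** 2 * B / (1.0 + kappa * B2) - (1.0 + lam1 * B2) * A     # Eq. (5)
    dB = (1j * (1.0 + w0) * A - 0.5j * A.conjugate() * w
          - (1.0 + lam2 * B2) * B)                                          # Eq. (2)
    dw0 = (0.5j * (A.conjugate() * B - A * B.conjugate())).real - nu0 * w0  # Eq. (3)
    dw = -1j * A * B - nu * w                                               # Eq. (4)
    dOm = -(eps1 * B2 + eps2 * abs(A) ** 2 * Om ** 2) * Om                  # Eq. (6)
    return (A + dt * dA, B + dt * dB, w0 + dt * dw0, w + dt * dw, Om + dt * dOm)
```

Since dΩ/dt < 0 whenever the fields are non-zero, Ω can only decrease step by step, mirroring the monotonic spindown of the model.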
To model the spindown process, we take the initial value of Ω to be 30, corresponding to thirty times the present-day solar rotation, which is Ω = 1 in our non-dimensionalisation. An initial value of Ω = 30 is intended to model the rotation rate of young stars at an age of around ∼ 10^7 years. In contrast to Ω, which can only decrease monotonically according to Eq. 6, the initial conditions of the other four variables are not important, as they can increase as well as decrease, and turn out to settle into statistically stationary states on comparatively rapid timescales; that is, transients depending on the initial conditions of these quantities quickly vanish, and the subsequent evolution depends only on the initial value chosen for Ω. Finally, note that because Ω is monotonically decreasing in time, we can effectively invert the relationship Ω(t) as t(Ω), and therefore consider all the other variables as functions of Ω rather than t.

Results

Ω versus age relationship

Fig. 1 shows the relationship between Ω and t. A sharp decrease in Ω can be seen at earlier times, which slows down as age increases. In the left panel of Fig. 2 we fit this curve using an exponential law, that is, Ω ∝ e^{mt}. The best fit, for stars with rotation periods in the range 1 ≤ P_rot ≤ 3, has m = −1.35, corresponding to an e-folding time of 0.74 Gyrs. In the right panel of Fig. 2 we fit this curve using power laws, that is, Ω ∝ t^n. For larger times we get power-law scalings which vary gradually for different rotation rates, that is, n becomes smaller for slower rotation, as observed in MacGregor and Brenner (1991). For larger age (slower rotation rates), the power-law exponent n is found to be around −0.52 for stars with rotation periods in the range 23 ≤ P_rot < 25.65. For different rotation rates we summarise the scalings in Table 1.
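The exponential and power-law fits of Ω against age can be reproduced by ordinary least squares in log space. The sketch below is our own (function names are illustrative, not from the paper):

```python
import math

def fit_exponential(ts, omegas):
    """Least-squares fit of Omega = C * exp(m*t); returns (m, C)."""
    ys = [math.log(o) for o in omegas]
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    m = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
         / sum((t - tbar) ** 2 for t in ts))
    return m, math.exp(ybar - m * tbar)

def fit_power_law(ts, omegas):
    """Least-squares fit of Omega = C * t**n; returns (n, C)."""
    # log Omega = log C + n log t, so reuse the log-linear regression on log t.
    return fit_exponential([math.log(t) for t in ts], omegas)
```

On exact synthetic data Ω = 30·e^{−1.35t} or Ω = C·t^{−0.52}, these recover the quoted exponents to machine precision.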
|B| versus Ω relationship
Magnetic field strength |B| is shown as a function of rotation rate in Fig. 3 (left panel). The unit of B is normalized by the strength of the magnetic field in the present-day Sun, which is roughly of order 10^4 Gauss in the solar tachocline and 3 Gauss in the atmosphere. Fig. 3 exhibits notably different behaviour of |B| in two different rotation rate regimes. For slow rotation rates, we can clearly see the increase of |B| with rotation rate, which attains a maximum value at Ω ≈ 5.8. For Ω ∈ [1.17, 5], the scaling of |B| with respect to Ω is found to vary between 2.73 and 0.36. We observe an average scaling of 1.47 for Ω ∈ [1.25, 2], which is close to the observed scaling of 1.38 ± 0.14 (Vidotto et al 2014). We note that |A| also scales with Ω similarly to |B|. Interestingly, there is a decrease in |B| which continues up to Ω ≈ 12.5. For Ω ≥ 12.5, that is, for very high rotation rates, |B| fluctuates on a very rapid timescale, but with a cycle-averaged value, depicted in red, that is essentially independent of Ω. The rapid fluctuations in |B| are due to the presence of two modes with different frequencies. The fluctuating behaviour of |B| with Ω can be seen in Fig. 3 (right panel) for a small cut Ω ∈ [23.30, 23.31]. Note how the system spends more time near the top as opposed to the bottom, which explains why the cycle-averaged value of |B| (the red curve in the left panel) is higher than the simple average of the cycle maxima and minima (the highs and lows of the blue curves). (Observationally this would suggest that stars might be more likely to be observed close to a peak of magnetic activity rather than a trough.) Furthermore, we notice a gap between the two different rotation rate regimes in the region Ω ∈ [5.8, 12.5].
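The quoted scaling exponents of |B| with Ω are local slopes in log-log space; between two sampled points they reduce to a finite difference (a trivial helper of ours, for illustration):

```python
import math

def local_exponent(om1, b1, om2, b2):
    """Local power-law index p in |B| ~ Omega**p between two samples."""
    return math.log(b2 / b1) / math.log(om2 / om1)
```

Applied to an exact power law |B| = C·Ω^p, this recovers p exactly, whatever the pair of sample points.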
Power spectra of B and ω_cyc versus Ω relationship
To understand how the rapid cycles in |B| gradually evolve as Ω spins down, we divided the entire time series into discrete chunks of 0.0106 Gyrs, and performed a Fourier transform on each chunk separately. The precise length of the individual sections is not important, the only requirements being that it should be long compared with the fast cycle time, but short compared with the gradual spindown evolution time. Fig. 4 shows Fourier spectra for 8 such sections. It is notable that at earlier times, shown in the first and second rows, there are two main peaks around ω ∼ 10 in the spectra, whereas at later times (third and fourth rows) there is only one, with the peaks furthermore shifting to lower frequencies. In particular, in the second and third rows, where time increases from age 0.1460 Gyrs to 0.2831 Gyrs, we find that the peaks shift gradually towards lower frequencies as time increases. This behaviour continues until approximately age 0.3253 Gyrs, beyond which the multiple frequency peaks are found to diminish. This can be seen in panel 7 of Fig. 4 for times ≈ [0.3148, 0.3253] Gyrs, while for times ≈ [0.3569, 0.3675] Gyrs we find only a single frequency peak (see Fig. 4, panel 8). The behaviour of the power spectra of |B| clearly shows that the second frequency peak vanishes as time increases, that is, as the rotation rate decreases. We note that in addition to the main two peaks at ω ∼ 10 or ω < 10 discussed above, one or two more peaks are also observed at higher frequency ω ∼ 20 in the first and second rows. These high-frequency modes have much weaker power than the main peaks and are simply their subharmonics. In the following, we do not discuss these modes and focus only on the behaviour of the main peaks (e.g. the higher frequency modes are not shown in Fig. 5).
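The chunk-by-chunk Fourier analysis described above amounts to locating the peak of the amplitude spectrum in each segment. A dependency-free sketch using a plain DFT (our own illustration; it is O(n²) and slow, but adequate for short chunks):

```python
import cmath, math

def dominant_frequency(x, dt):
    """Angular frequency of the largest amplitude peak in one chunk of samples x."""
    n = len(x)
    best_k, best_p = 1, 0.0
    for k in range(1, n // 2):          # skip k = 0, the zero-frequency mean
        s = sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
        if abs(s) > best_p:
            best_k, best_p = k, abs(s)
    return 2.0 * math.pi * best_k / (n * dt)
```

The frequency resolution is 2π/(n·dt), which is why each chunk must be long compared with the cycle period, exactly as stated above.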
The gradual transitions in the spectra of |B| are further illustrated in Fig. 5, showing so-called short-time Fourier transforms (STFT). In this technique the signal is again divided into short chunks, but these now overlap, essentially forming a moving window, and hence giving an overview of the continuous evolution of frequencies and amplitudes. Using this method, the most pronounced frequency of |B| is obtained in Fig. 5 (left panel), where high to low intensity of frequency is illustrated via bright red to dark blue colours, as shown in the colour map. For early times t < 0.3253 Gyrs, we observe two curves of the frequency of maximum intensity ω_cyc (depicted in red) against age in Gyrs. The lower curve has a larger frequency amplitude than the upper curve. The existence of these two curves is the manifestation of the complex time behaviour of fast rotators and is reminiscent of the complexity of magnetic topology for active branch stars, discussed in recent papers (e.g. Matt et al 2015). Both the upper and lower curves show that the frequency of maximum intensity decreases rapidly with age until t ∼ 0.3253 Gyrs, when the upper curve disappears while the lower curve exhibits a change in behaviour. This single curve for t > 0.3253 Gyrs is interpreted as the inactive branch.
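An STFT of this kind can be emulated with overlapping windows, recording the peak frequency in each. The sketch below is our own illustration built on a plain DFT; the window and hop sizes are arbitrary choices, not the values used for Fig. 5:

```python
import cmath, math

def stft_peak_track(x, dt, win, hop):
    """Return (time, dominant angular frequency) for each overlapping window."""
    peaks = []
    for start in range(0, len(x) - win + 1, hop):
        seg = x[start:start + win]
        best_k, best_p = 1, 0.0
        for k in range(1, win // 2):    # skip the zero-frequency mean
            s = sum(seg[j] * cmath.exp(-2j * math.pi * k * j / win)
                    for j in range(win))
            if abs(s) > best_p:
                best_k, best_p = k, abs(s)
        peaks.append((start * dt, 2.0 * math.pi * best_k / (win * dt)))
    return peaks
```

On a test signal whose frequency drops halfway through, the tracked peak drops accordingly, which is the behaviour the red curves in Fig. 5 summarize.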
To investigate further, we examine the scaling of ω_cyc by plotting the frequency of maximum intensity ω_cyc against rotation rate Ω in the right panel of Fig. 5. Again, we notice that for high rotation rates there are two curves of the frequency of maximum intensity, whereas for slow rotation rates there is only a single curve. We use a power-law relationship, ω_cyc ∝ Ω^p with power-law index p, to obtain the scaling. For the upper curve, we find p ∼ 0.83 for stars with rotation rates 12.8 ≤ Ω ≤ 30. On the other hand, the scaling exponent p of the lower curve varies with rotation rate, as shown in Table 2. Interestingly, for fast rotators with Ω > 12 the average value p ∼ 0.9 is close to the observational value for active-branch stars (Saar & Brandenburg 2001); for slow rotators, solar-like stars with rotation rates in the range [1.17, 3.5] have p ∼ 1.16, in good agreement with the observed scaling exponent for solar-type stars lying on the inactive branch (Saar & Brandenburg 2001). In our dimensionless units, the total shear is given by 1 + w_0. Fig. 6 shows how this total shear changes with rotation rate Ω. As Ω increases from Ω = 1, the total shear decreases by 90%, from 1 to 0.1. This reduction in total shear results from the effect of magnetic back-reaction on the shear. The saturation of the total shear at high rotation indicates that the dynamo efficiency is not saturated beyond a certain rotation rate. After taking its minimum value around Ω = 12.5, the total shear increases with Ω in the small interval Ω ∈ [12.5, 17] and then remains almost constant for high rotation rates Ω ≥ 17. It is important to note that the apparently broad band of the total shear for Ω ≥ 12.5 in Fig. 6 is due to the two different modes with different frequencies existing in this interval. The inset in Fig. 6 shows the total shear for a small range Ω ∈ [29.82, 29.84] to highlight the fluctuation in total shear due to the two modes.
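The power-law indices quoted above amount to straight-line fits in log-log space. A minimal sketch, with synthetic data chosen to recover a known exponent (the prefactor 2 and the Ω range are arbitrary):

```python
import numpy as np

def powerlaw_exponent(x, y):
    """Fit y ∝ x**p by least squares on log-log axes; return p."""
    p, _log_prefactor = np.polyfit(np.log(x), np.log(y), 1)
    return p

# Synthetic check: omega_cyc = 2 * Omega**1.16 should recover p = 1.16
Omega = np.linspace(1.17, 3.5, 50)
omega_cyc = 2.0 * Omega**1.16
print(round(powerlaw_exponent(Omega, omega_cyc), 2))  # → 1.16
```

Restricting the fit to a sub-range of Ω, as done for Table 2, just means masking `Omega` and `omega_cyc` before calling the function.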
Finally, Ω = 12.5, where the total shear takes its minimum value, is related to the very rapid transition in rotational evolution and to the V-P gap discussed later.
|B| versus Age Relationship
In Fig. 7 the magnetic field strength is shown as a function of age. The magnetic field strength |B| maintains almost the same mean value, fluctuating with finite amplitude, for very young fast-rotating stars of ages up to 325 Myrs. This fluctuation is due to the presence of the two different modes discussed above. The magnetic activity is seen to increase with age in the range [325, 502] Myrs as |B| ∼ t^s with a power-law exponent s = 0.53, after which the magnetic activity remains almost constant in the age interval [508, 551] Myrs. Beyond this, the magnetic activity decreases very rapidly with increasing age. The power-law exponent s takes different values for stars of different ages; these are provided in Table 3. Finally, we note that our results suggest that the fraction of poloidal vs toroidal flux fluctuates for fast rotators, consistent with complex magnetic topology (e.g. Matt et al. 2015), while it takes a constant value for slow rotators. The mean value of this ratio does not change significantly over time.
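A quick way to read off such an exponent between two tabulated points is the two-point logarithmic slope s = ln(B2/B1)/ln(t2/t1); the numbers below are invented purely for illustration:

```python
import numpy as np

def two_point_exponent(t1, B1, t2, B2):
    """Exponent s in |B| ~ t**s estimated from an interval's endpoints."""
    return np.log(B2 / B1) / np.log(t2 / t1)

# e.g. a field that halves between ages 1 and 2 Gyrs declines as |B| ~ t**-1
print(two_point_exponent(1.0, 0.30, 2.0, 0.15))  # → -1.0
```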
Timescale for spindown
To quantify the timescale of spindown, we compute the characteristic spindown time τ = |Ω/Ω̇| using Eq. 6 as we evolve the system, and then show a suitably averaged value in Fig. 8 on linear and log scales. The inset in Fig. 8 (left panel) shows a zoomed-in view of τ for very fast rotating stars. Here, red depicts the mean value¹ of the timescale over time. Clearly, we observe a spindown timescale of 15.96 Myrs for very young, rapidly rotating stars from ages of 30 Myrs, which decreases very slowly up to age 315 Myrs. Beyond this, the spindown timescale decreases rapidly with increasing age over the short interval [315, 493.9] Myrs. This decline in spindown time reaches a minimum of approximately 122 Myrs at age 493.9 Myrs. After this, the spindown timescale starts increasing with the age of the stars; specifically, it increases linearly for solar-type stars up to ages of approximately 4.5 Gyrs. The shortest spindown timescales are obtained in the region [315, 632] Myrs (Ω ∈ [5.8, 12.5]) noted previously, which interestingly corresponds to the V-P gap, the transition region between fast and slow rotators. That is, this is the region where a star suddenly jumps from the active to the inactive branch, staying in this intermediate region for only a short time due to the fast spindown. To summarise, our results show that the spindown time for fast rotating stars is shorter than that for slow rotating stars, while the spindown timescale for stars in the transition region is shorter still. These results are in good agreement with observations of spindown timescales (Barnes, 2003).
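Given a sampled rotation history Ω(t), the characteristic timescale τ = |Ω/Ω̇| can be estimated by finite differences. A minimal numpy sketch; the exponential test history mirrors the early-time behaviour found in Fig. 2 but is otherwise synthetic:

```python
import numpy as np

def spindown_timescale(t, Omega):
    """tau = |Omega / (dOmega/dt)| from a sampled rotation history,
    using centred finite differences."""
    return np.abs(Omega / np.gradient(Omega, t))

# Exponential spindown Omega ~ exp(-1.35 t) has a constant tau = 1/1.35
t = np.linspace(0.03, 0.7, 200)
Omega = 30.0 * np.exp(-1.35 * t)
tau = spindown_timescale(t, Omega)
```

For a power-law phase Ω ∝ t^n the same estimator gives τ = t/|n|, i.e. a timescale growing linearly with age, consistent with the behaviour described for solar-type stars.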
Summary of results
Our dynamical model of spindown coupled to the evolution of magnetic fields successfully reproduces: (i) the basic Ω versus age relationship; (ii) the relationships of |B|, ω_cyc and the total shear 1 + w_0 versus Ω; and (iii) the magnetic activity and spindown timescales with age. All three are consistent with observations of the spindown process for fast and slow rotating stars, and provide a natural explanation for the V-P gap associated with the abrupt transition of stars from the active (A) to the inactive (I) branch, which is an important unresolved issue.
Conclusions
The evolution of magnetic fields and rotation rate is a self-regulated process, through the direct interaction between large-scale shear flow and magnetic field and through indirect interactions via various (nonlinear) feedback mechanisms involving small-scale fields. In particular, the generation of magnetic field and spindown are closely inter-linked, since the generation of magnetic field depends on the rotation of the star and thus on spindown, while the spindown process crucially depends upon the magnetic field (e.g. its generation and destruction) and the differential rotation. In this paper, we have proposed a dynamical model of spindown to understand the self-regulation of magnetic fields and rotation over the spindown timescale, which for the first time evolves the magnetic field and rotation rate at the same time, taking into account their various mutual interactions. Despite being a simple parameterized model, it successfully reproduces the observations of stellar spindown, which would be impossible to follow over such long timescales in a more complete model (e.g. 3D MHD). In particular, we have found exponential spindown, saturation of the magnetic field strength, and power-law dependences of the magnetic field frequencies of the active and inactive branches for rapidly rotating stars. For slow rotators, we obtained power-law spindown, linear scaling of the magnetic field strength, and a power-law relationship of ω_cyc on Ω with the scaling of the inactive branch. The transition from fast to slow rotating stars is quantitatively shown to occur very rapidly, thereby providing a natural explanation for the V-P gap. In future, interesting extensions of our model would include coupling to the evolution of mass loss (e.g. Garraffo et al. 2015) and detailed modelling of slowly rotating stars (van Saders et al. 2016) and the transient Sun.
We thank Dr. M. Miesch for valuable discussions.
Fig. 1. The left panel shows Ω as a function of age over the full range [0.03, 4.53] Gyrs. As noted in Section 3, we scale our dimensionless time in physical units by the age of the present-day Sun, 4.5 Gyrs, while our dimensionless rotation, represented on the vertical axis, is scaled by thirty times the solar rotation to obtain the rotation period (P_rot) in days, depicted on the right side of the y-axis. The right panel shows a zoomed-in view for age = [0.065155, 0.065185] Gyrs, revealing the presence of fluctuations superimposed on the general spindown trend.
Fig. 2. The left panel shows exponential spindown with Ω ∝ exp(−1.35t) for ages ∈ [0.03, 0.7325] Gyrs, depicted in red, while the blue color represents the trend on a semi-logarithmic scale for ages ∈ [0.03, 4.53] Gyrs. The right panel shows the power-law spindown, Ω ∝ t^n, with scaling exponent −0.52 for solar-type stars. The gradual decrease in |n| suggests a drop in the efficiency of angular momentum loss, which aligns with the reduction in the efficiency of magnetic braking suggested by recent observations from the Kepler space telescope (e.g. Garraffo et al. 2016).
Fig. 4. Power spectra of |B| for 8 distinct intervals in time, indicated by the numbers at the top of each panel.
Fig. 5. Behavior of the frequency of maximum intensity ω_cyc as a function of age in Gyrs (left panel) and of rotation rate (right panel). Bright red to dark blue colors represent high to low intensity.
Fig. 6. Total shear 1 + w_0 as a function of rotation rate Ω.
Fig. 7. Magnetic field strength |B| as a function of age.
Fig. 8. The left panel shows the spindown timescale τ as a function of age in Gyrs on a linear scale, while the right panel shows τ as a function of age in years on a log scale. The oscillations in τ are caused by the fluctuations in Ω previously seen in Fig. 1 (right panel).
Table 1: Power-law exponent n for stars with different rotation periods in days.

  n       Ω                  P_rot (days)
  -1.38   Ω ∈ [3.5, 1.99]    8.57-15
  -0.97   Ω ∈ [1.99, 1.50]   15-20
  -0.70   Ω ∈ [1.50, 1.28]   20-23.34
  -0.52   Ω ∈ [1.28, 1.17]   23.34-25.65
Fig. 3. Magnetic field strength |B| as a function of Ω (left panel), with the cycle-average of |B| depicted in red. We note that the value |B| ≈ 0.1 at Ω = 1 corresponds to about 3 G at the star's surface; the rotation period (P_rot) in days is depicted at the top of the plot. The fluctuating behavior of |B| in the high rotation rate regime is shown for Ω ∈ [23.30, 23.31] (right panel).
Table 2: Power-law exponent p for the lower curve in ω_cyc ∝ Ω^p at different rotation periods.

  p      Ω                 P_rot (days)
  1.16   Ω ∈ [1.17, 3.5]   25.65-8.7
  0.98   Ω ∈ [3.5, 6]      8.7-5
  0.80   Ω ∈ [6, 13]       5-2.30
  1.06   Ω ∈ [16, 30]      1.88-1
3.4. Total shear versus Ω relationship
Table 3: Power-law exponent s for magnetic activity with age t of stars ∈ [1.066, 4.5] Gyrs.

  s       Age (Gyrs)
  -0.61   t ∈ [0.5929, 0.7336]
  -0.97   t ∈ [0.7336, 1.085]
  -1.13   t ∈ [1.085, 1.437]
  -1.25   t ∈ [1.437, 1.788]
  -1.40   t ∈ [1.788, 2.139]
  -1.64   t ∈ [2.139, 2.843]
  -2.00   t ∈ [2.843, 3.54]
  -2.50   t ∈ [3.54, 4.5]
¹ This mean value is obtained over a 1/420 fraction of the interval.
Barnes, S. A. 2003, ApJ, 586, 464
Barnes, S. A. 2010, ApJ, 722, 22
Barnes, S. A., & Kim, Y.-C. 2010, ApJ, 721, 675
Belcher, J. W., & MacGregor, K. B. 1976, ApJ, 210, 498
Böhm-Vitense, E. 2007, ApJ, 657, 486
Brandenburg, A., Saar, S. H., & Turpin, C. R. 1998, ApJ, 498, 51
Brown, T. M. 2014, ApJ, 789, 101
Charbonneau, P., & Saar, S. H. 2001, in Magnetic Fields across the H-R Diagram, ed. G. Mathys, S. K. Solanki, & D. T. Wickramasinghe, ASP Conf. Ser., 248, 189
Cohen, O., Drake, J. J., Kashyap, V. L., & Gombosi, T. I. 2009, ApJ, 699, 1501
Cranmer, S. R., & Saar, S. H. 2011, ApJ, 741, 54
Denissenkov, P. A., Pinsonneault, M., Terndrup, D. M., & Newsham, G. 2010, ApJ, 716, 1269
Dikpati, M., & Charbonneau, P. 1999, ApJ, 518, 508
Donati, J.-F., & Cameron, A. C. 1997, MNRAS, 291, 1
Durney, B. R., Mihalas, D., & Robinson, R. D. 1981, PASP, 93, 537
Epstein, C. R., & Pinsonneault, M. H. 2014, ApJ, 780, 159
Johnstone, C. P., Güdel, M., Lüftinger, T., Toth, G., & Brott, I. 2015, A&A, 577, A27
Gallet, F., & Bouvier, J. 2013, A&A, 556, A36
Garraffo, C., Drake, J. J., & Cohen, O. 2015, ApJ, 813, 40
Garraffo, C., Drake, J. J., & Cohen, O. 2016, arXiv:1607.06096 [astro-ph.SR]
Irwin, J., & Bouvier, J. 2009, in IAU Symp. 258, The Ages of Stars, ed. E. E. Mamajek, D. R. Soderblom, & R. F. G. Wyse (Cambridge: Cambridge Univ. Press), 363
Kawaler, S. D. 1988, ApJ, 333, 236
Keppens, R., MacGregor, K. B., & Charbonneau, P. 1995, A&A, 294, 469
Krishnamurthy, A., Pinsonneault, M. H., Barnes, S., & Sofia, S. 1997, ApJ, 480, 303
MacGregor, K. B., & Brenner, M. 1991, ApJ, 376, 204
Mamajek, E. E., & Hillenbrand, L. A. 2008, ApJ, 687, 1264
Matt, S. P., MacGregor, K. B., Pinsonneault, M. H., & Greene, T. P. 2012, ApJ, 754, L26
Matt, S. P., Brun, A. S., Baraffe, I., Bouvier, J., & Chabrier, G. 2015, ApJ, 799, L23
Micela, G., Sciortino, S., Serio, S., et al. 1985, ApJ, 292, 172
Mestel, L. 1968, MNRAS, 138, 359
Mestel, L., & Spruit, H. C. 1987, MNRAS, 226, 57
Metcalfe, T. S., Egeland, R., & van Saders, J. 2016, ApJ, 826, L2
Leprovost, N., & Kim, E. 2010, ApJ, 719, 287
Noyes, R. W., Weiss, N. O., & Vaughan, A. H. 1984, ApJ, 287, 769
Pace, G., Melendez, J., Pasquini, L., Carraro, G., Danziger, J., François, P., Matteucci, F., & Santos, N. C. 2009, A&A, 499, L9-L12
Pace, G. 2010, Astrophys. Space Sci., 328, 307
Pallavicini, R., Golub, L., Rosner, R., & Vaiana, G. S. 1981, ApJ, 248, 279
Pizzolato, N., Maggio, A., Micela, G., Sciortino, S., & Ventura, P. 2003, A&A, 397, 147
Réville, V., Brun, A. S., Matt, S. P., Strugarek, A., & Pinto, R. F. 2015, ApJ, 798, 116
Reiners, A., & Mohanty, S. 2012, ApJ, 746, 43
Roxburgh, I. W. 1983, Solar and Magnetic Fields: Origin and Coronal Effects, 449
Saar, S. H., & Brandenburg, A. 1999, ApJ, 524, 295
Saar, S. H., & Brandenburg, A. 2001, in Magnetic Fields across the Hertzsprung-Russell Diagram, ed. G. Mathys, S. K. Solanki, & D. Wickramasinghe, ASP Conf. Ser., 248, 231
Saar, S. H. 2002, in The 11th Cool Stars: Stellar Systems and the Sun, ed. R. J. Garcia Lopez, R. Rebolo, & M. R. Zapatero, ASP Conf. Ser., 223, 292
van Saders, J. L., Ceillier, T., Metcalfe, T. S., Aguirre, V. S., Pinsonneault, M. H., García, R. A., Mathur, S., & Davies, G. R. 2016, Nature, 529, 181
Scholz, A. 2008, arXiv:0810.1190 [astro-ph]
Skumanich, A. 1972, ApJ, 171, 565
Spada, F., Lanzafame, A. C., Lanza, A. F., Messina, S., & Collier Cameron, A. 2011, MNRAS, 416, 447
Sood, A., & Kim, E. 2013, A&A, 555, A22
Sood, A., & Kim, E. 2014, A&A, 563, A100
Tassoul, J.-L. 2000, Stellar Rotation (Cambridge: Cambridge Univ. Press)
Vaughan, A. H., & Preston, G. W. 1980, PASP, 92, 385
Vidotto, A. A., Gregory, S. G., Jardine, M., et al. 2014, MNRAS, 441, 2361
Weber, E. J., & Davis, L., Jr. 1967, ApJ, 148, 217
Weiss, N. O., Cattaneo, F., & Jones, C. A. 1984, GAFD, 30, 305
Wright, N. J., Drake, J. J., Mamajek, E. E., & Henry, G. W. 2011, ApJ, 743, 48
We report the numerical calculations of the current-voltage characteristics of intrinsic Josephson junctions in highc T superconductors. The charging effect at
|
10.1016/j.physc.2005.11.009
|
[
"https://export.arxiv.org/pdf/cond-mat/0507076v1.pdf"
] | 119,353,444 |
cond-mat/0507076
|
619fcce206d173bfb40c3faf38085cc76a5b1535
|
arXiv:cond-mat/0507076 4 Jul 2005 1
We report the numerical calculations of the current-voltage characteristics of intrinsic Josephson junctions in highc T superconductors. The charging effect at
I. INTRODUCTION
The phase dynamics in intrinsic Josephson junctions (IJJ) has attracted great interest because of its rich physics on the one hand and its prospective applications on the other. Different types of coupling between junctions, such as inductive coupling in the presence of a magnetic field [1], capacitive [2]-[3], charge-imbalance [4] and phonon [5] couplings, determine the variety of current-voltage characteristics (IVC) observed in high-temperature superconductors (HTSC). In [6] it was stressed that the capacitive coupling takes various values in HTSC and layered organic superconductors, i.e. it is tunable in these systems. Based on this fact, this paper presents a study of the dynamics of the CCJJ model, focusing on the dependence of the phase dynamics on the strength of the capacitive coupling constant.
II. MODEL AND NUMERICAL RESULTS
In the CCJJ model the dynamics of the gauge-invariant phase difference φ_l between superconducting layers l and l+1 is described by the equation:

φ̈_l = I/I_c − sin φ_l − β φ̇_l − α (2 sin φ_l − sin φ_{l+1} − sin φ_{l−1})    (1)

where I and I_c are the external dc current and the Josephson critical current, respectively.
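Equation (1) is a chain of damped, driven pendulum-like equations that can be integrated directly. The sketch below is a generic numpy illustration, not the authors' code: it assumes the usual normalization (current in units of I_c, time in units of the inverse plasma frequency) and, as an assumption, treats the missing neighbours at the stack edges as having sin φ = 0, which differs from the γ-parameterised boundary conditions of [7]:

```python
import numpy as np

def rhs(state, I, alpha, beta, N):
    """Right-hand side of eq. (1); state = [phi_1..phi_N, dphi_1..dphi_N]."""
    phi, v = state[:N], state[N:]
    s = np.sin(phi)
    nb = np.zeros(N)            # sum of sin(phi) over nearest neighbours
    nb[1:] += s[:-1]
    nb[:-1] += s[1:]
    acc = I - s - beta * v - alpha * (2.0 * s - nb)
    return np.concatenate([v, acc])

def rk4_step(state, h, *args):
    k1 = rhs(state, *args)
    k2 = rhs(state + 0.5 * h * k1, *args)
    k3 = rhs(state + 0.5 * h * k2, *args)
    k4 = rhs(state + h * k3, *args)
    return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# 11 junctions biased below the critical current: starting from rest, the
# stack relaxes to a static (superconducting) state rather than rotating
N, I, alpha, beta = 11, 0.5, 1.0, 0.2
state = np.zeros(2 * N)
for _ in range(20000):
    state = rk4_step(state, 0.01, I, alpha, beta, N)
```

Driving with I/I_c > 1, or starting some junctions with large phase velocities, puts those junctions into the rotating state and traces out individual IVC branches.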
We proposed a fixed initial conditions method (FIC method), which is based on determining the initial conditions using the values of the branch slopes. By this method we simulate the IVC of IJJ under the restriction that the patterns of distribution of phase-rotating junctions are symmetric [7]. For the case of 11 junctions at α = 1, β = 0.2, γ = 0.5 we obtain the complete branch structure consisting of 45 branches with different slopes.
The influence of the coupling parameter α on the IVC of a stack of IJJ is demonstrated in Fig. 1, where the IVCs calculated at fixed initial conditions are shown for α = 0.1, 0.5 and 1. The main features of this influence, which we concentrate on in this paper, are the changes of the slopes and of the endpoints of the branches. As seen in Fig. 1, the resistive branches shift towards the higher-voltage side (towards the outermost branch) [8] and their endpoints increase with increase in α. Fig. 2 shows the α-dependence of the slopes of some branches. The slope of the outermost branch (all junctions in the rotating R-state) does not depend on the value of the coupling constant. As expected, the slopes of the branches approach the slope of the outermost branch, but this approach slows with increase in α. Using the equations of the CCJJ model [7] we obtain analytical expressions for the α-dependence of the slope n, taking into account the distribution of R- and O-junctions in the stack; equation (2), for example, gives such expressions for the states O(1,11) and O(5,6,7).
The slope for the branch O(5,6,7) (junctions 5, 6, 7 in the O-state) tends to the slope of the outermost branch as α → ∞. As mentioned before, the order of the branches in the IVC changes with increasing α. For example, the positions of branches 31 and 21 are interchanged. The coupling between junctions breaks the equidistance of the branch structure, and this happens already at rather small values of α.
In general, each junction in the O-state has its own α-dependence of the phase difference, and the junction with the strongest dependence determines the α-dependence of the branch endpoint. From the analysis of equation (1) and the resistively shunted junction equation we find, for example, that for the state O(5,6,7) the phase difference in junction 6 determines the endpoint. Mostly the α-dependences of the endpoints are monotonic, but in some cases the strongest α-dependence of sin φ_l passes from one junction to another as α increases, leading to a broken dependence. For example, for branch 28 (state O(1,5,6,7,11)) the α-dependence is determined by junction 6 at small α, but by junctions 1 and 11 at large α. The analytical dependence in this case has the form
superconducting layers is taken into account. A set of equations is used to study the non-linear dynamics of the system. In the framework of the capacitively coupled Josephson junctions (CCJJ) model we obtain the total number of branches using fixed initial conditions for the phases and their derivatives. The influence of the coupling constant α on the current-voltage characteristics at fixed parameters is investigated. We obtain the α-dependence of the branch slopes and branch endpoints. The obtained results show new features of the coupling effect on the scheme of hysteresis jumps in the current-voltage characteristics of intrinsic Josephson junctions.
Effect of coupling on scheme of hysteresis jumps in current-voltage characteristics of intrinsic Josephson junctions in high-T_c superconductors
Yu. M. Shukrinov and F. Mahfouzi

of the branch's slopes demonstrates the change of the branch order with increase in α. Some curves saturate at definite values of the slope n. The different behaviour depends on whether the state includes the first and last junctions (boundary conditions), and on whether two or more junctions in the oscillating state (O-state) are neighbors or are separated by junctions in the R-state.
Simulations of the IVC at different values of α show β-dependences of the return current for the first and last branches that differ from the corresponding dependence for a single junction.
Yu. M. Shukrinov is with the BLTP, JINR, Dubna, Moscow Region, 141980, Russia, and the Physical Technical Institute, Dushanbe, 734063, Tajikistan. F. Mahfouzi is with the Institute for Advanced Studies in Basic Sciences, P.O. Box 45195-1159, Zanjan, Iran. This work was supported in part by INTAS under Grant No. 01-0617.
We consider that comparison of the obtained results with experimental data allows one to determine the limits of the CCJJ model and shows the way to improve it.
Kleiner R, Steinmeyer F, Kunkel G and Muller P 1992 Phys. Rev. Lett. 68 2394
Oya G, Aoyama N, Irie A, Kishida S and Tokutaka H 1992 Jpn. J. Appl. Phys. 31 L829
Bulaevskii L N, Dominguez D, Maley M, Bishop A and Ivlev B 1995 Phys. Rev. B 53 14601
Koyama T and Tachiki M 1996 Phys. Rev. B 54 16183
Machida M, Koyama T and Tachiki M 1998 Physica C 300 55
Machida M, Koyama T and Tachiki M 1999 Phys. Rev. Lett. 83 4816
Matsumoto H, Sakamoto S, Wajima F, Koyama T and Machida M 1999 Phys. Rev. B 60 3666
Machida M and Koyama T 2004 Phys. Rev. B 70 024523
Fig. 1. The change of the IVC with increase in coupling parameter α. Fig. 2. The α-dependence of the branch's slopes.
|
[] |
[
"Quantified Reproducibility Assessment of NLP Results",
"Quantified Reproducibility Assessment of NLP Results"
] |
[
"Anya Belz [email protected] \nADAPT Research Centre Dublin City University\nIreland\n",
"Maja Popović [email protected] \nADAPT Research Centre Dublin City University\nIreland\n",
"Simon Mille [email protected] \nUniversitat Pompeu Fabra\nBarcelonaSpain\n"
] |
[
"ADAPT Research Centre Dublin City University\nIreland",
"ADAPT Research Centre Dublin City University\nIreland",
"Universitat Pompeu Fabra\nBarcelonaSpain"
] |
[
"Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics"
] |
This paper describes and tests a method for carrying out quantified reproducibility assessment (QRA) that is based on concepts and definitions from metrology. QRA produces a single score estimating the degree of reproducibility of a given system and evaluation measure, on the basis of the scores from, and differences between, different reproductions. We test QRA on 18 system and evaluation measure combinations (involving diverse NLP tasks and types of evaluation), for each of which we have the original results and one to seven reproduction results. The proposed QRA method produces degree-of-reproducibility scores that are comparable across multiple reproductions not only of the same, but of different original studies. We find that the proposed method facilitates insights into causes of variation between reproductions, and allows conclusions to be drawn about what changes to system and/or evaluation design might lead to improved reproducibility.
|
10.18653/v1/2022.acl-long.2
|
[
"https://www.aclanthology.org/2022.acl-long.2.pdf"
] | 248,118,981 |
2204.05961
|
8303b8e1f7adf6013fe724f14c9fa04821e6099d
|
Quantified Reproducibility Assessment of NLP Results
Association for Computational LinguisticsCopyright Association for Computational LinguisticsMay 22-27, 2022 c 2022
Anya Belz [email protected]
ADAPT Research Centre Dublin City University
Ireland
Maja Popović [email protected]
ADAPT Research Centre Dublin City University
Ireland
Simon Mille [email protected]
Universitat Pompeu Fabra
BarcelonaSpain
Quantified Reproducibility Assessment of NLP Results
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
the 60th Annual Meeting of the Association for Computational LinguisticsAssociation for Computational Linguistics1May 22-27, 2022 c 2022
This paper describes and tests a method for carrying out quantified reproducibility assessment (QRA) that is based on concepts and definitions from metrology. QRA produces a single score estimating the degree of reproducibility of a given system and evaluation measure, on the basis of the scores from, and differences between, different reproductions. We test QRA on 18 system and evaluation measure combinations (involving diverse NLP tasks and types of evaluation), for each of which we have the original results and one to seven reproduction results. The proposed QRA method produces degree-of-reproducibility scores that are comparable across multiple reproductions not only of the same, but of different original studies. We find that the proposed method facilitates insights into causes of variation between reproductions, and allows conclusions to be drawn about what changes to system and/or evaluation design might lead to improved reproducibility.
Introduction
Reproduction studies are becoming more common in Natural Language Processing (NLP), with the first shared tasks being organised, including REPROLANG (Branco et al., 2020) and ReproGen (Belz et al., 2021b). In NLP, reproduction studies generally address the following question: if we create and/or evaluate this system multiple times, will we obtain the same results?
To answer this question for a given specific system, typically (Wieling et al., 2018;Arhiliuc et al., 2020;Popović and Belz, 2021) an original study is selected and repeated more or less closely, before comparing the results obtained in the original study with those obtained in the repeat, and deciding whether the two sets of results are similar enough to support the same conclusions.
This framing, whether the same conclusions can be drawn, involves subjective judgments and different researchers can come to contradictory con-clusions: e.g. the four papers (Arhiliuc et al., 2020;Bestgen, 2020;Caines and Buttery, 2020;Huber and Çöltekin, 2020) reproducing Vajjala and Rama (2018) in REPROLANG all report similarly large differences, but only Arhiliuc et al. conclude that reproduction was unsuccessful.
There is no standard way of going about a reproduction study in NLP, and different reproduction studies of the same original set of results can differ substantially in terms of their similarity in system and/or evaluation design (as is the case with the Vajjala and Rama (2018) reproductions, see Section 4 for details). Other things being equal, a more similar reproduction can be expected to produce more similar results, and such (dis)similarities should be factored into reproduction analysis and conclusions, but NLP lacks a method for doing so.
Being able to assess reproducibility of results objectively and comparably is important not only to establish that results are valid, but to provide evidence about which methods have better/worse reproducibility and what may need to be changed to improve reproducibility. To do this, assessment has to be done in a way that is also comparable across reproduction studies of different original studies, e.g. to develop common expectations of how similar original and reproduction results should be for different types of system, task and evaluation.
In this paper, we (i) describe a method for quantified reproducibility assessment (QRA) directly derived from standard concepts and definitions from metrology which addresses the above issues, and (ii) test it on diverse sets of NLP results. Following a review of related research (Section 2), we present the method (Section 3), tests and results (Section 4), discuss method and results (Section 5), and finish with some conclusions (Section 6).
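A single degree-of-reproducibility score over a set of original-plus-reproduction results can be illustrated with the coefficient of variation. The sketch below uses the standard small-sample-corrected sample CV; it is a generic illustration of the idea, not necessarily the exact CV* definition used in the paper, and the BLEU-like scores are invented:

```python
import numpy as np

def cv_star(scores):
    """Small-sample corrected coefficient of variation, in percent.
    Lower values indicate a higher degree of reproducibility."""
    scores = np.asarray(scores, dtype=float)
    n = len(scores)
    sd = np.std(scores, ddof=1)          # unbiased sample standard deviation
    return (1.0 + 1.0 / (4.0 * n)) * (sd / np.mean(scores)) * 100.0

# e.g. an original score plus two reproductions:
print(round(cv_star([27.4, 26.9, 27.3]), 2))  # → 1.05
```

Because the CV is unitless and scale-invariant, such scores are comparable across different systems and evaluation measures, which is the property the paper argues for.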
Related Research
The situation memorably caricatured by Pedersen (2008) still happens all the time: you download some code you read about in a paper and liked the sound of, you run it on the data provided, only to find that the results are not the same as reported in the paper, in fact they are likely to be worse (Belz et al., 2021a). When both data and code are provided, the number of potential causes of such differences is limited, and the NLP field has shared increasingly detailed information about system, dependencies and evaluation to chase down sources of differences. Sharing code and data together with detailed information about them is now expected as standard, and checklists and datasheets have been proposed to standardise information sharing (Pineau, 2020;Shimorina and Belz, 2021).
Reproducibility more generally is becoming more of a research focus. There have been several workshops and initiatives on reproducibility, including workshops at ICML 2017 and 2018, the reproducibility challenge at ICLR 2018 and 2019, and at NeurIPS 2019 and 2020, the REPROLANG (Branco et al., 2020) initiative at LREC 2020, and the ReproGen shared task on reproducibility in NLG (Belz et al., 2021b).
Despite this growing body of research, no consensus has emerged about standards, terminology and definitions. Particularly for the two most frequently used terms, reproducibility and replicability, multiple divergent definitions are in use, variously conditioned on same vs. different teams, methods, artifacts, code, and data. For example, for Rougier et al. (2017), reproducing a result means running the same code on the same data and obtaining the same result, while replicating the result is writing and running new code based on the information provided by the original publication. For Wieling et al. (2018), reproducibility is achieving the same results using the same data and methods.
According to the ACM's definitions (Association for Computing Machinery, 2020), results have been reproduced if obtained in a different study by a different team using artifacts supplied in part by the original authors, and replicated if obtained in a different study by a different team using artifacts not supplied by the original authors. The ACM originally had these definitions the other way around until asked by ISO to bring them in line with the scientific standard (ibid.).
Conversely, in Drummond's (2009) view, obtaining the same result by re-running an experiment in the same way as the original is replicability, while reproducibility is obtaining it in a different way. Whitaker (2017), followed by Schloss (2018), defines four concepts rather than two, basing definitions of reproducibility, replicability, robustness and generalisability on the different possible combinations of same vs. different data and code.
None of these definitions adopt the general scientific concepts and definitions pertaining to reproducibility, codified in the International Vocabulary of Metrology, VIM (JCGM, 2012). One issue is that they all reduce the in principle open-ended number of dimensions of variation between measurements accounted for by VIM to just two or three (code, data and/or team). Another, that unlike VIM, they don't produce comparable results.
NLP does not currently have a shared approach to deciding reproducibility, and results from reproductions as currently reported are not comparable across studies and can, as mentioned in the introduction, lead to contradictory conclusions about an original study's reproducibility. There appears to be no work at all in NLP that aims to estimate degree of reproducibility in a way that would allow cross-study comparisons and conclusions.
Metrology-based Reproducibility Assessment
Metrology is a meta-science: its subject is the standardisation of measurements across all of science to ensure comparability. Computer science has long borrowed terms, most notably reproducibility, from metrology, albeit not adopting the same definitions (as discussed in Section 2 above).
In this section, we describe quantified reproducibility assessment (QRA), an approach that is directly derived from the concepts and definitions of metrology, adopting the latter exactly as they are, and yields assessments of the degree of similarity between numerical results and between the studies that produced them. We start below with the concepts and definitions that QRA is based on, followed by an overview of the framework (Section 3.2) and steps in applying it in practice (Section 3.3).
VIM Definitions of Repeatability and Reproducibility
The International Vocabulary of Metrology (VIM) (JCGM, 2012) defines repeatability and reproducibility as follows (defined terms in bold, see VIM for subsidiary defined terms):
2.21 measurement repeatability (or repeatability, for short) is measurement precision under a set of repeatability conditions of measurement.
2.20 a repeatability condition of measurement (repeatability condition) is a condition of measurement, out of a set of conditions that includes the same measurement procedure, same operators, same measuring system, same operating conditions and same location, and replicate measurements on the same or similar objects over a short period of time.
2.25 measurement reproducibility (reproducibility) is measurement precision under reproducibility conditions of measurement.
2.24 a reproducibility condition of measurement (reproducibility condition) is a condition of measurement, out of a set of conditions that includes different locations, operators, measuring systems, etc. A specification should give the conditions changed and unchanged, to the extent practical.
In other words, VIM considers repeatability and reproducibility to be properties of measurements (not objects, scores, results or conclusions), and defines them as measurement precision, i.e. both are quantified by calculating the precision of a set of measured quantity values. Both concepts are defined relative to a set of conditions of measurement: the conditions have to be known and specified for assessment of repeatability and reproducibility to be meaningful. In repeatability, conditions are the same, whereas in reproducibility, they differ.
In an NLP context, objects are systems, and measurements involve applying an evaluation method to a system usually via obtaining a sample of its outputs and applying the method to the sample (further details of how concepts map to NLP are provided in Section 3.3).
Assessment framework
The VIM definitions translate directly to a definition of repeatability R_0, in which all conditions of measurement C are the same across measurements, and of reproducibility R, in which condition values differ for one or more conditions (see Equations 1 and 2).

Precision is typically reported in terms of some or all of the following: mean, standard deviation with 95% confidence intervals, coefficient of variation, and percentage of measured quantity values within n standard deviations. We opt for the coefficient of variation (CV), 1 because it is a general measure, not in the unit of the measurements (unlike mean and standard deviation), providing a quantification of precision (degree of reproducibility) that is comparable across studies (Ahmed, 1995, p. 57). This also holds for percentage within n standard deviations, but the latter is a less recognised measure, and likely to be less intuitive for many.

In reproduction studies in NLP/ML, sample sizes tend to be very small (a sample size of 8, one original study plus 7 reproductions, as in Table 6, is currently unique). We therefore need to use de-biased sample estimators: we use the unbiased sample standard deviation, denoted s*, with confidence intervals calculated using a t-distribution, and the standard error (of the unbiased sample standard deviation) approximated on the basis of the standard error of the unbiased sample variance se(s^2) as se_{s^2}(s*) ≈ (1/(2σ)) se(s^2) (Rao, 1973). Assuming measured quantity values are normally distributed, we calculate the standard error of the sample variance in the usual way: se(s^2) = sqrt(2σ^4/(n−1)). Finally, we also use a small sample correction (indicated by the star) for the coefficient of variation: CV* = (1 + 1/(4n)) CV (Sokal and Rohlf, 1971). 2

Before applying CV* to values on scales that do not start at 0 (mostly in human evaluations), we shift values to start at 0 to ensure comparability. 3 This means that to calculate the CV* scores in the tables below, measurements are first shifted.
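The CV* computation just described can be sketched as follows. The function name is ours, and the use of the c4 correction to de-bias the sample standard deviation is an assumption on our part (the text approximates the de-biasing via the standard error of the sample variance):

```python
import math

def cv_star(values, scale_min=0):
    """Small-sample-corrected coefficient of variation CV*, in percent.

    Sketch of the procedure in Section 3.2: shift values so the scale
    starts at 0, de-bias the sample standard deviation (here via the
    c4 correction, our assumption), then apply the (1 + 1/(4n))
    small-sample correction of Sokal and Rohlf (1971).
    """
    shifted = [v - scale_min for v in values]
    n = len(shifted)
    mean = sum(shifted) / n
    s = math.sqrt(sum((v - mean) ** 2 for v in shifted) / (n - 1))
    c4 = math.sqrt(2 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)
    s_unbiased = s / c4                        # unbiased estimate s*
    return (1 + 1 / (4 * n)) * (s_unbiased / mean) * 100

# Stance Identifiability scores for PASS (Table 2): 91% and 96.75%
print(cv_star([91, 96.75]))                  # ~6.107, matching Table 2
# Clarity scores on a 1-7 scale are shifted to start at 0 first
print(cv_star([5.64, 6.30], scale_min=1))    # ~13.2
```

On the Stance Identifiability scores this reproduces the 6.107 of Table 2 almost exactly; for Clarity the result (~13.24) is close to, but not exactly, the published 13.193, presumably because the published input scores are rounded.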
Application of the framework
Using the defined VIM terms and the notations from Section 3.2, we can refine the question from the start of this paper as follows: if we perform multiple measurements of object O and measurand m under reproducibility conditions of measurement C i , what is the precision of the measured quantity values we obtain? For NLP, this means calculating the precision of multiple evaluation scores for the same system and evaluation measure.
Focusing here on reproducibility assessment where we start from an existing set of results (rather than a set of experiments specifically designed to test reproducibility), the steps in performing QRA are as follows:
1. For a set of n measurements to be assessed, identify the shared object and measurand.
2. Identify all conditions of measurement C i for which information is available for all measurements, and specify values for each condition, including measurement method and procedure.
3. Gather the n measured quantity values v_1, v_2, ..., v_n.

4. Compute precision for v_1, v_2, ..., v_n, giving reproducibility score R.

5. Report the resulting R score and associated confidence statistics, alongside the C_i.
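The five steps above can be sketched as a small pipeline. The dictionary-based representation of conditions and the use of a plain (uncorrected) CV are our simplifications, and the scores below are illustrative only:

```python
from statistics import mean, stdev

def qra(measurements):
    """Sketch of QRA steps 1-5: measurements are dicts with a measured
    quantity 'value' and a 'conditions' dict (names -> values)."""
    conds = [m["conditions"] for m in measurements]
    # Step 1: object and measurand must be shared by all measurements.
    for key in ("Object", "Measurand"):
        assert len({c[key] for c in conds}) == 1, f"{key} must be shared"
    # Step 2: identify conditions whose values differ across measurements.
    names = set().union(*conds)
    differing = sorted(n for n in names
                       if len({c.get(n, "unspecified") for c in conds}) > 1)
    # Step 3: gather the measured quantity values.
    values = [m["value"] for m in measurements]
    # Step 4: compute precision (plain CV in percent; Section 3.2 adds
    # de-biasing and small-sample corrections on top of this).
    r = 100 * stdev(values) / mean(values)
    # Step 5: report R alongside the conditions that differed.
    return {"R": r, "n": len(values), "differing_conditions": differing}

# Illustrative values only (not scores from the paper):
report = qra([
    {"value": 84.5, "conditions": {"Object": "NTS_def", "Measurand": "BLEU",
                                   "Performed by": "team A"}},
    {"value": 83.9, "conditions": {"Object": "NTS_def", "Measurand": "BLEU",
                                   "Performed by": "team B"}},
])
```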
In NLP terms, the object is the ready-to-use system (binaries if available; otherwise code, dependencies, parameter values, how the system was compiled and trained) being evaluated (e.g. the NTS_def system variant in Table 1), the measurand is the quantity intended to be measured (e.g. BLEU-style modified n-gram precision), and measurement method and procedure capture how to evaluate the system (e.g. obtaining system outputs for a specified set of inputs, and applying preprocessing and a given BLEU implementation to the latter).

VIM holds that reproducibility assessment is only meaningful if the reproducibility conditions of measurement are specified for a given test. Conditions of measurement cover every aspect and detail of how a measurement was performed and how the measured quantity value was obtained. The key objective is to capture all respects in which the measurements to be assessed are known to be either the same or different. If QRA is performed for a set of existing results, it is often not possible to discover every aspect and detail of how a measurement was performed, so a reduced set may have to be used (unlike in experiments designed to test reproducibility, where such details can be gathered as part of the experimental design).
The reproducibility and evaluation checklists mentioned in Section 2 (Pineau, 2020;Shimorina and Belz, 2021) capture properties that are in effect conditions of measurement, and in combination with code, data and other resources serve well as a way of specifying conditions of measurement, if they have been completed by authors. However, at the present time, completed checklists are not normally available. The following is a simple set of conditions of measurement the information required for which is typically available for existing work (we include object and measurand for completeness although strictly they are not conditions, as they must be the same in each measurement in a given QRA test):
1. Object: the system (variant) being evaluated. 4 E.g. a given MT system.
2. Measurand: the quantity intended to be evaluated. 5 E.g. BLEU-style n-gram precision or human-assessed Fluency.

3. Object conditions:
(a) System code: source code including any parameters. E.g. the complete code implementing an MT system.
(b) Compile/training information: steps from code plus parameters to fully compiled and trained system, including dependencies and environment. E.g. complete information about how the MT system code was compiled and the system trained.
4. Measurement method conditions: 6

(a) Method specification: full description of method used for obtaining values quantifying the measurand. E.g. a formal definition of BLEU.

(b) Implementation: the method implemented in a form that can be applied to the object in order to obtain measured quantity values. E.g. a full implementation of BLEU.

5. Measurement procedure conditions: 7

(a) Procedure: specification of how system outputs (or other system characteristics) are obtained and the measurement method is applied to them. E.g. running a BLEU tool on system outputs and reference outputs.
(b) Test set: the data used in obtaining and evaluating system outputs (or other system characteristics). E.g. a test set of source-language texts and reference translations.
(c) Performed by: who performed the measurement procedure and any additional information about how they did it. E.g. the team applying the BLEU tool, and the run-time environment they used.
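The nine named conditions above can be encoded as a simple record type, from which the differing conditions of any two measurements can be read off. The class and field names are ours:

```python
from dataclasses import dataclass, fields

@dataclass(frozen=True)
class Conditions:
    """The simple set of conditions of measurement listed above."""
    object_system: str       # 1. Object
    measurand: str           # 2. Measurand
    system_code: str         # 3a
    compile_train_info: str  # 3b
    method_spec: str         # 4a
    implementation: str      # 4b
    procedure: str           # 5a
    test_set: str            # 5b
    performed_by: str        # 5c

def differing(a: Conditions, b: Conditions):
    """Names of conditions whose values differ between two measurements."""
    return [f.name for f in fields(Conditions)
            if getattr(a, f.name) != getattr(b, f.name)]

# The two Clarity measurements for the PASS system (cf. Table 3):
orig = Conditions("PASS", "Clarity", "vdL&al", "vdL&al", "vdL&al",
                  "vdL&al", "vdL&al", "vdL&al", "vdL&al")
repro = Conditions("PASS", "Clarity", "vdL&al", "vdL&al", "vdL&al",
                   "M&al", "M&al", "vdL&al", "M&al")
print(differing(orig, repro))  # ['implementation', 'procedure', 'performed_by']
```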
The names of the conditions of measurement used in this paper are boldfaced above. The values for each condition characterise how measurements differ in respect of the condition. In reporting results from QRA tests in the following section, we use paper identifiers as shorthand for each distinct condition value (full details in each case being available from the referenced papers).

For each object/measurand pair QRA-tested in this study, Table 1 shows, from left to right, information about the system evaluated (object), the evaluation measure applied (measurand), the number of scores (measured quantity values) obtained, the papers in which systems and scores were first reported, and the NLP task and type of evaluation involved.
QRA Tests
There are three sets of related systems: (i) the (single) PASS football report generator (van der Lee et al., 2017), (ii) Vajjala and Rama (2018)'s 11 multilingual essay scoring system variants, and (iii) two variants of Nisioi et al. (2017)'s neural text simplifier (NTS). PASS is evaluated with three evaluation measures (human-assessed Clarity, Fluency and Stance Identifiability), the essay scoring systems with one (weighted F1), and the NTS systems with two (BLEU and SARI). For PASS we have one reproduction study, for the essay scorers seven, and for the NTS systems, from three to six. The PASS reproduction was carried out as part of ReproGen (Belz et al., 2021b), the reproductions of the essay-scoring systems and of one of the NTS systems as part of REPROLANG (Branco et al., 2020), and we carried out an additional reproduction study of the NTS systems for this paper. 8

The PASS text generation system is rule-based, the essay classifiers are 'theory-guided and data-driven' hybrids, and the text simplifiers are end-to-end neural systems. This gives us a good breadth of NLP tasks, system types, and evaluation types and measures to test QRA on.
QRA for NTS systems
The neural text simplification systems reported by Nisioi et al. (2017) were evaluated with BLEU (n-gram similarity between outputs and multiple reference texts) and SARI (Xu et al., 2016) (based on words added/retained/deleted in outputs compared to both inputs and reference texts, summing over addition and retention F-scores and deletion precisions). Table 4 shows BLEU and SARI scores for the two system variants from the original paper and the two reproduction studies, alongside the four corresponding CV* values. In their reproduction, Cooper and Shardlow (2020) regenerated test outputs for NTS-w2v_def, but not for NTS_def, which explains the missing scores in Column 4. The different numbers of scores in different rows in Columns 6-9 are due to our own reproduction using Nisioi et al.'s SARI script, but two different BLEU scripts: (i) Nisioi et al.'s script, albeit with the tokeniser replaced by our own because the former did not work due to changes in the NLTK library; and (ii) SacreBLEU (Post, 2018). Because Test set=Nisioi et al. in all cases, the differences in these BLEU scores can only be caused by differences in BLEU scripts and how they were run. The corresponding CV* is as big as 0.838 for (just) the four NTS_def BLEU scores, and 1.314 for (just) the three NTS-w2v_def BLEU scores, reflecting known problems with non-standardised BLEU scripts (Post, 2018).
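Script differences of this kind are easy to demonstrate. The toy BLEU-style scorer below (our own simplified, smoothed, single-reference variant, not any of the scripts used in the studies) yields different scores for identical system outputs depending only on the tokeniser:

```python
import math
import re
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def toy_bleu(hyp, ref, max_n=4):
    """Simplified BLEU-style score: brevity penalty times the geometric
    mean of add-one-smoothed modified n-gram precisions (single reference)."""
    log_p = []
    for n in range(1, max_n + 1):
        h, r = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum(min(c, r[g]) for g, c in h.items())
        log_p.append(math.log((overlap + 1) / (sum(h.values()) + 1)))
    bp = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))
    return 100 * bp * math.exp(sum(log_p) / max_n)

hyp = "The cat sat on the mat."
ref = "The cat is sitting on the mat."

whitespace = toy_bleu(hyp.split(), ref.split())
regex = toy_bleu(re.findall(r"\w+|[^\w\s]", hyp),
                 re.findall(r"\w+|[^\w\s]", ref))
print(whitespace, regex)   # the two scores differ, with identical outputs
```

Here the only difference is whether sentence-final punctuation is split off, yet the two runs disagree, which is exactly the kind of variation a standardised scorer with a reported signature is meant to eliminate.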
If we conversely look just at those measurements (identifiable by boldfaced measured quantity values in Table 5) where the reproducing team regenerated outputs (with the same system code) and evaluation scripts were the same, SARI CV* is 3.11 for the NTS_def variants, and 4.05 for the NTS-w2v_def variants (compared in both cases to 0 (perfect) when the same outputs are used). BLEU CV* is 2.154 for the NTS_def variants (compared to 0.838 for same outputs but different evaluation scripts, as above), and 6.598 for the NTS-w2v_def variants (compared to 1.314 for same outputs but different evaluation scripts). These differences arise simply from running the system in different environments.
The overall higher (worse) CV* values for NTS-w2v_def variants (compared to NTS_def) are likely to be partly due to the NTS models using one third-party tool (OpenNMT), and the NTS-w2v models using two (OpenNMT and word2vec), i.e. the latter are more susceptible to changes in dependencies.
QRA for PASS system
The PASS system, developed by van der Lee et al. (2017), generates football match reports from the perspective of each of the competing teams. The original study evaluated the system for Clarity, Fluency and Stance Identifiability in an evaluation with 20 evaluators and a test set of 10 output pairs. The evaluation was repeated with a slightly different evaluation interface and a different cohort of evaluators by Mille et al. (2021). Table 2 shows the results from the original and reproduction evaluations (columns 3 and 4), where the Clarity and Fluency results are the mean scores from 7-point agreement scales, and Identifiability results are the percentage of times the evaluators correctly guessed the team whose supporters a report was written for. Columns 6-9 show the corresponding sample size (number of reproductions plus original study), mean, standard deviation (stdev), the confidence interval (CI) for the standard deviation, and CV*, all calculated on the shifted scores (see Section 3.2).

Table 3 shows the values (here, paper identifiers) for the nine conditions of measurement introduced in Section 3.3, for each of the six individual measurements (three evaluation measures times two studies). Note that both object conditions and the test set condition are the same, because Mille et al. used the system outputs shared by van der Lee et al. The values for the Implemented by, Procedure and Performed by conditions reflect the differences in the two evaluations in design, evaluator cohorts, and the teams that performed them.
The scores vary to different degrees for the three measurands, with CV* lowest (reproducibility best) for Stance Identifiability, and highest (worst) for Fluency. These CV* results are likely to reflect that evaluators agreed more on Clarity than Fluency. Moreover, the binary stance identification assessment has better reproducibility than the other two criteria, which are assessed on 7-point rating scales.
QRA for essay scoring system variants
The 11 multilingual essay scoring system variants reported by Vajjala and Rama (2018) were evaluated by weighted F1 (wF1) score. Table 6 shows wF1 scores for the 11 multilingual system variants from each of the five papers, alongside the 11 corresponding CV* values. Table 7 in the appendix shows the corresponding conditions of measurement. The baseline classifier (mult-base) uses document length (number of words) as its only feature. For the other variants, +/- indicates that the multilingual classifier was / was not given information about which language the input was in; the mult-word variants use word n-grams only; the mult-POS variants use POS (part-of-speech) tag n-grams only; mult-dep uses n-grams over dependency relation, dependent POS, and head POS triples; mult-dom uses domain-specific linguistic features including document length, lexical richness and errors; mult-emb uses word and character embeddings. The mult-base and mult-dom models are logistic regressors, the others are random forests.
A very clear picture emerges: system variant pairs that differ only in whether they do or do not use language information have very similar CV* scores. For example, mult-POS− (POS n-grams without language information) and mult-POS+ (POS n-grams with language information) both have a very good degree of wF1-reproducibility, their CV* being 3.818 and 3.808 respectively; mult-word− (word n-grams without language information) and mult-word+ (word n-grams with language information) have notably higher CV*, around 10. This tendency holds for all such pairs, indicating that using language information makes next to no difference to reproducibility. Moreover, the mult-dom and mult-emb variants all have similar CV*. 9 The indication is that the syntactic information is obtained/used in a way that is particularly reproducible, whereas the domain-specific information and the embeddings are obtained/used in a way that is particularly hard to reproduce. Overall, the random forest models using syntactic features have the best reproducibility; the logistic regressors using domain-specific features have the worst.
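As a check, the mult-base CV* can be recomputed from its eight wF1 scores as given in Table 7 (original study plus seven reproductions); the c4 de-biasing of the standard deviation is our assumption about the exact estimator used:

```python
import math

# wF1 for mult-base: original (Va.&Ra.) plus the seven reproduction scores
scores = [0.428, 0.493, 0.426, 0.574, 0.579, 0.590, 0.574, 0.600]

n = len(scores)
m = sum(scores) / n
s = math.sqrt(sum((x - m) ** 2 for x in scores) / (n - 1))
c4 = math.sqrt(2 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)
cv_star_mult_base = 100 * (1 + 1 / (4 * n)) * (s / c4) / m
print(f"CV* = {cv_star_mult_base:.3f}")   # prints CV* = 14.633, as reported
```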
Discussion
Quantified reproducibility assessment (QRA) enables assessment of the degree of reproducibility of evaluation results for any given system and evaluation measure in a way that is scale-invariant 10 and comparable across different QRAs, for reproductions involving either the same or different original studies. Moreover, formally capturing (dis)similarities between systems and evaluation designs enables reproducibility to be assessed relative to such (dis)similarities. In combination, a set of results from QRA tests for the same system and evaluation measure can provide pointers to which aspects of the system and evaluation might be associated with low reproducibility. E.g. for the wF1 evaluations of the essay scoring systems above, it is clear that variations in reproducibility are associated at least in part with the different features used by systems.
It might be expected that the reproducibility of human-assessed evaluations is generally worse than metric-assessed. Our study revealed a more mixed picture. As expected, the Fluency and Clarity evaluations of the PASS system were among those with highest CV * , and the BLEU and SARI evaluation of the NTS systems and wF1 evaluation of the mult-POS and mult-dep systems were among those with lowest CV * . However, human-assessed Stance Identifiability of PASS was among the most reproducible, and metric-assessed wF1 of mult-base, mult-dom and mult-emb were among the worst.
In this paper, our focus has been QRA testing of existing research results. However, ideally, QRA would be built into new method development from the outset, where at first reporting, a detailed standardised set of conditions of measurement is specified, and repeatability tests (where all conditions are identical except for the team conducting the tests, see Section 3.2) are performed to determine baseline reproducibility. Such repeatability QRA would provide quality assurance for new methods as well as important pointers for future reproductions regarding what degree of reproducibility to expect for given (types of) methods. If this is not possible, post-hoc reproducibility QRA (where there are differences in conditions of measurement values) is performed instead. If this yields high (poor) CV*, one way to proceed is to minimise differences in conditions of measurement between the studies and observe the effect on CV*, changing aspects of system and evaluation design and adding further conditions of measurement if need be. For human evaluation in particular, persistently high CV* would indicate a problem with the method itself.
Conclusion
We have described an approach to quantified reproducibility assessment (QRA) based on concepts and definitions from metrology, and tested it on 18 system and evaluation measure combinations involving diverse NLP tasks and types of evaluation.
QRA produces a single score that quantifies the degree of reproducibility of a given system and evaluation measure, on the basis of the scores from, and differences between, multiple reproductions of the same original study. We found that the approach facilitates insights into sources of variation between reproductions, produces results that are comparable across different reproducibility assessments, and provides pointers about what needs to be changed in system and/or evaluation design to improve reproducibility.
A recent survey (Belz et al., 2021a) found that just 14% of the 513 original/reproduction score pairs analysed were exactly the same. Judging the remainder simply 'not reproduced' is of limited usefulness, as some are much closer to being the same than others. At the same time, assessments of whether the same conclusions can be drawn on the basis of different scores involve subjective judgments and are prone to disagreement among assessors. Quantifying the closeness of results as in QRA, and, over time, establishing expected levels of closeness, seems a better way forward.
A Conditions of Measurement for the Essay Scoring Systems
R_0(M_1, M_2, ..., M_n) := Precision(v_1, v_2, ..., v_n), where M_i : (m, O, t_i, C) → v_i    (1)

and the M_i are repeat measurements for measurand m performed on object O at different times t_i under (the same) set of conditions C, producing measured quantity values v_i. Below, the coefficient of variation is used as the precision measure, but other measures are possible. Conditions of measurement are attribute/value pairs each consisting of a name and a value (for examples, see following section). Reproducibility R is defined in the same way as R_0 except that condition values (but not names) differ for one or more of the conditions of measurement C_i:

R(M_1, M_2, ..., M_n) := Precision(v_1, v_2, ..., v_n), where M_i : (m, O, t_i, C_i) → v_i    (2)
Table 1: Summary overview of the 18 object/measurand combinations that were QRA-tested for this paper.
Table 1 provides an overview of the 18 object/measurand pairs (corresponding to 116 individual measurements).

7 For definition of 'measurement procedure', see VIM 2.6.
Object  Measurand    van der Lee et al. (2017)  Mille et al. (2021)  Sample size  mean    stdev  stdev 95% CI      CV*↓
PASS    Clarity      5.64                       6.30                 2            4.969   0.583  [-2.75, 3.92]     13.193
        Fluency      5.36                       6.14                 2            4.75    0.691  [-3.26, 4.65]     16.372
        Stance id.   91%                        97%                  2            93.88   5.096  [-24.05, 34.24]   6.107
Table 2: Precision (CV*) and component measures (mean, standard deviation, standard deviation 95% confidence interval) for measured quantity values obtained in two measurements for each of the three human-assessed evaluation measures for the PASS system. Columns 6-9 calculated on shifted scores (see Section 3.2).
Object  Measurand    Code by  Comp./trained by  Method  Implem. by  Procedure  Test set  Performed by  Value    CV*
PASS    Clarity      vdL&al   vdL&al            vdL&al  vdL&al      vdL&al     vdL&al    vdL&al        5.64     13.193
                     vdL&al   vdL&al            vdL&al  M&al        M&al       vdL&al    M&al          6.30
        Fluency      vdL&al   vdL&al            vdL&al  vdL&al      vdL&al     vdL&al    vdL&al        5.36     16.372
                     vdL&al   vdL&al            vdL&al  M&al        M&al       vdL&al    M&al          6.14
        Stance id.   vdL&al   vdL&al            vdL&al  vdL&al      vdL&al     vdL&al    vdL&al        91%      6.107
                     vdL&al   vdL&al            vdL&al  M&al        M&al       vdL&al    M&al          96.75%
Table 3: Conditions of measurement for two measurements each for three evaluation measures (measurands) and the PASS system. vdL&al = van der Lee et al. (2017); M&al = Mille et al. (2021).
Table 5 shows the conditions of measurement for each of the 22 individual measurements. The measured quantity values for those measurements where Comp./trained by = Nisioi et al. are identical for the SARI metric (scores highlighted by green/lighter shading and italics), but differ by up to 1.4 points for BLEU (scores highlighted by blue/darker shading). Test set = Nisioi et al. in all cases.
Table 4: Precision (CV*) and component measures (mean, standard deviation, standard deviation confidence intervals) for measured quantity values obtained in multiple measurements of the two NTS systems.
Table 5: Conditions of measurement for each measurement carried out for the NTS systems. OTE = outputs vs. targets evaluation, OITE = outputs vs. inputs and targets evaluation. Shaded cells: evaluation of the same system outputs, i.e. the reproductions did not regenerate outputs. Bold: evaluation of (potentially) different system outputs, i.e. the reproductions did regenerate outputs.
Table 6 :
Precision (CV*) and component measures (mean, standard deviation, standard deviation confidence
intervals) for measured quantity values obtained in multiple measurements of the essay scoring systems. Seed i =
different approaches to random seeding and cross-validation; ei = different compile/run-time environments; ii =
different test data sets and/or cross-validation folds.
Table 7 shows the conditions of measurement for each of the 88 individual measurements for the Essay Scoring Systems.

[Table 7 body: for each system variant, one row per measurement giving the conditions of measurement (Code by, Comp./trained by, Method, Implem. by, Procedure, Test set, Performed by), the measured wF1 value, and the per-variant CV*. In all recoverable rows, Method = wF1(o,t), Procedure = OTE, Test set = Va.&Ra., and Code by = Va.&Ra., except that Cai.&But.'s second measurement used their own code, and Bestgen's third measurement used an approximate reimplementation of the wF1 script. The recoverable rows, with wF1 values listed in the order Va.&Ra. (original); Huber & Coltekin; Arhiliuc et al.; Bestgen (x3); Cai.&But. (x2), with row assignments cross-checked by recomputing the reported CV* values:

Object       wF1 values                                                   CV*
mult-base    0.428; 0.493; 0.426; 0.574, 0.579, 0.590; 0.574, 0.600     14.633
mult-word−   0.721; 0.603; 0.605; 0.606, 0.720, 0.732; 0.606, 0.740     10.609
mult-word+   0.719; 0.604; 0.607; 0.607, 0.723, 0.733; 0.607, 0.736     10.44
mult-POS−    0.726; 0.681; 0.680; 0.680, 0.722, 0.728; 0.680, 0.732      3.818
mult-POS+    0.724; 0.680; 0.680; 0.681, 0.725, 0.729; 0.681, 0.731      3.808
mult-dep−    0.703; 0.660; 0.650; 0.651, 0.699, 0.711; 0.651, 0.710      4.5
mult-dep+    0.693; 0.661; 0.652; 0.653, 0.699, 0.712; 0.653, 0.716      4.387

The rows for the remaining variants (mult-dom and mult-emb, with and without language information) are too garbled in the source to reconstruct.]
27
Table 7: Conditions of measurement for each measurement carried out for the multilingual essay scoring systems. OTE = outputs vs. targets evaluation.
The coefficient of variation (CV), also known as relative standard deviation (RSD), is defined as the standard deviation over the mean, often expressed as a percentage. 2 Code and data are available here: https://github.com/asbelz/coeff-var. 3 Otherwise CV* reflects differences solely due to different lower ends of scales.
VIM doesn't define 'object' but refers to it as that which is being measured. 5 For the definition of 'measurand' see VIM 2.3. 6 For the definition of 'measurement method', see VIM 2.5.
Authors of original studies gave permission for their work to be reproduced (Branco et al., 2020; Belz et al., 2021b).
The high CV* for the baseline system may be due to an issue with the evaluation code (macro-F1 instead of weighted-F1), as reported by Bestgen (Section 3.2, first paragraph), Caines and Buttery (Section 2.5, one before last paragraph) and Huber and Çöltekin (Section 3.2, second paragraph). 10 If evaluation scores are multiplied by a common factor, CV* does not change.
Acknowledgements
We are grateful to the anonymous reviewers and area chairs for their exceptionally detailed and helpful feedback. Popović's work on this study was funded by the ADAPT SFI Centre for Digital Media Technology which is funded by Science Foundation Ireland through the SFI Research Centres Programme, and co-funded under the European Regional Development Fund (ERDF) through Grant 13/RC/2106. Mille's work was supported by the European Commission under the H2020 program contract numbers 786731, 825079, 870930 and 952133.
SE Ahmed. 1995. A pooling methodology for coefficient of variation. Sankhyā: The Indian Journal of Statistics, Series B, pages 57-75.
Cristina Arhiliuc, Jelena Mitrović, and Michael Granitzer. 2020. Language proficiency scoring. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 5624-5630, Marseille, France. European Language Resources Association.
Association for Computing Machinery. 2020. Artifact review and badging Version 1.1. Accessed August 24, 2020.
Anya Belz, Shubham Agarwal, Anastasia Shimorina, and Ehud Reiter. 2021a. A systematic review of reproducibility research in natural language processing. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 381-393.
Anya Belz, Anastasia Shimorina, Shubham Agarwal, and Ehud Reiter. 2021b. The ReproGen shared task on reproducibility of human evaluations in NLG: Overview and results. In The 14th International Conference on Natural Language Generation.
Yves Bestgen. 2020. Reproducing monolingual, multilingual and cross-lingual CEFR predictions. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 5595-5602, Marseille, France. European Language Resources Association.
António Branco, Nicoletta Calzolari, Piek Vossen, Gertjan van Noord, Dieter van Uytvanck, João Silva, Luís Gomes, André Moreira, and Willem Elbers. 2020. A shared task of a new, collaborative type to foster reproducibility: A first exercise in the area of language science and technology with REPROLANG2020. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 5539-5545, Marseille, France. European Language Resources Association.
Andrew Caines and Paula Buttery. 2020. REPROLANG 2020: Automatic proficiency scoring of Czech, English, German, Italian, and Spanish learner essays. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 5614-5623, Marseille, France. European Language Resources Association.
Michael Cooper and Matthew Shardlow. 2020. CombiNMT: An exploration into neural text simplification models. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 5588-5594, Marseille, France. European Language Resources Association.
Chris Drummond. 2009. Replicability is not reproducibility: nor is it good science. Presented at 4th Workshop on Evaluation Methods for Machine Learning held at ICML'09.
Eva Huber and Çagrı Çöltekin. 2020. Reproduction and replication: A case study with automatic essay scoring. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 5603-5613, Marseille, France. European Language Resources Association.
JCGM. 2012. International vocabulary of metrology: Basic and general concepts and associated terms (VIM). Joint Committee for Guides in Metrology, https://www.bipm.org/utils/common/documents/jcgm/JCGM_200_2012.pdf.
Simon Mille, Thiago Castro Ferreira, Anya Belz, and Brian Davis. 2021. Another PASS: A reproduction study of the human evaluation of a football report generation system. In Proceedings of the 14th International Conference on Natural Language Generation (INLG 2021).
Sergiu Nisioi, Sanja Štajner, Simone Paolo Ponzetto, and Liviu P. Dinu. 2017. Exploring neural text simplification models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 85-91, Vancouver, Canada. Association for Computational Linguistics.
Ted Pedersen. 2008. Empiricism is not a matter of faith. Computational Linguistics, 34(3):465-470.
Joelle Pineau. 2020. The machine learning reproducibility checklist v2.0.
Maja Popović and Anya Belz. 2021. A reproduction study of an annotation-based human evaluation of MT outputs. In Proceedings of the 14th International Conference on Natural Language Generation, pages 293-300, Aberdeen, Scotland, UK. Association for Computational Linguistics.
Matt Post. 2018. A call for clarity in reporting BLEU scores. WMT 2018, page 186.
Calyampudi Radhakrishna Rao. 1973. Linear statistical inference and its applications. Wiley.
Nicolas P. Rougier, Konrad Hinsen, Frédéric Alexandre, Thomas Arildsen, Lorena A. Barba, Fabien C. Y. Benureau, C. Titus Brown, Pierre de Buyl, Ozan Caglayan, Andrew P. Davison, et al. 2017. Sustainable computational science: The ReScience initiative. PeerJ Computer Science, 3:e142.
Patrick D. Schloss. 2018. Identifying and overcoming threats to reproducibility, replicability, robustness, and generalizability in microbiome research. MBio, 9(3).
Anastasia Shimorina and Anya Belz. 2021. The human evaluation datasheet 1.0: A template for recording details of human evaluation experiments in NLP. arXiv preprint arXiv:3910940.
R.R. Sokal and F.J. Rohlf. 1971. Biometry: The Principles and Practice of Statistics in Biological Research. WH Freeman.
Sowmya Vajjala and Taraka Rama. 2018. Experiments with universal CEFR classification. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 147-153, New Orleans, Louisiana. Association for Computational Linguistics.
Chris van der Lee, Emiel Krahmer, and Sander Wubben. 2017. PASS: A Dutch data-to-text system for soccer, targeted towards specific audiences. In Proceedings of the 10th International Conference on Natural Language Generation, pages 95-104.
Kirstie Whitaker. 2017. The MT Reproducibility Checklist. https://www.cs.mcgill.ca/jpineau/ReproducibilityChecklist.pdf.
Martijn Wieling, Josine Rawee, and Gertjan van Noord. 2018. Reproducibility in computational linguistics: Are we willing to share? Computational Linguistics, 44(4):641-649.
Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401-415.
|
[] |
[
"Solving workflow scheduling problems with QUBO modeling",
"Solving workflow scheduling problems with QUBO modeling"
] |
[
"A I Pakhomchik ",
"S Yudin ",
"M R Perelshtein ",
"A Alekseyenko ",
"S Yarkoni ",
"\nTerra Quantum AG\nSt. Gallerstrasse 16A, 9400 Rorschach, Switzerland\n",
"\nVolkswagen Data:Lab\nMunich, Germany\nVolkswagen Group of America\nSan Francisco, CA, USA\n"
] |
[
"Terra Quantum AG\nSt. Gallerstrasse 16A, 9400 Rorschach, Switzerland",
"Volkswagen Data:Lab\nMunich, Germany\nVolkswagen Group of America\nSan Francisco, CA, USA"
] |
[] |
In this paper we investigate the workflow scheduling problem, a known NP-hard class of scheduling problems. We derive problem instances from an industrial use case and compare against several quantum, classical, and hybrid quantum-classical algorithms. We develop a novel QUBO to represent our scheduling problem and show how the QUBO complexity depends on the input problem. We derive and present a decomposition method for this specific application to mitigate this complexity and demonstrate the effectiveness of the approach.
| null |
[
"https://arxiv.org/pdf/2205.04844v1.pdf"
] | 248,665,807 |
2205.04844
|
4987ed67dbd9a276223b01c6a0ff85b14fafe84e
|
Solving workflow scheduling problems with QUBO modeling
A I Pakhomchik
S Yudin
M R Perelshtein
A Alekseyenko
S Yarkoni
Terra Quantum AG
St. Gallerstrasse 16A, 9400 Rorschach, Switzerland
Volkswagen Data:Lab, Munich, Germany
Volkswagen Group of America, San Francisco, CA, USA
Solving workflow scheduling problems with QUBO modeling
In this paper we investigate the workflow scheduling problem, a known NP-hard class of scheduling problems. We derive problem instances from an industrial use case and compare against several quantum, classical, and hybrid quantum-classical algorithms. We develop a novel QUBO to represent our scheduling problem and show how the QUBO complexity depends on the input problem. We derive and present a decomposition method for this specific application to mitigate this complexity and demonstrate the effectiveness of the approach.
I. Introduction
Quantum computing has garnered increased attention in recent years, in both industrial and academic contexts. In general, the aim is to develop specialized hardware that can be programmed to simulate a quantum mechanical process, which is classically intractable [1,2]. Construction of algorithms using quantum bits (qubits) currently proceeds in multiple paradigms, the most well-known of which are the gate model [3] and the adiabatic quantum computing model [4]. In the former, unitary operators are used to manipulate individual qubits' states to construct the logical operators. In the latter, a system is initialized in a simple superposition of all possible states, and slowly evolved to represent a final function (also called a final Hamiltonian). It has been shown that these models are polynomially equivalent [5], and both have been studied extensively.
Advancements in the development and production of quantum hardware have led to the manufacture of quantum hardware prototypes of various sorts, often made publicly accessible. Companies such as Google [6], IBM [7], and D-Wave Systems [8], among others, all offer cloud-based access to a suite of quantum algorithms tailored for their respective quantum processing units (QPUs). The purpose of these is to exploit such quantum algorithms in order to address computationally difficult, and sometimes intractable, problems in fields such as machine learning, molecular and physical simulation, and optimization. Of these, significant work has already been done in the realm of optimization, largely due to the implementation of quantum annealing [9] and the quantum approximate optimization algorithm (QAOA) [10]. Both of these are metaheuristic quantum optimization algorithms which can be implemented in currently-available quantum hardware. Previous literature highlights the efforts to construct suitable optimization problems that can exploit these quantum algorithms in both academic [11] and industrial [12][13][14][15][16] circles.
How exactly quantum algorithms can impact combinatorial optimization in the absence of error correction remains an open question. Furthermore, there is little evidence of concrete use of quantum algorithms for real-world applications, outside of a select choice of showcase examples (e.g., [17]). While error correction will allow implementation of provably asymptotically faster quantum algorithms with respect to their classical counterparts (such as Shor's factoring [18], Grover's search algorithm [19], and solving systems of linear equations [20,21]), it is unknown if noisy intermediate-scale quantum (NISQ [22]) computing can overcome its limitations to provide similar value. In the meantime, hybrid quantum-classical algorithms have emerged to bridge the gap until the end of the NISQ era is reached. Construction of variational algorithms has been demonstrated, in particular for gate-model quantum computers, to perform specific tasks in quantum machine learning [23], quantum chemistry [24], and optimization [25]. In this paper, we compare such hybrid algorithms to a variety of techniques to solve a specific class of scheduling problems, the workflow scheduling problem. The rest of this paper is organized as follows: Section II introduces the concepts behind the different scheduling problems, as well as the previous works studied in quantum computing. Section III formally introduces the version of workflow scheduling investigated in this paper, and develops the methods required to model this problem as a QUBO for quantum optimization algorithms, including a decomposition technique for solving large QUBOs. In Section IV we present the data used to generate the problem instances, and the algorithms used to solve them in experiments. The results are presented and discussed in Section V, and our conclusions are presented in Section VI.
II. Applications of scheduling problems
Many applications related to supply chain and logistics optimization can be formulated as certain classes of scheduling problems. Typically, these problems can be described as a set of jobs (composed of individual sequences of operations), each taking a non-negative amount of time, that must be completed in the minimum amount of time (known as the makespan) on a set of machines. There are different variants of the job-shop scheduling problem [26][27][28][29][30][31], each with their own set of constraints and conditions that uniquely define them. Dynamic resources, supply constraints, time windows (and more), all are examples of constraints that may be used to tailor a sub-class of scheduling for a particular interesting case. A particularly general and well-known version of the problem, the job-shop scheduling problem (JSP), typically refers to the case where there are N jobs to be executed on M machines, and no other additional constraints. This simple version of the problem is already NP-hard; a well-known Ising model formulation has been used to study the JSP in the context of quantum computing [32]. Another example is a similar scheduling problem, the Nurse Scheduling problem, which attempts to schedule nurses to shifts based on personal availability and other hard constraints [33]. In this work we motivate one specific subclass of scheduling, namely the dynamic resource workflow scheduling problem. The problem we consider is motivated by a real-world use-case in the automotive industry, the quality control testing of manufactured cars at the end of an assembly line. After a car is produced, a sequence of tests and checks is performed by workers on the factory floor to ensure the quality of production.
This particular problem has the following constraints: for a set of tests to be performed, some tests may have sub-tasks that conflict with other tests; the number of workers available changes over time; and, most importantly, some tests may depend on others being completed first. The objective of the optimization problem is therefore to determine the sequence of tests that minimizes the total amount of time required to complete all the tests (i.e., minimize the makespan).
We define the problem formally as follows: given a set $J = \{J_1, \cdots, J_N\}$ denoting $N$ jobs (we consider the case where each job has one operation), each job takes an amount of time $T(J_i)$ and requires at least $R(J_i)$ workers to be started and executed. The set of dependencies for each job is $D(J_i) = \{J_j, J_k, \cdots\}$, denoting all tasks which are dependent upon the completion of $J_i$. Lastly, the vector $W$ represents the number of available workers to perform the tasks at each time step. We consider workers (i.e., the available resources to perform tasks at each step) as identically qualified, and thus they are able to work on any task in the workflow scheduling problem. In general, this is not necessarily the case, and one could extend the models we derive to accommodate multiple categories of workers (where tasks also depend on different or even multiple categories) seamlessly. Visually, workflow scheduling can be represented as a directed acyclic graph (DAG), where jobs $J_i$ are represented as nodes and edges illustrate the set of dependencies for each job. An example of a 6-node problem with limited available resources at each time slot is shown in Fig. 1a,b. The solution of a problem is a makespan map that shows when each job is completed. Sub-optimal and optimal makespan maps for this example problem are presented in Fig. 1c,d respectively.
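To make the data concrete, a small instance can be written down directly in code. The following sketch is ours, with illustrative numbers (not the exact values from the figure): unit-length jobs, per-job worker requirements `R`, dependency sets `D` mapping each job to its children, and a worker-availability vector `W`.

```python
# Illustrative workflow instance: unit-length jobs, identical workers.
R = {0: 2, 1: 3, 2: 1, 3: 2, 4: 1, 5: 2}   # R(J_i): workers required by job i
D = {0: [1, 2], 2: [3], 3: [4, 5]}         # D(J_i): children of job i
W = [3, 2, 4, 4, 3]                        # W[t]: workers available in slot t

def parents(job, deps):
    """Jobs that must be completed before `job` may start."""
    return {i for i, children in deps.items() if job in children}

# Roots (jobs with no parents) can be scheduled in the first window.
roots = [j for j in R if not parents(j, D)]
```

Here `roots` recovers the entry points of the DAG, which is the starting point of the decomposition described later in the section.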
III. Problem Formulation
Before formulating the workflow scheduling problem as a QUBO, we introduce some assumptions that simplify the optimization without loss of generality. Here, we assume that (i) a single job cannot occupy more than a single time slot, and the length of a job equals exactly one time slot; (ii) multiple jobs can be started in a single time slot if there are sufficient resources; (iii) all resources are identical and the amount of available resources at the current time step is bounded by the number $r_{\max}$; (iv) jobs can only be started if all parent jobs are completed.
A. Binary Optimisation
In a binary variable formulation, the solution of a workflow scheduling problem is represented by the set of decision variables $x$, where each binary variable $x_{i,t}$ represents the following:

$$x_{i,t} = \begin{cases} 1, & \text{if the } i\text{th job is started in the } t\text{th time slot}, \\ 0, & \text{otherwise.} \end{cases} \qquad (1)$$

Using this notation, we can express all the necessary constraints which appear naturally in the workflow scheduling.
A job is started only once. All jobs start once and only once, otherwise we introduce unnecessary repetitions that use more resources than needed. This constraint is represented by a simple equality:
$$\sum_t x_{i,t} = 1, \quad \forall i. \qquad (2)$$
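Expanding the corresponding penalty $\beta (\sum_t x_{i,t} - 1)^2$ over binary variables gives $-\beta$ on each diagonal entry and $+2\beta$ on each off-diagonal pair (the constant $+\beta$ per job can be dropped since it does not change the argmin). A minimal sketch of this expansion, using our own (job, slot) keying of QUBO entries:

```python
from collections import defaultdict

def add_start_once_penalty(Q, jobs, slots, beta):
    """Add beta * (sum_t x[i,t] - 1)^2 for every job i to the QUBO dict Q."""
    for i in jobs:
        for t1 in slots:
            Q[((i, t1), (i, t1))] += -beta             # x^2 = x for binaries
            for t2 in slots:
                if t2 > t1:
                    Q[((i, t1), (i, t2))] += 2 * beta  # cross terms
    return Q

Q = add_start_once_penalty(defaultdict(float), jobs=[0], slots=[0, 1, 2], beta=1.0)
```

Any assignment that starts the job exactly once attains the minimum energy $-\beta$ (plus the dropped constant), while starting it zero or two times costs strictly more.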
All jobs are started in order.
We must ensure that no solution to the QUBO allows starting a task before the tasks it depends on are completed. In order to satisfy this condition, we introduce a constraint in the following way. Let us denote the set of all children of the $i$th job as $O_i$. In our binary formulation we introduce the following penalty:
$$x_{i,t_1} x_{j,t_2} = 0 \quad \forall i,\ \forall t_2 \le t_1,\ \forall j \in O_i. \qquad (3)$$
This is sufficient (along with Eq. (2)) to ensure causality of dependent tasks. Any ordering of tasks in which children are scheduled before their parents results in a higher objective value than the correct ordering, which is what we require.

Figure 1: Example of a workflow scheduling problem with 6 jobs. a) Directed acyclic graph with 6 nodes (job index is inside the circle) that illustrates the required ordering (parent-child relations) and resources for each job (number next to the circle). b) Available resources for each time slot when one or more jobs can be completed. The maximum number of time slots is fixed to be seven. c) Makespan map of the problem processed by a sub-optimal greedy algorithm. The map shows in which time slot each job should be completed. The total makespan for the greedy solution is 7 time slots. d) Makespan map of the problem processed by an optimal algorithm. The total makespan is 5 time slots. Two jobs, #1 and #4, are completed in the same time slot #3 since their parents were completed and the number of available resources (9) is higher than the required resources for both jobs (3+1).
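Eq. (3) translates into one quadratic penalty per (parent start, child start) pair with $t_2 \le t_1$. A sketch of this bookkeeping, with (job, slot) pairs as our own convention for QUBO keys:

```python
from collections import defaultdict

def add_ordering_penalty(Q, deps, slots, gamma):
    """Penalize x[i,t1] * x[j,t2] whenever child j would start no later
    than its parent i (t2 <= t1), per the ordering constraint."""
    for i, children in deps.items():
        for j in children:
            for t1 in slots:
                for t2 in slots:
                    if t2 <= t1:
                        Q[((i, t1), (j, t2))] += gamma
    return Q

Q = add_ordering_penalty(defaultdict(float), deps={0: [1]}, slots=[0, 1], gamma=1.0)
# The only penalty-free pattern: parent 0 starts at slot 0, child 1 at slot 1.
```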
A job is started if and only if there are enough resources.
A job starts in the tth time slot only if the amount of resources available at the tth time slot is enough to cover the job. Practically, such a model is appropriate under the conditions of identical resources whose availability fluctuates given a known schedule. To encode this constraint, we use the following system of inequalities:
$$\sum_i x_{i,t}\, r_i \le r_t, \qquad r_i, r_t \in [0, r_{\max}], \qquad (4)$$
where r i is the amount of resources required by the ith job, r t denotes the available resources at time slot t, and r max is the total amount of resources in the problem.
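A candidate schedule can be checked against Eq. (4) directly by summing the demand of the jobs started in each slot; a small sketch (the function name and data layout are ours):

```python
def resources_ok(schedule, required, available):
    """Check Eq. (4): total demand of jobs started in slot t must not
    exceed the supply r_t, for every time slot t."""
    used = {}
    for job, t in schedule.items():
        used[t] = used.get(t, 0) + required[job]
    return all(used.get(t, 0) <= r_t for t, r_t in enumerate(available))
```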
Having formulated all the constraints in binary format, we can construct a single quadratic cost function containing the constraints as additive penalties, i.e. the QUBO format.
B. Constructing the QUBO formulation
Transforming inequality to equality
Inequalities are transformed into equalities for binary variables by introducing binary slack variables in the following manner:
$$\sum_j a_{i,j} x_j \le b_i \iff b_i - \sum_j a_{i,j} x_j = \sum_{k=0}^{N_i - 1} \alpha_{i,k}\, 2^k. \qquad (5)$$
Here, $N_i = \lfloor \log_2(D_i) \rfloor + 1$ with $D_i = \max(b_i - \sum_j a_{i,j} x_j)$, where $\lfloor \cdot \rfloor$ means rounding down for positive arguments and 0 otherwise. In the case of negative $D_i$, there is no $x$ that satisfies the inequality. If $\sum_j a_{i,j} x_j = b_i$, then $D_i = 0$, resulting in $N_i = 1$.
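The slack-bit count follows directly from this formula; a hedged sketch (the function name is ours):

```python
import math

def num_slack_bits(d_max):
    """Number of binary slack bits, N = floor(log2(D)) + 1, so that
    sum_k alpha_k 2^k can represent any residual in 0..D."""
    if d_max < 0:
        raise ValueError("no assignment satisfies the inequality")
    if d_max == 0:
        return 1          # inequality is tight; one always-zero bit suffices
    return int(math.floor(math.log2(d_max))) + 1
```

Note that $2^{N} - 1$ may exceed $D$, so the slack register can over-cover the residual range; this does not affect correctness of the equality encoding.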
Objective function
It is important to note that our workflow optimization formulation focuses on solving the NP-hard makespan minimization problem, rather than the NP-complete decision problem. Thus, we define the objective cost as a function of the makespan, which is to be minimized. We introduce a penalty term which penalizes starting a job after the expected runtime, whose magnitude is a tuned hyperparameter. The resulting objective has the form:
$$\tilde{C} = \sum_{i,\ t > R} f(t - R)\, x_{i,t}, \qquad (6)$$
where f is a monotonically increasing function of t − R with R being the total expected runtime. Combining all constraints in the form of penalties, we obtain
$$C = \tilde{C} + \beta \sum_i \left( \sum_t x_{i,t} - 1 \right)^2 + \gamma \sum_{i,\ t_2 \le t_1,\ j \in O_i} x_{i,t_1} x_{j,t_2} + \epsilon \sum_t \left( \sum_i x_{i,t}\, r_i + \sum_{k=0}^{N_t - 1} \alpha_{t,k}\, 2^k - r_t \right)^2, \qquad (7)$$
where $\beta$, $\gamma$, $\epsilon$ are penalty weights for the one-time job start, ordering, and limited resources, respectively. Here, $N_t = \lfloor \log_2 \max(r_t - \sum_i x_{i,t} r_i) \rfloor + 1$ is the number of slack variables for the $t$th time step, as per Eq. (5).
An unbounded search could be performed to find optimal penalty weights that maximize the probability of obtaining minima, but we set $\beta = \gamma = \epsilon = A$ and $f(t - R) = t - R$, disregarding a rigorous analysis of the behaviour of solvers for different values of the hyperparameters. Thus, the QUBO can be represented as the sum of two terms:
$$C = \tilde{C} + A\, Q_0. \qquad (8)$$
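For instances small enough to enumerate, the combined QUBO can be minimized by brute force, which is useful for sanity-checking penalty weights. A self-contained sketch on a toy one-job QUBO (the instance below is ours, not taken from the paper):

```python
from itertools import product

def energy(x, Q):
    """Evaluate a QUBO given as {(u, v): coeff} on an assignment dict x."""
    return sum(c * x[u] * x[v] for (u, v), c in Q.items())

def brute_force_minimum(Q):
    """Enumerate all 0/1 assignments of the variables appearing in Q."""
    variables = sorted({v for pair in Q for v in pair})
    best = None
    for bits in product([0, 1], repeat=len(variables)):
        x = dict(zip(variables, bits))
        e = energy(x, Q)
        if best is None or e < best[0]:
            best = (e, x)
    return best

# One job, two slots, "start exactly once" penalty with weight A = 2:
Q = {((0, 0), (0, 0)): -2.0, ((0, 1), (0, 1)): -2.0, ((0, 0), (0, 1)): 4.0}
e, x = brute_force_minimum(Q)
```

The minimum is attained exactly at the assignments that start the job once, as the penalty expansion predicts.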
The guarantee that an optimal solution would be feasible is similar to the estimate in [34], where such sufficient conditions for feasibility were found. Specifically,
$$A > \tilde{C}[\mathrm{feas}],$$
where feas is any feasible solution to the problem. Indeed, supposing the optimal solution opt is not feasible in this case, we get a contradiction by the following chain of inequalities:
$$\tilde{C}[\mathrm{opt}] + A\, Q_0[\mathrm{opt}] \ge A + \tilde{C}[\mathrm{opt}] \ge A > \tilde{C}[\mathrm{feas}]. \qquad (9)$$
Size reduction
Lastly, given that we know the resource distribution beforehand (both required and available), we simplify the problem by assuming that the $i$th job cannot be started at the $t$th time slot if there are not enough resources:
$$x_{i,t} = 0 \quad \text{if } r_i > r_t. \qquad (10)$$
This trick allows us to reduce the problem size by conditioning on infeasible variables rather than adding penalties for the inability to start a job.
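This pruning of Eq. (10) is a simple pre-pass over the $(i, t)$ variable grid; a sketch (names are ours):

```python
def feasible_variables(required, available):
    """Keep variable x[i,t] only if job i's demand fits slot t's supply
    (Eq. (10)); all other variables are fixed to 0 up front."""
    return {(i, t)
            for i, r_i in required.items()
            for t, r_t in enumerate(available)
            if r_i <= r_t}

kept = feasible_variables({0: 2, 1: 5}, [3, 4, 6])
```

In this toy case job 1 (demand 5) can only be placed in the last slot, so half of its variables disappear before the QUBO is even built.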
C. Decomposing the QUBO formulation
Combining all constraints into a single objective function, including all ancillas necessary for transforming inequalities, generates a significant increase in the number of variables in the final QUBO. This signifies the polynomial overhead incurred by transforming generic optimization problems to QUBO forms. However, we can simplify the problem using decomposition techniques, transforming a larger QUBO into a set of smaller instances. We accomplish this by leveraging the hierarchical structure of the workflow, illustrated by the strict parent-child relations dictated by the individual tasks' dependencies. These smaller instances (sub-problems) are created in a way that ensures all constraints in the larger problem remain satisfied. Such a decomposition significantly reduces the problem complexity, and is especially useful at larger problem sizes, since problems with hundreds of jobs are challenging even in the most efficient linear programming formulations. The complete optimisation of the problem is therefore performed by solving these sub-problems in a dynamic manner. Interestingly, such a method is applicable not only to quantum algorithms but also to any other exact or heuristic discrete optimisation tool or formulation, including LP, QUBO, HOBO, etc. We now describe the method in more detail.
We start with finding the roots of the directed graph representation of the workflow scheduling problem (i.e., jobs without any parents), and a fixed number of their descendants, m jobs in total. We also fix the number of time slots n that can be processed in a single subproblem. Therefore, we have to schedule m jobs across n time slots. In other words, n, m are now hyperparameters that control the globality of each of the sub-problems.
In order to formulate such sub-problems correctly as QUBOs we slightly modify the constraints. Firstly, we relax the requirement of Eq. (2) that every job must be started exactly once. Instead we set a constraint that allows a job to be completed either zero times or once:

Σ_t x_{i,t} ≤ 1 for all i. (11)
Such a constraint comes from the local uncertainty of how many jobs have to be completed in the corresponding sub-problem. For this purpose, we rewrite the cost term from Eq. (6) in the following way:
C = − Σ_{i,t}^{m,n} x_{i,t}, (12)
which encourages the completion of more expensive jobs in terms of resources. Secondly, the order constraint set in Eq. (3) is no longer suitable, since it does not penalize the case where a child is started while its parent is uncompleted. Therefore, to address this issue, we change the cost-function term for order violation from Eq. (3) in the following way:
x_{j,t_1} (1 − Σ_{t_2 < t_1} x_{i,t_2}) ∀i, ∀t_1, ∀j ∈ O_i. (13)
This term is minimized either when x_{j,t_1} = 1 and some x_{i,t_2} = 1 with t_2 < t_1 (the order is conserved), or when x_{j,t_1} = 0, so that no child task of i is scheduled. Using the solution of this sub-problem, we define new roots and their descendants, which form the next sub-problem, and repeat until all jobs are completed in this manner.
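A minimal sketch of how the sub-problem penalties in Eqs. (11)-(13) can be assembled into a QUBO dictionary; the penalty weights A and B and the helper names are illustrative assumptions, not the paper's implementation.

```python
from collections import defaultdict

def subproblem_qubo(jobs, n_slots, children, A=10.0, B=10.0):
    """QUBO dict over variables x[(i, t)] for one sub-problem.

    - Eq. (11): each job starts at most once ->
      penalty A * sum_{t1 < t2} x[i, t1] x[i, t2].
    - Eq. (12): reward term -x[i, t] encouraging completions.
    - Eq. (13): child j at t1 is penalized unless its parent i was
      completed at some t2 < t1: B * x[j, t1] * (1 - sum_{t2 < t1} x[i, t2]).
    """
    Q = defaultdict(float)
    for i in jobs:
        for t in range(n_slots):
            Q[((i, t), (i, t))] += -1.0            # Eq. (12) reward
            for t2 in range(t + 1, n_slots):
                Q[((i, t), (i, t2))] += A          # Eq. (11)
        for j in children.get(i, ()):
            if j not in jobs:
                continue
            for t1 in range(n_slots):
                Q[((j, t1), (j, t1))] += B         # Eq. (13), linear part
                for t2 in range(t1):
                    Q[((i, t2), (j, t1))] += -B    # Eq. (13), quadratic part
    return dict(Q)

# Two jobs, two slots, job 1 is a child of job 0.
Q = subproblem_qubo(jobs=[0, 1], n_slots=2, children={0: [1]})
```

With these entries, the ordered assignment (job 0 in slot 0, job 1 in slot 1) gets the lowest energy, while starting the child first is heavily penalized.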
To illustrate the decomposition method let us consider the 6-node example depicted in Fig. 1. Here, in a single subproblem, we set the number of jobs to m = 3 and the number of time slots to n = 2. The scheme of the subproblem division and the corresponding solutions is shown in Fig. 2. The first three jobs are picked since the job with index 0 is a root and jobs 1 and 2 are its closest descendants. It is impossible to place all three jobs in two time slots, therefore only jobs 0 and 2 are completed: 0 must be completed since it is a root, and 2 requires more resources than 1. More expensive jobs are completed earlier when possible, since we do not know whether enough resources will be available in later time slots. Within the second subproblem, jobs 1 and 3 are new roots since their ancestor is completed, and job 4 is the closest descendant of job 3. In contrast with the first subproblem, all three jobs are completed. In the last subproblem, only job 5 is left and it is successfully completed in a single time slot.
The advantage of such a decomposition algorithm lies in its dynamic nature. If the number of available resources changes because of failures during execution, the algorithm can process such an unexpected event without restarting the whole solution construction. However, the disadvantage of the presented algorithm is its locality, and therefore the fact that it may provide sub-optimal solutions. One way to further minimize the total makespan (and ultimately perform global optimisation) is to increase the size of the subproblems; in the worst case the subproblem is as large as the whole problem. Existing high-performance linear programming solvers, e.g. CPLEX, struggle to schedule more than 40 jobs in a reasonable amount of time, and thus limit the maximum problem size that can be processed in a single step. This challenge can be addressed by quantum computers, potentially providing better global optimality by solving larger subproblems.
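The dynamic decomposition described in this section can be sketched as a driver loop; the frontier selection and sub-solver below are simplified stand-ins, not the paper's implementation (in particular, the toy sub-solver ignores resources and places one job per slot).

```python
def decompose_and_schedule(parents, pick_frontier, solve_subproblem,
                           m_jobs, n_slots):
    """Greedy decomposition driver: repeatedly solve sub-problems of at
    most m_jobs jobs over n_slots time slots until every job is placed.

    `parents` maps job -> set of parent jobs; `solve_subproblem` returns
    a dict job -> local slot for the jobs it managed to complete.
    """
    remaining = set(parents)
    schedule, t0 = {}, 0
    while remaining:
        frontier = pick_frontier(remaining, parents, m_jobs)
        placed = solve_subproblem(frontier, n_slots)
        if not placed:                      # guard against an infinite loop
            raise RuntimeError("sub-solver placed no jobs")
        for job, t in placed.items():
            schedule[job] = t0 + t
            remaining.discard(job)
        t0 += n_slots
    return schedule

def pick_roots(remaining, parents, m_jobs):
    """Simplified frontier: only current roots (the paper also adds
    their closest descendants)."""
    roots = sorted(j for j in remaining if not (parents[j] & remaining))
    return roots[:m_jobs]

def one_job_per_slot(frontier, n_slots):
    """Toy sub-solver: place one job per local time slot."""
    return {job: t for t, job in enumerate(frontier[:n_slots])}

parents = {0: set(), 1: {0}, 2: {0}, 3: {1, 2}}
sched = decompose_and_schedule(parents, pick_roots, one_job_per_slot,
                               m_jobs=2, n_slots=2)
```

Any discrete solver (LP, SA, a quantum routine) can be substituted for `one_job_per_slot` without changing the driver.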
IV. Data & Methods
A. Data
For the purposes of benchmarking, we generate test data inspired by internal, industrially-relevant use cases. As described in Section II, the problems are represented as directed acyclic graphs, with nodes representing jobs and edges their dependencies. The graphs vary in size, from 5 to 30 nodes in increments of 5, and the resources associated with each job are drawn uniformly between 1 and 10. In graphs derived from the use cases serving as inspiration for the benchmarking data set, patterns of connectivity can vary widely. To explore the potential effects of this variation, we generated graphs with edge probabilities drawn from different distributions. More specifically, graphs are generated by sequentially adding nodes until the desired problem size is reached; the probability of a node having a previous node as its parent is a function of the previous node's order in that sequence (for instance, if this function is 1/x, the tenth node added to the graph has a 1/10 probability of having the first node as its parent). We tested three different such parent probability distributions (or fall-off distributions), generating sets of instances using 1/x, 1/x², and 1/√x. We found that the expected densities of the problem graph, the respective QUBO, and the expected makespan did not differ significantly in the problem sizes we studied. Therefore we choose to present 1/x in this paper (which generated sufficiently complex graphs) and leave varied graph connectivity as a topic for future work. Results from the other fall-off probabilities were qualitatively similar.
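The generation procedure can be sketched as follows; this is a simplified reconstruction, and the exact sampling convention of the benchmark set (in particular the argument passed to the fall-off function) is an assumption.

```python
import random

def generate_workflow(n_jobs, falloff=lambda x: 1.0 / x, seed=0):
    """Sequentially add jobs; node k gains node j < k as a parent with
    probability falloff(k - j + 1), so e.g. with falloff = 1/x the tenth
    node has a 1/10 chance of having the first node as its parent.
    Resources per job are drawn uniformly from 1..10."""
    rng = random.Random(seed)
    parents = {0: set()}
    resources = {0: rng.randint(1, 10)}
    for k in range(1, n_jobs):
        parents[k] = {j for j in range(k)
                      if rng.random() < falloff(k - j + 1)}
        resources[k] = rng.randint(1, 10)
    return parents, resources

parents, resources = generate_workflow(10)
```

Swapping the `falloff` argument for `lambda x: x ** -2` or `lambda x: x ** -0.5` reproduces the other two tested distributions.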
B. Algorithms
In this section we describe various algorithms used to solve the workflow scheduling problem. We consider classical, quantum and hybrid algorithms.
Greedy algorithm
The greedy approach can handle any problem size. However, by definition, it is often not optimal. The algorithm keeps in memory all parent-children relations, and firstly schedules the roots. After completing them, it removes these jobs from the initial graph and evaluates new roots. These new roots are then processed, removed then from the graph, and so on. This procedure is repeated until all jobs are completed. The non-optimality of such an approach can be seen for the simple 6-node graph presented in Fig. 1c.
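A minimal sketch of such a greedy scheduler, with a simplified per-slot capacity check; the helper and the data below are illustrative, not the authors' code.

```python
def greedy_schedule(parents, resources, capacity):
    """Level-by-level greedy scheduler: repeatedly take the current
    roots, pack as many as the slot capacity allows into the next time
    slot, and remove completed jobs from the graph."""
    remaining = set(parents)
    schedule, t = {}, 0
    while remaining:
        roots = sorted(j for j in remaining if not (parents[j] & remaining))
        budget = capacity[min(t, len(capacity) - 1)]
        for job in roots:                    # pack roots into slot t
            if resources[job] <= budget:
                schedule[job] = t
                budget -= resources[job]
                remaining.discard(job)
        t += 1
        if t > 10 * (len(parents) + 1):      # safety guard
            raise RuntimeError("cannot schedule with the given capacities")
    return schedule

# Hypothetical 4-job instance: 1 and 2 depend on 0, 3 depends on 1.
parents = {0: set(), 1: {0}, 2: {0}, 3: {1}}
resources = {0: 2, 1: 3, 2: 1, 3: 2}
sched = greedy_schedule(parents, resources, capacity=[4, 4, 4, 4])
```

The makespan it returns is an upper bound used later to size the QUBO.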
Classical Exact Solver: Linear Programming
Linear programming is one of the most powerful computing paradigms for discrete optimisation. Here, we implement the constraints described in Section III A in linear form, and optimize the cost function from Eq. (6). We use the branching-based CPLEX solver [35]. This algorithm successfully finds the optimal scheduling for 30 jobs and 80 available time slots, but struggles to solve larger problems. The runtime scaling for this problem is roughly exponential, which is not surprising since the problem class is NP-hard in the worst case, and CPLEX is an exact solver.
Classical Exact Solver: QUBO
To solve the QUBOs classically, we run CPLEX as a quadratic programming solver [35]. Here, we limit the runtime to 10 minutes. The results in this case are worse than those for the LP with CPLEX, since the QUBO model contains more variables.
Classical Metaheuristic Solver: Simulated Annealing
While branching-based CPLEX is an exact solver that finds the optimal solution given unlimited time, metaheuristic approaches are usually used either to find a sub-optimal solution quickly, or an optimal solution with some probability. The sub-optimal solution can then be used as a starting point for an exact solver. Here, in the framework of the QUBO, we use simulated thermal annealing (SA) as a metaheuristic QUBO solver. We use the implementation from Ref. [36], a fast and robust solver written in C++ with a linear temperature schedule. We set the number of sweeps to 50,000 and the number of attempts to 10,000, a parameter setting which corresponds to 10 minutes of wall-clock time.
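A compact thermal-annealing QUBO solver with single-bit-flip moves and a linear temperature schedule, in the spirit of, but far simpler than, the C++ solver of Ref. [36]; all parameters are illustrative, and a final greedy descent is added so the sketch always returns a local minimum.

```python
import math
import random

def simulated_annealing(Q, n_vars, sweeps=2000, t_hot=5.0, t_cold=0.05,
                        seed=1):
    """Minimize x^T Q x over x in {0,1}^n with single-bit-flip moves
    and a linearly decreasing temperature."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_vars)]

    def delta(i):
        """Energy change from flipping bit i."""
        s = 1 - 2 * x[i]                       # +1 if 0->1, -1 if 1->0
        d = Q.get((i, i), 0.0) * s
        for j in range(n_vars):
            if j != i and x[j]:
                d += (Q.get((i, j), 0.0) + Q.get((j, i), 0.0)) * s
        return d

    for sweep in range(sweeps):
        T = t_hot + (t_cold - t_hot) * sweep / (sweeps - 1)
        for i in range(n_vars):
            d = delta(i)
            if d <= 0 or rng.random() < math.exp(-d / T):
                x[i] = 1 - x[i]

    improved = True          # finish with a greedy descent to a local minimum
    while improved:
        improved = False
        for i in range(n_vars):
            if delta(i) < 0:
                x[i] = 1 - x[i]
                improved = True
    return x

# Tiny instance whose unique minimum is x = (1, 0) with energy -1.
Q = {(0, 0): -1.0, (1, 1): 2.0, (0, 1): 3.0}
solution = simulated_annealing(Q, 2)
```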
Quantum Annealing Solver: D-Wave System
In contrast to thermal annealing, quantum annealing has the potential of avoiding local minima and therefore providing better solutions to QUBO problems. Here we use D-Wave's Advantage quantum processing unit (QPU), which has over 5,000 qubits and 15-way qubit connectivity [37]. The limited connectivity forces us to use minor-embedding techniques to map our problem to the QPU's topology by chaining multiple physical qubits to represent a single logical qubit. Thus, arbitrary topologies can be realized in QPUs, but with polynomial overhead in the number of qubits used to represent the problem. For instance, one 10-job scheduling problem as a QUBO requires 210 logical qubits, but embedding it leads to 2,206 physical qubits, which exceeds the original size by almost an order of magnitude. For the 15-node problem, it was impossible to find a valid embedding on the Advantage system.

Quantum Gate-based Solver: Terra Quantum's QuEnc

Inspired by variational quantum optimization algorithms and quantum machine learning techniques, we use the recently-proposed QuEnc algorithm [38, 39]. QuEnc is a heuristic QUBO solver for gate-based quantum systems. Using an amplitude encoding mechanism, it is possible to encode an n_c-variable problem using O(log n_c) qubits, which distinguishes it from QAOA [10]. The algorithm also utilizes different ansätze, optimisation techniques, and circuit expressibility analysis.
Hybrid Solver with QUBO decomposition: D-Wave HSS
Mitigating the restrictions posed by existing hardware, hybrid methods were proposed to decompose a large problem into smaller instances so that they can be solved using quantum algorithms. One such hybrid algorithm is the Hybrid Solver Service (HSS) from D-Wave Systems, where the system finds cores of a problem, splits it into smaller pieces via classical algorithms and sends them to a quantum annealer. Such an algorithm is not guaranteed to be optimal, but can be used as a competitive metaheuristic, similar to other annealing-based algorithms.
Hybrid Solver with Greedy decomposition: Hybrid QuEnc
Here, we combine the greedy decomposition technique introduced in Section III C, with the quantum engine QuEnc. We divide the scheduling problem into subproblems, each of which is then solved via QuEnc. It is worth noting that one can utilize any such solver to solve the subproblems, but we pick QuEnc for the potential scaling of gate-based quantum algorithms.
V. Performance comparison
As an illustrative example, we solve scheduling problems of three sizes, with 5, 10 and 15 jobs, via all the algorithms described above. As a benchmark, we use the LP solution and find the minimum and maximum cost-function values of the corresponding QUBO, C_min and C_max, which we use to normalize the cost-function value. The results of the comparison are depicted in Fig. 4a.
Among the classical approaches we compare an exact QUBO solver, CPLEX, and a metaheuristic one, SA. While both CPLEX and SA solve the 5-job problem optimally, the 10-job and 15-job scheduling problems cannot be solved via SA. CPLEX can find the optimal scheduling in general, but within the limited time window of 60 minutes it fails and provides only a suboptimal QUBO solution. Moreover, when the runtime is limited to a few seconds, simulated annealing provides better solutions.
For the quantum solvers, quantum annealing is controlled by the annealing time and the number of samples, while QuEnc is controlled by the circuit depth, the learning hyperparameters and the number of repetitions. We vary the solver parameters taking into account that increasing the number of reads and the annealing time both lead to a higher probability of finding the optimal solution, but the allowed run duration is limited. For the Advantage system, we used an annealing time of 2 ms and collected 500 samples.
For QuEnc, we fix the number of layers to 100, perform learning using gradient descent with a fixed velocity, and repeat the algorithm 10 times. While the 5-job problem is solved with the same optimality, QuEnc provides a better solution for the 10-job problem and manages to solve the 15-job problem exploiting just 10 qubits. We want to emphasize that quantum annealing is performed on a real QPU, while QuEnc was simulated; nevertheless, it is clear that QuEnc utilizes far fewer resources.
Combining classical and quantum algorithms together, we tested two hybrid solutions. The first, HSS, successfully solves the 5-job problem, but cannot schedule the 10- and 15-job problems, providing less optimal solutions than simulated annealing. As a second approach, we combined the greedy decomposition with QuEnc as the subproblem solver, using 2 time slots and 3 jobs in a single subproblem. We observed that QuEnc with 50 layers requires 5.3 restarts on average to converge to the optimal solution, with a maximum QUBO size of 17 variables for the subproblems. An example of the 5-job problem solution is presented in Fig. 5. On the right-hand side we plot the simulated QuEnc convergence and the corresponding quantum circuits. As can be seen, the circuit used in the experiment is hardware-efficient: it utilizes connections only between neighbouring qubits. With the given decomposition, at most 6 qubits are required. The hardware-efficient circuit and small number of qubits demonstrate the possibility of running this algorithm on NISQ devices. Further increasing the subproblem size increases the number of qubits and improves the global solution optimality for some problems, as discussed further in Section V A.
The QUBO size as a function of the number of jobs is shown in Fig. 4b. From the greedy solution we observed that the number of time slots required to schedule N jobs scales as M = 2.8 N. The QUBO size before reduction is N M + N ⌈log₂ r_max⌉, where r_max is the maximum number of resources available in a single time slot. The first term corresponds to the decision variables (jobs and time slots), and the second to slack (ancillary) variables. Since M = O(N) and r_max is fixed in our problem type, the QUBO size scales as O(N²). For the considered data, the QUBO size scales as O(N^1.8). In order to estimate the weight A from Eq. (8), let us consider the case R = 0 and f(t) = t for the objective function in Eq. (6):

C = Σ_{i,t>0} t x_{i,t} ≤ t_max Σ_{i,t} x_{i,t} ≈ 2.8 N². (14)
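The size estimate can be reproduced directly; this is an illustrative calculation that assumes the fitted makespan M ≈ 2.8 N and the slack-variable count quoted above.

```python
import math

def qubo_size(n_jobs, r_max, slots_per_job=2.8):
    """QUBO size before reduction: N*M decision variables (one x[i, t]
    per job/slot pair, with M ~ 2.8 N from the greedy fit) plus
    N * ceil(log2 r_max) slack bits for the inequality constraints."""
    m_slots = round(slots_per_job * n_jobs)
    decision = n_jobs * m_slots
    slack = n_jobs * math.ceil(math.log2(r_max))
    return decision + slack

sizes = {n: qubo_size(n, r_max=16) for n in (5, 10, 15)}
```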
A. Decomposition analysis
We investigate the decomposition approach proposed in Section III C using an exact solver, in order to estimate the behaviour of the pipeline when all sub-problems are solved optimally. The reduction of the makespan for 50 instances of 20-node graphs as the subproblem size increases is depicted in Fig. 6. The factors which influence the greedy decomposition are (i) the globality of the sub-problem, (ii) the number of considered jobs per time slot r = m/n, and (iii) the inability to guarantee even the feasibility of the global solution. Indeed, if the coefficient r is fixed, increasing the size of the sub-problem helps the algorithm to allocate jobs more globally, and thus the optimality of the final solution increases. This phenomenon is apparent in Fig. 6 (where r = 1) and makes the algorithm useful in practice.
Hyperparameters such as the number of jobs m and the number of time slots n, including the ratio r, should be tuned to the considered instances, alongside the penalty weights in the QUBO formulation.
VI. Conclusion
In this paper we presented a novel formulation of a particular class of scheduling problem, the workflow scheduling problem, as a QUBO. Inspired by a real-world use-case, we expanded upon previously known implementations of similar scheduling problems to include more realistic constraints. Specifically, we consider the case where some jobs depend on each other, as well as a maximum capacity of resources (at every time slot) which must be respected. We found that the introduction of these constraints increased the sizes of the QUBOs considerably, and so we investigated decomposition techniques in order to solve the QUBOs with the various quantum and hybrid solvers. We found that the hybrid and classical algorithms were the most successful in solving the instances, although no solver was able to solve all QUBOs at all sizes. The quantum solvers struggled to solve even the smallest problems. The improvement in performance due to the decomposition technique further highlights that the practical difficulty of solving scheduling problems (and optimization problems in general) on quantum hardware involves more than the complexity class of the problems being solved. By reducing the problem size (and therefore the QUBO complexity), some of these limitations could be overcome. Therefore, future work will be dedicated to finding particular sub-classes of scheduling problems that can be represented more efficiently in QUBO form. Furthermore, novel implementations of hybrid quantum and quantum-inspired algorithms will also be investigated, to better address the QUBOs arising from such real-world instances.
Author contributions
A.A. and S.Ya. conceived of the project idea, developed the workflow scheduling problem in its presented form based on the industrial use case, and guided the work presented here. A.A. wrote the methods used to generate synthetic model data. S.Ya. generated the problem instances. A.I.P., S.Yu., and M.R.P. developed the QUBO formulation and benchmarked performance, A.I.P. and S.Yu. developed the corresponding software modules and evaluated complexity. A.I.P. and M.R.P. developed and tested the decomposition method. All authors contributed to the text of the paper.
FIG. 1: Example of a workflow scheduling problem with 6 jobs. a) Directed acyclic graph with 6 nodes (job index inside the circle) that illustrates the required ordering (parent-child relations) and the resources required for each job (number next to the circle). b) Available resources for each time slot in which one or more jobs can be completed. The maximum number of time slots is fixed at seven. c) Makespan map of the problem processed by the sub-optimal greedy algorithm. The map shows in which time slot each job should be completed. The total makespan for the greedy solution is 7 time slots. d) Makespan map of the problem processed by an optimal algorithm. The total makespan is 5 time slots. Jobs #1 and #4 are completed in the same time slot #3, since their parents were completed and the number of available resources (9) is higher than the combined resources required by both jobs (3+1).
FIG. 2: Decomposition technique for the problem depicted in Fig. 1. According to the algorithm, the whole problem is divided into three subproblems where we aim to complete at most three jobs in two time slots and create the whole makespan map (depicted on the left). In the first subproblem, jobs 0 (root), 1 (first descendant), 2 (first descendant) are available for processing; however, only 0 and 2 are completed due to the lack of resources: 0 is completed since it is a root and 2 because it requires more resources than 1. In the second subproblem, jobs 1 (new root), 3 (new root), 4 (first descendant) are considered and all three are completed. In the last subproblem, only job 5 (new root) remains and it is completed in a single time slot. Each subproblem can be solved in any suitable optimisation formulation via any discrete solver.
FIG. 3: Average makespan obtained via the greedy algorithm as a function of the number of jobs. In order to solve the problem as a QUBO we need to know the maximum number of time slots that may be required; this poses an upper bound on the expected makespan. The value is obtained by solving the problem with the fast greedy algorithm, which guarantees that the problem can be solved at least with the greedy makespan. For our data, the makespan grows linearly with the number of jobs as (2.75 ± 0.06) N.
FIG. 4: QUBO solutions using classical, quantum, and hybrid algorithms, and problem-size analysis. a) The normalized cost-function value for the 5, 10, and 15-job scheduling problems solved as QUBOs on classical solvers (CPLEX and SA), quantum solvers (annealing and QuEnc), and hybrid solvers (HSS and Hybrid QuEnc based on greedy decomposition). It is clear that the greedy decomposition with the QuEnc engine (which can potentially be replaced with any other suitable solver) is the only approach that can schedule jobs optimally. Since the maximum size of the subproblems is fixed, Hybrid QuEnc can solve arbitrarily large problems. b) Size of the QUBO as a function of the number of jobs N. In our data the number of required time slots grows as (2.75 ± 0.06) N, and the QUBO size grows as ∼ N^1.8. To schedule 5 jobs the QUBO is formulated over 60 binary variables, for the 10-job problem we work with 210 bits, and for 15 jobs we need 720 bits.
FIG. 5: Hybrid quantum solution based on greedy decomposition and the QuEnc algorithm. The 6-node problem, the decomposition of which is shown in Fig. 2, is solved via QuEnc with a fixed circuit layout containing 50 layers with continuously tuned rotations. On the right-hand side we show QuEnc's convergence and the corresponding quantum circuits with 5, 6, and 5 qubits that solve the 15-bit, 17-bit and 9-bit QUBOs of the subproblems, respectively. By adjusting the QuEnc hyperparameters one can achieve faster convergence with fewer gates, but this requires additional time-consuming tuning, which we avoid here. The final scheduling coincides with the optimal one provided by linear programming.
FIG. 6: Average makespan dependence on the size of a subproblem. Applying the decomposition to 50 problems from the test data with 20 nodes and 1/x fall-off probability, we see the expected decrease in average makespan as the sub-problem size grows. Here, the number of time slots at every step of the algorithm is equal to the maximum number of jobs considered for accommodation.
[1] P. Benioff, Journal of Statistical Physics 22, 563 (1980).
[2] R. P. Feynman, International Journal of Theoretical Physics 21, 467 (1982).
[3] R. P. Feynman, Foundations of Physics 16, 507 (1986).
[4] E. Farhi, J. Goldstone, S. Gutmann, and M. Sipser, arXiv preprint (2000).
[5] D. Aharonov, W. van Dam, J. Kempe, Z. Landau, S. Lloyd, and O. Regev, SIAM Review 50, 755 (2008).
[6] F. Arute, K. Arya, R. Babbush, D. Bacon, J. C. Bardin, R. Barends, R. Biswas, S. Boixo, F. G. S. L. Brandao, D. A. Buell, et al., Nature 574, 505 (2019).
[7] G. J. Mooney, G. A. L. White, C. D. Hill, and L. C. L. Hollenberg, Advanced Quantum Technologies 4, 2100061 (2021).
[8] M. W. Johnson, M. H. S. Amin, S. Gildert, T. Lanting, F. Hamze, N. Dickson, R. Harris, A. J. Berkley, J. Johansson, P. Bunyk, et al., Nature 473, 194 (2011).
[9] T. Lanting, A. J. Przybysz, A. Y. Smirnov, F. M. Spedalieri, M. H. Amin, A. J. Berkley, R. Harris, F. Altomare, S. Boixo, P. Bunyk, et al., Phys. Rev. X 4, 021041 (2014).
[10] E. Farhi, J. Goldstone, and S. Gutmann, arXiv:1411.4028 (2014).
[11] A. Lucas, Frontiers in Physics 2, 5 (2014).
[12] F. Neukart, G. Compostella, C. Seidel, D. von Dollen, S. Yarkoni, and B. Parney, Frontiers in ICT 4 (2017).
[13] M. Ohzeki, A. Miki, M. J. Miyama, and M. Terabe, Control of automated guided vehicles without collision by quantum annealer and digital devices, arXiv:1812.01532 (2018).
[14] S. Yarkoni, A. Alekseyenko, M. Streif, D. Von Dollen, F. Neukart, and T. Bäck, in 2021 IEEE International Conference on Quantum Computing and Engineering (QCE) (2021), pp. 35-41.
[15] S. Yarkoni, E. Raponi, S. Schmitt, and T. Bäck, arXiv:2112.07491 (2021).
[16] J. R. Finžgar, P. Ross, J. Klepsch, and A. Luckow, QUARK: A framework for quantum computing application benchmarking, arXiv:2202.03028 (2022).
[17] S. Yarkoni, F. Neukart, E. M. G. Tagle, N. Magiera, B. Mehta, K. Hire, S. Narkhede, and M. Hofmann, Quantum Shuttle: Traffic Navigation with Quantum Computing (Association for Computing Machinery, New York, NY, USA, 2020), pp. 22-30.
[18] P. W. Shor, SIAM Journal on Computing 26, 1484 (1997).
[19] L. K. Grover, arXiv:quant-ph/9605043 (1996).
[20] A. W. Harrow, A. Hassidim, and S. Lloyd, Physical Review Letters 103 (2009).
[21] M. R. Perelshtein, A. I. Pakhomchik, A. A. Melnikov, A. A. Novikov, A. Glatz, G. S. Paraoanu, V. M. Vinokur, and G. B. Lesovik, arXiv:2003.12770 (2020).
[22] J. Preskill, Quantum 2, 79 (2018).
[23] A. Skolik, J. R. McClean, M. Mohseni, P. van der Smagt, and M. Leib, Quantum Machine Intelligence 3 (2021).
[24] A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J. M. Chow, and J. M. Gambetta, Nature 549, 242 (2017).
[25] M. P. Harrigan, K. J. Sung, M. Neeley, K. J. Satzinger, F. Arute, K. Arya, J. Atalaya, J. C. Bardin, R. Barends, S. Boixo, et al., Nature Physics 17, 332 (2021).
[26] E. Lawler, Ann. Discrete Math. 2, 75 (1978).
[27] J. Lenstra and A. Rinnooy Kan, Oper. Res. 26, 22 (1978).
[28] J. Du, J.-T. Leung, and G. Young, Inform. and Comput. 92, 219 (1991).
[29] R. van Bevern, R. Bredereck, L. Bulteau, C. Komusiewicz, N. Talmon, and G. J. Woeginger, in International Conference on Discrete Optimization and Operations Research (Springer, 2016), pp. 105-120.
[30] J. Lenstra, A. Rinnooy Kan, and P. Brucker, Ann. of Discrete Math. 1, 343 (1977).
[31] M. Garey and D. Johnson, J. Assoc. Comput. Mach. 25, 499 (1978).
[32] D. Venturelli, D. J. J. Marchand, and G. Rojo, Quantum annealing implementation of job-shop scheduling, arXiv:1506.08479 (2015).
[33] K. Ikeda, Y. Nakamura, and T. S. Humble, Scientific Reports 9, 12837 (2019).
[34] S. Yarkoni, A. Huck, H. Schülldorf, B. Speitkamp, M. S. Tabrizi, M. Leib, T. Bäck, and F. Neukart, in Computational Logistics, edited by M. Mes, E. Lalla-Ruiz, and S. Voß (Springer International Publishing, Cham, 2021), pp. 502-517.
[35] IBM ILOG CPLEX, International Business Machines Corporation 46, 157 (2009).
[36] S. Isakov, I. Zintchenko, T. Rønnow, and M. Troyer, Computer Physics Communications 192, 265 (2015).
[37] C. McGeoch and P. Farré, The D-Wave Advantage System: An Overview, D-Wave Technical Report Series 14-1049A-A (D-Wave Systems Inc., Burnaby, BC, Canada, 2020).
[38] M. R. Perelshtein, A. B. Sagingalieva, K. Pinto, V. Shete, A. I. Pakhomchik, A. A. Melnikov, N. R. Kenbaev, F. Neukart, G. Gesek, A. A. Melnikov, et al., Practical application-specific advantage through hybrid quantum computing, to be published (2022).
[39] M. R. Perelshtein, A. I. Pakhomchik, A. A. Melnikov, M. Podobriy, I. Kreidich, B. Nuriev, S. Yudin, A. Termanova, G. S. Paraoanu, and V. M. Vinokur, NISQ-compatible variational quantum architecture for unconstrained and constrained discrete optimisation, to be published (2022).
ON THE Λ-COTORSION SUBGROUP OF THE SELMER GROUP

Ahmed Matar

1 Dec 2018, arXiv:1812.00207v1 [math.NT], doi:10.4310/ajm.2020.v24.n3.a3

Abstract. Let E be an elliptic curve defined over a number field K with supersingular reduction at all primes of K above p. If K_∞/K is a Z_p-extension such that E(K_∞)[p^∞] is finite and H²(G_S(K_∞), E[p^∞]) = 0, then we prove that the Λ-torsion subgroup of the Pontryagin dual of Sel_{p^∞}(E/K_∞) is pseudoisomorphic to the Pontryagin dual of the fine Selmer group of E over K_∞. This is the Galois-cohomological analog of a flat-cohomological result of Wingberg.
Introduction
If A is a Hausdorff, abelian locally-compact topological group we denote its Pontryagin dual by A * . Let Γ be a pro-p group isomorphic to Z p and let Λ = Z p [[Γ]] be the completed group ring. If A is a finitely generated Λ-module, we let T Λ (A) denote its Λ-torsion submodule. Also we letȦ be the Λ-module A with the inverse Λ-action: γ · a = γ −1 a for a ∈ A, γ ∈ Γ. We denote T Λ (Ȧ) byṪ Λ (A).
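For later reference, the standard structure theory behind the pseudo-isomorphisms used throughout can be summarized as follows (a reminder of well-known facts, not a statement from this paper; see e.g. Washington, Introduction to Cyclotomic Fields, ch. 13):

```latex
% Fixing a topological generator \gamma_0 of \Gamma gives an isomorphism
\Lambda = \mathbb{Z}_p[[\Gamma]] \;\cong\; \mathbb{Z}_p[[T]], \qquad \gamma_0 \mapsto 1+T .
% A morphism of finitely generated \Lambda-modules A \to B is a
% pseudo-isomorphism (written A \sim B) if its kernel and cokernel are finite.
% Structure theorem: every finitely generated \Lambda-module A satisfies
A \;\sim\; \Lambda^{r} \,\oplus\, \bigoplus_{i} \Lambda/(p^{\mu_i})
  \,\oplus\, \bigoplus_{j} \Lambda/(f_j(T)^{m_j}),
% with the f_j distinguished irreducible polynomials; the \Lambda-torsion
% submodule T_\Lambda(A) corresponds to the two torsion summands.
```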
We now define the p ∞ -Selmer group and the fine p ∞ -Selmer group. Assume that p is a prime, F a number field and E is an elliptic curve defined over F . Let S be a finite set of primes of F containing all the primes dividing p, all the primes where E has bad reduction and all the archimedean primes. We let F S be the maximal extension of F unramified outside S. Suppose now that L is a field with F ⊆ L ⊆ F S . We let G S (L) = Gal(F S /L) and S L be the set of primes of L above those in S. We define the p ∞ -Selmer group of E/L as
0 −→ Sel p ∞ (E/L) −→ H 1 (G S (L), E[p ∞ ]) −→ v∈SL H 1 (L v , E)[p ∞ ]
Also we define the fine p ∞ -Selmer group of E/L as
0 −→ R p ∞ (E/L) −→ H 1 (G S (L), E[p ∞ ]) −→ v∈SL H 1 (L v , E[p ∞ ])
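The two groups are compared via the local Kummer sequences: at every place v, the local condition defining the Selmer group is a quotient of the one defining the fine Selmer group, so the fine Selmer group sits inside the Selmer group (a standard observation, not specific to this paper):

```latex
% Local Kummer sequence at a place v of L:
0 \to E(L_v)\otimes \mathbb{Q}_p/\mathbb{Z}_p \to H^1(L_v, E[p^\infty])
  \to H^1(L_v, E)[p^\infty] \to 0 .
% Since H^1(L_v,E)[p^\infty] is a quotient of H^1(L_v,E[p^\infty]), any class
% vanishing in H^1(L_v,E[p^\infty]) also vanishes in H^1(L_v,E)[p^\infty], so
R_{p^\infty}(E/L) \;\subseteq\; \mathrm{Sel}_{p^\infty}(E/L)
  \;\subseteq\; H^1(G_S(L), E[p^\infty]) .
```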
The goal of this paper is to prove the following result Theorem 1.1. Let K be a number field, E an elliptic curve defined over K and p a rational prime such that E has good supersingular reduction at all primes of K above p. Let K ∞ /K be a Z p -extension such that every prime of K above p ramifies and such that:
(i) E(K ∞ )[p ∞ ] is finite (ii) H 2 (G S (K ∞ ), E[p ∞ ]) = 0.
Then there exists a pseudo-isomorphism

Ṫ Λ (Sel p ∞ (E/K ∞ ) * ) ∼ R p ∞ (E/K ∞ ) *
Concerning the conditions in the theorem, condition (i) is a mild one (see proposition 1.2 below) whereas condition (ii) implies that R p ∞ (E/K ∞ ) * is Λ-torsion (see theorem 2.2).
Wingberg ([23] corollary 2.5) has proven a similar theorem stated in terms of flat cohomology rather than Galois cohomology. Although it may appear that the above theorem follows from Wingberg's result, the author has found difficulties in attempting such a deduction. The following arguments illustrate the potential obstacles. I would like to thank Kęstutis Česnavičius for his help with these arguments. Let E and K be as in the theorem and let E be the Néron model of E over O K .
As a first step to attempt to deduce the above theorem from Wingberg's result, one would hope to show that Sel p ∞ (E/K ∞ ) * and H 1 (O ∞ , E[p ∞ ]) * are pseudo-isomorphic (where O ∞ is the ring of integers of K ∞ ). In hope of showing the existence of such a pseudo-isomorphism, one may use the results of Česnavičius's paper [3] as they are relevant. Assuming that no prime v of K where E has bad reduction splits completely in K ∞ /K, the proof of [3] prop. 5.4 together with [3] prop. 2.5 show that the difference between the groups Sel p ∞ (E/K n ) and H 1 (O Kn , E[p ∞ ]) is finite and bounded with n. This proves that Sel p ∞ (E/K ∞ ) * and H 1 (O ∞ , E[p ∞ ]) * are pseudo-isomorphic in this case. However in the case when a prime v of K where E has bad reduction splits completely in K ∞ /K, this argument can fail and hence it is unclear that a pseudo-isomorphism exists in this case. Now we turn to the group H 2 (O ∞ , E[p ∞ ]) * . Let S ∞ be the set of primes of K ∞ above those in S. In order to invoke Wingberg's result, one would need to show that H 2 (O ∞ , E[p ∞ ]) * is Λ-torsion. In order to try to show this, one would first identify
H 2 (G S (K ∞ ), E[p ∞ ]) with H 2 (O ∞ − S ∞ , E[p ∞ ])
where O ∞ − S ∞ is the localization of O ∞ away from S ∞ (modulo a limit argument, such an identification is shown in appendix A to [4]). From the flat cohomology with support sequence we have an exact sequence
H 2 S∞ (O ∞ , E[p ∞ ]) θ − → H 2 (O ∞ , E[p ∞ ]) → H 2 (O ∞ − S ∞ , E[p ∞ ])
Under the conditions of theorem 1.1 H 2 (G S (K ∞ ), E[p ∞ ]) = 0 and so the map θ is a surjection. Therefore in order to show that H 2 (O ∞ , E[p ∞ ]) * is Λ-torsion, one needs to get a handle on the Λ-corank of both H 2 S∞ (O ∞ , E[p ∞ ]) and ker θ. The latter group seems difficult to handle so we will only discuss the former. We have
H 2 S∞ (O ∞ , E[p ∞ ]) = ⊕ v∈S∞ H 2 v (O ∞ , E[p ∞ ])
. For any v ∈ S ∞ by excision ([7] lemma 2.9) and [7]
lemma 2.6 we have an isomorphism H 2 v (O ∞ , E[p ∞ ]) ∼ = H 2 v (O ∞,v , E[p ∞ ]) where O ∞,v is the ring of integers of K ∞,v .
By the flat cohomology with support sequence we have a map
H 1 (K ∞,v , E[p ∞ ]) → H 2 v (O ∞,v , E[p ∞ ]). Combining the above observations we get a map ⊕ v∈S∞ H 1 (K ∞,v , E[p ∞ ]) −φ→ H 2 S∞ (O ∞ , E[p ∞ ]). For any prime v above p, H 1 (K ∞,v , E[p ∞ ]) is not Λ-cotorsion (see [10] prop. 1), therefore it is unclear whether img φ and, in turn, img θ is Λ-cotorsion.
The above arguments illustrate the difficulties in attempting to deduce theorem 1.1 from Wingberg's result. Everything is done in this paper with Galois cohomology. Our method of proof generally follows Wingberg's with major differences being that all exact sequences arising from the spectral sequences of Schneider [20] are replaced with sequences arising from the snake lemma together with the Kummer sequence. The other difference is that the Artin-Mazur duality of flat cohomology groups is replaced with the Poitou-Tate duality of Galois cohomology groups.
The following proposition shows that condition (i) in theorem 1.1 is a mild one. As the proposition shows, all elliptic curves without complex multiplication satisfy condition (i) in the theorem. For elliptic curves with complex multiplication a slightly weaker version of theorem 1.1 is given in [1].
Proposition 1.2.
With the setup and conditions in theorem 1.1, we have that E(K ∞ )[p ∞ ] is finite in the following cases:
(1) E does not have complex multiplication
(2) K ∞ /K is the cyclotomic Z p -extension of K
(3) p is odd and splits in K/Q.

Proof. (i) By Serre's theorem, Gal(K(E[p ∞ ])/K) is an open subgroup of GL 2 (Z p ). Suppose that E(K ∞ )[p ∞ ] is infinite. Then either E(K ∞ )[p ∞ ] has Z p -corank one or E[p ∞ ] is rational over K ∞ . In the first case V p (E) = T p (E) ⊗ Zp Q p has a one-dimensional Gal(K̄/K)-invariant subspace.
This clearly contradicts Serre's theorem. In the second case K(E[p ∞ ])/K is a subextension of K ∞ /K and hence must be an abelian extension. This also contradicts Serre's theorem.
(ii) Follows from Ribet's theorem [19].
(iii) Suppose that p is odd and splits in K/Q. Choose a prime v of K above p. Since E has supersingular reduction at v, we have E(K v )[p ∞ ] = E(Q p )[p ∞ ] = Ê(pZ p )[p ∞ ] where Ê is the formal group of E/Q p . By [22] ch. 4 th. 6.1, Ê(pZ p ) has no p-torsion if p is odd, so E(K v )[p ∞ ] = {0}. Therefore E(K ∞ )[p ∞ ] Γ = E(K)[p ∞ ] = {0}, which implies that E(K ∞ )[p ∞ ] = {0}.
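Supersingularity at p, assumed throughout the paper, can be observed numerically: a supersingular curve over F_p satisfies a_p ≡ 0 (mod p), and for p ≥ 5 in fact a_p = 0, i.e. #E(F_p) = p + 1. A minimal brute-force check, using the standard example y² = x³ + x (supersingular for p ≡ 3 mod 4; this curve is an illustration chosen by us, not one taken from the paper):

```python
# Brute-force point count of E: y^2 = x^3 + x over F_p.
# For p ≡ 3 (mod 4) this curve is supersingular, so a_p = 0 and
# #E(F_p) = p + 1 - a_p = p + 1.
def count_points(p):
    # squares[v] = number of y in F_p with y^2 = v
    squares = {}
    for y in range(p):
        v = y * y % p
        squares[v] = squares.get(v, 0) + 1
    total = 1  # the point at infinity
    for x in range(p):
        rhs = (x * x * x + x) % p
        total += squares.get(rhs, 0)
    return total

for p in [7, 11, 19, 23]:  # primes ≡ 3 (mod 4)
    assert count_points(p) == p + 1
```

For p ≡ 1 (mod 4) the same curve is ordinary and the count deviates from p + 1, which is one quick way to distinguish the two reduction types.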
Proof of Theorem
Theorem 1.1 will be proven in this section. The proof will be broken up into a number of propositions. We keep all the definitions and notation from the introduction and furthermore denote the subgroup Γ^{p^n} of Γ by Γ n .
Let A be a finitely generated Λ-module. We let T Λ (A) and T µ (A) be the Λ-torsion submodule and Z p -torsion submodule of A respectively. Then define T λ (A) := T Λ (A)/T µ (A). As in the introduction, we use the notation Ṫ − (A) = T − (Ȧ). We have the following lemma of Wingberg ([23] lemma 1.1)

Lemma 2.1. Let A be a finitely generated Λ-module. Then we have pseudo-isomorphisms

(i) lim ← n,m (A * /p m )^{Γ n } ∼ Ṫ µ (A)

(ii) lim ← n,m (A * [p m ])_{Γ n } ∼ Ṫ λ (A)

(iii) lim ← n,m (A * /p m )_{Γ n } ∼ 0

where (−)^{Γ n } denotes Γ n -invariants and (−)_{Γ n } denotes Γ n -coinvariants, and the inverse limits are taken with respect to multiplication-by-p resp. canonical surjection (over m) and the norm map resp. canonical surjection (over n).
Now let F be a number field, S a finite set of primes of F and B a finite G Smodule whose order is only divisible by rational primes lying below primes in S. Define B ′ := Hom(B, µ) where µ is the group of all roots of unity in C. We let F S be the maximal extension of F unramified outside S. Suppose now that L is a number field with F ⊆ L ⊆ F S . We let G S (L) = Gal(F S /L) and S L be the set of primes of L above those in S. Then we have the following perfect Poitou-Tate duality pairing ([17] theorem 8.6.7)
X 1 (G S (L), B ′ ) × X 2 (G S (L), B) → Q/Z (1)

where X i (G S (L), M ) (M any G S -module) is defined to be the kernel of the restriction map H i (G S (L), M ) → ⊕ v∈SL H i (L v , M ). If L ∞ /F is an infinite extension contained in F S we define

X i (G S (L ∞ ), M ) = lim −→ X i (G S (L ′ ), M )
where the direct limit is taken over all intermediate finite extensions L ′ /L contained in L ∞ with respect to the restriction maps.
Now if E is an elliptic curve defined over F , p a rational prime and S a finite set of primes of F containing all primes dividing p, then for any n ≥ 0 the Weil pairing together with the above pairing give a perfect pairing
⟨ , ⟩ : X 1 (G S (L), E[p n ]) × X 2 (G S (L), E[p n ]) → Q p /Z p (2)

Now let L ′ be a finite extension of L contained in F S . The definition of this pairing (see [17] theorem 8.6.7) shows that it is induced by the cup product. Therefore for a ∈ X 1 (G S (L ′ ), E[p n ]) and b ∈ X 2 (G S (L), E[p n ]) we have ⟨cor a, b⟩ = ⟨a, res b⟩, where cor : X 1 (G S (L ′ ), E[p n ]) → X 1 (G S (L), E[p n ]) is the corestriction map and res : X 2 (G S (L), E[p n ]) → X 2 (G S (L ′ ), E[p n ]) is the restriction map.
The following theorem is well-known (see for example [18] prop. 1.3.2). Using the above pairing and a control theorem, we will present another proof of this theorem.

Theorem 2.2. Let K be a number field, p a rational prime, K ∞ /K a Z p -extension and E an elliptic curve defined over K. Let S be a finite set of primes of K containing all the primes dividing p, all the primes where E has bad reduction and all the archimedean primes. Then
R p ∞ (E/K ∞ ) * is Λ-torsion if and only if X 2 (G S (K ∞ ), E[p ∞ ]) = 0.
If p is odd and no prime in S splits completely in
K ∞ /K, then X 2 (G S (K ∞ ), E[p ∞ ]) = 0 if and only if H 2 (G S (K ∞ ), E[p ∞ ]) = 0.
Proof. First we prove the second statement. So suppose that p is odd and no prime in S splits completely in K ∞ /K. Let w be a prime of K ∞ above a prime v in S. Since v does not split completely in K ∞ /K, the extension K ∞,w /K v is an infinite pro-p extension. Hence by [17] theorem 7.1.8(i), cd p (K ∞,w ) ≤ 1. So H 2 (K ∞,w , E[p ∞ ]) = 0, which implies that X 2 (G S (K ∞ ), E[p ∞ ]) = H 2 (G S (K ∞ ), E[p ∞ ]).
Now we prove the first statement. By the restriction-corestriction property of the pairing (2), the Pontryagin dual of
X 2 (G S (K ∞ ), E[p ∞ ]) can be identified with lim ← − n,m X 1 (G S (K n ), E[p m ])
where K n is the fixed field of Γ n and the inverse limit is taken over m with regards to multiplication-by-p and over n with regards to corestriction. Therefore we see that to prove the first statement, we only have to show that lim ← − n,m
X 1 (G S (K n ), E[p m ]) has the same Λ-rank as R p ∞ (E/K ∞ ) * .
Consider the group lim ← − n,m
X 1 (G S (K ∞ ), E[p ∞ ])[p m ] Γn = lim ← − n,m R p ∞ (E/K ∞ )[p m ] Γn
where the inverse limit is taken over m with regards to multiplication-by-p and over n with regards to the norm map. According to [17] prop. 5.5.10(i) this group is a free Λ-module with rank equal to the Λ-corank of
R p ∞ (E/K ∞ ). Now consider the group lim ← − n,m X 1 (G S (K ∞ ), E[p m ]) Γn . For any m ≥ 0 we have an exact sequence 0 → M p m (E/K ∞ ) → X 1 (G S (K ∞ ), E[p m ]) → X 1 (G S (K ∞ ), E[p ∞ ])[p m ] → 0 where M p m (E/K ∞ ) = E(K ∞ )[p ∞ ]/p m ∩ X 1 (G S (K ∞ ), E[p m ])
For any n ≥ 0 this exact sequence induces another sequence
0 → M p m (E/K ∞ ) Γn → X 1 (G S (K ∞ ), E[p m ]) Γn → X 1 (G S (K ∞ ), E[p ∞ ])[p m ] Γn → M p m (E/K ∞ ) Γn
We claim that all the groups in this exact sequence are finite. Since E(K ∞ )[p ∞ ]/p m is finite, therefore the first and last terms of the sequence are finite. So we only have to show that
X 1 (G S (K ∞ ), E[p ∞ ])[p m ]
Γn is finite. This is easily seen by taking Pontryagin duals and noting that X 1 (G S (K ∞ ), E[p ∞ ]) * is a finitely generated Λ-module (X 1 (G S (K ∞ ), E[p ∞ ]) ⊆ Sel p ∞ (E/K ∞ ) and Sel p ∞ (E/K ∞ ) * is a finitely generated Λ-module by [13] theorem 4.5). Therefore we have seen that all the groups in the above exact sequence are finite and so by taking inverse limits the sequence remains exact

0 → lim ← n,m M p m (E/K ∞ ) Γn → lim ← n,m X 1 (G S (K ∞ ), E[p m ]) Γn → lim ← n,m X 1 (G S (K ∞ ), E[p ∞ ])[p m ] Γn → lim ← n,m M p m (E/K ∞ ) Γn
The groups E(K ∞ )[p ∞ ]/p m are finite of bounded order as m varies whence the groups M p m (E/K ∞ ) Γn and M p m (E/K ∞ ) Γn are finite of bounded order as n and m vary. It follows that the first and last inverse limits in the above sequence are finite.
Therefore the map
lim ← − n,m X 1 (G S (K ∞ ), E[p m ]) Γn → lim ← − n,m X 1 (G S (K ∞ ), E[p ∞ ])[p m ] Γn
has finite kernel and cokernel which shows that
rank Λ (lim ← − n,m X 1 (G S (K ∞ ), E[p m ]) Γn ) = rank Λ (lim ← − n,m X 1 (G S (K ∞ ), E[p ∞ ])[p m ] Γn )
and as observed before we have
rank Λ (lim ← n,m X 1 (G S (K ∞ ), E[p ∞ ])[p m ] Γn ) = rank Λ (R p ∞ (E/K ∞ ) * )

We denote lim ← n,m X 1 (G S (K n ), E[p m ]) by X p ∞ (E/K ∞ ) and lim ← n,m X 1 (G S (K ∞ ), E[p m ]) Γn by Y p ∞ (E/K ∞ ).
From the observations above, we see that it suffices to prove that the map Ξ :
X p ∞ (E/K ∞ ) → Y p ∞ (E/K ∞ )
induced by restriction has Λ-torsion cokernel. We do this by means of a control theorem. Consider the following commutative diagram
0 → X 1 (G S (K ∞ ), E[p m ]) Γn → H 1 (G S (K ∞ ), E[p m ]) Γn −ψ ∞,m → ⊕ w∈S∞ H 1 (K ∞,w , E[p m ]) Γn

0 → X 1 (G S (K n ), E[p m ]) → H 1 (G S (K n ), E[p m ]) −ψ n,m → ⊕ v∈Sn H 1 (K n,v , E[p m ])

(3)

In the commutative diagram above the rows are exact, the vertical maps s n,m , h n,m and g n,m (restriction) go from the second row to the first, and the sets S n and S ∞ are the sets of primes above S in K n and K ∞ respectively. Taking inverse limits over n and m in the above, we get another commutative diagram
0 → Y p ∞ (E/K ∞ ) → lim ← n,m H 1 (G S (K ∞ ), E[p m ]) Γn −φ→ lim ← n,m ⊕ w∈S∞ H 1 (K ∞,w , E[p m ]) Γn

0 → X p ∞ (E/K ∞ ) → lim ← n,m H 1 (G S (K n ), E[p m ]) −ψ→ lim ← n,m ⊕ v∈Sn H 1 (K n,v , E[p m ])

(4)

with exact rows and vertical maps Ξ, Ξ ′ and Ξ ′′ (induced by restriction) going from the second row to the first. From the snake lemma, we see that in order to show that coker Ξ is Λ-torsion, we only have to show that both ker Ξ ′′ and coker Ξ ′ are Λ-torsion. Since cd p (Γ) = 1, it follows that coker Ξ ′ = 0. Now we deal with ker Ξ ′′ . Primes in S that split completely in K ∞ /K do not contribute anything to ker Ξ ′′ so we may assume that S has no such primes. Now choose an M such that #S M = #S ∞ and let m = #S M . For every n ≥ M we label the primes in S n as v 1 , v 2 , ..., v m and the primes of S ∞ as w 1 , w 2 , ..., w m . We choose a labelling such that if k ≥ j ≥ M then w i ∈ S ∞ lies above v i ∈ S k , which lies above v i ∈ S j . With this labelling we have
ker Ξ ′′ = m i=1 lim ← − m lim ← − n≥M H 1 (Gal(K ∞,wi /K n,vi ), E(K ∞,wi )[p m ])
where the inverse limit is taken over n with respect to the corestriction maps and over m with respect to multiplication-by-p.
For any n ≥ M and any i we have Gal(K ∞,wi /K n,vi ) = Γ n , therefore if g is a topological generator of Γ we have H 1 (Gal(K ∞,wi /K n,vi ), E(K ∞,wi )[p m ]) = E(K ∞,wi )[p m ]/(g^{p^n} − 1)E(K ∞,wi )[p m ]. For sufficiently large n we have E(K ∞,wi )[p m ] = E(K n,vi )[p m ], so (g^{p^n} − 1)E(K ∞,wi )[p m ] = {0}, i.e. H 1 (Gal(K ∞,wi /K n,vi ), E(K ∞,wi )[p m ]) = E(K ∞,wi )[p m ]. For such sufficiently large n ′ ≥ n ≥ M one can check that the corestriction map from H 1 (Gal(K ∞,wi /K n ′ ,vi ), E(K ∞,wi )[p m ]) to H 1 (Gal(K ∞,wi /K n,vi ), E(K ∞,wi )[p m ]) is the identity map on E(K ∞,wi )[p m ].
This shows that ker Ξ ′′ = ⊕ m i=1 T p (E(K ∞,wi )) (where T p (E(K ∞,wi )) means the Tate module of E(K ∞,wi )). It follows that ker Ξ ′′ is Λ-torsion which completes the proof.
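The final step uses the general fact that a Λ-module which is finitely generated over Z p is automatically Λ-torsion (a standard argument, spelled out here for convenience):

```latex
% Each T_p(E(K_{\infty,w_i})) is a free \mathbb{Z}_p-module of rank \le 2.
% If M is a \Lambda-module finitely generated over \mathbb{Z}_p with
% rank_\Lambda(M) = r > 0, then for every n the quotient M/\omega_n M
% would have \mathbb{Z}_p-rank at least r \cdot p^n
% (since \Lambda/\omega_n\Lambda has \mathbb{Z}_p-rank p^n),
% contradicting the boundedness of rank_{\mathbb{Z}_p}(M).  Hence r = 0,
% i.e. M is \Lambda-torsion.
```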
Throughout the rest of the section let K be a number field, E an elliptic curve defined over K and p a rational prime such that E has good supersingular reduction at all primes of K above p. Let K ∞ /K be a Z p -extension such that every prime of K above p ramifies. We will assume that (i)
E(K ∞ )[p ∞ ] is finite and (ii) H 2 (G S (K ∞ ), E[p ∞ ]) = {0}.
Finally we let S n and S ∞ be the set of primes of K n and K ∞ above the primes in S, respectively.
The first key result is
Proposition 2.3. For any prime w of K ∞ above p we have H 1 (K ∞,w , E)[p ∞ ] = 0
Proof. Let v be the prime of S below w. Since by assumption v ramifies in K ∞ /K therefore the extension K ∞,w /K v is deeply ramified in the sense of [5]. Therefore the result follows as explained in [9] pg. 70.
Now we need
Proposition 2.4. The map H 1 (G S (K ∞ ), E[p ∞ ]) ψ∞ − − → w∈S∞ H 1 (K ∞,w , E)[p ∞ ] is surjective
Proof. To understand coker ψ ∞ we use the Cassels-Poitou-Tate exact sequence [6]. First for any n and m we define
Sel p m (E/K n ) as 0 −→ Sel p m (E/K n ) −→ H 1 (G S (K n ), E[p m ]) −→ v∈Sn H 1 (K n,v , E)[p m ]
Then for any n ≥ 0 the Cassels-Poitou-Tate exact sequence is
H 1 (G S (K n ), E[p ∞ ]) −ψ n → ⊕ v∈Sn H 1 (K n,v , E)[p ∞ ] → S(E/K n ) * → H 2 (G S (K n ), E[p ∞ ]) where S(E/K n ) = lim ← m Sel p m (E/K n ) ⊆ H 1 (G S (K n )
, T p (E)) (inverse limit with respect to multiplication-by-p).
Taking the direct limit of the above sequence with respect to restriction over n we get
H 1 (G S (K ∞ ), E[p ∞ ]) ψ∞ − − → w∈S∞ H 1 (K ∞,w , E)[p ∞ ] → S(E/K ∞ ) * → H 2 (G S (K ∞ ), E[p ∞ ]) where S(E/K ∞ ) = lim ← − n,m Sel p m (E/K n ) ⊆ lim ← − n H 1 (G S (K n )
, T p (E)) (inverse limit over n with regards to corestriction and over m with regards to multiplication-by-p).
By assumption, we have H 2 (G S (K ∞ ), E[p ∞ ]) = {0} and so we see from the above sequence that coker ψ ∞ is isomorphic to S(E/K ∞ ) * . We will show that coker ψ ∞ = 0 by showing that the Pontryagin dual of coker ψ ∞ is Λ-torsion while
S(E/K ∞ ) is Λ-torsion-free.
First we show that coker ψ ∞ is Λ-cotorsion. We will also prove that it is cofinitely generated over Λ since we will need this fact later. We do this by actually showing that J := ⊕ w∈S∞ H 1 (K ∞,w , E)[p ∞ ] is cofinitely generated cotorsion over Λ. Note that by proposition 2.3 we may (and will) assume that S ∞ contains no primes above p. We will also assume that S ∞ contains no complex archimedean primes since they contribute nothing to the group J. Write S ∞ = T ⊔ T ′ where T is the set of all the primes of K ∞ above those primes of S that do not split completely in K ∞ /K and T ′ is its complement containing all primes of S ∞ that lie above a prime of K that splits completely in
K ∞ /K. Let J T := w∈T H 1 (K ∞,w , E)[p ∞ ] and J T ′ := w∈T ′ H 1 (K ∞,w , E)[p ∞ ] so that J = J T × J T ′ .
For any w ∈ T , by [10] prop. 2, H 1 (K ∞,w , E[p ∞ ]) is a cofinitely generated Z p -module and hence the same is true for H 1 (K ∞,w , E)[p ∞ ] as this group is a quotient of H 1 (K ∞,w , E[p ∞ ]). Therefore J T is a cofinitely generated Z p -module. Now we deal with J T ′ . Let S ′ be the set of primes of K that split completely in K ∞ /K. For any such prime v ∈ S ′ we let
J v := w|v H 1 (K ∞,w , E)[p ∞ ] where the sum runs over all primes of K ∞ above v. Clearly J T ′ = v∈S ′ J v . By Shapiro's lemma, for any v ∈ S ′ , we have J Γ v = H 1 (K v , E)[p ∞ ].
If v is archimedean, then H 1 (K v , E) is finite (see [11] prop. 1.3). If v is non-archimedean, then by Tate duality for abelian varieties over local fields ([16] I-3.4):
H 1 (K v , E) * ∼ = E(K v ) so H 1 (K v , E)[p ∞ ] * ∼ = lim ← E(K v )/p m . By Mattuck's theorem E(K v ) = Z l ^{[K v :Q l ]} × T where l ≠ p is the characteristic of the residue field of K v and T is a finite group. It follows that lim ← E(K v )/p m is the finite p-primary subgroup of E(K v ). So H 1 (K v , E) is finite in the non-archimedean case also. This proves that J v Γ = H 1 (K v , E)[p ∞ ] is finite which shows that J T ′ is cofinitely generated over Λ. Also for any w ∈ T ′ we have H 1 (K ∞,w , E)[p ∞ ] = H 1 (K v , E)[p ∞ ]
where v is the prime of K below w and by what we just showed this is a finite group. All together this shows that J T ′ is a cofinitely generated Λ-module that is annihilated by some power of p.
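The step from Mattuck's theorem to the finiteness of lim ← E(K v )/p m can be made explicit (a routine computation, included for convenience):

```latex
% Mattuck: for a non-archimedean v with residue characteristic l \neq p,
E(K_v) \;\cong\; \mathbb{Z}_l^{\,[K_v:\mathbb{Q}_l]} \times T,
  \qquad T \ \text{finite}.
% Since p \in \mathbb{Z}_l^\times we have \mathbb{Z}_l/p^m\mathbb{Z}_l = 0, hence
E(K_v)/p^m \;\cong\; T/p^m T .
% Writing T = T(p) \times T(p') with T(p) the p-primary part, T(p')/p^m = 0 and
% T(p)/p^m = T(p) for m large, with the transition maps eventually the
% identity; therefore
\varprojlim_m E(K_v)/p^m \;\cong\; T(p),
% the finite p-primary subgroup of E(K_v).
```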
Thus we have shown that J = A × B (decomposition as cofinitely generated Λmodules) where A is a cofinitely generated Z p -module and B is a torsion Z p -module that is annihilated by some power of p. This shows that J * is a finitely generated Λ-torsion module and hence the same is true of coker ψ ∞ . Now we prove that S(E/K ∞ ) is Λ-torsion-free. finite and bounded as m varies therefore Z = 0 (for large enough n Γ n acts trivially on E(K ∞ )[p ∞ ]/p m and hence the norm maps in the inverse limit eventually become multiplication-by-p). So we see that in fact Y ′ injects into Y . By [17] prop 5.5.10 Y is a free Λ-module. This implies that Y ′ is Λ-torsion-free. Now consider the commutative diagram with vertical maps induced by restriction
Y ′ −−−→ lim ← n,m H 1 (G S (K ∞ ), E[p m ]) Γn

S(E/K ∞ ) −−−→ lim ← n,m H 1 (G S (K n ), E[p m ])

with vertical maps Ξ : S(E/K ∞ ) → Y ′ and Ξ ′ : lim ← n,m H 1 (G S (K n ), E[p m ]) → lim ← n,m H 1 (G S (K ∞ ), E[p m ]) Γn going from the second row to the first.
Since Y ′ is Λ-torsion-free, therefore to show that S(E/K ∞ ) is Λ-torsion-free, it will suffice to show that the map Ξ is an injection. From the commutative diagram this will be shown once we show that Ξ ′ is an injection. We have
ker Ξ ′ = lim ← − n lim ← − m H 1 (Γ n , E(K ∞ )[p m ])
Since by assumption E(K ∞ )[p ∞ ] is finite, it follows that for any n ≥ 0 that lim ← − m H 1 (Γ n , E(K ∞ )[p m ]) = 0 and hence ker Ξ ′ = 0. This completes the proof.
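The vanishing of ker Ξ ′ used at the end of the proof can be spelled out: since M := E(K ∞ )[p ∞ ] is finite, the transition maps in the m-direction are eventually zero (a standard argument, recorded here for convenience):

```latex
% Choose t with p^t M = 0, where M = E(K_\infty)[p^\infty].  For m \ge t:
E(K_\infty)[p^m] = M, \quad\text{and the transition map is}\quad
H^1(\Gamma_n, M) \xrightarrow{\;p\;} H^1(\Gamma_n, M).
% Composing t consecutive transition maps gives multiplication by p^t = 0 on
% H^1(\Gamma_n, M); an inverse limit with eventually-zero transition maps
% vanishes, so
\varprojlim_m H^1(\Gamma_n, E(K_\infty)[p^m]) = 0 .
```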
The next 2 lemmas are the most important ingredients in our proof.

Lemma 2.5. We have a pseudo-isomorphism

lim ← n,m X 2 (G S (K ∞ ), E[p m ]) Γn ∼ Ṫ µ (Sel p ∞ (E/K ∞ ) * )
Proof. The exact Kummer sequences
0 → E[p m ] → E −p m → E → 0 (5)
and
0 → E[p m ] → E[p ∞ ] −p m → E[p ∞ ] → 0 (6)

yield a commutative diagram with exact rows

0 → p m ⊕ w∈S∞ H 1 (K ∞,w , E)[p ∞ ] → ⊕ w∈S∞ H 1 (K ∞,w , E)[p ∞ ] → ⊕ w∈S∞ H 2 (K ∞,w , E[p m ])

0 → p m H 1 (G S (K ∞ ), E[p ∞ ]) → H 1 (G S (K ∞ ), E[p ∞ ]) → H 2 (G S (K ∞ ), E[p m ]) → 0

(7)

with vertical maps ψ, ψ ′ and ψ ′′ (one in each column) going from the lower row to the upper row,
where the 0 at the right of the lower sequence is because H 2 (G S (K ∞ ), E[p ∞ ]) = 0. Since ker ψ ′ = Sel p ∞ (E/K ∞ ), ker ψ ′′ = X 2 (G S (K ∞ ), E[p m ]) and ψ is surjective by proposition 2.4, therefore by the snake lemma we get the following exact sequence
0 → ker ψ → Sel p ∞ (E/K ∞ ) → X 2 (G S (K ∞ ), E[p m ]) → 0(8)
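The passage from diagram (7) to the sequence (8) is the snake lemma; for a map of short exact sequences with vertical maps (f ′ , f, f ′′ ) one has in general (standard homological algebra, recorded here since it is used repeatedly below):

```latex
0 \to \ker f' \to \ker f \to \ker f''
  \xrightarrow{\ \delta\ } \operatorname{coker} f'
  \to \operatorname{coker} f \to \operatorname{coker} f'' \to 0 .
% In diagram (7): f' = \psi,\ f = \psi',\ f'' = \psi'', with
% \ker\psi' = \mathrm{Sel}_{p^\infty}(E/K_\infty),
% \ker\psi'' = X^2(G_S(K_\infty), E[p^m]),
% and \operatorname{coker}\psi = 0 since \psi is surjective
% (proposition 2.4), which yields (8).
```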
Now consider the following commutative diagram
0 → ⊕ w∈S∞ H 1 (K ∞,w , E)[p m ] → ⊕ w∈S∞ H 1 (K ∞,w , E)[p ∞ ] −p m → p m ⊕ w∈S∞ H 1 (K ∞,w , E)[p ∞ ]

H 1 (G S (K ∞ ), E[p m ]) → H 1 (G S (K ∞ ), E[p ∞ ]) −p m → p m H 1 (G S (K ∞ ), E[p ∞ ]) → 0

(9)

with vertical maps φ m , ψ ′ and ψ (one in each column) going from the lower row to the upper row.
Since ker ψ ′ = Sel p ∞ (E/K ∞ ) and ψ ′ is surjective by proposition 2.4, therefore by the snake lemma we get an exact sequence
0 → p m Sel p ∞ (E/K ∞ ) → ker ψ → coker φ m → 0(10)
This sequence in turn gives the following exact sequence
0 → coker φ m → Sel p ∞ (E/K ∞ )/p m → Sel p ∞ (E/K ∞ )/ ker ψ → 0(11)
From the sequences (8) and (11) we get the following exact sequence
0 → coker φ m → Sel p ∞ (E/K ∞ )/p m → X 2 (G S (K ∞ ), E[p m ]) → 0(12)
For any n ≥ 0 this sequence induces another exact sequence
0 → (coker φ m ) Γn → (Sel p ∞ (E/K ∞ )/p m ) Γn → X 2 (G S (K ∞ ), E[p m ]) Γn → (coker φ m ) Γn(13)
We claim that each of the terms in this exact sequence is finite. Clearly it will suffice to prove that the second term and fourth term are finite. Since Sel p ∞ (E/K ∞ ) * is a finitely generated Λ-module (see [13] theorem 4.5), the second term is finite; the fourth term is finite because coker φ m is a quotient of J[p m ], where J = ⊕ w∈S∞ H 1 (K ∞,w , E)[p ∞ ] is cofinitely generated over Λ (see the proof of proposition 2.4). Taking inverse limits, the sequence remains exact

0 → lim ← n,m (coker φ m ) Γn → lim ← n,m (Sel p ∞ (E/K ∞ )/p m ) Γn → lim ← n,m X 2 (G S (K ∞ ), E[p m ]) Γn −θ→ lim ← n,m (coker φ m ) Γn (14)
By lemma 2.1(i) the second term in the above sequence is pseudo-isomorphic to Ṫ µ (Sel p ∞ (E/K ∞ ) * ). Therefore to prove the lemma it will suffice to show that the first term and img θ in the above sequence are both finite. Note also that lim ← n,m (coker φ m ) Γn is a finitely generated Z p -module.
We now prove that the group lim ← − n,m
X 2 (G S (K ∞ ), E[p m ]) Γn is a torsion Z p -module.
Taking into account the fact that H 2 (G S (K ∞ ), E[p ∞ ]) = 0, the Kummer sequence (6) gives an isomorphism
H 1 (G S (K ∞ ), E[p ∞ ])/p m ∼ = H 2 (G S (K ∞ ), E[p m ]). Combining the injection X 2 (G S (K ∞ ), E[p m ]) Γn ֒→ H 2 (G S (K ∞ ), E[p m ]) with this isomorphism we get an injection

lim ← n,m X 2 (G S (K ∞ ), E[p m ]) Γn ֒→ lim ← n,m (H 1 (G S (K ∞ ), E[p ∞ ])/p m ) Γn (15)

Lemma 2.1 shows that lim ← n,m (H 1 (G S (K ∞ ), E[p ∞ ])/p m ) Γn is a Z p -torsion module and hence from the injection above the same is true for lim ← n,m X 2 (G S (K ∞ ), E[p m ]) Γn .
Therefore img θ is a Z p -torsion module. It is also a finitely generated Z p -module since lim ← − n,m (coker φ m ) Γn is finitely generated over Z p as shown above. This implies that img θ is finite as claimed.
We now deal with lim ← − n,m (coker φ m ) Γn . We will actually show this group is trivial.
Let J := w∈S∞ H 1 (K ∞,w , E)[p ∞ ].
In the proof of proposition 2.4 we have shown that J = A × B where A is a cofinitely generated Z p -module and B is a torsion Z p -module that is annihilated by p t for some t. For any m ≥ 0, coker φ m is the
quotient of J[p m ] = A[p m ] × B[p m ]. Let α = (α n,m ) ∈ lim ← − n,m (coker φ m ) Γn .
Note that the transition map on the first index is the norm map and on the second index it is multiplication-by-p. For each (n, m) ∈ Z ≥0 × Z ≥1 choose a n,m ∈ A[p m ] and b n,m ∈ B[p m ] such that α n,m is represented by (a n,m , b n,m ). Now let (n, m) ∈ Z ≥0 × Z ≥1 . We will show that α n,m = 0. Recall that p t annihilates B. Consider (a n,m ′ , b n,m ′ ) where m ′ = m + t. We claim that (a n,m ′ , b n,m ′ ) ≡ (0, b ′ ) for some b ′ ∈ B[p m ′ ] (the congruence is modulo img φ m ′ ). To see this, note that since A is cofinitely generated over Z p , A[p m ′ ] is finite and hence is fixed by Γ n ′ for some n ′ ≥ n. Since for any n ′′ > n ′ we have Tr K n ′′ /K n ′ (α n ′′ ,m ′ ) = α n ′ ,m ′ and Γ n ′ acts trivially on A[p m ′ ], by considering large enough n ′′ > n ′ we easily see that (a n ′ ,m ′ , b n ′ ,m ′ ) ≡ (0, b ′′ ) for some b ′′ ∈ B[p m ′ ] and hence (a n,m ′ , b n,m ′ ) ≡ (0, b ′ ) for some b ′ ∈ B[p m ′ ] as claimed.
Now we have that p t α n,m ′ = α n,m . Since (a n,m ′ , b n,m ′ ) ≡ (0, b ′ ) and p t annihilates B, therefore we see that (a n,m , b n,m ) ≡ (0, 0) i.e. α n,m = 0. This proves that α = 0 thus showing that lim ← − n,m (coker φ m ) Γn is trivial. This completes the proof of the lemma.
We now define K n,m to be the kernel of the map (induced by restriction)
H 1 (G S (K ∞ ), E[p m ]) Γn → ⊕ w∈S∞ H 1 (K ∞,w , E[p m ]) Γn

Lemma 2.6. We have a pseudo-isomorphism

lim ← n,m K n,m ∼ Ṫ λ (Sel p ∞ (E/K ∞ ) * )
Proof. The exact Kummer sequences
0 → E[p m ] → E −p m → E → 0 (16)
and
0 → E[p m ] → E[p ∞ ] −p m → E[p ∞ ] → 0 (17)

yield a commutative diagram with exact rows

0 → ⊕ w∈S∞ E(K ∞,w )/p m → ⊕ w∈S∞ H 1 (K ∞,w , E[p m ]) → ⊕ w∈S∞ H 1 (K ∞,w , E)[p m ] → 0

0 → E(K ∞ )[p ∞ ]/p m → H 1 (G S (K ∞ ), E[p m ]) → H 1 (G S (K ∞ ), E[p ∞ ])[p m ] → 0

(18)

with vertical maps φ m , ψ m and ψ ′ m going from the lower (global) row to the upper (local) row. Taking Γ n -coinvariants we get

0 → B n,m → ⊕ w∈S∞ H 1 (K ∞,w , E[p m ]) Γn → ⊕ w∈S∞ H 1 (K ∞,w , E)[p m ] Γn → 0

0 → A n,m → H 1 (G S (K ∞ ), E[p m ]) Γn → H 1 (G S (K ∞ ), E[p ∞ ])[p m ] Γn → 0

(19)

with vertical maps φ n,m , ψ n,m and ψ ′ n,m going from the lower row to the upper row,
where A n,m is the image of the map
(E(K ∞ )[p ∞ ]/p m ) Γn → H 1 (G S (K ∞ ), E[p m ]) Γn

and B n,m is the image of the map

(⊕ w∈S∞ E(K ∞,w )/p m ) Γn → ⊕ w∈S∞ H 1 (K ∞,w , E[p m ]) Γn
Applying the snake lemma to the diagram (19) we get an exact sequence
0 → ker φ n,m → K n,m → ker ψ ′ n,m → coker φ n,m(20)
We claim that each of the terms in this exact sequence is finite. To simplify arguments, we prove this fact subject to the condition that n ≥ N, where N ≥ 0 is an integer such that K ∞ /K N is totally ramified at all primes of K N above p and such that every prime of K N that does not split completely in K ∞ /K N is inert in this extension. Clearly it will suffice to prove that the first, third and fourth terms are finite. In what follows all inverse limits with indices involving n will be taken over n ≥ N.

Since E(K ∞ )[p ∞ ]/p m is finite, ker φ n,m is finite. Moreover, the order of E(K ∞ )[p ∞ ]/p m is bounded as m varies. Hence the order of (E(K ∞ )[p ∞ ]/p m ) Γn is bounded as n and m vary. This in turn shows that the order of ker φ n,m is bounded as n and m vary. It follows that lim ← n,m ker φ n,m is finite. This last fact will be needed later.

Now we show that coker φ n,m is finite by showing that D(n, m) := (⊕ w∈S∞ E(K ∞,w )/p m ) Γn is finite. We will also show that lim ← n,m coker φ n,m is finite as we will need this later. We write S ∞ = S p ⊔ S split ⊔ S nsplit where S p is the set of primes of S ∞ above p, S split is the set of primes in S ∞ above those in S that split completely in K ∞ /K, and S nsplit is the set of the remaining primes.

First we show that D p (n, m) := (⊕ w∈Sp E(K ∞,w )/p m ) Γn is finite. Let w ∈ S p . Note that by the condition on n, Γ n acts on E(K ∞,w )/p m . We let Tor(E(K ∞,w )) be the Z-torsion subgroup of E(K ∞,w ) and E(K ∞,w ) Tor := E(K ∞,w )/Tor(E(K ∞,w )). We have an exact sequence

((Tor(E(K ∞,w )))/p m ) Γn → (E(K ∞,w )/p m ) Γn → (E(K ∞,w ) Tor /p m ) Γn → 0 (21)

Note that the Pontryagin dual of E(K ∞,w )[p ∞ ] is a finitely generated Z p [[Γ N ]]-module. Since Tor(E(K ∞,w ))/p m = E(K ∞,w )[p ∞ ]/p m , by considering Pontryagin duals it follows that ((Tor(E(K ∞,w )))/p m ) Γn is finite. Also from lemma 2.1 it follows that lim ← n,m ((Tor(E(K ∞,w )))/p m ) Γn is finite.
Now we turn to the group (E(K ∞,w ) Tor /p m ) Γn . Since E(K ∞,w ) Tor is torsionfree, therefore it follows that (E(K ∞,w ) Tor ⊗ Q p /Z p )[p m ] = E(K ∞,w ) Tor /p m . Also note that E(K ∞,w ) Tor ⊗ Q p /Z p = E(K ∞,w ) ⊗ Q p /Z p . These 2 facts show that
((E(K ∞,w ) ⊗ Q p /Z p )[p m ]) Γn = (E(K ∞,w ) Tor /p m ) Γn(22)
Wingberg ([24] theorem 2.2) has shown that (E(K ∞,w ) ⊗ Q p /Z p ) * is pseudo-isomorphic to a finitely generated free Z p [[Γ N ]]-module. This together with the equality (22) show that (E(K ∞,w ) Tor /p m ) Γn is finite. Using lemma 2.1 we also see that lim ← n,m (E(K ∞,w ) Tor /p m ) Γn is finite. From the facts above and the exact sequence (21) it follows that D p (n, m) is finite and that lim ← n,m D p (n, m) is finite.

Let w ∈ S nsplit . We claim that E(K ∞,w ) ⊗ Q p /Z p = 0. To see this, it will suffice to show for any t ≥ 0 that E(K t,w ) ⊗ Q p /Z p = 0. By Mattuck's theorem E(K t,w ) = Z l ^{[K t,w :Q l ]} × T where l ≠ p is the characteristic of the residue field of K t,w and T is a finite group. It follows from this that E(K t,w ) ⊗ Q p /Z p = 0 and hence E(K ∞,w ) ⊗ Q p /Z p = 0 as claimed. Then just as in the case of D p (n, m), we also have D nsplit (n, m) is finite and lim ← n,m D nsplit (n, m) is also finite.
Finally we turn to the group D split (n, m) := w∈S split E(K ∞,w )/p m Γn . Let v ∈ S be a prime that splits completely in K ∞ /K and define C v := w|v E(K ∞,w )/p m where the sum runs over all primes of K ∞ lying over v. We claim that H 1 (Γ n , C v ) = (C v ) Γn = 0. To simplify matters, we prove this for n = 0 (for arbitrary n the proof is similar). The group C v is a direct limit of induced Γ-modules and hence by Shapiro's lemma it follows that H 1 (Γ,
C v ) = H 1 ({1}, E(K v )/p m ) = 0. Thus we see that D split (n, m) = 0.
All in all, we see from the above for any n ≥ N, m ≥ 1 that D(n, m) is finite and that lim ← − n,m D(n, m) is also finite. It follows that coker φ n,m is finite and that lim ← − n,m coker φ n,m is also finite.
Finally we prove that ker ψ ′ n,m is finite.
Since ker ψ ′ n,m ⊆ H 1 (G S (K ∞ ), E[p ∞ ])[p m ] Γn , by taking Pontryagin duals we easily see that it suffices to prove that H 1 (G S (K ∞ ), E[p ∞ ]) * is a finitely generated Λ-module. To show this last fact we have to show that H 1 (G S (K ∞ ), E[p ∞ ]) Γ is cofinitely generated over Z p . Since cd p (Γ) = 1, we have a surjection H 1 (G S (K), E[p ∞ ]) ։ H 1 (G S (K ∞ ), E[p ∞ ]) Γ so it suffices to prove that H 1 (G S (K), E[p ∞ ]) is cofinitely generated over Z p , i.e. we must show that H 1 (G S (K), E[p ∞ ])[p] is finite. But H 1 (G S (K), E[p]) is finite (see [17] theorem 8.3.20) and this group surjects onto H 1 (G S (K), E[p ∞ ])[p], so it indeed follows that H 1 (G S (K), E[p ∞ ]) is cofinitely generated over Z p . This proves that ker ψ ′ n,m is finite.
We have now shown that each of the terms in the exact sequence (20) is finite, and so by taking inverse limits the sequence remains exact:
\[
0 \to \varprojlim_{n,m}\ker\varphi_{n,m} \to \varprojlim_{n,m}K_{n,m} \to \varprojlim_{n,m}\ker\psi'_{n,m} \to \varprojlim_{n,m}\operatorname{coker}\varphi_{n,m} \tag{23}
\]
\[
0 \to \mathrm{Sel}_{p^\infty}(E/K_\infty) \to H^1(G_S(K_\infty), E[p^\infty]) \xrightarrow{\;\psi'\;} \bigoplus_{w\in S_\infty} H^1(K_{\infty,w}, E) \to 0 \tag{24}
\]
The surjectivity of ψ ′ is due to proposition 2.4. This exact sequence induces another sequence
\[
0 \to \mathrm{Sel}_{p^\infty}(E/K_\infty)[p^m] \to H^1(G_S(K_\infty), E[p^\infty])[p^m] \xrightarrow{\;\psi'_m\;} \bigoplus_{w\in S_\infty} H^1(K_{\infty,w}, E)[p^m] \xrightarrow{\;\psi''_m\;} \mathrm{Sel}_{p^\infty}(E/K_\infty)/p^m \tag{25}
\]
We break this sequence into two exact sequences:
\[
0 \to \mathrm{Sel}_{p^\infty}(E/K_\infty)[p^m] \to H^1(G_S(K_\infty), E[p^\infty])[p^m] \xrightarrow{\;\varphi_m\;} \operatorname{img}\psi'_m \to 0 \tag{26}
\]
\[
0 \to \operatorname{img}\psi'_m \xrightarrow{\;\theta_m\;} \bigoplus_{w\in S_\infty} H^1(K_{\infty,w}, E)[p^m] \to \operatorname{img}\psi''_m \to 0 \tag{27}
\]
Taking $\Gamma_n$-coinvariants of both these sequences, we get
\[
(\operatorname{img}\psi'_m)^{\Gamma_n} \to \mathrm{Sel}_{p^\infty}(E/K_\infty)[p^m]_{\Gamma_n} \to H^1(G_S(K_\infty), E[p^\infty])[p^m]_{\Gamma_n} \xrightarrow{\;\varphi_{n,m}\;} (\operatorname{img}\psi'_m)_{\Gamma_n} \to 0 \tag{28}
\]
\[
(\operatorname{img}\psi''_m)^{\Gamma_n} \to (\operatorname{img}\psi'_m)_{\Gamma_n} \xrightarrow{\;\theta_{n,m}\;} \bigoplus_{w\in S_\infty} H^1(K_{\infty,w}, E)[p^m]_{\Gamma_n} \tag{29}
\]
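The shape of (28) and (29) is that of the $\Gamma_n$-homology long exact sequence. Since $\Gamma_n \cong \mathbb{Z}_p$ is procyclic, the sequence truncates after one correction term; schematically, for a short exact sequence $0 \to A \to B \to C \to 0$ of discrete $\Gamma_n$-modules (a sketch of the general pattern):

```latex
% For Γ_n ≅ Z_p topologically generated by γ, homology is computed by
% the two-term complex  M --(γ-1)--> M :
%   H_0(Γ_n, M) = M_{Γ_n} = coker(γ-1)   (coinvariants),
%   H_1(Γ_n, M) = M^{Γ_n} = ker(γ-1)     (invariants),
% and H_i(Γ_n, M) = 0 for i >= 2.  The long exact sequence therefore
% truncates to the four-term sequence
\[
C^{\Gamma_n} \longrightarrow A_{\Gamma_n} \longrightarrow B_{\Gamma_n}
  \longrightarrow C_{\Gamma_n} \longrightarrow 0 ,
\]
% which is the pattern of (28) (applied to (26)) and (29) (applied to (27)).
```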
We claim that each of the terms in the sequences (28) and (29) is finite. First we deal with (28). Let $J := \bigoplus_{w\in S_\infty} H^1(K_{\infty,w}, E)[p^\infty]$. Then $(\operatorname{img}\psi'_m)^{\Gamma_n} \subseteq J[p^m]^{\Gamma_n}$. In the proof of proposition 2.4 we proved that $J$ is a cofinitely generated $\Lambda$-module. It follows from this that $J[p^m]^{\Gamma_n}$ is finite, and hence $(\operatorname{img}\psi'_m)^{\Gamma_n}$ is also finite. We showed above that $H^1(G_S(K_\infty), E[p^\infty])$ is a cofinitely generated $\Lambda$-module. It follows that $H^1(G_S(K_\infty), E[p^\infty])[p^m]_{\Gamma_n}$ is finite. This proves that all the terms in the sequence (28) are finite.

Now we deal with the sequence (29). We know that $\mathrm{Sel}_{p^\infty}(E/K_\infty)^*$ is a finitely generated $\Lambda$-module (see [13], theorem 4.5), so it follows that $(\mathrm{Sel}_{p^\infty}(E/K_\infty)/p^m)^{\Gamma_n}$ is finite. Since $(\operatorname{img}\psi''_m)^{\Gamma_n} \subseteq (\mathrm{Sel}_{p^\infty}(E/K_\infty)/p^m)^{\Gamma_n}$, it follows that $(\operatorname{img}\psi''_m)^{\Gamma_n}$ is also finite. Also, since $J$ is a cofinitely generated $\Lambda$-module, it follows that $J[p^m]_{\Gamma_n} = \bigoplus_{w\in S_\infty} H^1(K_{\infty,w}, E)[p^m]_{\Gamma_n}$ is finite. Therefore all the terms in the sequence (29) are finite.
As all the terms in the sequences (28) and (29) are finite, by taking inverse limits the sequences remain exact:
\[
\varprojlim_{n,m}(\operatorname{img}\psi'_m)^{\Gamma_n} \to \varprojlim_{n,m}\mathrm{Sel}_{p^\infty}(E/K_\infty)[p^m]_{\Gamma_n} \to \varprojlim_{n,m}H^1(G_S(K_\infty), E[p^\infty])[p^m]_{\Gamma_n} \xrightarrow{\;\Phi\;} \varprojlim_{n,m}(\operatorname{img}\psi'_m)_{\Gamma_n} \to 0 \tag{30}
\]
\[
\varprojlim_{n,m}(\operatorname{img}\psi''_m)^{\Gamma_n} \to \varprojlim_{n,m}(\operatorname{img}\psi'_m)_{\Gamma_n} \xrightarrow{\;\Theta\;} \varprojlim_{n,m}\bigoplus_{w\in S_\infty}H^1(K_{\infty,w}, E)[p^m]_{\Gamma_n} \tag{31}
\]
We have an exact sequence
\[
0 \to \ker(\Phi) \to \ker(\Theta\circ\Phi) \xrightarrow{\;\Phi\;} \ker(\Theta) \tag{32}
\]
Therefore, from the exact sequence (30), to show the existence of a pseudo-isomorphism $\ker\Phi \sim \dot{T}_\lambda(\mathrm{Sel}_{p^\infty}(E/K_\infty)^*)$, we see that it will suffice to show that $\varprojlim_{n,m}(\operatorname{img}\psi'_m)^{\Gamma_n} = 0$. Let $J := \bigoplus_{w\in S_\infty} H^1(K_{\infty,w}, E)[p^\infty]$.
Therefore, to show that $\Phi(\ker(\Theta\circ\Phi))$ is finitely generated over $\mathbb{Z}_p$, it will suffice to show that $\varprojlim_{n,m} H^1(G_S(K_\infty), E[p^m])_{\Gamma_n}$ is finitely generated over $\mathbb{Z}_p$.

One checks, using the definitions of $f_{n,m}$ and $g_{n,m}$ (see [2], prop. XV-5.7), that these morphisms commute with the induced maps on the terms in the diagram (34), and that these induced maps are in fact corestriction. This proves that the diagram (34) commutes, and hence that the first diagram in the statement of the lemma commutes.
To show that the second diagram in the statement of the lemma commutes, one argues in a similar fashion, using the map $p^{m'-m} : D(G_S(K_n), \Gamma_n, E[p^{m'}]) \to D(G_S(K_n), \Gamma_n, E[p^m])$ induced by multiplication by $p^{m'-m}$. Alternatively, to show that the two diagrams commute, one can use the explicit descriptions of $f_{n,m}$ and $g_{n,m}$: the map $g_{n,m}$ is the restriction map (see [14], prop. XI-10.2), whereas the map $f_{n,m}^{-1} : \operatorname{img} f_{n,m} \to H^1(G_S(K_\infty), E[p^m])_{\Gamma_n}$ is described in [8] (see also [12], theorem 2).

Now let $v$ be a prime of $K_n$, $w$ a prime of $K_\infty$ above $v$, and $\Gamma_{n,w}$ the decomposition group of $w$ in $K_\infty/K$. We have a Hochschild-Serre spectral sequence $H^s(\Gamma_{n,w}, H^t(K_{\infty,w}, E[p^m])) \Rightarrow H^{s+t}(K_{n,v}, E[p^m])$. We certainly have $\mathrm{cd}_p(\Gamma_{n,w}) = 1$, so as in the global case we get an exact sequence
First consider the groups $Y' := \varprojlim_{n,m} \mathrm{Sel}_{p^m}(E/K_\infty)_{\Gamma_n}$ and $Y := \varprojlim_{n,m} \mathrm{Sel}_{p^\infty}(E/K_\infty)[p^m]_{\Gamma_n}$. We claim that $Y'$ injects into $Y$. To see this, note that the natural map from $\mathrm{Sel}_{p^m}(E/K_\infty)$ to $\mathrm{Sel}_{p^\infty}(E/K_\infty)[p^m]$ has kernel $E(K_\infty)[p^\infty]/p^m$. This map induces a map from $Y'$ to $Y$ with kernel $Z := \varprojlim_{n,m}(E(K_\infty)[p^\infty]/p^m)_{\Gamma_n}$. Since the groups $E(K_\infty)[p^\infty]/p^m$ are
First we deal with $\operatorname{img}\theta$. Consider the group $J := \bigoplus_{w\in S_\infty} H^1(K_{\infty,w}, E)[p^\infty]$. Then $\varprojlim_{n,m}(J[p^m])_{\Gamma_n} \sim \dot{T}_\lambda(J^*)$. Hence $\varprojlim_{n,m}(J[p^m])_{\Gamma_n}$ is a finitely generated $\mathbb{Z}_p$-module.
Thus we have shown that, for any $n \ge N$ and $m \ge 1$, $D_p(n,m)$ is finite and $\varprojlim_{n,m} D_p(n,m)$ is also finite. Now we turn to the group $D_{\mathrm{nsplit}}(n,m) := \big(\bigoplus_{w\in S_{\mathrm{nsplit}}} E(K_{\infty,w})/p^m\big)_{\Gamma_n}$.
\[
\begin{array}{ccccc}
H^1(\Gamma_{n'}, H^1(G_S(K_\infty), E[p^m])) & \xrightarrow{\;f_{n',m}\;} & H^2(G_S(K_{n'}), E[p^m]) & \xrightarrow{\;g_{n',m}\;} & H^2(G_S(K_\infty), E[p^m])^{\Gamma_{n'}} \\
\downarrow{\scriptstyle\mathrm{cor}} & & \downarrow{\scriptstyle\mathrm{cor}} & & \downarrow{\scriptstyle\mathrm{cor}} \\
H^1(\Gamma_{n}, H^1(G_S(K_\infty), E[p^m])) & \xrightarrow{\;f_{n,m}\;} & H^2(G_S(K_{n}), E[p^m]) & \xrightarrow{\;g_{n,m}\;} & H^2(G_S(K_\infty), E[p^m])^{\Gamma_{n}}
\end{array} \tag{34}
\]

As in the proof of [17], theorem 2.4.1, the data $(G_S(K_{n'}), \Gamma_{n'}, E[p^m])$ determines a double complex $D(G_S(K_{n'}), \Gamma_{n'}, E[p^m])$. Similarly, we have a double complex $D(G_S(K_n), \Gamma_n, E[p^m])$. By taking the column-wise filtrations of the total complexes of these double complexes, we obtain two Hochschild-Serre spectral sequences $SS(G_S(K_{n'}), \Gamma_{n'}, E[p^m])$ and $SS(G_S(K_n), \Gamma_n, E[p^m])$ (see [15], theorem 2.15, and [17], theorem 2.4.1). The obvious corestriction map on cochains induces a morphism of double complexes $\mathrm{cor} : D(G_S(K_{n'}), \Gamma_{n'}, E[p^m]) \to D(G_S(K_n), \Gamma_n, E[p^m])$ which, in turn, induces a morphism of spectral sequences $\mathrm{cor} : SS(G_S(K_{n'}), \Gamma_{n'}, E[p^m]) \to SS(G_S(K_n), \Gamma_n, E[p^m])$ and of their corresponding limit terms.

\[
0 \to H^1(K_{\infty,w}, E[p^m])_{\Gamma_{n,w}} \xrightarrow{\;f_{w,n,m}\;} H^2(K_{n,v}, E[p^m]) \xrightarrow{\;g_{w,n,m}\;} H^2(K_{\infty,w}, E[p^m])^{\Gamma_{n,w}} \to 0 \tag{35}
\]

By Shapiro's lemma, $H^1(K_{\infty,w}, E[p^m])_{\Gamma_{n,w}} = \big(\bigoplus_{w\mid v} H^1(K_{\infty,w}, E[p^m])\big)_{\Gamma_n}$ and $H^2(K_{\infty,w}, E[p^m])^{\Gamma_{n,w}} = \big(\bigoplus_{w\mid v} H^2(K_{\infty,w}, E[p^m])\big)^{\Gamma_n}$, where the direct sum runs over all primes $w$ of $K_\infty$ dividing $v$. Taking direct sums, we get a commutative diagram comparing the global sequence with the direct sum over $w \in S_\infty$ of the local sequences (35); the maps in the diagram are induced by restriction. This diagram commutes: to see this, one argues as in the proof of lemma 2.7, taking the restriction map of the appropriate double complexes. Applying the snake lemma, we get an exact sequence.
Note that $\ker(\Theta\circ\Phi) = \varprojlim_{n,m} \ker\psi'_{n,m}$. Therefore, from the exact sequence, we see that to prove the lemma we only have to show that $\ker\Phi$ is pseudo-isomorphic to $\dot{T}_\lambda(\mathrm{Sel}_{p^\infty}(E/K_\infty)^*)$ and that $\Phi(\ker(\Theta\circ\Phi))$ is finite.

First we deal with $\ker\Phi$. From lemma 2.1, $\varprojlim_{n,m} \mathrm{Sel}_{p^\infty}(E/K_\infty)[p^m]_{\Gamma_n}$ is pseudo-isomorphic to $\dot{T}_\lambda(\mathrm{Sel}_{p^\infty}(E/K_\infty)^*)$. Then
\[
\varprojlim_{n,m}(\operatorname{img}\psi'_m)^{\Gamma_n} \;\subseteq\; \varprojlim_{n,m} J[p^m]^{\Gamma_n} .
\]
According to [17], prop. 5.5.10, $\varprojlim_{n,m} J[p^m]^{\Gamma_n}$ is a free $\Lambda$-module with the same rank as $J^*$. In the proof of proposition 2.4 we showed that $J^*$ is a torsion $\Lambda$-module. Therefore it follows that $\varprojlim_{n,m} J[p^m]^{\Gamma_n} = 0$, and hence also $\varprojlim_{n,m}(\operatorname{img}\psi'_m)^{\Gamma_n} = 0$. This proves that $\ker\Phi$ is pseudo-isomorphic to $\dot{T}_\lambda(\mathrm{Sel}_{p^\infty}(E/K_\infty)^*)$.

Now we show that $\Phi(\ker(\Theta\circ\Phi))$ is finite. First we show that this group is finitely generated over $\mathbb{Z}_p$. Recall that we showed above that $\ker(\Theta\circ\Phi) = \varprojlim_{n,m} \ker\psi'_{n,m}$, and that $\varprojlim_{n,m} \ker\psi'_{n,m}$ is pseudo-isomorphic to $\varprojlim_{n,m} K_{n,m}$. We have $\varprojlim_{n,m} K_{n,m} \subseteq \varprojlim_{n,m} H^1(G_S(K_\infty), E[p^m])_{\Gamma_n}$, so it suffices to show that the latter group is finitely generated over $\mathbb{Z}_p$. Consider the bottom row of the commutative diagram (19):
\[
(E(K_\infty)[p^\infty]/p^m)_{\Gamma_n} \to H^1(G_S(K_\infty), E[p^m])_{\Gamma_n} \to H^1(G_S(K_\infty), E[p^\infty])[p^m]_{\Gamma_n} \to 0 .
\]
We showed above that all the terms in the above exact sequence are finite. Therefore, by taking inverse limits, the sequence remains exact:
\[
\varprojlim_{n,m}(E(K_\infty)[p^\infty]/p^m)_{\Gamma_n} \to \varprojlim_{n,m} H^1(G_S(K_\infty), E[p^m])_{\Gamma_n} \to \varprojlim_{n,m} H^1(G_S(K_\infty), E[p^\infty])[p^m]_{\Gamma_n} \to 0 .
\]
Applying lemma 2.1 to the first and last terms of the above sequence, it follows that $\varprojlim_{n,m} H^1(G_S(K_\infty), E[p^m])_{\Gamma_n}$ is in fact finitely generated over $\mathbb{Z}_p$, as desired.

Now we show that $\Phi(\ker(\Theta\circ\Phi))$ is a $\mathbb{Z}_p$-torsion module. From the exact sequence (32), this will follow if we can prove that $\ker\Theta$ is a torsion $\mathbb{Z}_p$-module. From the exact sequence (31), this will follow once we show that $\varprojlim_{n,m}(\operatorname{img}\psi''_m)^{\Gamma_n}$ is a torsion $\mathbb{Z}_p$-module.

Now let $n, m \ge 0$. Since $\mathrm{cd}_p(\Gamma_n) = 1$, we can apply the above result to the Hochschild-Serre spectral sequence $H^s(\Gamma_n, H^t(G_S(K_\infty), E[p^m])) \Rightarrow H^{s+t}(G_S(K_n), E[p^m])$.

Lemma 2.7. For $n' > n$ we have a commutative diagram in which the maps $\mathrm{can}$, $\mathrm{cor}$ and $\mathrm{norm}$ are the canonical projection, corestriction and norm, respectively. Also, for $m' > m$, we have a commutative diagram whose vertical maps are induced by multiplication by $p^{m'-m}$.

Proof. From the formula for the corestriction map ([25], prop. 2.5.2), it is easy to show that, under the identification $H^1(\Gamma_n, M) \cong M_{\Gamma_n}$, the corestriction map $\mathrm{cor} : H^1(\Gamma_{n'}, M) \to H^1(\Gamma_n, M)$ corresponds to the canonical projection $\mathrm{can} : M_{\Gamma_{n'}} \to M_{\Gamma_n}$. Also, we know that the corestriction map $\mathrm{cor} : M^{\Gamma_{n'}} \to M^{\Gamma_n}$ is equal to the norm map. Therefore we see that, to show that the first diagram commutes, it suffices to show the commutativity of the diagram (34).

\[
0 \to \ker\psi_{n,m} \to \ker\psi'_{n,m} \to \ker\psi''_{n,m} \to \operatorname{coker}\psi_{n,m}
\]
Lemma 2.7 and its local analog allow us to take inverse limits of this exact sequence. By [17], this is finite, and therefore $\ker\psi'_{n,m}$ is also finite. Therefore, by taking inverse limits, the sequence remains exact. This, in turn, induces another exact sequence.

In the proof of proposition 2.4 we showed that $J^*$ is a finitely generated torsion $\Lambda$-module. Hence it follows that $J[p^m]^{\Gamma_n}$ is finite. Also, in lemma 2.6 we showed that $\big(\bigoplus_{w\in S_\infty} E(K_{\infty,w})/p^m\big)_{\Gamma_n}$ is finite. Therefore it follows from the exact sequence that these groups are finite. Since the groups in the exact sequence (38) are finite, by taking inverse limits the sequence remains exact. Since $J^*$ is a finitely generated $\Lambda$-module, it follows from lemma 2.1 that the corresponding inverse limit is a finitely generated $\mathbb{Z}_p$-module, and so the same is true for $\operatorname{coker}(\varphi\circ\psi)$. Also, in the proof of lemma 2.6 we showed that $\varprojlim_{n,m}\big(\bigoplus_{w\in S_\infty} E(K_{\infty,w})/p^m\big)_{\Gamma_n}$ is finite. Therefore, from the above exact sequence, $\varprojlim_{n,m}\operatorname{coker}\psi_{n,m} = \operatorname{coker}\psi$ is a finitely generated $\mathbb{Z}_p$-module. It follows that $\operatorname{img}\theta$ is finite since, as we showed above, it is a torsion $\mathbb{Z}_p$-module. It follows from this, the exact sequence (37) and the observations noted after this sequence that we have an exact sequence.

Acknowledgments. The author would like to thank Kęstutis Česnavičius for many helpful correspondences about some of the arguments in the introduction. The author would also like to thank Robert Pollack for his helpful comments about this paper.
[1] P. Billot, Quelques aspects de la descente sur une courbe elliptique dans le cas de reduction supersinguliere, Compos. Math. 58 (1986), 341-369.
[2] H. Cartan, S. Eilenberg, Homological Algebra, Princeton Math. Ser. 19, Princeton, 1956.
[3] K. Česnavičius, Selmer groups as flat cohomology groups, J. Ramanujan Math. Soc. 31 (2016), no. 1, 31-61.
[4] K. Česnavičius, Poitou-Tate without restrictions on the order, Math. Res. Lett. 22 (2015), no. 6, 1621-1666.
[5] J. Coates, R. Greenberg, Kummer theory for abelian varieties over local fields, Invent. Math. 124 (1996), 129-174.
[6] J. Coates, R. Sujatha, Galois Cohomology of Elliptic Curves, Tata Inst. Fund. Res. Lecture Notes, Narosa Publishing House, 2000.
[7] C. Demarche, D. Harari, Artin-Mazur-Milne duality for fppf cohomology, https://arxiv.org/abs/1804.03941
[8] K. Dekimpe, M. Hartl, S. Wauters, A seven-term exact sequence for the cohomology of a group extension, J. Algebra 369 (2012), 70-95.
[9] R. Greenberg, Iwasawa theory for elliptic curves, Lecture Notes in Math. 1716, Springer, New York, 1999, pp. 51-144.
[10] R. Greenberg, Iwasawa theory for p-adic representations, in Algebraic Number Theory, Adv. Stud. Pure Math. 17, Academic Press, Boston, MA, 1989, 97-137.
[11] B. Gross, J. Harris, Real algebraic curves, Ann. Sci. École Norm. Sup. (4) 14 (1981), 157-182.
[12] J. Huebschmann, Exact sequences in the cohomology of a group extension, J. Algebra 444 (2015), 297-312.
[13] Y. I. Manin, Cyclotomic fields and modular curves, Russian Math. Surveys 26 (1971), no. 6, 7-78.
[14] S. MacLane, Homology, Springer, 1967.
[15] J. McCleary, A User's Guide to Spectral Sequences, second edition, Cambridge Studies in Advanced Mathematics 58, Cambridge University Press, Cambridge, 2001.
[16] J. S. Milne, Arithmetic Duality Theorems, second edition, BookSurge, LLC, Charleston, SC, 2006.
[17] J. Neukirch, A. Schmidt, K. Wingberg, Cohomology of Number Fields, second edition, Grundlehren der Mathematischen Wissenschaften 323, Springer, 2008, xvi+825.
[18] B. Perrin-Riou, Fonctions L p-adiques des représentations p-adiques, Astérisque 229 (1995), 198 pp.
[19] K. Ribet, Torsion points of abelian varieties in cyclotomic extensions, appendix to N. Katz and S. Lang, Finiteness theorems in geometric class field theory, Enseign. Math. 27 (1981), 285-319.
[20] P. Schneider, Iwasawa L-functions of varieties over algebraic number fields. A first approach, Invent. Math. 71 (1983), 251-293.
[21] J.-P. Serre, Propriétés galoisiennes des points d'ordre fini des courbes elliptiques, Invent. Math. 15 (1972), no. 4, 259-331.
[22] J. Silverman, The Arithmetic of Elliptic Curves, Grad. Texts in Math. 106, Springer-Verlag, 1986.
[23] K. Wingberg, Duality theorems for abelian varieties over $\mathbb{Z}_p$-extensions, in Algebraic Number Theory, Adv. Stud. Pure Math. 17, Academic Press, Boston, MA, 1989, 471-492.
[24] K. Wingberg, On the rational points of abelian varieties over $\mathbb{Z}_p$-extensions of number fields, Math. Ann. 279 (1987), 9-24.
[25] E. Weiss, Cohomology of Groups, Academic Press, 1969.
"M Medici \nNiels Bohr Institute\nUniversity of Copenhagen\n2100CopenhagenDenmark\n",
"A Meli \nDepartment of Physics and Astronomy\nUniversity of Gent\n9000GhentBelgium\n",
"T Meures \nScience Faculty CP230\nUniversité Libre de Bruxelles\n1050BrusselsBelgium\n",
"S Miarecki \nDepartment of Physics\nUniversity of California\n94720BerkeleyCAUSA\n\nLawrence Berkeley National Laboratory\n94720BerkeleyCAUSA\n",
"E Middell \nDESY\n15735ZeuthenGermany\n",
"E Middlemas \nDepartment of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"N Milke \nDepartment of Physics\nTU Dortmund University\n44221DortmundGermany\n",
"J Miller \nDienst ELEM\nVrije Universiteit Brussel\n1050BrusselsBelgium\n",
"L Mohrmann \nDESY\n15735ZeuthenGermany\n",
"T Montaruli \nDépartement de physique nucléaire et corpusculaire\nUniversité de Genève\n1211GenevaSwitzerland\n",
"R Morse \nDepartment of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"R Nahnhauer \nDESY\n15735ZeuthenGermany\n",
"U Naumann \nDepartment of Physics\nUniversity of Wuppertal\n42119WuppertalGermany\n",
"H Niederhausen \nDepartment of Physics and Astronomy\nStony Brook University\n11794-3800Stony BrookNYUSA\n",
"S C Nowicki \nDepartment of Physics\nUniversity of Alberta\nT6G 2E1EdmontonABCanada\n",
"D R Nygren \nLawrence Berkeley National Laboratory\n94720BerkeleyCAUSA\n",
"A Obertacke \nDepartment of Physics\nUniversity of Wuppertal\n42119WuppertalGermany\n",
"S Odrowski \nDepartment of Physics\nUniversity of Alberta\nT6G 2E1EdmontonABCanada\n",
"A Olivas \nDepartment of Physics\nUniversity of Maryland\n20742College ParkMDUSA\n",
"A Omairat \nDepartment of Physics\nUniversity of Wuppertal\n42119WuppertalGermany\n",
"A O'Murchadha \nScience Faculty CP230\nUniversité Libre de Bruxelles\n1050BrusselsBelgium\n",
"T Palczewski \nDepartment of Physics and Astronomy\nUniversity of Alabama\n35487TuscaloosaALUSA\n",
"L Paul \nIII. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany\n",
"Ö Penek \nIII. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany\n",
"J A Pepper \nDepartment of Physics and Astronomy\nUniversity of Alabama\n35487TuscaloosaALUSA\n",
"C Pérez De Los Heros \nDepartment of Physics and Astronomy\nUppsala University\nBox 51675120UppsalaSweden\n",
"C Pfendner \nDepartment of Physics and Center for Cosmology and Astro-Particle Physics\nOhio State University\n43210ColumbusOHUSA\n",
"D Pieloth \nDepartment of Physics\nTU Dortmund University\n44221DortmundGermany\n",
"E Pinat \nScience Faculty CP230\nUniversité Libre de Bruxelles\n1050BrusselsBelgium\n",
"J Posselt \nDepartment of Physics\nUniversity of Wuppertal\n42119WuppertalGermany\n",
"P B Price \nDepartment of Physics\nUniversity of California\n94720BerkeleyCAUSA\n",
"G T Przybylski \nLawrence Berkeley National Laboratory\n94720BerkeleyCAUSA\n",
"J Pütz \nIII. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany\n",
"M Quinnan \nDepartment of Physics\nPennsylvania State University\n16802University ParkPAUSA\n",
"L Rädel \nIII. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany\n",
"M Rameez \nDépartement de physique nucléaire et corpusculaire\nUniversité de Genève\n1211GenevaSwitzerland\n",
"K Rawlins \nDepartment of Physics and Astronomy\nUniversity of Alaska Anchorage\n3211 Providence Dr99508AnchorageAKUSA\n",
"P Redl \nDepartment of Physics\nUniversity of Maryland\n20742College ParkMDUSA\n",
"I Rees \nDepartment of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"R Reimann \nIII. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany\n",
"M Relich \nDepartment of Physics\nChiba University\n263-8522ChibaJapan\n",
"E Resconi \nTechnische Universität München\n85748GarchingGermany\n",
"W Rhode \nDepartment of Physics\nTU Dortmund University\n44221DortmundGermany\n",
"M Richman \nDepartment of Physics\nUniversity of Maryland\n20742College ParkMDUSA\n",
"B Riedel \nDepartment of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"S Robertson \nSchool of Chemistry and Physics\nUniversity of Adelaide\n5005AdelaideSAAustralia\n",
"J P Rodrigues \nDepartment of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"M Rongen \nIII. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany\n",
"C Rott \nDepartment of Physics\nSungkyunkwan University\n440-746SuwonKorea\n",
"T Ruhe [email protected] \nDepartment of Physics\nTU Dortmund University\n44221DortmundGermany\n",
"B Ruzybayev \nDepartment of Physics and Astronomy\nBartol Research Institute\nUniversity of Delaware\n19716NewarkDEUSA\n",
"D Ryckbosch \nDepartment of Physics and Astronomy\nUniversity of Gent\n9000GhentBelgium\n",
"S M Saba \nFakultät für Physik & Astronomie\nRuhr-Universität Bochum\n44780BochumGermany\n",
"H.-G Sander \nInstitute of Physics\nUniversity of Mainz\nStaudinger Weg 755099MainzGermany\n",
"J Sandroos \nNiels Bohr Institute\nUniversity of Copenhagen\n2100CopenhagenDenmark\n",
"M Santander \nDepartment of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"S Sarkar \nNiels Bohr Institute\nUniversity of Copenhagen\n2100CopenhagenDenmark\n\nDepartment of Physics\nUniversity of Oxford\n1 Keble RoadOX1 3NPOxfordUK\n",
"K Schatto \nInstitute of Physics\nUniversity of Mainz\nStaudinger Weg 755099MainzGermany\n",
"F Scheriau \nDepartment of Physics\nTU Dortmund University\n44221DortmundGermany\n",
"T Schmidt \nDepartment of Physics\nUniversity of Maryland\n20742College ParkMDUSA\n",
"M Schmitz \nDepartment of Physics\nTU Dortmund University\n44221DortmundGermany\n",
"S Schoenen \nIII. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany\n",
"S Schöneberg \nFakultät für Physik & Astronomie\nRuhr-Universität Bochum\n44780BochumGermany\n",
"A Schönwald \nDESY\n15735ZeuthenGermany\n",
"A Schukraft \nIII. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany\n",
"L Schulte \nPhysikalisches Institut\nUniversität Bonn\nNussallee 1253115BonnGermany\n",
"O Schulz \nTechnische Universität München\n85748GarchingGermany\n",
"D Seckel \nDepartment of Physics and Astronomy\nBartol Research Institute\nUniversity of Delaware\n19716NewarkDEUSA\n",
"Y Sestayo \nTechnische Universität München\n85748GarchingGermany\n",
"S Seunarine \nDepartment of Physics\nUniversity of Wisconsin\nRiver Falls54022WIUSA\n",
"R Shanidze \nDESY\n15735ZeuthenGermany\n",
"M W E Smith \nDepartment of Physics\nPennsylvania State University\n16802University ParkPAUSA\n",
"D Soldin \nDepartment of Physics\nUniversity of Wuppertal\n42119WuppertalGermany\n",
"G M Spiczak \nDepartment of Physics\nUniversity of Wisconsin\nRiver Falls54022WIUSA\n",
"C Spiering \nDESY\n15735ZeuthenGermany\n",
"M Stamatikos \nDepartment of Physics and Center for Cosmology and Astro-Particle Physics\nOhio State University\n43210ColumbusOHUSA\n\nNASA Goddard Space Flight Center\n20771GreenbeltMDUSA\n",
"T Stanev \nDepartment of Physics and Astronomy\nBartol Research Institute\nUniversity of Delaware\n19716NewarkDEUSA\n",
"N A Stanisha \nDepartment of Physics\nPennsylvania State University\n16802University ParkPAUSA\n",
"A Stasik \nPhysikalisches Institut\nUniversität Bonn\nNussallee 1253115BonnGermany\n",
"T Stezelberger \nLawrence Berkeley National Laboratory\n94720BerkeleyCAUSA\n",
"R G Stokstad \nLawrence Berkeley National Laboratory\n94720BerkeleyCAUSA\n",
"A Stößl \nDESY\n15735ZeuthenGermany\n",
"E A Strahler \nDienst ELEM\nVrije Universiteit Brussel\n1050BrusselsBelgium\n",
"R Ström \nDepartment of Physics and Astronomy\nUppsala University\nBox 51675120UppsalaSweden\n",
"N L Strotjohann \nPhysikalisches Institut\nUniversität Bonn\nNussallee 1253115BonnGermany\n",
"G W Sullivan \nDepartment of Physics\nUniversity of Maryland\n20742College ParkMDUSA\n",
"H Taavola \nDepartment of Physics and Astronomy\nUppsala University\nBox 51675120UppsalaSweden\n",
"I Taboada \nSchool of Physics and Center for Relativistic Astrophysics\nGeorgia Institute of Technology\n30332AtlantaGAUSA\n",
"A Tamburro \nDepartment of Physics and Astronomy\nBartol Research Institute\nUniversity of Delaware\n19716NewarkDEUSA\n",
"A Tepe \nDepartment of Physics\nUniversity of Wuppertal\n42119WuppertalGermany\n",
"S Ter-Antonyan \nDepartment of Physics\nSouthern University\n70813Baton RougeLAUSA\n",
"A Terliuk \nDESY\n15735ZeuthenGermany\n",
"G Tešić \nDepartment of Physics\nPennsylvania State University\n16802University ParkPAUSA\n",
"S Tilav \nDepartment of Physics and Astronomy\nBartol Research Institute\nUniversity of Delaware\n19716NewarkDEUSA\n",
"P A Toale \nDepartment of Physics and Astronomy\nUniversity of Alabama\n35487TuscaloosaALUSA\n",
"M N Tobin \nDepartment of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"D Tosi \nDepartment of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"M Tselengidou \nErlangen Centre for Astroparticle Physics\nFriedrich-Alexander-Universität Erlangen-Nürnberg\n91058ErlangenGermany\n",
"E Unger \nDepartment of Physics and Astronomy\nUppsala University\nBox 51675120UppsalaSweden\n",
"M Usner \nPhysikalisches Institut\nUniversität Bonn\nNussallee 1253115BonnGermany\n",
"S Vallecorsa \nDépartement de physique nucléaire et corpusculaire\nUniversité de Genève\n1211GenevaSwitzerland\n",
"N Van Eijndhoven \nDienst ELEM\nVrije Universiteit Brussel\n1050BrusselsBelgium\n",
"J Vandenbroucke \nDepartment of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"J Van Santen \nDepartment of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"M Vehring \nIII. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany\n",
"M Voge \nPhysikalisches Institut\nUniversität Bonn\nNussallee 1253115BonnGermany\n",
"M Vraeghe \nDepartment of Physics and Astronomy\nUniversity of Gent\n9000GhentBelgium\n",
"C Walck \nDepartment of Physics\nOskar Klein Centre\nStockholm University\n10691StockholmSweden\n",
"M Wallraff \nIII. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany\n",
"Ch Weaver \nDepartment of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"M Wellons \nDepartment of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"C Wendt \nDepartment of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"S Westerhoff \nDepartment of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"B J Whelan \nSchool of Chemistry and Physics\nUniversity of Adelaide\n5005AdelaideSAAustralia\n",
"N Whitehorn \nDepartment of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA\n",
"C Wichary \nIII. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany\n",
"K Wiebe \nInstitute of Physics\nUniversity of Mainz\nStaudinger Weg 755099MainzGermany\n",
"C H Wiebusch \nIII. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany\n",
"D R Williams \nDepartment of Physics and Astronomy\nUniversity of Alabama\n35487TuscaloosaALUSA\n",
"H Wissing \nDepartment of Physics\nUniversity of Maryland\n20742College ParkMDUSA\n",
"M Wolf \nDepartment of Physics\nOskar Klein Centre\nStockholm University\n10691StockholmSweden\n",
"T R Wood \nDepartment of Physics\nUniversity of Alberta\nT6G 2E1EdmontonABCanada\n",
"K Woschnagg \nDepartment of Physics\nUniversity of California\n94720BerkeleyCAUSA\n",
"D L Xu \nDepartment of Physics and Astronomy\nUniversity of Alabama\n35487TuscaloosaALUSA\n",
"X W Xu \nDepartment of Physics\nSouthern University\n70813Baton RougeLAUSA\n",
"J P Yanez \nDESY\n15735ZeuthenGermany\n",
"G Yodh \nDepartment of Physics and Astronomy\nUniversity of California\n92697IrvineCAUSA\n",
"S Yoshida \nDepartment of Physics\nChiba University\n263-8522ChibaJapan\n",
"P Zarzhitsky \nDepartment of Physics and Astronomy\nUniversity of Alabama\n35487TuscaloosaALUSA\n",
"J Ziemann \nDepartment of Physics\nTU Dortmund University\n44221DortmundGermany\n",
"S Zierke \nIII. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany\n",
"M Zoll \nDepartment of Physics\nOskar Klein Centre\nStockholm University\n10691StockholmSweden\n",
"K Morik \nDepartment of Computer Science\nTU Dortmund University\n44221DortmundGermany\n"
]
[
"School of Chemistry and Physics\nUniversity of Adelaide\n5005AdelaideSAAustralia",
"DESY\n15735ZeuthenGermany",
"Department of Physics and Astronomy\nUniversity of Canterbury\nPrivate Bag 4800ChristchurchNew Zealand",
"Département de physique nucléaire et corpusculaire\nUniversité de Genève\n1211GenevaSwitzerland",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics\nOskar Klein Centre\nStockholm University\n10691StockholmSweden",
"Erlangen Centre for Astroparticle Physics\nFriedrich-Alexander-Universität Erlangen-Nürnberg\n91058ErlangenGermany",
"Department of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"Physics Department\nSouth Dakota School of Mines and Technology\n57701Rapid CitySDUSA",
"Department of Physics and Astronomy\nUniversity of California\n92697IrvineCAUSA",
"Institute of Physics\nUniversity of Mainz\nStaudinger Weg 755099MainzGermany",
"Department of Physics and Center for Cosmology and Astro-Particle Physics\nOhio State University\n43210ColumbusOHUSA",
"Department of Astronomy\nOhio State University\n43210ColumbusOHUSA",
"Fakultät für Physik & Astronomie\nRuhr-Universität Bochum\n44780BochumGermany",
"Department of Physics\nUniversity of Wuppertal\n42119WuppertalGermany",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"DESY\n15735ZeuthenGermany",
"Department of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"DESY\n15735ZeuthenGermany",
"Technische Universität München\n85748GarchingGermany",
"Department of Physics and Astronomy\nUniversity of Kansas\n66045LawrenceKSUSA",
"Department of Physics\nUniversity of California\n94720BerkeleyCAUSA",
"Lawrence Berkeley National Laboratory\n94720BerkeleyCAUSA",
"Department of Physics\nUniversity of Wuppertal\n42119WuppertalGermany",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"Department of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"Department of Physics and Astronomy\nUppsala University\nBox 51675120UppsalaSweden",
"Department of Physics\nOskar Klein Centre\nStockholm University\n10691StockholmSweden",
"Fakultät für Physik & Astronomie\nRuhr-Universität Bochum\n44780BochumGermany",
"Department of Physics\nSungkyunkwan University\n440-746SuwonKorea",
"Physikalisches Institut\nUniversität Bonn\nNussallee 1253115BonnGermany",
"Department of Physics and Astronomy\nUppsala University\nBox 51675120UppsalaSweden",
"Dienst ELEM\nVrije Universiteit Brussel\n1050BrusselsBelgium",
"DESY\n15735ZeuthenGermany",
"Department of Physics and Astronomy\nUniversity of Canterbury\nPrivate Bag 4800ChristchurchNew Zealand",
"School of Physics and Center for Relativistic Astrophysics\nGeorgia Institute of Technology\n30332AtlantaGAUSA",
"Dienst ELEM\nVrije Universiteit Brussel\n1050BrusselsBelgium",
"Department of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Département de physique nucléaire et corpusculaire\nUniversité de Genève\n1211GenevaSwitzerland",
"Department of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"Department of Physics\nUniversity of Toronto\nM5S 1A7TorontoONCanada",
"Erlangen Centre for Astroparticle Physics\nFriedrich-Alexander-Universität Erlangen-Nürnberg\n91058ErlangenGermany",
"Department of Physics\nTU Dortmund University\n44221DortmundGermany",
"Technische Universität München\n85748GarchingGermany",
"Department of Astronomy and Astrophysics\nPennsylvania State University\n16802University ParkPAUSA",
"Department of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"DESY\n15735ZeuthenGermany",
"Department of Physics\nOskar Klein Centre\nStockholm University\n10691StockholmSweden",
"School of Physics and Center for Relativistic Astrophysics\nGeorgia Institute of Technology\n30332AtlantaGAUSA",
"Department of Physics and Center for Cosmology and Astro-Particle Physics\nOhio State University\n43210ColumbusOHUSA",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"Dienst ELEM\nVrije Universiteit Brussel\n1050BrusselsBelgium",
"Department of Physics and Astronomy\nUniversity of Gent\n9000GhentBelgium",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Dienst ELEM\nVrije Universiteit Brussel\n1050BrusselsBelgium",
"Institut für Physik\nHumboldt-Universität zu Berlin\n12489BerlinGermany",
"Department of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"Present address Department of Physics and Astronomy\nMichigan State University\n567 Wilson Road48824East LansingMIUSA",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"Department of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"Institute of Physics\nUniversity of Mainz\nStaudinger Weg 755099MainzGermany",
"Fakultät für Physik & Astronomie\nRuhr-Universität Bochum\n44780BochumGermany",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics and Astronomy\nUppsala University\nBox 51675120UppsalaSweden",
"Department of Physics and Astronomy\nBartol Research Institute\nUniversity of Delaware\n19716NewarkDEUSA",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics\nSouthern University\n70813Baton RougeLAUSA",
"Fakultät für Physik & Astronomie\nRuhr-Universität Bochum\n44780BochumGermany",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"Department of Physics and Astronomy\nUniversity of Gent\n9000GhentBelgium",
"Department of Physics\nUniversity of California\n94720BerkeleyCAUSA",
"Department of Physics\nOskar Klein Centre\nStockholm University\n10691StockholmSweden",
"Department of Physics\nUniversity of Wuppertal\n42119WuppertalGermany",
"Department of Physics\nOskar Klein Centre\nStockholm University\n10691StockholmSweden",
"Physikalisches Institut\nUniversität Bonn\nNussallee 1253115BonnGermany",
"Department of Physics\nTU Dortmund University\n44221DortmundGermany",
"Department of Physics\nTU Dortmund University\n44221DortmundGermany",
"Department of Physics and Astronomy\nBartol Research Institute\nUniversity of Delaware\n19716NewarkDEUSA",
"Department of Physics\nChiba University\n263-8522ChibaJapan",
"Department of Astronomy\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics\nUniversity of California\n94720BerkeleyCAUSA",
"Lawrence Berkeley National Laboratory\n94720BerkeleyCAUSA",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"DESY\n15735ZeuthenGermany",
"Lawrence Berkeley National Laboratory\n94720BerkeleyCAUSA",
"Dienst ELEM\nVrije Universiteit Brussel\n1050BrusselsBelgium",
"Department of Physics and Astronomy\nBartol Research Institute\nUniversity of Delaware\n19716NewarkDEUSA",
"Department of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"DESY\n15735ZeuthenGermany",
"Department of Physics\nUniversity of Alberta\nT6G 2E1EdmontonABCanada",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"Department of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"Technische Universität München\n85748GarchingGermany",
"Department of Physics\nUniversity of California\n94720BerkeleyCAUSA",
"Lawrence Berkeley National Laboratory\n94720BerkeleyCAUSA",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"Department of Physics and Astronomy\nUniversity of Gent\n9000GhentBelgium",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"Department of Physics and Astronomy\nUppsala University\nBox 51675120UppsalaSweden",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Science Faculty CP230\nUniversité Libre de Bruxelles\n1050BrusselsBelgium",
"Physikalisches Institut\nUniversität Bonn\nNussallee 1253115BonnGermany",
"Science Faculty CP230\nUniversité Libre de Bruxelles\n1050BrusselsBelgium",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"Department of Physics\nUniversity of Wuppertal\n42119WuppertalGermany",
"Department of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"Department of Physics and Astronomy\nUniversity of Canterbury\nPrivate Bag 4800ChristchurchNew Zealand",
"School of Chemistry and Physics\nUniversity of Adelaide\n5005AdelaideSAAustralia",
"Department of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"Department of Physics\nUniversity of Wuppertal\n42119WuppertalGermany",
"Physikalisches Institut\nUniversität Bonn\nNussallee 1253115BonnGermany",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Earthquake Research Institute\nUniversity of Tokyo\n113-0032BunkyoTokyoJapan",
"Department of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"Department of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"Department of Physics\nOskar Klein Centre\nStockholm University\n10691StockholmSweden",
"Department of Physics\nOskar Klein Centre\nStockholm University\n10691StockholmSweden",
"Department of Physics and Astronomy\nBartol Research Institute\nUniversity of Delaware\n19716NewarkDEUSA",
"Department of Physics\nChiba University\n263-8522ChibaJapan",
"DESY\n15735ZeuthenGermany",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"CTSPS\nClark-Atlanta University\n30314AtlantaGAUSA",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics and Astronomy\nUniversity of Gent\n9000GhentBelgium",
"Technische Universität München\n85748GarchingGermany",
"DESY\n15735ZeuthenGermany",
"Erlangen Centre for Astroparticle Physics\nFriedrich-Alexander-Universität Erlangen-Nürnberg\n91058ErlangenGermany",
"DESY\n15735ZeuthenGermany",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics and Astronomy\nStony Brook University\n11794-3800Stony BrookNYUSA",
"Department of Physics\nUniversity of Wuppertal\n42119WuppertalGermany",
"Department of Physics\nUniversity of California\n94720BerkeleyCAUSA",
"Lawrence Berkeley National Laboratory\n94720BerkeleyCAUSA",
"Department of Physics\nTU Dortmund University\n44221DortmundGermany",
"Université de Mons\n7000MonsBelgium",
"Institut für Physik\nHumboldt-Universität zu Berlin\n12489BerlinGermany",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"Institute of Physics\nUniversity of Mainz\nStaudinger Weg 755099MainzGermany",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics\nUniversity of Wuppertal\n42119WuppertalGermany",
"Niels Bohr Institute\nUniversity of Copenhagen\n2100CopenhagenDenmark",
"Physikalisches Institut\nUniversität Bonn\nNussallee 1253115BonnGermany",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"Institute of Physics\nUniversity of Mainz\nStaudinger Weg 755099MainzGermany",
"Fakultät für Physik & Astronomie\nRuhr-Universität Bochum\n44780BochumGermany",
"Dienst ELEM\nVrije Universiteit Brussel\n1050BrusselsBelgium",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics\nChiba University\n263-8522ChibaJapan",
"Department of Physics and Astronomy\nUniversity of Gent\n9000GhentBelgium",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Niels Bohr Institute\nUniversity of Copenhagen\n2100CopenhagenDenmark",
"Department of Physics and Astronomy\nStony Brook University\n11794-3800Stony BrookNYUSA",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"Technische Universität München\n85748GarchingGermany",
"Institute of Physics\nUniversity of Mainz\nStaudinger Weg 755099MainzGermany",
"Department of Physics\nUniversity of Wisconsin\nRiver Falls54022WIUSA",
"Dienst ELEM\nVrije Universiteit Brussel\n1050BrusselsBelgium",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics\nChiba University\n263-8522ChibaJapan",
"Lawrence Berkeley National Laboratory\n94720BerkeleyCAUSA",
"Department of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"Niels Bohr Institute\nUniversity of Copenhagen\n2100CopenhagenDenmark",
"Department of Physics and Astronomy\nUniversity of Gent\n9000GhentBelgium",
"Science Faculty CP230\nUniversité Libre de Bruxelles\n1050BrusselsBelgium",
"Department of Physics\nUniversity of California\n94720BerkeleyCAUSA",
"Lawrence Berkeley National Laboratory\n94720BerkeleyCAUSA",
"DESY\n15735ZeuthenGermany",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics\nTU Dortmund University\n44221DortmundGermany",
"Dienst ELEM\nVrije Universiteit Brussel\n1050BrusselsBelgium",
"DESY\n15735ZeuthenGermany",
"Département de physique nucléaire et corpusculaire\nUniversité de Genève\n1211GenevaSwitzerland",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"DESY\n15735ZeuthenGermany",
"Department of Physics\nUniversity of Wuppertal\n42119WuppertalGermany",
"Department of Physics and Astronomy\nStony Brook University\n11794-3800Stony BrookNYUSA",
"Department of Physics\nUniversity of Alberta\nT6G 2E1EdmontonABCanada",
"Lawrence Berkeley National Laboratory\n94720BerkeleyCAUSA",
"Department of Physics\nUniversity of Wuppertal\n42119WuppertalGermany",
"Department of Physics\nUniversity of Alberta\nT6G 2E1EdmontonABCanada",
"Department of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"Department of Physics\nUniversity of Wuppertal\n42119WuppertalGermany",
"Science Faculty CP230\nUniversité Libre de Bruxelles\n1050BrusselsBelgium",
"Department of Physics and Astronomy\nUniversity of Alabama\n35487TuscaloosaALUSA",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"Department of Physics and Astronomy\nUniversity of Alabama\n35487TuscaloosaALUSA",
"Department of Physics and Astronomy\nUppsala University\nBox 51675120UppsalaSweden",
"Department of Physics and Center for Cosmology and Astro-Particle Physics\nOhio State University\n43210ColumbusOHUSA",
"Department of Physics\nTU Dortmund University\n44221DortmundGermany",
"Science Faculty CP230\nUniversité Libre de Bruxelles\n1050BrusselsBelgium",
"Department of Physics\nUniversity of Wuppertal\n42119WuppertalGermany",
"Department of Physics\nUniversity of California\n94720BerkeleyCAUSA",
"Lawrence Berkeley National Laboratory\n94720BerkeleyCAUSA",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"Department of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"Département de physique nucléaire et corpusculaire\nUniversité de Genève\n1211GenevaSwitzerland",
"Department of Physics and Astronomy\nUniversity of Alaska Anchorage\n3211 Providence Dr99508AnchorageAKUSA",
"Department of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"Department of Physics\nChiba University\n263-8522ChibaJapan",
"Technische Universität München\n85748GarchingGermany",
"Department of Physics\nTU Dortmund University\n44221DortmundGermany",
"Department of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"School of Chemistry and Physics\nUniversity of Adelaide\n5005AdelaideSAAustralia",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"Department of Physics\nSungkyunkwan University\n440-746SuwonKorea",
"Department of Physics\nTU Dortmund University\n44221DortmundGermany",
"Department of Physics and Astronomy\nBartol Research Institute\nUniversity of Delaware\n19716NewarkDEUSA",
"Department of Physics and Astronomy\nUniversity of Gent\n9000GhentBelgium",
"Fakultät für Physik & Astronomie\nRuhr-Universität Bochum\n44780BochumGermany",
"Institute of Physics\nUniversity of Mainz\nStaudinger Weg 755099MainzGermany",
"Niels Bohr Institute\nUniversity of Copenhagen\n2100CopenhagenDenmark",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Niels Bohr Institute\nUniversity of Copenhagen\n2100CopenhagenDenmark",
"Department of Physics\nUniversity of Oxford\n1 Keble RoadOX1 3NPOxfordUK",
"Institute of Physics\nUniversity of Mainz\nStaudinger Weg 755099MainzGermany",
"Department of Physics\nTU Dortmund University\n44221DortmundGermany",
"Department of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"Department of Physics\nTU Dortmund University\n44221DortmundGermany",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"Fakultät für Physik & Astronomie\nRuhr-Universität Bochum\n44780BochumGermany",
"DESY\n15735ZeuthenGermany",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"Physikalisches Institut\nUniversität Bonn\nNussallee 1253115BonnGermany",
"Technische Universität München\n85748GarchingGermany",
"Department of Physics and Astronomy\nBartol Research Institute\nUniversity of Delaware\n19716NewarkDEUSA",
"Technische Universität München\n85748GarchingGermany",
"Department of Physics\nUniversity of Wisconsin\nRiver Falls54022WIUSA",
"DESY\n15735ZeuthenGermany",
"Department of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"Department of Physics\nUniversity of Wuppertal\n42119WuppertalGermany",
"Department of Physics\nUniversity of Wisconsin\nRiver Falls54022WIUSA",
"DESY\n15735ZeuthenGermany",
"Department of Physics and Center for Cosmology and Astro-Particle Physics\nOhio State University\n43210ColumbusOHUSA",
"NASA Goddard Space Flight Center\n20771GreenbeltMDUSA",
"Department of Physics and Astronomy\nBartol Research Institute\nUniversity of Delaware\n19716NewarkDEUSA",
"Department of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"Physikalisches Institut\nUniversität Bonn\nNussallee 1253115BonnGermany",
"Lawrence Berkeley National Laboratory\n94720BerkeleyCAUSA",
"Lawrence Berkeley National Laboratory\n94720BerkeleyCAUSA",
"DESY\n15735ZeuthenGermany",
"Dienst ELEM\nVrije Universiteit Brussel\n1050BrusselsBelgium",
"Department of Physics and Astronomy\nUppsala University\nBox 51675120UppsalaSweden",
"Physikalisches Institut\nUniversität Bonn\nNussallee 1253115BonnGermany",
"Department of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"Department of Physics and Astronomy\nUppsala University\nBox 51675120UppsalaSweden",
"School of Physics and Center for Relativistic Astrophysics\nGeorgia Institute of Technology\n30332AtlantaGAUSA",
"Department of Physics and Astronomy\nBartol Research Institute\nUniversity of Delaware\n19716NewarkDEUSA",
"Department of Physics\nUniversity of Wuppertal\n42119WuppertalGermany",
"Department of Physics\nSouthern University\n70813Baton RougeLAUSA",
"DESY\n15735ZeuthenGermany",
"Department of Physics\nPennsylvania State University\n16802University ParkPAUSA",
"Department of Physics and Astronomy\nBartol Research Institute\nUniversity of Delaware\n19716NewarkDEUSA",
"Department of Physics and Astronomy\nUniversity of Alabama\n35487TuscaloosaALUSA",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Erlangen Centre for Astroparticle Physics\nFriedrich-Alexander-Universität Erlangen-Nürnberg\n91058ErlangenGermany",
"Department of Physics and Astronomy\nUppsala University\nBox 51675120UppsalaSweden",
"Physikalisches Institut\nUniversität Bonn\nNussallee 1253115BonnGermany",
"Département de physique nucléaire et corpusculaire\nUniversité de Genève\n1211GenevaSwitzerland",
"Dienst ELEM\nVrije Universiteit Brussel\n1050BrusselsBelgium",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"Physikalisches Institut\nUniversität Bonn\nNussallee 1253115BonnGermany",
"Department of Physics and Astronomy\nUniversity of Gent\n9000GhentBelgium",
"Department of Physics\nOskar Klein Centre\nStockholm University\n10691StockholmSweden",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"School of Chemistry and Physics\nUniversity of Adelaide\n5005AdelaideSAAustralia",
"Department of Physics and Wisconsin IceCube Particle Astrophysics Center\nUniversity of Wisconsin\n53706MadisonWIUSA",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"Institute of Physics\nUniversity of Mainz\nStaudinger Weg 755099MainzGermany",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"Department of Physics and Astronomy\nUniversity of Alabama\n35487TuscaloosaALUSA",
"Department of Physics\nUniversity of Maryland\n20742College ParkMDUSA",
"Department of Physics\nOskar Klein Centre\nStockholm University\n10691StockholmSweden",
"Department of Physics\nUniversity of Alberta\nT6G 2E1EdmontonABCanada",
"Department of Physics\nUniversity of California\n94720BerkeleyCAUSA",
"Department of Physics and Astronomy\nUniversity of Alabama\n35487TuscaloosaALUSA",
"Department of Physics\nSouthern University\n70813Baton RougeLAUSA",
"DESY\n15735ZeuthenGermany",
"Department of Physics and Astronomy\nUniversity of California\n92697IrvineCAUSA",
"Department of Physics\nChiba University\n263-8522ChibaJapan",
"Department of Physics and Astronomy\nUniversity of Alabama\n35487TuscaloosaALUSA",
"Department of Physics\nTU Dortmund University\n44221DortmundGermany",
"III. Physikalisches Institut\nRWTH Aachen University\n52056AachenGermany",
"Department of Physics\nOskar Klein Centre\nStockholm University\n10691StockholmSweden",
"Department of Computer Science\nTU Dortmund University\n44221DortmundGermany"
] |
[
"Eur. Phys. J. C"
] |
We present the development and application of a generic analysis scheme for the measurement of neutrino spectra with the IceCube detector. This scheme is based on regularized unfolding, preceded by an event selection which uses a Minimum Redundancy Maximum Relevance algorithm to select the relevant variables and a random forest for the classification of events. The analysis has been developed using IceCube data from the 59-string configuration of the detector. 27,771 neutrino candidates were detected in 346 days of livetime. A rejection of 99.9999 % of the atmospheric muon background is achieved. The energy spectrum of the atmospheric neutrino flux is obtained using the TRUEE unfolding program. The unfolded spectrum of atmospheric muon neutrinos covers an energy range from 100 GeV to 123
|
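The selection chain described in the abstract (Minimum Redundancy Maximum Relevance variable selection feeding a classifier) can be sketched in a few lines. This is an illustrative, stdlib-only toy, not the collaboration's actual pipeline: the function names, the discrete-variable mutual-information estimator, and the greedy relevance-minus-mean-redundancy score are assumptions for demonstration only.

```python
# Toy mRMR (Minimum Redundancy Maximum Relevance) feature selection
# for discrete-valued attributes. Illustrative sketch only; not the
# IceCube analysis code.
from collections import Counter
from math import log


def mutual_information(xs, ys):
    """Empirical I(X;Y) in nats for two equal-length discrete sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    return sum((c / n) * log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())


def mrmr_select(features, label, k):
    """Greedily pick k feature names from a dict {name: values},
    maximizing relevance I(f; label) minus the mean redundancy
    I(f; s) over the already-selected features s."""
    selected = []
    remaining = set(features)
    while remaining and len(selected) < k:
        def score(name):
            relevance = mutual_information(features[name], label)
            redundancy = (sum(mutual_information(features[name], features[s])
                              for s in selected) / len(selected)
                          if selected else 0.0)
            return relevance - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

In a full analysis, the variables chosen this way would then be handed to a classifier (a random forest in the paper) to separate neutrino candidates from the atmospheric muon background.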
10.1140/epjc/s10052-015-3330-z
| null | 17,791,357 |
1409.4535
|
9198f6a40468d6d1bd4901e04e7acb4eae2a5549
|
Development of a general analysis and unfolding scheme and its application to measure the energy spectrum of atmospheric neutrinos with IceCube (IceCube Collaboration)
2015
M G Aartsen
School of Chemistry and Physics
University of Adelaide
5005AdelaideSAAustralia
M Ackermann
DESY
15735ZeuthenGermany
J Adams
Department of Physics and Astronomy
University of Canterbury
Private Bag 4800ChristchurchNew Zealand
J A Aguilar
Département de physique nucléaire et corpusculaire
Université de Genève
1211GenevaSwitzerland
M Ahlers
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
M Ahrens
Department of Physics
Oskar Klein Centre
Stockholm University
10691StockholmSweden
D Altmann
Erlangen Centre for Astroparticle Physics
Friedrich-Alexander-Universität Erlangen-Nürnberg
91058ErlangenGermany
T Anderson
Department of Physics
Pennsylvania State University
16802University ParkPAUSA
C Arguelles
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
T C Arlen
Department of Physics
Pennsylvania State University
16802University ParkPAUSA
J Auffenberg
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
X Bai
Physics Department
South Dakota School of Mines and Technology
57701Rapid CitySDUSA
S W Barwick
Department of Physics and Astronomy
University of California
92697IrvineCAUSA
V Baum
Institute of Physics
University of Mainz
Staudinger Weg 755099MainzGermany
J J Beatty
Department of Physics and Center for Cosmology and Astro-Particle Physics
Ohio State University
43210ColumbusOHUSA
Department of Astronomy
Ohio State University
43210ColumbusOHUSA
J Becker Tjus
Fakultät für Physik & Astronomie
Ruhr-Universität Bochum
44780BochumGermany
K.-H Becker
Department of Physics
University of Wuppertal
42119WuppertalGermany
S Benzvi
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
P Berghaus
DESY
15735ZeuthenGermany
D Berley
Department of Physics
University of Maryland
20742College ParkMDUSA
E Bernardini
DESY
15735ZeuthenGermany
A Bernhard
Technische Universität München
85748GarchingGermany
D Z Besson
Department of Physics and Astronomy
University of Kansas
66045LawrenceKSUSA
G Binder
Department of Physics
University of California
94720BerkeleyCAUSA
Lawrence Berkeley National Laboratory
94720BerkeleyCAUSA
D Bindig
Department of Physics
University of Wuppertal
42119WuppertalGermany
M Bissok
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
E Blaufuss
Department of Physics
University of Maryland
20742College ParkMDUSA
J Blumenthal
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
D J Boersma
Department of Physics and Astronomy
Uppsala University
Box 51675120UppsalaSweden
C Bohm
Department of Physics
Oskar Klein Centre
Stockholm University
10691StockholmSweden
F Bos
Fakultät für Physik & Astronomie
Ruhr-Universität Bochum
44780BochumGermany
D Bose
Department of Physics
Sungkyunkwan University
440-746SuwonKorea
S Böser
Physikalisches Institut
Universität Bonn
Nussallee 1253115BonnGermany
O Botner
Department of Physics and Astronomy
Uppsala University
Box 51675120UppsalaSweden
L Brayeur
Dienst ELEM
Vrije Universiteit Brussel
1050BrusselsBelgium
H P Bretz
DESY
15735ZeuthenGermany
A M Brown
Department of Physics and Astronomy
University of Canterbury
Private Bag 4800ChristchurchNew Zealand
J Casey
School of Physics and Center for Relativistic Astrophysics
Georgia Institute of Technology
30332AtlantaGAUSA
M Casier
Dienst ELEM
Vrije Universiteit Brussel
1050BrusselsBelgium
E Cheung
Department of Physics
University of Maryland
20742College ParkMDUSA
D Chirkin
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
A Christov
Département de physique nucléaire et corpusculaire
Université de Genève
1211GenevaSwitzerland
B Christy
Department of Physics
University of Maryland
20742College ParkMDUSA
K Clark
Department of Physics
University of Toronto
M5S 1A7TorontoONCanada
L Classen
Erlangen Centre for Astroparticle Physics
Friedrich-Alexander-Universität Erlangen-Nürnberg
91058ErlangenGermany
F Clevermann
Department of Physics
TU Dortmund University
44221DortmundGermany
S Coenders
Technische Universität München
85748GarchingGermany
D F Cowen
Department of Astronomy and Astrophysics
Pennsylvania State University
16802University ParkPAUSA
Department of Physics
Pennsylvania State University
16802University ParkPAUSA
A H Cruz Silva
DESY
15735ZeuthenGermany
M Danninger
Department of Physics
Oskar Klein Centre
Stockholm University
10691StockholmSweden
J Daughhetee
School of Physics and Center for Relativistic Astrophysics
Georgia Institute of Technology
30332AtlantaGAUSA
J C Davis
Department of Physics and Center for Cosmology and Astro-Particle Physics
Ohio State University
43210ColumbusOHUSA
M Day
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
J P A M De André
Department of Physics
Pennsylvania State University
16802University ParkPAUSA
C De Clercq
Dienst ELEM
Vrije Universiteit Brussel
1050BrusselsBelgium
S De Ridder
Department of Physics and Astronomy
University of Gent
9000GhentBelgium
P Desiati
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
K D De Vries
Dienst ELEM
Vrije Universiteit Brussel
1050BrusselsBelgium
M De With
Institut für Physik
Humboldt-Universität zu Berlin
12489BerlinGermany
T Deyoung
Department of Physics
Pennsylvania State University
16802University ParkPAUSA
Present address Department of Physics and Astronomy
Michigan State University
567 Wilson Road48824East LansingMIUSA
J C Díaz-Vélez
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
M Dunkman
Department of Physics
Pennsylvania State University
16802University ParkPAUSA
R Eagan
Department of Physics
Pennsylvania State University
16802University ParkPAUSA
B Eberhardt
Institute of Physics
University of Mainz
Staudinger Weg 755099MainzGermany
B Eichmann
Fakultät für Physik & Astronomie
Ruhr-Universität Bochum
44780BochumGermany
J Eisch
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
S Euler
Department of Physics and Astronomy
Uppsala University
Box 51675120UppsalaSweden
P A Evenson
Department of Physics and Astronomy
Bartol Research Institute
University of Delaware
19716NewarkDEUSA
O Fadiran
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
A R Fazely
Department of Physics
Southern University
70813Baton RougeLAUSA
A Fedynitch
Fakultät für Physik & Astronomie
Ruhr-Universität Bochum
44780BochumGermany
J Feintzeig
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
J Felde
Department of Physics
University of Maryland
20742College ParkMDUSA
T Feusels
Department of Physics and Astronomy
University of Gent
9000GhentBelgium
K Filimonov
Department of Physics
University of California
94720BerkeleyCAUSA
C Finley
Department of Physics
Oskar Klein Centre
Stockholm University
10691StockholmSweden
T Fischer-Wasels
Department of Physics
University of Wuppertal
42119WuppertalGermany
S Flis
Department of Physics
Oskar Klein Centre
Stockholm University
10691StockholmSweden
A Franckowiak
Physikalisches Institut
Universität Bonn
Nussallee 1253115BonnGermany
K Frantzen
Department of Physics
TU Dortmund University
44221DortmundGermany
T Fuchs
Department of Physics
TU Dortmund University
44221DortmundGermany
T K Gaisser
Department of Physics and Astronomy
Bartol Research Institute
University of Delaware
19716NewarkDEUSA
R Gaior
Department of Physics
Chiba University
263-8522ChibaJapan
J Gallagher
Department of Astronomy
University of Wisconsin
53706MadisonWIUSA
L Gerhardt
Department of Physics
University of California
94720BerkeleyCAUSA
Lawrence Berkeley National Laboratory
94720BerkeleyCAUSA
D Gier
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
L Gladstone
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
T Glüsenkamp
DESY
15735ZeuthenGermany
A Goldschmidt
Lawrence Berkeley National Laboratory
94720BerkeleyCAUSA
G Golup
Dienst ELEM
Vrije Universiteit Brussel
1050BrusselsBelgium
J G Gonzalez
Department of Physics and Astronomy
Bartol Research Institute
University of Delaware
19716NewarkDEUSA
J A Goodman
Department of Physics
University of Maryland
20742College ParkMDUSA
D Góra
DESY
15735ZeuthenGermany
D Grant
Department of Physics
University of Alberta
T6G 2E1EdmontonABCanada
P Gretskov
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
J C Groh
Department of Physics
Pennsylvania State University
16802University ParkPAUSA
A Groß
Technische Universität München
85748GarchingGermany
C Ha
Department of Physics
University of California
94720BerkeleyCAUSA
Lawrence Berkeley National Laboratory
94720BerkeleyCAUSA
C Haack
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
A Haj Ismail
Department of Physics and Astronomy
University of Gent
9000GhentBelgium
P Hallen
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
A Hallgren
Department of Physics and Astronomy
Uppsala University
Box 51675120UppsalaSweden
F Halzen
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
K Hanson
Science Faculty CP230
Université Libre de Bruxelles
1050BrusselsBelgium
D Hebecker
Physikalisches Institut
Universität Bonn
Nussallee 1253115BonnGermany
D Heereman
Science Faculty CP230
Université Libre de Bruxelles
1050BrusselsBelgium
D Heinen
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
K Helbing
Department of Physics
University of Wuppertal
42119WuppertalGermany
R Hellauer
Department of Physics
University of Maryland
20742College ParkMDUSA
D Hellwig
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
S Hickford
Department of Physics and Astronomy
University of Canterbury
Private Bag 4800ChristchurchNew Zealand
G C Hill
School of Chemistry and Physics
University of Adelaide
5005AdelaideSAAustralia
K D Hoffman
Department of Physics
University of Maryland
20742College ParkMDUSA
R Hoffmann
Department of Physics
University of Wuppertal
42119WuppertalGermany
A Homeier
Physikalisches Institut
Universität Bonn
Nussallee 1253115BonnGermany
K Hoshina
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
Earthquake Research Institute
University of Tokyo
113-0032BunkyoTokyoJapan
F Huang
Department of Physics
Pennsylvania State University
16802University ParkPAUSA
W Huelsnitz
Department of Physics
University of Maryland
20742College ParkMDUSA
P O Hulth
Department of Physics
Oskar Klein Centre
Stockholm University
10691StockholmSweden
K Hultqvist
Department of Physics
Oskar Klein Centre
Stockholm University
10691StockholmSweden
S Hussain
Department of Physics and Astronomy
Bartol Research Institute
University of Delaware
19716NewarkDEUSA
A Ishihara
Department of Physics
Chiba University
263-8522ChibaJapan
E Jacobi
DESY
15735ZeuthenGermany
J Jacobsen
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
K Jagielski
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
G S Japaridze
CTSPS
Clark-Atlanta University
30314AtlantaGAUSA
K Jero
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
O Jlelati
Department of Physics and Astronomy
University of Gent
9000GhentBelgium
M Jurkovic
Technische Universität München
85748GarchingGermany
B Kaminsky
DESY
15735ZeuthenGermany
A Kappes
Erlangen Centre for Astroparticle Physics
Friedrich-Alexander-Universität Erlangen-Nürnberg
91058ErlangenGermany
T Karg
DESY
15735ZeuthenGermany
A Karle
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
M Kauer
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
A Keivani
Department of Physics
Pennsylvania State University
16802University ParkPAUSA
J L Kelley
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
A Kheirandish
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
J Kiryluk
Department of Physics and Astronomy
Stony Brook University
11794-3800Stony BrookNYUSA
J Kläs
Department of Physics
University of Wuppertal
42119WuppertalGermany
S R Klein
Department of Physics
University of California
94720BerkeleyCAUSA
Lawrence Berkeley National Laboratory
94720BerkeleyCAUSA
J H Köhne
Department of Physics
TU Dortmund University
44221DortmundGermany
G Kohnen
Université de Mons
7000MonsBelgium
H Kolanoski
Institut für Physik
Humboldt-Universität zu Berlin
12489BerlinGermany
A Koob
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
L Köpke
Institute of Physics
University of Mainz
Staudinger Weg 755099MainzGermany
C Kopper
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
S Kopper
Department of Physics
University of Wuppertal
42119WuppertalGermany
D J Koskinen
Niels Bohr Institute
University of Copenhagen
2100CopenhagenDenmark
M Kowalski
Physikalisches Institut
Universität Bonn
Nussallee 1253115BonnGermany
A Kriesten
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
K Krings
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
G Kroll
Institute of Physics
University of Mainz
Staudinger Weg 755099MainzGermany
M Kroll
Fakultät für Physik & Astronomie
Ruhr-Universität Bochum
44780BochumGermany
J Kunnen
Dienst ELEM
Vrije Universiteit Brussel
1050BrusselsBelgium
N Kurahashi
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
T Kuwabara
Department of Physics
Chiba University
263-8522ChibaJapan
M Labare
Department of Physics and Astronomy
University of Gent
9000GhentBelgium
D T Larsen
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
M J Larson
Niels Bohr Institute
University of Copenhagen
2100CopenhagenDenmark
M Lesiak-Bzdak
Department of Physics and Astronomy
Stony Brook University
11794-3800Stony BrookNYUSA
M Leuermann
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
J Leute
Technische Universität München
85748GarchingGermany
J Lünemann
Institute of Physics
University of Mainz
Staudinger Weg 755099MainzGermany
J Madsen
Department of Physics
University of Wisconsin
River Falls54022WIUSA
G Maggi
Dienst ELEM
Vrije Universiteit Brussel
1050BrusselsBelgium
R Maruyama
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
K Mase
Department of Physics
Chiba University
263-8522ChibaJapan
H S Matis
Lawrence Berkeley National Laboratory
94720BerkeleyCAUSA
R Maunu
Department of Physics
University of Maryland
20742College ParkMDUSA
F Mcnally
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
K Meagher
Department of Physics
University of Maryland
20742College ParkMDUSA
M Medici
Niels Bohr Institute
University of Copenhagen
2100CopenhagenDenmark
A Meli
Department of Physics and Astronomy
University of Gent
9000GhentBelgium
T Meures
Science Faculty CP230
Université Libre de Bruxelles
1050BrusselsBelgium
S Miarecki
Department of Physics
University of California
94720BerkeleyCAUSA
Lawrence Berkeley National Laboratory
94720BerkeleyCAUSA
E Middell
DESY
15735ZeuthenGermany
E Middlemas
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
N Milke
Department of Physics
TU Dortmund University
44221DortmundGermany
J Miller
Dienst ELEM
Vrije Universiteit Brussel
1050BrusselsBelgium
L Mohrmann
DESY
15735ZeuthenGermany
T Montaruli
Département de physique nucléaire et corpusculaire
Université de Genève
1211GenevaSwitzerland
R Morse
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
R Nahnhauer
DESY
15735ZeuthenGermany
U Naumann
Department of Physics
University of Wuppertal
42119WuppertalGermany
H Niederhausen
Department of Physics and Astronomy
Stony Brook University
11794-3800Stony BrookNYUSA
S C Nowicki
Department of Physics
University of Alberta
T6G 2E1EdmontonABCanada
D R Nygren
Lawrence Berkeley National Laboratory
94720BerkeleyCAUSA
A Obertacke
Department of Physics
University of Wuppertal
42119WuppertalGermany
S Odrowski
Department of Physics
University of Alberta
T6G 2E1EdmontonABCanada
A Olivas
Department of Physics
University of Maryland
20742College ParkMDUSA
A Omairat
Department of Physics
University of Wuppertal
42119WuppertalGermany
A O'murchadha
Science Faculty CP230
Université Libre de Bruxelles
1050BrusselsBelgium
T Palczewski
Department of Physics and Astronomy
University of Alabama
35487TuscaloosaALUSA
L Paul
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
Ö Penek
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
J A Pepper
Department of Physics and Astronomy
University of Alabama
35487TuscaloosaALUSA
C Pérez De Los Heros
Department of Physics and Astronomy
Uppsala University
Box 51675120UppsalaSweden
C Pfendner
Department of Physics and Center for Cosmology and Astro-Particle Physics
Ohio State University
43210ColumbusOHUSA
D Pieloth
Department of Physics
TU Dortmund University
44221DortmundGermany
E Pinat
Science Faculty CP230
Université Libre de Bruxelles
1050BrusselsBelgium
J Posselt
Department of Physics
University of Wuppertal
42119WuppertalGermany
P B Price
Department of Physics
University of California
94720BerkeleyCAUSA
G T Przybylski
Lawrence Berkeley National Laboratory
94720BerkeleyCAUSA
J Pütz
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
M Quinnan
Department of Physics
Pennsylvania State University
16802University ParkPAUSA
L Rädel
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
M Rameez
Département de physique nucléaire et corpusculaire
Université de Genève
1211GenevaSwitzerland
K Rawlins
Department of Physics and Astronomy
University of Alaska Anchorage
3211 Providence Dr99508AnchorageAKUSA
P Redl
Department of Physics
University of Maryland
20742College ParkMDUSA
I Rees
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
R Reimann
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
M Relich
Department of Physics
Chiba University
263-8522ChibaJapan
E Resconi
Technische Universität München
85748GarchingGermany
W Rhode
Department of Physics
TU Dortmund University
44221DortmundGermany
M Richman
Department of Physics
University of Maryland
20742College ParkMDUSA
B Riedel
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
S Robertson
School of Chemistry and Physics
University of Adelaide
5005AdelaideSAAustralia
J P Rodrigues
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
M Rongen
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
C Rott
Department of Physics
Sungkyunkwan University
440-746SuwonKorea
T Ruhe [email protected]
Department of Physics
TU Dortmund University
44221DortmundGermany
B Ruzybayev
Department of Physics and Astronomy
Bartol Research Institute
University of Delaware
19716NewarkDEUSA
D Ryckbosch
Department of Physics and Astronomy
University of Gent
9000GhentBelgium
S M Saba
Fakultät für Physik & Astronomie
Ruhr-Universität Bochum
44780BochumGermany
H.-G Sander
Institute of Physics
University of Mainz
Staudinger Weg 755099MainzGermany
J Sandroos
Niels Bohr Institute
University of Copenhagen
2100CopenhagenDenmark
M Santander
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
S Sarkar
Niels Bohr Institute
University of Copenhagen
2100CopenhagenDenmark
Department of Physics
University of Oxford
1 Keble RoadOX1 3NPOxfordUK
K Schatto
Institute of Physics
University of Mainz
Staudinger Weg 755099MainzGermany
F Scheriau
Department of Physics
TU Dortmund University
44221DortmundGermany
T Schmidt
Department of Physics
University of Maryland
20742College ParkMDUSA
M Schmitz
Department of Physics
TU Dortmund University
44221DortmundGermany
S Schoenen
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
S Schöneberg
Fakultät für Physik & Astronomie
Ruhr-Universität Bochum
44780BochumGermany
A Schönwald
DESY
15735ZeuthenGermany
A Schukraft
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
L Schulte
Physikalisches Institut
Universität Bonn
Nussallee 1253115BonnGermany
O Schulz
Technische Universität München
85748GarchingGermany
D Seckel
Department of Physics and Astronomy
Bartol Research Institute
University of Delaware
19716NewarkDEUSA
Y Sestayo
Technische Universität München
85748GarchingGermany
S Seunarine
Department of Physics
University of Wisconsin
River Falls54022WIUSA
R Shanidze
DESY
15735ZeuthenGermany
M W E Smith
Department of Physics
Pennsylvania State University
16802University ParkPAUSA
D Soldin
Department of Physics
University of Wuppertal
42119WuppertalGermany
G M Spiczak
Department of Physics
University of Wisconsin
River Falls54022WIUSA
C Spiering
DESY
15735ZeuthenGermany
M Stamatikos
Department of Physics and Center for Cosmology and Astro-Particle Physics
Ohio State University
43210ColumbusOHUSA
NASA Goddard Space Flight Center
20771GreenbeltMDUSA
T Stanev
Department of Physics and Astronomy
Bartol Research Institute
University of Delaware
19716NewarkDEUSA
N A Stanisha
Department of Physics
Pennsylvania State University
16802University ParkPAUSA
A Stasik
Physikalisches Institut
Universität Bonn
Nussallee 1253115BonnGermany
T Stezelberger
Lawrence Berkeley National Laboratory
94720BerkeleyCAUSA
R G Stokstad
Lawrence Berkeley National Laboratory
94720BerkeleyCAUSA
A Stößl
DESY
15735ZeuthenGermany
E A Strahler
Dienst ELEM
Vrije Universiteit Brussel
1050BrusselsBelgium
R Ström
Department of Physics and Astronomy
Uppsala University
Box 51675120UppsalaSweden
N L Strotjohann
Physikalisches Institut
Universität Bonn
Nussallee 1253115BonnGermany
G W Sullivan
Department of Physics
University of Maryland
20742College ParkMDUSA
H Taavola
Department of Physics and Astronomy
Uppsala University
Box 51675120UppsalaSweden
I Taboada
School of Physics and Center for Relativistic Astrophysics
Georgia Institute of Technology
30332AtlantaGAUSA
A Tamburro
Department of Physics and Astronomy
Bartol Research Institute
University of Delaware
19716NewarkDEUSA
A Tepe
Department of Physics
University of Wuppertal
42119WuppertalGermany
S Ter-Antonyan
Department of Physics
Southern University
70813Baton RougeLAUSA
A Terliuk
DESY
15735ZeuthenGermany
G Tešić
Department of Physics
Pennsylvania State University
16802University ParkPAUSA
S Tilav
Department of Physics and Astronomy
Bartol Research Institute
University of Delaware
19716NewarkDEUSA
P A Toale
Department of Physics and Astronomy
University of Alabama
35487TuscaloosaALUSA
M N Tobin
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
D Tosi
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
M Tselengidou
Erlangen Centre for Astroparticle Physics
Friedrich-Alexander-Universität Erlangen-Nürnberg
91058ErlangenGermany
E Unger
Department of Physics and Astronomy
Uppsala University
Box 51675120UppsalaSweden
M Usner
Physikalisches Institut
Universität Bonn
Nussallee 1253115BonnGermany
S Vallecorsa
Département de physique nucléaire et corpusculaire
Université de Genève
1211GenevaSwitzerland
N Van Eijndhoven
Dienst ELEM
Vrije Universiteit Brussel
1050BrusselsBelgium
J Vandenbroucke
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
J Van Santen
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
M Vehring
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
M Voge
Physikalisches Institut
Universität Bonn
Nussallee 1253115BonnGermany
M Vraeghe
Department of Physics and Astronomy
University of Gent
9000GhentBelgium
C Walck
Department of Physics
Oskar Klein Centre
Stockholm University
10691StockholmSweden
M Wallraff
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
Ch Weaver
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
M Wellons
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
C Wendt
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
S Westerhoff
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
B J Whelan
School of Chemistry and Physics
University of Adelaide
5005AdelaideSAAustralia
N Whitehorn
Department of Physics and Wisconsin IceCube Particle Astrophysics Center
University of Wisconsin
53706MadisonWIUSA
C Wichary
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
K Wiebe
Institute of Physics
University of Mainz
Staudinger Weg 755099MainzGermany
C H Wiebusch
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
D R Williams
Department of Physics and Astronomy
University of Alabama
35487TuscaloosaALUSA
H Wissing
Department of Physics
University of Maryland
20742College ParkMDUSA
M Wolf
Department of Physics
Oskar Klein Centre
Stockholm University
10691StockholmSweden
T R Wood
Department of Physics
University of Alberta
T6G 2E1EdmontonABCanada
K Woschnagg
Department of Physics
University of California
94720BerkeleyCAUSA
D L Xu
Department of Physics and Astronomy
University of Alabama
35487TuscaloosaALUSA
X W Xu
Department of Physics
Southern University
70813Baton RougeLAUSA
J P Yanez
DESY
15735ZeuthenGermany
G Yodh
Department of Physics and Astronomy
University of California
92697IrvineCAUSA
S Yoshida
Department of Physics
Chiba University
263-8522ChibaJapan
P Zarzhitsky
Department of Physics and Astronomy
University of Alabama
35487TuscaloosaALUSA
J Ziemann
Department of Physics
TU Dortmund University
44221DortmundGermany
S Zierke
III. Physikalisches Institut
RWTH Aachen University
52056AachenGermany
M Zoll
Department of Physics
Oskar Klein Centre
Stockholm University
10691StockholmSweden
K Morik
Department of Computer Science
TU Dortmund University
44221DortmundGermany
Development of a general analysis and unfolding scheme and its application to measure the energy spectrum of atmospheric neutrinos with IceCube

IceCube Collaboration
Eur. Phys. J. C 75:116 (2015). DOI: 10.1140/epjc/s10052-015-3330-z
Received: 15 September 2014 / Accepted: 19 February 2015 / Published online: 11 March 2015
Regular Article - Experimental Physics
Abstract We present the development and application of a generic analysis scheme for the measurement of neutrino spectra with the IceCube detector. This scheme is based on regularized unfolding, preceded by an event selection which uses a Minimum Redundancy Maximum Relevance algorithm to select the relevant variables and a random forest for the classification of events. The analysis has been developed using IceCube data from the 59-string configuration of the detector. 27,771 neutrino candidates were detected in 346 days of livetime. A rejection of 99.9999 % of the atmospheric muon background is achieved. The energy spectrum of the atmospheric neutrino flux is obtained using the TRUEE unfolding program. The unfolded spectrum of atmospheric muon neutrinos covers an energy range from 100 GeV to 1 PeV. Compared to the previous measurement using the detector in the 40-string configuration, the analysis presented here extends the upper end of the atmospheric neutrino spectrum by more than a factor of two, reaching an energy region that has not been previously accessed by spectral measurements.

a e-mail: [email protected]
b Present address: Department of Physics and Astronomy, Michigan State University, 567 Wilson Road, East Lansing, MI 48824, USA
c Earthquake Research Institute, University of Tokyo, Bunkyo, Tokyo 113-0032, Japan
d NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA
Introduction
Measuring the energy spectrum of atmospheric muon neutrinos is particularly challenging due to its steeply falling behavior. As neutrinos cannot be detected directly, their flux is measured through the detection of neutrino-induced muons. However, atmospheric muons produced in extended air showers when a cosmic ray interacts with a nucleus in the Earth's atmosphere constitute a natural background to atmospheric neutrino searches. In a detector like IceCube [1], the majority of this atmospheric muon background can be rejected by the selection of upward going tracks. Remaining background events consist of originally downward-going muons falsely reconstructed as upward going. Thus, an effective selection of events is required.
Furthermore, the energy of the neutrino cannot be accessed directly, but needs to be inferred from energy dependent observables. These challenges demand a sophisticated data analysis chain, considering both the separation of signal and background events and the reconstruction of the spectrum by using unfolding techniques. This paper describes a novel analysis approach aimed at measuring the atmospheric muon-neutrino spectrum. We use experimental data taken with IceCube in the 59-string configuration. The analysis consists of an event selection based on a data pre-processing using quality cuts on a few selected variables, followed by a machine learning algorithm for final event selection.
In a machine learning algorithm events are classified according to their properties. Rules for this classification are automatically derived from a set of events for which the class is known, e.g. simulated events. The induction of classification rules is generally referred to as training.
All analysis steps were carefully validated and are based on well established methods from Computer Science and Statistics. This approach was found to outperform previous measurements [2] with respect to background rejection and signal efficiency. We then present the first application of the new unfolding program TRUEE [3] on IceCube data. This analysis procedure proved capable of producing a neutrino energy spectrum from 100 GeV to 1 PeV.
The paper is organized as follows: In Sect. 2 we describe the IceCube detector. Section 3 summarizes the basic physics of atmospheric neutrinos. The machine learning algorithms used for event selection, their validation and their application to IceCube data are covered in Sect. 4. An enhanced unfolding algorithm and its application in an atmospheric neutrino analysis are presented in Sect. 5. In Sect. 6 the spectrum is unfolded for two different zenith bins. A comparison of the results to previous measurements is given in Sect. 7. A summary of the results concludes the paper (Sect. 8).
IceCube
IceCube is a cubic-kilometer neutrino detector located at the geographic South Pole. Neutrinos are detected through the Cherenkov light emitted by secondary particles produced in neutrino-nucleon interactions in or around the detector. The detector consists of an array of digital optical modules (DOMs) mounted on 86 cables (or strings). The strings are arranged in a hexagon with a typical horizontal spacing of 125 m, and hold 60 DOMs each. The vertical separation between DOMs is 17 m and they are deployed at depths between 1450 m and 2450 m. Eight strings at the center of the array were deployed with a distance of about 70 m and a vertical DOM distance of 7 m. This denser configuration is part of the DeepCore detector [4]. Each DOM consists of a 25 cm Hamamatsu R7081-02 Photo-multiplier Tube (PMT) and a suite of electronics board assemblies contained within a spherical glass pressure housing of 35.6 cm diameter. The DOMs achieve high accuracy and a wide dynamic range by internally digitizing and time-stamping the photonic signals. Packaged digitized data are then transmitted to the surface. Each DOM can operate as a complete and autonomous data acquisition system [1,5]. IceCube was successfully completed in December 2010.
IceTop stations are located on the top of the strings, forming an air-shower array with a nominal grid spacing matching the 125 m of the in-ice part of the detector. Each station consists of two tanks equipped with downward facing DOMs with their lower hemisphere embedded in the ice. Two DOMs are deployed per tank for redundancy and flexibility [1].
The Cherenkov light emitted by muons produced in neutrino interactions can be used to reconstruct the muon trajectory. Since at high energies (TeV or above) the direction of the muon deviates only marginally from the direction of the neutrino, the direction of the incoming neutrino can be reconstructed as well. The pointing resolution of IceCube was found to be 0.7° in a moon-shadow analysis using TeV cosmic rays [6].
There are two primary detection channels in IceCube, the first one being track-like events originating from charged current (CC) ν μ interactions of the form:
ν μ + N −→ μ + X,(1)
where N represents a nucleon and X denotes the rest of the particles produced in the interaction. The second channel consists of cascade-like events produced in CC interactions of ν_e and ν_τ and in neutral current (NC) interactions of all neutrino flavors. Only ν_μ CC interactions are relevant for the atmospheric neutrino analysis presented in this paper. Data for this analysis were taken between May 2009 and May 2010, when the detector consisted of 59 strings. This configuration is referred to as IceCube-59. The analysis is based on a preselection of events which is provided to the analyzers by the IceCube Collaboration.
Atmospheric neutrinos
Although primarily designed for the detection of high-energy neutrinos from astrophysical sources, IceCube can also be used for investigating the atmospheric neutrino spectrum over several orders of magnitude in energy. Despite the fact that the atmospheric ν μ spectrum has been measured by various experiments including Frejus [7], AMANDA [8], ANTARES [9] and IceCube in the 40-string configuration [2], the flux, especially at high energies, is still subject to rather large uncertainties [10].
The flux of atmospheric muon neutrinos is dominated by neutrinos originating from the decay of pions and kaons, produced in extended air showers, up to energies of E_ν ≈ 500 TeV [8] (conventional atmospheric neutrino flux). Due to their relatively long lifetime, pions and kaons lose part of their energy prior to decaying. As the flux of cosmic rays follows a power law, the atmospheric neutrino spectrum is also expected to follow a power law, which is one power steeper (asymptotically dΦ/dE ∝ E^−3.7) compared to the spectrum of primary cosmic rays [2].
However, despite the isotropic distribution of cosmic rays, the flux of conventional atmospheric neutrinos is a function of the zenith angle, since horizontally travelling mesons have a much higher probability of decaying before losing energy in collisions [11]. This results in a harder neutrino spectrum for horizontal events compared to vertical events.
At energies exceeding 500 TeV, neutrinos from the decay of charmed mesons, so-called prompt neutrinos, are expected to contribute notably to the spectrum. Since neutrinos from the decay of charmed mesons have not been conclusively detected, the exact threshold depends strongly on the underlying model. Due to their short lifetime (t_life ≈ 10^−12 s [12]), these mesons decay before interacting and follow the initial spectrum of cosmic rays more closely, therefore causing a flattening of the overall neutrino flux [2,8].
A detailed measurement of the conventional and prompt atmospheric neutrino spectrum is made difficult by its steeply falling characteristic and the finite energy resolution of neutrino energy reconstruction. We have developed an analysis technique making use of machine learning processes to select a sample of neutrino candidates with high purity.
Event selection
The signature of atmospheric muons entering the detector from above is similar to the event pattern of a neutrinoinduced muon. Both signatures can be distinguished by their reconstructed track parameters and quality measures, which form an n-dimensional parameter space. Selecting events from this parameter space can be achieved by making good use of machine learning algorithms.
Selecting only upward going tracks can remove a large fraction of the atmospheric muon background. A certain fraction of muon events, however, is falsely reconstructed as upward going. This type of event still occurs 1,000 times more frequently than neutrino-induced events. As misreconstructed muons are significantly harder to reject, a multi-faceted event selection needs to be carried out to obtain a highly pure sample of neutrino candidates.
The event selection presented here consists of several consecutive steps: initially, two simple cuts are applied to reduce the event sample to a manageable size. As IceCube runs multiple reconstruction algorithms on each interesting event, there are hundreds of variables that are potential inputs to the classification algorithm; in a second step, an automated variable selection is therefore used to pick the variables with the most power for separating signal and background events. Data preprocessing, variable selection and the performance of the classification algorithm were thoroughly validated in cross validations, in which the average performance over many splits into disjoint training and test data is obtained.
Data preprocessing
The preprocessing consisted of a cut on the LineFit velocity (v_LineFit > 0.19 c) and a cut on the reconstructed zenith angle (θ > 88°). The LineFit algorithm reconstructs a track on the basis of the positions, r_i, and hit times, t_i, of all DOMs with a hit in the event. The geometry of the Cherenkov cone as well as the optical properties of the medium are ignored, and the method assumes that the photons propagate along a one-dimensional line with constant speed, v_LineFit. Minimizing the following X²:
$$X^2 = \sum_{i=1}^{N} \left( \vec{r}_i - \vec{r}_{\mathrm{LineFit}} - \vec{v}_{\mathrm{LineFit}} \cdot t_i \right)^2, \qquad (2)$$
one obtains the fit parameters, v_LineFit and r_LineFit, where i runs over the DOMs with a hit in the event. Cascade-like events will produce a spherical light pattern from which small values of |v_LineFit| are reconstructed. As long muon tracks of high quality are required for a reliable reconstruction of the energy spectrum, a cut on |v_LineFit| can be utilized to select such events. The zenith-angle cut is aimed at reducing the contamination from atmospheric muons entering the detector at angles θ < 90°. Choosing a cut at θ = 88° rather than at θ = 90° aims at a slight extension of the field of view in order to detect high-energy neutrinos from above the horizon. Muons approaching the detector at angles between θ = 88° and θ = 90° are very likely to range out before reaching the detector.
Both cuts were optimized simultaneously with respect to background rejection and signal efficiency. The application of the two cuts yielded a background rejection of 91.4 % at a signal efficiency of 57.1 %.
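The X² minimization in Eq. (2) is an ordinary least-squares fit of the hit positions against the hit times, which can be solved per spatial coordinate. A minimal pure-Python sketch (the hit data below are hypothetical and chosen only for illustration):

```python
def line_fit(positions, times):
    """Least-squares fit r_i ~ r0 + v * t_i, solved per coordinate.

    positions: list of (x, y, z) DOM hit positions; times: hit times.
    Returns (r0, v) minimizing X^2 = sum_i |r_i - r0 - v * t_i|^2.
    """
    n = len(times)
    t_mean = sum(times) / n
    t_var = sum((t - t_mean) ** 2 for t in times)
    r0, v = [], []
    for k in range(3):  # each spatial coordinate is an independent 1-d fit
        r = [p[k] for p in positions]
        r_mean = sum(r) / n
        cov = sum((t - t_mean) * (x - r_mean) for t, x in zip(times, r))
        vk = cov / t_var
        v.append(vk)
        r0.append(r_mean - vk * t_mean)
    return r0, v

# Hits lying exactly on a line r(t) = (0,0,0) + (0.25, 0, 0.1) * t
times = [0.0, 1.0, 2.0, 3.0]
hits = [(0.25 * t, 0.0, 0.1 * t) for t in times]
r0, v = line_fit(hits, times)
speed = sum(c * c for c in v) ** 0.5  # |v_LineFit|, compared against 0.19 c in the cut
```

For noise-free collinear hits the fit recovers the generating line exactly; the cut of the analysis is then simply `speed > 0.19 * c` on the fitted velocity magnitude.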
Automated variable selection
The quality of an automated, machine learning-based, event selection largely depends on the set of variables used (in machine learning these are generally referred to as "features" or "attributes"). In this analysis the variables considered as input for the learner were the reconstructed properties of the events and different measures of the quality of the reconstruction. As not all variables are equally well suited for the event selection, and since using all available variables would result in an unreasonably large consumption of computing resources, a representation in fewer dimensions needs to be found. In general, a manual selection based on knowledge about the detector and the classification problem at hand will result in a good set of variables for training the classification algorithm. It will, however, not necessarily result in the best set of variables. In the event selection presented in this paper, we therefore used the Minimum Redundancy Maximum Relevance (MRMR) Algorithm [13] for the selection of variables.
Within MRMR the relevance of a set of variables is computed from an F-test, whereas its redundancy V can be obtained from the following equation [13]:
$$V = \frac{1}{|F|^2} \sum_{i,j} c(x_i, x_j), \qquad (3)$$
where F represents a set of variables. To compute the similarity between two variables x_i and x_j, the absolute value c(x_i, x_j) of Pearson's correlation coefficient is used. As the final selection criterion, the quotient Q between relevance and redundancy is computed, and the variable set that maximizes Q is returned. MRMR is particularly useful when certain quantities (e.g. the zenith angle) are obtained from a number of different reconstruction algorithms. For further details on MRMR we refer to Ref. [13].

As variable selections are in general carried out on a limited number of events, their performance might be influenced by statistical fluctuations within those subsets. The average performance given by the cross validation is a valid output; however, one might want to additionally inspect the stability of the variable selection. The stability expresses the variance over different cross-validation splits. Two stability measures, the Jaccard index and Kuncheva's index [14], were used to determine the stability of the MRMR variable selection. They express the ratio between the data splits returning the same variables and the number of variables returned by all splits. The basic equation for the Jaccard index is:
$$J = \frac{|F_i \cap F_j|}{|F_i \cup F_j|}, \qquad (4)$$
where F i and F j represent two subsets of variables, selected on two disjoint sets of events drawn at random from the same distribution.
A similar stability measure is Kuncheva's index, defined as:
$$I_C(F_i, F_j) = \frac{rn - k^2}{k(n - k)}. \qquad (5)$$
In Eq. (5) the parameter k represents the size of the subsets, whereas r = |F_i ∩ F_j| represents the cardinality of their intersection. The total number of variables available is denoted by n.
The stability of the variable selection was tested with respect to the number of variables selected. To perform this test the number of variables was increased stepwise, by one variable at a time, in the range between one and 50 variables. For each number of variables the MRMR variable selection was restarted and repeated 10 times on 10 disjoint subsets of events. The overall stability S̄, as depicted in Fig. 1, is defined as the average of the indices I over all combinations of those feature selections [15]:
$$\bar{S} = \frac{2}{l^2 - l} \sum_{i=1}^{l} \sum_{j=i+1}^{l} I(F_i, F_j), \qquad (6)$$
where l is the total number of feature selections for a specific number of variables. The quantity I in Eq. (6) represents the Jaccard index or Kuncheva's index, respectively. In total 10,000 events were used for the calculation of the indices. The stability measures presented in Eqs. (4) and (5) can take values between 0 and 1. In general a selection is considered stable if the indices are close to 1 and unstable if the indices are close to 0. Figure 1 depicts the stability of the MRMR variable selection as a function of the number of selected variables. The stability of the variable selection is found to increase with the number of variables selected. Twenty-five variables were selected, as this number represents a reasonable trade-off between variable-selection stability and the anticipated resource consumption of the learner. Moreover, the separation power of the remaining variables was found to be close to zero.
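The variable-selection machinery above maps directly onto code: Eq. (3) gives the redundancy of a candidate set, the quotient Q drives a greedy selection, and Eqs. (4)–(6) quantify its stability. The sketch below is pure Python with hypothetical toy data; as a simplifying assumption, relevance is taken as the absolute correlation with the class label rather than the F-test used in the paper:

```python
from itertools import combinations

def pearson(x, y):
    """Pearson's correlation coefficient between two variables."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy) if sx > 0 and sy > 0 else 0.0

def redundancy(names, variables):
    """Eq. (3): V = (1/|F|^2) * sum_{i,j} |c(x_i, x_j)|."""
    return sum(abs(pearson(variables[a], variables[b]))
               for a in names for b in names) / len(names) ** 2

def mrmr_select(variables, label, k):
    """Greedy MRMR: grow the set maximizing Q = relevance / redundancy."""
    rel = {n: abs(pearson(v, label)) for n, v in variables.items()}
    selected = [max(rel, key=rel.get)]          # start with most relevant variable
    while len(selected) < k:
        best, best_q = None, -1.0
        for name in variables:
            if name in selected:
                continue
            cand = selected + [name]
            q = (sum(rel[c] for c in cand) / len(cand)) / redundancy(cand, variables)
            if q > best_q:
                best, best_q = name, q
        selected.append(best)
    return selected

def jaccard(f_i, f_j):
    """Eq. (4): |F_i ∩ F_j| / |F_i ∪ F_j|."""
    f_i, f_j = set(f_i), set(f_j)
    return len(f_i & f_j) / len(f_i | f_j)

def kuncheva(f_i, f_j, n):
    """Eq. (5): (r*n - k^2) / (k*(n - k)), with r = |F_i ∩ F_j| and k the subset size."""
    k, r = len(f_i), len(set(f_i) & set(f_j))
    return (r * n - k ** 2) / (k * (n - k))

def overall_stability(selections, index=jaccard):
    """Eq. (6): average pairwise index over l repeated selections."""
    pairs = list(combinations(selections, 2))   # (l^2 - l)/2 pairs
    return sum(index(fi, fj) for fi, fj in pairs) / len(pairs)

# An exact duplicate of "x" is skipped in favour of the less redundant,
# still informative "y" -- the quotient Q penalizes the redundancy.
label = [0, 0, 1, 1]
variables = {"x": [0, 1, 2, 3], "x_dup": [0, 1, 2, 3], "y": [0, 0, 1, 0]}
picked = mrmr_select(variables, label, 2)
```

On this toy input `picked` is `['x', 'y']`: the duplicate offers the same relevance as `x` but doubles the redundancy, so its quotient Q is lower than that of `y`.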
Attributes found to yield large separation power in this analysis are zenith angles, the length of the track obtained from direct photons, and the number of direct photons detected in various time windows. Photons are referred to as direct when their arrival time at the DOM agrees with that expected for unscattered Cherenkov photons [16].
Performance of the random forest
In general, the evaluation of the performance of a classification algorithm consists of the two important steps of training and testing the algorithm. From the machine learning point of view the event selection can be formalized in terms of a classification task with the classes signal (atmospheric neutrinos) and background (atmospheric muons).
A random forest [17], which utilizes an ensemble of simple decision trees, was chosen as the machine learning algorithm because ensemble algorithms are well known for their robustness and stability. In general, decision trees are easy to interpret, and they have performed well in previous IceCube analyses [2]. Moreover, a study by Bock et al. has shown that random forests outperform other classification algorithms [18]. Training and testing were carried out in a standard fivefold cross-validation.
Within the cross validation 70,000 simulated neutrino events and 750,000 simulated background events were used. In a cross validation, events are split into n disjoint subsets. In every iteration one of the disjoint sets is used to test the performance of the random forest, whereas the remaining sets are used for training. Thus, 14,000 neutrino events and 150,000 background events were available for testing in every iteration in the fivefold cross validation used in this analysis. Accordingly, 56,000 neutrino events and 600,000 background events per iteration were available for training. The neutrino events were generated by the IceCube neutrino-generator (NuGen). Background events were simulated according to the Polygonato model [19] using CORSIKA [20].
The 25 variables selected by the MRMR algorithm were used for the training of the forest. In order to improve the overall performance of the event selection three additional parameters were created and added according to the findings in [2]. The first variable added is the absolute difference between the zenith angle obtained from a simple LineFit and the reconstructed zenith angle obtained from a multi-photo-electron (MPE) fit. As the second variable, the difference between the log-likelihoods obtained from a Bayesian fit and a single-photo-electron (SPE) fit was added. The third variable added was the log-likelihood derived from an MPE fit, divided by the number of hit DOMs. For details on the individual fit algorithms we refer to [16].
Within the forest, every event is labeled as signal or background according to its attributes by every tree. The final output score is then computed by averaging over the classifications of the individual trees in the forest.
The ratio of signal to background events used for training the forest was varied systematically. These tests showed that the signal-to-background ratio available for training did not result in an optimal performance of the learner. Instead, it was found that very good results in terms of signal efficiency and background rejection can be obtained using 27,000 simulated signal and 27,000 simulated background events for the training of the forest. Furthermore, a reasonable trade-off between signal efficiency and background rejection could be achieved using this setting. In order to provide the learner with this number of events, a simple sampling operation was carried out inside the cross validation. Within this sampling 27,000 simulated neutrino events and 27,000 simulated background events were drawn at random. Helping the learning algorithm by using balanced training and test sets does not imply that the learned function works only on balanced class distributions. Empirically, we have observed that the decision function obtained from balanced samples can be successfully applied to extremely biased samples. As the sampling only concerned the training of the random forest, the number of events available for testing remained unchanged.
The neutrino events used in the training process were simulated according to an E^−2 flux. Using an E^−2 flux instead of an atmospheric neutrino flux provides the learner with enough events also at high energies. This is required in order to obtain a reliable classification over the entire energy range. Although this flux deviates from an atmospheric neutrino flux, it can still be used for the training of the forest, as the classification is achieved on an event-by-event basis. Therefore, once a certain event pattern is memorized as neutrino-like by the forest, events with similar patterns will always be labeled as signal, independent of the underlying energy distribution. Furthermore, the result achieved using a decision tree depends only weakly on the underlying distribution used for training. After classification every event was re-weighted according to an atmospheric flux in order to obtain a prediction of the neutrino rate.
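The re-weighting step can be illustrated with a per-event weight that takes the simulated E^−2 flux to a steeper atmospheric shape. This is a hypothetical sketch: the asymptotic slope of 3.7 quoted in Sect. 3 stands in for a full flux model, and the overall normalisation is omitted:

```python
def atmospheric_weight(e_nu, gamma_atm=3.7, gamma_sim=2.0):
    """Weight turning an E^-gamma_sim training flux into an E^-gamma_atm one.

    e_nu: true neutrino energy of the simulated event (GeV).
    Only the spectral shape changes; the normalisation is omitted
    (illustrative only -- real analyses use full flux models).
    """
    return e_nu ** (gamma_sim - gamma_atm)  # here: E^-1.7

# An event ten times higher in energy is down-weighted by 10^-1.7 ~ 0.02.
w_low, w_high = atmospheric_weight(100.0), atmospheric_weight(1000.0)
```

Applied to every classified event, such weights reshape the E^−2 training sample into a prediction for the steeply falling atmospheric rate.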
In general, the performance of a random forest is found to increase with the number of trees. However, the larger the number of trees, the larger the computational cost for training and testing (CPU time and memory). It was found that 500 trees provided a reasonable tradeoff between the performance of the classification algorithm and the computational cost. Therefore, the forest was trained and validated using 500 trees.
The output scores of the random forest for simulated events and experimental data are shown in Figs. 2 and 3. Figure 3 focuses on the region between 490 and 500 trees, whereas the entire output range of the random forest is depicted in Fig. 2. The well matching distributions of experimental data and simulated events indicate a stable performance of the forest. The rather poor agreement of simulated events and experimental data for n_trees < 100 originates from poorly reconstructed muons of low energy. Unfolding the energy distribution of the neutrino sample requires an extremely strict rejection of atmospheric muons. This is due to the fact that only a small number of events is found to populate the highest energy bins. Therefore, a single high-energy muon might cause a flattening of the unfolded spectrum at high energies and thus mimic a prompt or astrophysical flux of neutrinos. We chose a very strict cut of n_trees = 500, thus selecting only events that were classified as signal by every tree in the forest.
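The final cut can be phrased as a selection on the forest's vote count: an event is kept only if all trees label it as signal. A toy sketch of just this score-and-cut logic (the actual forest was trained in RapidMiner; the event structures below are hypothetical):

```python
def forest_score(votes):
    """Number of trees classifying the event as signal; votes are 0/1 per tree."""
    return sum(votes)

def select_unanimous(events, n_trees=500):
    """Keep only events labeled signal by every tree (score == n_trees)."""
    return [e for e in events if forest_score(e["votes"]) == n_trees]

events = [
    {"id": 1, "votes": [1] * 500},        # unanimous signal -> kept
    {"id": 2, "votes": [1] * 499 + [0]},  # one dissenting tree -> rejected
]
kept = select_unanimous(events)
```

The unanimity requirement is what pushes the muon contamination down by six orders of magnitude: a single dissenting tree is enough to reject an event.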
The statistical uncertainty of the event selection, which is introduced due to statistical fluctuations in the training and test sets, was estimated from the cross validation results. The statistical uncertainty can be calculated from the signal efficiency and background rejection of the individual iterations. A statistical uncertainty of 1.6 % was estimated for the expected number of neutrino candidates, which indicates a stable and reliable performance of the forest.
The systematic uncertainty of the event selection was estimated by applying the forest to simulated events produced with different DOM efficiencies and a different modeling of the ice. For this purpose the efficiencies of all DOMs were either increased or decreased by 10 % from their nominal values. The modeling of the ice was taken into account by using the SPICE Mie ice model [21] instead of its predecessor SPICE-1. It was found that the uncertainty of the event selection due to the ice model is on the order of 5 %, whereas the uncertainty due to the DOM efficiency was estimated to be 18 %. Combining both values one finds that the total systematic uncertainty of the event selection is 19 %.
After verifying the performance of the random forest the final model was trained using 27,000 simulated neutrino events and 27,000 simulated background events. The events for each class were drawn at random from the total sample of available simulated events.
The application of the entire event selection chain to the full set of IceCube-59 data yielded 27,771 neutrino candidates in 346 days of detector live-time (≈80 neutrino candidates per day). The number of remaining atmospheric muons was estimated to be 114 ± 103. The purity of the final neutrino event sample was estimated to be (99.59^{+0.36}_{−0.37}) %. No events with a zenith angle θ < 90° were observed in the sample after the application of the random forest.
The number of events surviving the two preselection cuts on the zenith angle and the LineFit velocity is 15.3 × 10^6. This corresponds to an estimated background rejection of 91.4 % at a signal efficiency of 57.1 %.
Comparing the total number of neutrino candidates at final level, an increase of 62 % is observed with respect to [2], which used IceCube in the 40-string configuration. Taking into account the larger volume of the detector (59 compared to 40 strings) and the increased trigger rate, the event selection method presented in this paper achieves an 8 % increase in the number of neutrino candidates compared to the event selection presented in [2]. The relative contamination of the sample with atmospheric muons was found to be of the same size as in [2].
In the event selection, which is the basis for the subsequent unfolding of the ν μ energy spectrum, a signal efficiency of 18.2 % was achieved at a background rejection of 99.9999 %, which corresponds to a reduction of the contamination of the event sample with atmospheric muons by six orders of magnitude. Both signal efficiency and background rejection were computed for events with θ Zenith ≥ 88 • , with respect to the starting level of the analysis and for neutrino energies between E ν = 100 GeV and E ν = 1 PeV.
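The two figures of merit used above are simple ratios with respect to the starting level of the analysis; a minimal sketch with hypothetical counts:

```python
def signal_efficiency(n_signal_kept, n_signal_start):
    """Fraction of signal events surviving the selection."""
    return n_signal_kept / n_signal_start

def background_rejection(n_background_kept, n_background_start):
    """Fraction of background events removed by the selection."""
    return 1.0 - n_background_kept / n_background_start

# Hypothetical counts: a suppression of the muon background by six orders of
# magnitude corresponds to a background rejection of 99.9999 %.
rejection = background_rejection(1, 1_000_000)
efficiency = signal_efficiency(182, 1_000)        # 18.2 % signal efficiency
```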
All event selection steps regarding machine learning, preprocessing, and validation were carried out using the RapidMiner [22] machine learning environment.
Spectrum unfolding
As the neutrino energy spectrum cannot be accessed directly, it needs to be inferred from the reconstructed energy of the muons. This task is generally referred to as an inverse, or ill-posed, problem and is described by the Fredholm integral equation of the first kind [3]:
g(y) = ∫_a^b A(y, E) f(E) dE.    (7)
For the discrete case this transforms to:
g(y) = A(y, E) f(E),    (8)
where f(E) is the sought energy distribution and the measured energy dependent distribution is given as g(y). The matrix A(y, E) represents the response matrix of the detector, which accounts for the physics of neutrino interactions in or near the detector as well as for the propagation of the muon.
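Equation (8) is an ordinary matrix-vector product. A toy forward folding with a hypothetical 3×3 response matrix illustrates how a true spectrum f maps onto the measured distribution g; the unfolding is the (ill-posed) inverse of this step:

```python
# Toy response matrix: entry A[i][j] is the probability for an event in true
# bin j to be reconstructed in measured bin i (values are illustrative only).
A = [[0.7, 0.2, 0.0],
     [0.3, 0.6, 0.3],
     [0.0, 0.2, 0.7]]
f_true = [1000.0, 300.0, 90.0]     # hypothetical true spectrum f(E)

# Forward folding g = A f, Eq. (8):
g_measured = [sum(A[i][j] * f_true[j] for j in range(len(f_true)))
              for i in range(len(A))]
```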
Several approaches to the solution of inverse problems exist. The unfolding program Truee [3], which is an extension of the RUN [23] algorithm, was used for unfolding in this analysis. The stability of the unfolding as well as the results obtained on experimental data are addressed in the following.
Unfolding input
The spectrum is unfolded in ten logarithmic energy bins between 100 GeV and 1 PeV. Three variables (track length, number of hit DOMs, number of direct photons) were used as input for the unfolding. Direct hits have not suffered scattering in the ice from their emission point to the DOM and therefore keep precise timing information, which is essential for an accurate track reconstruction. For the unfolding only direct hits from a time window ranging from −15 to 75 ns have been used. An estimate of the track length inside the detector is obtained by projecting all DOMs that recorded direct photons onto the reconstructed track.
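The L_dir estimate described above amounts to projecting the positions of DOMs with direct hits onto the track direction and taking the spread. A sketch with hypothetical DOM positions; the [−15, 75] ns window selects the direct hits:

```python
def is_direct_hit(time_residual_ns):
    """Direct hits: time residual within the [-15, 75] ns window."""
    return -15.0 <= time_residual_ns <= 75.0

def track_length_estimate(dom_positions, unit_direction):
    """Spread of the DOM positions projected onto the track direction."""
    projections = [sum(x * d for x, d in zip(pos, unit_direction))
                   for pos in dom_positions]
    return max(projections) - min(projections)

# Hypothetical DOM positions (m) with direct hits; track along the x-axis:
hits = [(0.0, 0.0, 0.0), (120.0, 5.0, 0.0), (400.0, -3.0, 0.0)]
l_dir = track_length_estimate(hits, (1.0, 0.0, 0.0))
```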
The energy dependence of the three input variables for simulated events is depicted in Figs. 4, 5 and 6. Good correlation with energy was found for all three observables. A sample of 300,000 simulated neutrino events was used for the determination of the response matrix. This number corresponds to approximately ten times the livetime of IceCube in the 59-string configuration. The sample was obtained by sampling events according to their atmospheric weights. The energy distribution of simulated events thus matches that of an atmospheric neutrino spectrum.
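Sampling simulated events according to their atmospheric weights can be sketched with the standard library; the energies and weights below are illustrative placeholders for a steeply falling spectrum:

```python
import random

def sample_to_atmospheric_spectrum(events, atm_weights, n, seed=7):
    """Resample events with probability proportional to their weights."""
    rng = random.Random(seed)
    return rng.choices(events, weights=atm_weights, k=n)

energies = [100.0, 1_000.0, 10_000.0]      # GeV, illustrative bins
weights = [1.0, 1e-1, 1e-2]                # steeply falling toy weights
resampled = sample_to_atmospheric_spectrum(energies, weights, 10_000)
```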
Verification
The verification of the unfolding result consists of two different tests. The first test is based on multiple unfoldings of a specified number of simulated events, which are drawn at random. This kind of test can be accessed via Truee [3]. The second test is based on re-weighting simulated events according to the unfolded spectrum of atmospheric ν_μ. Both tests were successfully carried out and are individually addressed in the following.

The result of the first test is shown in Fig. 7. Within this test a fraction of simulated events is drawn at random. For every bin the unfolding result is then compared to the number of injected events in that bin. For the analysis reported here 500 test unfoldings were carried out. The number of injected events from the Monte Carlo distribution is depicted on the x-axis of Fig. 7 and the number of unfolded events is shown on the y-axis.
The individual populations observed in the figure correspond to the individual energy bins of the final unfolding result. The line-like structures observed for small event numbers are due to the fact that only integers are possible as event number for the true MC distributions, whereas real numbers can be returned as the unfolding result for the individual bins.
The rather large deviation between the unfolding result and the number of injected events obtained for the highest energy bins is a result of the steeply falling spectrum of atmospheric neutrinos and the applied bootstrapping procedure. Due to the small number of expected events in the last bin, either 0 or 1 events are drawn randomly from the true distribution. Two or more events are only drawn in rather rare cases. Based on the response matrix, which accounts for the limited statistics in the highest energy bins by using ten times more events compared to experimental data, only a fraction of an event is reconstructed for the highest energy bin. As the statistical uncertainties derived in Truee fail to account for the difference between the predicted bin content and the number of injected events, large deviations are observed. This further implies that an overestimation is obtained in case no events are present in the last bin on experimental data. As soon as one event is present in this bin, an underestimation is observed.
Within Truee the statistical uncertainties are computed as the square root of the diagonal elements of the covariance matrix. This test can therefore be used to validate the statistical uncertainties returned by the algorithm. The unfolding result is compared to the underlying distribution of events. If the difference between the unfolding result and the true value is covered by the statistical uncertainty returned by Truee, the statistical uncertainties are estimated correctly. For cases where the statistical uncertainty fails to cover this difference the statistical uncertainty is scaled up. For the analysis presented here an underestimation of the number of injected events is observed for the 9th and 10th bin, respectively. This underestimation is not covered by the statistical uncertainty, which is thus scaled up by a factor of 1.9 for the 9th, and a factor of 6.3 for the 10th bin.
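The scale-up of the statistical uncertainties can be illustrated as follows: find the smallest factor such that the scaled uncertainty covers the observed spread of unfolded-minus-injected differences in a bin. This is only a sketch of the idea, not the Truee implementation:

```python
def scale_factor_for_coverage(differences, sigma, target=0.68):
    """Smallest factor k (in steps of 10 %) such that k*sigma covers the
    target fraction of the |unfolded - injected| differences in a bin."""
    def coverage(k):
        return sum(1 for d in differences if abs(d) <= k * sigma) / len(differences)
    k = 1.0
    while coverage(k) < target:
        k *= 1.1
    return k

# Hypothetical per-unfolding differences for one energy bin, in events:
k_scale = scale_factor_for_coverage([0.2, 0.5, 1.5, 2.5], sigma=1.0)
```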
In a second test, simulated events are re-weighted according to the unfolding result (see Fig. 13 of Sect. 5.4). For a successful unfolding, data and simulated events are expected to agree after re-weighting. This test was carried out for the three variables used as input for the unfolding, but also for two additional energy dependent observables (energy loss per unit length dE/dX and total charge Q_tot). The outcome of the re-weighting is depicted in Figs. 8, 9, 10, 11 and 12. A good agreement between data and the re-weighted simulation is observed over the entire range of the individual parameters.
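The re-weighting itself is a bin-wise ratio: simulated events falling into energy bin i receive the weight unfolded[i]/simulated[i], after which data and simulation are compared in each observable. A sketch with hypothetical bin counts:

```python
def reweight_factors(unfolded_counts, simulated_counts):
    """Per-bin weights that rescale the simulation to the unfolded spectrum."""
    return [u / s if s > 0 else 0.0
            for u, s in zip(unfolded_counts, simulated_counts)]

# Hypothetical unfolded and simulated counts in three energy bins:
weights = reweight_factors([120.0, 40.0, 8.0], [100.0, 50.0, 10.0])
```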
Estimation of systematic uncertainties
As the unfolding result is obtained by using a response matrix determined from Monte Carlo simulation, the properties of the simulation will affect the unfolding result. In order to determine the effect of different simulation settings on the spectrum of atmospheric neutrinos, additional unfoldings were carried out using different sets of simulated events for the determination of the response matrix. For each simulation set used for the estimation of systematic uncertainties one property was changed with respect to the default simulation set. The setting for the efficiency of the DOMs was varied by ±10 % with respect to the nominal value. Within this simulation the efficiency of all DOMs was simultaneously increased or decreased, respectively. A shift of ±10 % with respect to the nominal value is slightly larger than the 7.7 % cited in [24] and is thus slightly more conservative.

Fig. 12: Simulated events (red) re-weighted to the unfolding result (Fig. 13) compared to real data (black) for the total charge collected in an event Q_tot.
Further systematic tests were carried out by using simulated events generated with a ±5 % increased and decreased pair production cross section, respectively. The value of ±5 % was chosen to be slightly more conservative than the theoretical uncertainty cited in [25]. The modeling of the ice was varied as well, by using the SPICE Mie ice model [21] instead of its predecessor SPICE-1.
The response matrices obtained for the individual systematic sets of data were then applied to real data in order to estimate the size of the systematic uncertainties. Prior to the application on real data, however, every setting was checked using the multiple unfoldings in Truee. No indications for instabilities were observed for any of the systematic tests.
Thus, five additional unfolding results were obtained on real data. The differences between the unfolding result obtained using the standard Monte Carlo set and the results obtained using the systematic Monte Carlo sets were computed bin-wise for every setting. The final uncertainties were calculated by adding the obtained differences in quadrature. This procedure further offers the advantage that all systematic uncertainties are derived on experimental data.
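The bin-wise combination can be sketched as follows: for each bin, take the differences of each systematic variant to the nominal result and add positive and negative deviations in quadrature separately. Names and numbers below are ours:

```python
def combine_systematics(nominal, variant_results):
    """Bin-wise quadrature sum of variant-minus-nominal differences,
    split into upward (+) and downward (-) uncertainties."""
    sys_up, sys_down = [], []
    for i, n in enumerate(nominal):
        deltas = [v[i] - n for v in variant_results]
        sys_up.append(sum(d ** 2 for d in deltas if d > 0) ** 0.5)
        sys_down.append(sum(d ** 2 for d in deltas if d < 0) ** 0.5)
    return sys_up, sys_down

# One-bin example with two hypothetical variants (+3 and -4 events):
up, down = combine_systematics([10.0], [[13.0], [6.0]])
```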
For energies up to 1 TeV the total systematic uncertainty is dominated by the uncertainty arising from the modeling of the ice. For energies above 1 TeV the uncertainties due to the DOM efficiencies and the modeling of the ice were found to be of approximately the same size. A more precise modeling of the ice and a better understanding of the DOM efficiency are therefore likely to reduce the systematic uncertainties of future measurements.
Unfolding result
The number of unfolded events as returned by Truee is depicted in Fig. 13. The energies of the bins were obtained as the mean of the distribution of simulated atmospheric neutrino events for every bin. This result can now be converted into a flux of atmospheric neutrinos by utilizing the effective area A_eff and the livetime of the detector as well as the solid angle. The effective area for this analysis is shown in Fig. 14. Figure 15 shows the acceptance-corrected and zenith-averaged flux of atmospheric neutrinos obtained with IceCube in the 59-string configuration of the detector. The spectrum covers the energy range from 100 GeV to 1 PeV. Six theoretical model expectations are shown for comparison. In general, a good agreement between the unfolded flux and the models is observed. Deviations of 3.2 σ and 2.6 σ are observed between the unfolded distribution and the theoretical model obtained using SIBYLL-2.1 as a hadronic interaction model, for the second (E_ν = 418 GeV) and third bin (E_ν = 1013 GeV), respectively. However, a correlation of the systematic uncertainties of these two bins should be noted.
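The conversion from unfolded counts to a flux divides by effective area, live-time, solid angle and energy bin width. A dimensional sketch; the numbers are placeholders, not the analysis values:

```python
def flux_from_counts(n_events, a_eff_cm2, livetime_s, solid_angle_sr, bin_width_gev):
    """Differential flux dPhi/dE in GeV^-1 cm^-2 sr^-1 s^-1."""
    return n_events / (a_eff_cm2 * livetime_s * solid_angle_sr * bin_width_gev)

livetime_s = 346 * 86_400.0              # 346 days of live-time, in seconds
solid_angle = 2 * 3.141592653589793      # lower hemisphere (up-going events)
phi = flux_from_counts(1_000.0, 1.0e4, livetime_s, solid_angle, 900.0)
```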
The acceptance-corrected flux of atmospheric neutrinos as well as the relative uncertainties are summarized in Table 1.
Unfolding of different angular regions
In order to study the dependence of the atmospheric neutrino flux on the zenith angle, additional unfoldings were carried out dividing the data into two separate sets according to the reconstructed zenith angle. The first zenith band contains events with a reconstructed zenith angle between 90 • and 120 • , whereas events with reconstructed zenith angles between 120 • and 180 • were used for the second zenith band. Using the 500 unfoldings of simulated events selected randomly it was found that no changes in the unfolding settings were required in order to unfold the two different angular regions. The same input parameters as for the unfolding of the full angular range were used and the systematic uncertainties were estimated in the same way as described above. Because of the smaller statistics the unfolding was not extended as high in energy as for the full sample. The upper end of the spectrum extends to E ν = 316 TeV for events with a reconstructed zenith angle between 90 • and 120 • . An upper end of E ν = 158 TeV is reached for events with a reconstructed zenith angle between 120 • and 180 • .
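Splitting the sample into the two zenith bands is a simple selection on the reconstructed angle; a sketch with hypothetical (zenith, energy) pairs:

```python
def split_by_zenith(events):
    """Split (zenith_deg, energy_gev) pairs into the two bands of the text:
    90-120 degrees (near-horizontal) and 120-180 degrees (more vertical)."""
    band_90_120 = [e for z, e in events if 90.0 <= z < 120.0]
    band_120_180 = [e for z, e in events if 120.0 <= z <= 180.0]
    return band_90_120, band_120_180

# Down-going events (zenith < 90 deg) fall into neither band:
sample = [(95.0, 300.0), (130.0, 150.0), (179.0, 500.0), (60.0, 80.0)]
near_horizontal, more_vertical = split_by_zenith(sample)
```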
The result of unfolding the two different angular regions is depicted in Fig. 16. In general, a good agreement between the unfolded distribution and the theoretical model is observed. The unfolding results for the two angular bins are summarized in Tables 2 and 3.

Fig. 16: Unfolded atmospheric neutrino flux for the energy range from 100 GeV to 316 TeV and for two different zenith bands. Events with a reconstructed zenith angle from 90° to 120° are depicted in black, whereas events with a reconstructed zenith angle from 120° to 180° are shown in red. The Honda H3a + ERS model is shown for comparison. Compared to the neutrino spectrum obtained for the full angular range, a smaller range in energy is covered, which is due to the smaller statistics of the two unfolded samples.

Table 2: Bin-wise summary of the acceptance-corrected unfolding result for zenith angles between 90° and 120°, which corresponds to the differential flux of atmospheric neutrinos, scaled by E² and given in GeV cm⁻² sr⁻¹ s⁻¹.

Table 3: Bin-wise summary of the acceptance-corrected unfolding result for zenith angles between 120° and 180°, which corresponds to the differential flux of atmospheric neutrinos, scaled by E² and given in GeV cm⁻² sr⁻¹ s⁻¹.

Fig. 17: Comparison of the unfolding result obtained using IceCube in the 59-string configuration to previous experiments. At the low energy end of the spectrum the results of the Frejus experiment [7] are depicted as black squares for ν_μ, whereas the Frejus results for ν_e are shown as hollow squares. The unfolding results obtained with the AMANDA experiment [8] are shown as black triangles. Results from the ANTARES neutrino telescope [9] are depicted in blue. The ν_e spectrum obtained using IceCube in the 79-string configuration [31] is shown as green triangles. The results of the analysis presented here are shown as red circles.
Theoretical models are shown for comparison.

Comparison to previous experiments

Figure 17 shows the results of the measurement presented in this paper, depicted as red circles, in the wider context of measurements obtained with previous experiments. We find that the results derived in this measurement are in good agreement with both the theoretical models and previous measurements of the atmospheric ν_μ flux. Comparing our results to the spectrum obtained using the AMANDA detector, we find that the measurement extends to energies that are larger by almost an order of magnitude. The two measurements agree well within their estimated systematic uncertainties. Due to the different energy thresholds, the IceCube and Frejus spectra overlap only between 100 GeV and 1 TeV; both measurements agree within their error bars. Comparing the measurement presented in this paper to the results obtained with the ANTARES neutrino telescope [9], we find that both measurements are fully compatible within their systematic uncertainties. A gap in experimental data points exists at energies between 30 and 300 GeV. Within this energy region neutrino oscillations become important and, thus, the spectrum becomes more complicated. This gap can most likely be closed by utilizing the full capabilities of IceCube DeepCore, which has an energy threshold of 10 GeV [32]. The measurement presented here did not benefit from the more densely instrumented DeepCore strings, as only one such string had been deployed at the time of the measurement.
Summary
In this paper we presented the measurement of the atmospheric ν μ flux obtained using IceCube in the 59-string configuration. The unfolded spectrum of atmospheric muon neutrinos covers an energy range from 100 GeV to 1 PeV, thus covering four orders of magnitude in energy. Compared to the previous measurement of the atmospheric ν μ flux, which utilized the detector in the 40-string configuration, the analysis presented here extended the upper end of the atmospheric neutrino spectrum by more than a factor of two.
This increase in the accessible energy was achieved by using a dedicated event selection procedure, which utilized state of the art algorithms from the field of machine learning and data mining. Using a random forest preceded by a Minimum Redundancy Maximum Relevance variable selection we were able to reject 99.9999 % of the incoming background events. At this background rejection 27,771 atmospheric neutrino candidates were detected in 346 days of IceCube-59. This corresponds to 80.3 neutrino events per day, which is a significant improvement over the 49.3 neutrino events per day reported in [2]. The purity of the final neutrino sample was estimated to be (99.59 +0.36 −0.37 ) %. Taking into account the excellent agreement between expectations derived on the basis of simulated events and results obtained on experimental data (see Fig. 2), we find that the combination of a random forest and an MRMR can be applied to real-life problems, delivering excellent results in terms of both background rejection and signal efficiency.
An energy spectrum of the atmospheric ν_μ was obtained using the new unfolding software Truee. The unfolding result was validated using a bootstrapping procedure implemented in Truee. A test using multiple unfoldings of simulated neutrino events selected at random yielded a very good agreement between the unfolding result and the true distribution of events, thus validating the overall stability of the unfolding process. Comparing the unfolding results to theoretical models, one finds that no statement on a possible contribution of a prompt and/or astrophysical component to the overall flux of atmospheric neutrinos can be made, due to the relatively large uncertainties at high energies.
Additional years of measurements with IceCube in the 79-string and in the 86-string configurations are likely to confirm the results from [28] in spectral measurements. It is further expected that the systematic uncertainties will decrease due to a better understanding of systematic effects and due to the homogeneous shape of the detector.
In summary we find that the data analysis chain presented in this paper yields highly stable results for both event selection and the reconstruction of the spectrum. The entire analysis procedure can therefore be applied to all other sets of IceCube data with only minor changes. The analysis chain is especially well suited for measurements of the atmospheric neutrino flux, where future analyzers only have to account for the different detector geometry.
Fig. 1: Stability of the MRMR variable selection as a function of the number of variables considered. The Jaccard and Kuncheva's indices were used as stability measures. Both stability measures increase with the number of variables considered and are well above 0.7, indicating a stable selection, if 25 or more variables are selected; MRMR can therefore be considered stable once this threshold is exceeded.
Fig. 2: Number of trees classifying an event as signal. Atmospheric neutrinos are depicted in blue, whereas atmospheric muons are shown in red. Experimental data is shown in black, whereas the sum of simulated signal and background events is depicted in green. The sum of simulated signal and background events is found to agree well with the distribution of experimental data, indicating a stable performance of the random forest.

Fig. 3: Same as Fig. 2, zoomed into the region where the final selection cut is considered.
Fig. 4: Neutrino energy E vs. the number of hit DOMs (N_Ch) for the simulated events used for the determination of the response matrix.

Fig. 5: Neutrino energy E vs. the estimated track length inside the detector L_dir for the simulated events used for the determination of the response matrix.

Fig. 6: Neutrino energy E vs. the number of direct photons N_ph,dir for the simulated events used for the determination of the response matrix.
Fig. 7: Results of 500 unfoldings for all bins. The x-axis depicts the number of simulated events, whereas the number of unfolded events is shown on the y-axis. Unfoldings where the difference between the true number of events in a certain bin and the unfolding result for that bin lies within the statistical uncertainty returned by Truee are shown in red. Unfoldings where this is not the case are depicted in black. The energy spectrum of the simulated events corresponds to an atmospheric spectrum. In general, we find that the number of unfolded events is highly correlated with the true number of events in a certain unfolding. The individual populations observed in the plot correspond to the individual energy bins of the unfolded distribution.
Fig. 8: Simulated events (red) re-weighted to the unfolding result (Fig. 13) compared to real data (black) for the number of hit DOMs N_Ch.

Fig. 9: Simulated events (red) re-weighted to the unfolding result (Fig. 13) compared to real data (black) for the estimated track length inside the detector L_dir.

Fig. 10: Simulated events (red) re-weighted to the unfolding result (Fig. 13) compared to real data (black) for the number of direct photons N_ph,dir.
Fig. 11: Simulated events (red) re-weighted to the unfolding result (Fig. 13) compared to real data (black) for the energy loss per unit length dE/dX.
The model by Honda et al. [26] (Honda2006), extrapolated to higher energies, is shown as a solid black line; it models only the conventional atmospheric neutrino flux. The model by Honda et al. together with a model of the prompt component by Enberg et al. [27] (ERS) is shown as a solid green line. The recent best fit to an astrophysical flux obtained in the IceCube high energy starting event analysis (HESE) [28] is included as a third component in the blue dashed line. An additional modeling of the knee of the cosmic ray flux is included in the model labeled Honda H3a + ERS (solid blue line). Atmospheric neutrino flux predictions obtained from ANFlux [10] using QGSJET-II [29] and SIBYLL-2.1 [30] as hadronic interaction models are shown as a solid red line and a red dashed-dotted line, respectively.

Fig. 13: Number of unfolded events per bin as returned by Truee.

Fig. 14: Neutrino effective area for the analysis presented in this paper.

Fig. 15: Acceptance-corrected flux of atmospheric neutrinos from 100 GeV to 1 PeV, compared to several theoretical models (please see the text for more details on the individual models).

Compared to the IceCube-40 result the systematic uncertainties of the spectrum were reduced, especially at low and intermediate energies. The decreased error bars are due to a better understanding of systematic effects in IceCube. Due to the relatively large systematic uncertainties at high energies, no statement can be made about a possible contribution of neutrinos from the decay of charmed mesons. Furthermore, no statement about a possible contribution of neutrinos from astrophysical sources can be made in this analysis.
Table 1: Bin-wise summary of the acceptance-corrected unfolding result, which corresponds to the differential flux of atmospheric neutrinos, scaled by E² and given in GeV cm⁻² sr⁻¹ s⁻¹.

log10(E/GeV) | E²Φ          | σ_stat rel. (%) | σ_syst rel. (%)
2.25         | 2.54 × 10^−4 | ±2.5            | +63 / −53
2.62         | 0.97 × 10^−4 | ±2.3            | +19 / −49
3.01         | 3.06 × 10^−5 | ±3.2            | +32 / −42
3.39         | 1.00 × 10^−5 | ±4.4            | +65 / −28
3.78         | 3.64 × 10^−6 | ±4.5            | +69 / −43
4.17         | 1.01 × 10^−6 | ±6.7            | +60 / −40
4.56         | 2.65 × 10^−7 | ±13.1           | +66 / −37
4.96         | 6.44 × 10^−8 | ±19.0           | +54 / −52
5.36         | 1.85 × 10^−8 | +45.8 / −23.5   | +61 / −68
5.76         | 3.81 × 10^−9 | +163 / −26.0    | +130 / −68
The flux obtained for the zenith band from 90° to 120° is depicted in black, whereas the flux obtained for the zenith band from 120° to 180° is shown in red. The Honda2006 model, accounting for a different modeling of the knee and using the ERS model for the prompt component of the atmospheric flux, is shown for both angular regions for comparison.
Table 2:

log10(E/GeV) | E²Φ          | σ_stat rel. (%) | σ_syst rel. (%)
2.25         | 2.45 × 10^−4 | ±4.3            | +23 / −89
2.62         | 1.13 × 10^−4 | ±3.2            | +20 / −46
3.01         | 3.80 × 10^−5 | ±3.9            | +22 / −32
3.39         | 1.12 × 10^−5 | ±5.5            | +63 / −19
3.78         | 4.45 × 10^−6 | ±5.8            | +82 / −28
4.17         | 1.61 × 10^−6 | ±7.2            | +70 / −31
4.56         | 4.15 × 10^−7 | ±13.9           | +105 / −27
4.96         | 8.76 × 10^−8 | ±22.2           | +112 / −115
5.36         | 2.22 × 10^−8 | +58.2 / −29.1   | +129 / −94
A reconstructed zenith angle of 0 • corresponds to an event entering the detector from above (the South), whereas a reconstructed zenith angle of 180 • corresponds to an event entering the detector from below (the North).
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited. Funded by SCOAP 3 / License Version CC BY 4.0.
References

[1] A. Achterberg et al., Astropart. Phys. 26, 155 (2006). doi:10.1016/j.astropartphys.2006.06.007
[2] R. Abbasi et al., Phys. Rev. D 83(1), 012001 (2011). doi:10.1103/PhysRevD.83.012001
[3] N. Milke et al., Nucl. Instrum. Methods Phys. Res. A 697, 133 (2013). doi:10.1016/j.nima.2012.08.105
[4] T. DeYoung, Mod. Phys. Lett. A 24, 1543 (2009). doi:10.1142/S0217732309031417
[5] R. Abbasi et al., Nucl. Instrum. Methods Phys. Res. A 601, 294 (2009). doi:10.1016/j.nima.2009.01.001
[6] M.G. Aartsen et al., Phys. Rev. D 89(10), 102004 (2014). doi:10.1103/PhysRevD.89.102004
[7] K. Daum et al., Zeitschrift für Physik C Particles and Fields 66, 417 (1995). doi:10.1007/BF01556368
[8] R. Abbasi et al., Astropart. Phys. 34, 48 (2010). doi:10.1016/j.astropartphys.2010.05.001
[9] S. Adrián-Martínez et al., Eur. Phys. J. C 73, 2606 (2013). doi:10.1140/epjc/s10052-013-2606-4
[10] A. Fedynitch, J. Becker Tjus, P. Desiati, Phys. Rev. D 86(11), 114024 (2012)
[11] T.K. Gaisser, Cosmic Rays and Particle Physics (Cambridge University Press, Cambridge, 1991)
[12] J. Behringer et al. (Particle Data Group), Phys. Rev. D 86, 34-76 (2012)
[13] C. Ding, H. Peng, J. Bioinform. Comput. Biol. 3(2), 185-205 (2005)
[14] L. Kuncheva, in Artificial Intelligence and Applications (2007), pp. 421-427
[15] B. Schowe, K. Morik, in Ensembles in Machine Learning Applications, eds. by O. Okun, G. Valentini, M. Re (Springer, 2011), pp. 75-95
[16] J. Ahrens et al., Nucl. Instrum. Methods Phys. Res. A 524, 169 (2004). doi:10.1016/j.nima.2004.01.065
[17] L. Breiman, Mach. Learn. 45, 5 (2001)
[18] R.K. Bock et al., Nucl. Instrum. Methods Phys. Res. A 516, 511 (2004). doi:10.1016/j.nima.2003.08.157
[19] J.R. Hoerandel, N.N. Kalmykov, A.I. Pavlov, in Proceedings of International Cosmic Ray Conference 2003, vol. 1 (2003), p. 243
[20] D. Heck, J. Knapp, J.N. Capdevielle, G. Schatz, T. Thouw, CORSIKA: a Monte Carlo code to simulate extensive air showers (Forschungszentrum Karlsruhe GmbH, Karlsruhe, 1998). http://adsabs.harvard.edu/abs/1998cmcc.book.....H
[21] M.G. Aartsen et al., Nucl. Instrum. Methods Phys. Res. A 711, 73 (2013). doi:10.1016/j.nima.2013.01.054
[22] S. Fischer et al., Yale: Yet Another Learning Environment - Tutorial. Technical Report CI-136/02, Collaborative Research Center 531, University of Dortmund, Dortmund, Germany (2002)
[23] V. Blobel, Regularized unfolding for high-energy physics experiments. Technical Note TN361, OPAL (1996)
[24] R. Abbasi et al., Nucl. Instrum. Methods Phys. Res. A 618, 139 (2010). doi:10.1016/j.nima.2010.03.102
[25] R.P. Kokoulin, Nucl. Phys. B Proc. Suppl. 70, 475 (1999). doi:10.1016/S0920-5632(98)00475-7
[26] M. Honda et al., Phys. Rev. D 75(4), 043006 (2007). doi:10.1103/PhysRevD.75.043006
[27] R. Enberg, M.H. Reno, I. Sarcevic, Phys. Rev. D 78(4), 043005 (2008). doi:10.1103/PhysRevD.78.043005
[28] M.G. Aartsen et al., Science 342, 1 (2013). doi:10.1126/science.1242856
[29] S. Ostapchenko, Nucl. Phys. B Proc. Suppl. 151, 143 (2006). doi:10.1016/j.nuclphysbps.2005.07.026
[30] E.J. Ahn et al., Phys. Rev. D 80(9), 094003 (2009). doi:10.1103/PhysRevD.80.094003
[31] M.G. Aartsen et al., Phys. Rev. Lett. 110(15), 151105 (2013). doi:10.1103/PhysRevLett.110.151105
[32] R. Abbasi et al., Astropart. Phys. 35, 615 (2012). doi:10.1016/j.astropartphys.2012.01.004
|
[] |
[
"VERTEX TYPES IN THRESHOLD AND CHAIN GRAPHS",
"VERTEX TYPES IN THRESHOLD AND CHAIN GRAPHS"
] |
[
"M Andelić ",
"E Ghorbani ",
"S K Simić "
] |
[] |
[] |
A graph is called a chain graph if it is bipartite and the neighborhoods of the vertices in each color class form a chain with respect to inclusion. A threshold graph can be obtained from a chain graph by making adjacent all pairs of vertices in one color class. Given a graph G, let λ be an eigenvalue (of the adjacency matrix) of G with multiplicity k ≥ 1. A vertex v of G is a downer, or neutral, or Parter depending whether the multiplicity of λ in G − v is k − 1, or k, or k + 1, respectively. We consider vertex types in the above sense in threshold and chain graphs. In particular, we show that chain graphs can have neutral vertices, disproving a conjecture by Alazemi et al.2000 Mathematics Subject Classification. 05C50.
|
10.1016/j.dam.2019.02.040
|
[
"https://arxiv.org/pdf/1803.00245v1.pdf"
] | 119,127,471 |
1803.00245
|
19c2c2861e914f350b71e5a45c2fee1eae3cc61b
|
VERTEX TYPES IN THRESHOLD AND CHAIN GRAPHS
1 Mar 2018
M Andelić
E Ghorbani
S K Simić
VERTEX TYPES IN THRESHOLD AND CHAIN GRAPHS
1 Mar 2018In honour of Domingos M. Cardoso on the occasion of his 65th birthdayarXiv:1803.00245v1 [math.CO]
A graph is called a chain graph if it is bipartite and the neighborhoods of the vertices in each color class form a chain with respect to inclusion. A threshold graph can be obtained from a chain graph by making adjacent all pairs of vertices in one color class. Given a graph G, let λ be an eigenvalue (of the adjacency matrix) of G with multiplicity k ≥ 1. A vertex v of G is a downer, or neutral, or Parter depending on whether the multiplicity of λ in G − v is k − 1, or k, or k + 1, respectively. We consider vertex types in the above sense in threshold and chain graphs. In particular, we show that chain graphs can have neutral vertices, disproving a conjecture by Alazemi et al. 2000 Mathematics Subject Classification: 05C50.
Introduction
This paper is a successor of [4], in which vertex types (see the Abstract) in the lexicographic products of an arbitrary graph over cliques and/or co-cliques were investigated. Such a class of graphs includes threshold graphs and chain graphs as particular instances. Both of these types (or classes) of graphs were discovered, and also rediscovered, by various researchers in different contexts (see, for example, [5,6,12], and references therein). Needless to say, they were given different names, mostly depending on the applications in which they arise. It is also noteworthy that threshold graphs are a subclass of cographs, i.e. of P_4-free graphs. Recall that threshold graphs are {P_4, 2K_2, C_4}-free graphs, while chain graphs are {2K_2, C_3, C_5}-free graphs; see [1,3] for more details. Note that if these graphs are not connected then (since 2K_2 is forbidden) at most one of their components is non-trivial (the others are trivial, i.e. isolated vertices). Moreover, stars are the only connected graphs which belong to both of the two classes of graphs under consideration.
Recall that these graphs play a very important role in Spectral Graph Theory, since the maximizers for the largest eigenvalue of the adjacency matrix (for graphs of fixed order and size, either connected or disconnected) belong to these classes (threshold graphs in the general case, and chain graphs in the bipartite case). Such graphs (in both classes) have a very specific structure (embodied in the nesting property), and this fact enables us to tell more about the type of certain vertices. Here, we also disprove Conjecture 3.1 from [1].
Throughout, we will consider simple graphs, i.e. finite undirected graphs without loops or multiple edges. In addition, without loss of generality, we will assume that any such graph is connected. For a graph G we denote its vertex set by V (G), and by n = |V (G)| its order. An n × n matrix A(G) = [a ij ] is its adjacency matrix if a ij = 1 whenever vertices i and j are adjacent, or a ij = 0 otherwise. For a vertex v of G, let N(v) denote the neighborhood of v, i.e. the set of all vertices of G adjacent to v.
The eigenvalues of G are the eigenvalues of its adjacency matrix. In non-increasing order they are denoted by
λ 1 (G) ≥ λ 2 (G) ≥ · · · ≥ λ n (G), or by µ 1 (G) > µ 2 (G) > · · · > µ r (G)
if only distinct eigenvalues are considered. If understandable from the context we will drop out graph names from the notation of eigenvalues (or other related objects). The eigenvalues comprise (together with multiplicities, say k 1 , k 2 , . . . , k r , respectively) the spectrum of G, denoted by Spec(G). The characteristic polynomial of G, denoted by φ(x; G), is the characteristic polynomial of its adjacency matrix. Both, the spectrum and characteristic polynomial of a graph G are its invariants. Further on, all spectral invariants (and other relevant quantities) associated to the adjacency matrix will be prescribed to the corresponding graph. For a given eigenvalue λ ∈ Spec(G), mult(λ, G) denotes its multiplicity, while E(λ; G) its eigenspace (provided G is a labeled graph). The equation Ax = λx, is called the eigenvalue equation for λ. Here A is the adjacency matrix, while x a λ-eigenvector also of the labeled graph G. If G is of order n, then x can be seen as an element of R n , or a mapping x : V (G) → R n (so its i-th entry can be denoted by x i or x(i)). Eigenspaces (as the eigenvector sets) are not graph invariants, since the eigenvector entries become permuted if the vertices of G are relabeled.
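Since the characteristic polynomial is a graph invariant, it can be computed exactly and checked to be unaffected by relabeling. A minimal sketch in Python (our own illustration, not from the paper; the routine is the standard Faddeev–LeVerrier algorithm, and P_3 is our example graph):

```python
from fractions import Fraction

def charpoly(A):
    """Coefficients [1, c_{n-1}, ..., c_0] of det(xI - A),
    computed exactly by the Faddeev-LeVerrier algorithm."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    M = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]  # identity
    coeffs = [Fraction(1)]
    for k in range(1, n + 1):
        # M <- A @ M
        M = [[sum(A[i][t] * M[t][j] for t in range(n)) for j in range(n)]
             for i in range(n)]
        c = -sum(M[i][i] for i in range(n)) / k   # next coefficient
        coeffs.append(c)
        for i in range(n):                        # M <- M + c I
            M[i][i] += c
    return coeffs

# The path P3 (vertex 1 is the center): phi(x; P3) = x^3 - 2x.
P3 = [[0, 1, 0],
      [1, 0, 1],
      [0, 1, 0]]
# A relabelled copy of P3 (center is now vertex 2): same polynomial.
P3b = [[0, 0, 1],
       [0, 0, 1],
       [1, 1, 0]]
print(charpoly(P3))   # [1, 0, -2, 0], i.e. x^3 - 2x
print(charpoly(P3b))  # identical: the polynomial is a graph invariant
```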
An eigenvalue λ ∈ Spec(G) is main if the corresponding eigenspace E(λ; G) is not orthogonal to all-1 vector j; otherwise, it is non-main.
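The main/non-main distinction can be decided exactly: for a symmetric matrix A, the eigenspace E(λ) is orthogonal to the all-1 vector j if and only if j lies in the row space of A − λI, i.e. appending j as an extra row does not raise the rank. A small sketch (our own code, not from the paper), using C_4, whose nonzero spectrum {2, 0, −2} is integral:

```python
from fractions import Fraction

def rank(rows):
    # Exact rank over the rationals via Gaussian elimination with Fractions.
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def is_main(A, lam):
    n = len(A)
    B = [[A[i][j] - (lam if i == j else 0) for j in range(n)] for i in range(n)]
    j = [1] * n
    # non-main  <=>  E(lam) is orthogonal to j  <=>  j in rowspace(A - lam*I)
    return rank(B + [j]) > rank(B)

C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(is_main(C4, 2), is_main(C4, 0), is_main(C4, -2))  # True False False
```

Indeed, only the Perron eigenvalue 2 of C_4 is main; its eigenvector is j itself, while the eigenspaces of 0 and −2 are orthogonal to j.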
Given a graph G, let λ be its eigenvalue of multiplicity k ≥ 1 and v ∈ V(G). Then v is a downer, or neutral, or Parter vertex of G, depending on whether the multiplicity of λ in G − v is k − 1, or k, or k + 1, respectively. Recall that neutral and Parter vertices of G are also called Fiedler vertices. For more details about the above vertex types see, for example, [19].

Remark 1.1. (Sum rule) Let x be a λ-eigenvector of a graph G. Then the entries of x satisfy the following equalities:

(1) λx(v) = Σ_{u∼v} x(u), for all v ∈ V(G).

From (1) it follows that if λ ≠ 0, then N(u) = N(v) implies that x(u) = x(v), and if λ ≠ −1, then N(u) ∪ {u} = N(v) ∪ {v} implies that x(u) = x(v).
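The sum rule (1) can be checked entry by entry on any eigenpair. A quick sanity check on a small graph of our own choosing (the 6-cycle C_6, with λ = 1 and the eigenvector x = (2, 1, −1, −2, −1, 1)):

```python
# Sum rule (1): lambda * x(v) equals the sum of x over the neighbours of v.
n = 6
adj = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}  # neighbours in C6
x = [2, 1, -1, -2, -1, 1]                                # eigenvector for lam = 1
lam = 1

ok = all(lam * x[v] == sum(x[u] for u in adj[v]) for v in range(n))
print(ok)  # True
```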
In the sequel, we will need the following interlacing property for graph eigenvalues (or, more generally, eigenvalues of Hermitian matrices; see [8]).
Theorem 1.1. Let G be a graph of order n and G ′ be an induced subgraph of G of order n ′ . If λ 1 ≥ λ 2 ≥ · · · ≥ λ n and λ ′ 1 ≥ λ ′ 2 ≥ · · · ≥ λ ′ n ′ are their eigenvalues respectively, then (2) λ i ≥ λ ′ i ≥ λ n−n ′ +i for i = 1, 2, . . . , n ′ . In particular, if n ′ = n − 1, then
λ 1 ≥ λ ′ 1 ≥ λ 2 ≥ λ ′ 2 ≥ · · · ≥ λ n−1 ≥ λ ′ n−1 ≥ λ n .
In the case of equality in (2) (see [8,Theorem 2.5.1]) the following holds.
Lemma 1.1. If λ′_i = λ_i or λ′_i = λ_{n−n′+i} for some i ∈ {1, 2, . . . , n′}, then G′ has an eigenvector x′ for λ′_i such that the vector (0, x′) is an eigenvector of G for λ′_i, where 0 is a zero vector whose entries correspond to the vertices from V(G) \ V(G′).
Remark 1.2.
A vertex v is a downer for a fixed eigenvalue λ if there exists in the corresponding eigenspace an eigenvector whose v-th component is non-zero; otherwise, it is a Fiedler vertex. Indeed, let W be the eigenspace corresponding to λ. If for each x ∈ W we have x(v) = 0, then v cannot be a downer vertex, as for any x ∈ W the vector x′ obtained by deleting the v-th component is a λ-eigenvector of G − v, and therefore we have

mult(λ, G − v) ≥ dim {x′ : x ∈ W} = dim W = mult(λ, G).

The rest of the paper is organized as follows: in Section 2 we give some particular results about vertex types in threshold graphs, while in Section 3 we put the focus on chain graphs and, among other things, disprove Conjecture 3.1 from [1], which states that in any chain graph every vertex is a downer with respect to every non-zero eigenvalue. Besides, we point out that some weak versions of the same conjecture are true.
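The downer/neutral/Parter classification can be computed directly from the definition, comparing mult(λ, G) and mult(λ, G − v) via exact rank computations over the rationals. A sketch (our own helper, not from the paper), checked on the star K_{1,3} with λ = 0, where the center is a Parter vertex and every leaf is a downer:

```python
from fractions import Fraction

def rank(rows):
    # Exact rank over the rationals via Gaussian elimination.
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def mult(lam, A):
    # Multiplicity of lam as an eigenvalue: n - rank(A - lam*I).
    n = len(A)
    return n - rank([[A[i][j] - (lam if i == j else 0) for j in range(n)]
                     for i in range(n)])

def vertex_type(A, v, lam):
    keep = [i for i in range(len(A)) if i != v]
    sub = [[A[i][j] for j in keep] for i in keep]   # adjacency matrix of G - v
    d = mult(lam, sub) - mult(lam, A)               # in {-1, 0, 1} by interlacing
    return {-1: "downer", 0: "neutral", 1: "Parter"}[d]

# Star K_{1,3}: vertex 0 is the center, 1..3 are leaves; mult(0, K_{1,3}) = 2.
star = [[0, 1, 1, 1],
        [1, 0, 0, 0],
        [1, 0, 0, 0],
        [1, 0, 0, 0]]
print(mult(0, star), vertex_type(star, 0, 0), vertex_type(star, 1, 0))
# 2 Parter downer
```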
Vertex types in threshold graphs
Any (connected) threshold graph G is a split graph, i.e., it admits a partition of its vertex set into two subsets, say U and V, such that the vertices of U induce a co-clique, while the vertices of V induce a clique. All other edges join a vertex in U with a vertex in V. Moreover, if G is connected, then both U and V are partitioned into h ≥ 1 non-empty cells such that U = ∪_{i=1}^{h} U_i and V = ∪_{i=1}^{h} V_i, and the following holds for (cross) edges: each vertex in U_i is adjacent to all vertices in V_1 ∪ · · · ∪ V_i (a nesting property). Accordingly, connected threshold graphs are also called nested split graphs (or NSGs for short). If m_i = |U_i| and n_i = |V_i|, then we write

(3) G = NSG(m_1, . . . , m_h; n_1, . . . , n_h)

(see Fig. 1). We denote by M_h = Σ_{i=1}^{h} m_i the size of U, and by N_h = Σ_{i=1}^{h} n_i the size of V.

Figure 1. The threshold graph G = NSG(m_1, . . . , m_h; n_1, . . . , n_h).
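The nesting structure behind (3) translates directly into an adjacency matrix. A possible construction (our own helper, not from the paper), with sanity checks on the small cases NSG(2; 1) = K_{1,2} and NSG(1; 2) = K_3:

```python
def nsg(m, n):
    """Adjacency matrix of the nested split graph NSG(m_1..m_h; n_1..n_h):
    U-vertices form a co-clique, V-vertices a clique, and each vertex
    of U_i is adjacent to every vertex of V_1 u ... u V_i."""
    h = len(m)
    uc = [i for i in range(h) for _ in range(m[i])]   # cell index per U-vertex
    vc = [i for i in range(h) for _ in range(n[i])]   # cell index per V-vertex
    M, N = len(uc), len(vc)
    A = [[0] * (M + N) for _ in range(M + N)]
    for a in range(N):                                # clique on V
        for b in range(a + 1, N):
            A[M + a][M + b] = A[M + b][M + a] = 1
    for i in range(M):                                # nesting (cross) edges
        for j in range(N):
            if vc[j] <= uc[i]:
                A[i][M + j] = A[M + j][i] = 1
    return A

print([sum(row) for row in nsg([2], [1])])   # K_{1,2}: degrees [1, 1, 2]
print([sum(row) for row in nsg([1], [2])])   # K_3: degrees [2, 2, 2]
```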
The following Theorem states the essential spectral properties of threshold graphs (see [1,15,18]).
Theorem 2.1. Let G = NSG(m 1 , . . . , m h ; n 1 , . . . , n h ).
Then the spectrum of G contains:
• h positive simple eigenvalues;
• h − 1 simple eigenvalues less than −1 if m h = 1, or otherwise if m h ≥ 2, h simple eigenvalues less than −1; • eigenvalue 0 of multiplicity M h −h, and −1 of multiplicity N h − h + 1 if m h = 1, or of multiplicity N h − h if m h > 1.
In addition, if λ ≠ 0, −1 then λ is a main eigenvalue. Recall that any vertex of a connected graph is a downer for the largest eigenvalue, see [10, Proposition 1.3.9]. In addition, if λ ≠ 0, −1, then the corresponding eigenvector x is unique (up to scalar multiple) and constant on each of the sets U_i and V_i (i = 1, . . . , h); in particular, if m_h = 1 then it is constant on the set U_h ∪ V_h. These facts will be used repeatedly further on without any recall.

Proof (of Theorem 2.2). Let u_1 ∈ U_1 and v_1 ∈ V_1. Then, by the sum rule, λx(u_1) = n_1 x(v_1). Since λ ≠ 0, −1, u_1 and v_1 are both downer or both Fiedler vertices (see Remark 1.2). Let X = Σ_{w∈V(G)} x(w), and by way of contradiction assume that u_1 and v_1 are both Fiedler vertices, i.e. x(u_1) = x(v_1) = 0. Again, by the sum rule, we have λx(v_1) = X − x(v_1), and therefore X = 0, a contradiction since λ (≠ 0, −1) is a simple and main eigenvalue (see Theorem 2.1).
Let u_h ∈ U_h, v_h ∈ V_h and Y = Σ_{w∈V_1∪···∪V_h} x(w). Then λx(u_h) = Y and λx(v_h) = Y − x(v_h) + m_h x(u_h). For a contradiction, let x(u_h) = 0. Then it easily follows that x(v_h) = 0. We next claim that for 2 ≤ i ≤ h, x(u_i) = x(v_i) = 0 implies x(u_{i−1}) = x(v_{i−1}) = 0. To see this, since x(v_i) = 0, by the sum rule we obtain

λx(u_i) = Y − Σ_{j=i+1}^{h} n_j x(v_j) = Y − Σ_{j=i}^{h} n_j x(v_j) = λx(u_{i−1}),

and therefore λx(u_{i−1}) = 0. Similarly, since x(u_i) = 0 and

λx(v_i) = Y − x(v_i) + Σ_{j=i}^{h} m_j x(u_j) = Y − x(v_i) + Σ_{j=i−1}^{h} m_j x(u_j) = (λ + 1)x(v_{i−1}),

it follows that x(v_{i−1}) = 0. Consequently, we obtain x(u_h) = · · · = x(u_1) = 0 and x(v_h) = · · · = x(v_1) = 0, i.e. x = 0, a contradiction.
This proves that all vertices in U h are downers for λ.
For the last part of the theorem, let λ ≠ −m_h. Then we have λx(u_h) = Y and λx(v_h) = Y − x(v_h) + m_h x(u_h), and so (λ + 1)x(v_h) = (λ + m_h)x(u_h). Hence, if x(v_h) = 0,

i = 1, . . . , h − 1, at least one of U_i, U_{i+1} (resp. V_i, V_{i+1}) contains only downer vertices for λ.
Proof (of Theorem 2.3). Recall first that all vertices within U_k or V_k (k = 1, . . . , h) are of the same type for λ, and that λ is a simple eigenvalue. Assume on the contrary that all vertices in U_i and U_{i+1} are neutral, and let x be a λ-eigenvector. Then, for u_i ∈ U_i and u_{i+1} ∈ U_{i+1}, x(u_i) = x(u_{i+1}) = 0. By the sum rule it easily follows that for any
v_{i+1} ∈ V_{i+1}, x(v_{i+1}) = 0. Next, we have

(4) λx(v_i) = Σ_{j=1}^{h} n_j x(v_j) − x(v_i) + Σ_{j=i}^{h} m_j x(u_j),

(5) λx(v_{i+1}) = Σ_{j=1}^{h} n_j x(v_j) − x(v_{i+1}) + Σ_{j=i+1}^{h} m_j x(u_j).
By subtracting (5) from (4) we obtain λx(v_i) = −x(v_i). Since λ ≠ −1, x(v_i) = 0 and consequently x(u_{i−1}) = 0. Proceeding in a similar way, we conclude that x(u_1) = 0, which contradicts Theorem 2.2.
The proof for vertices in V i , V i+1 is similar, and therefore omitted.
The next examples show that in a nested split graph G, neutral vertices for the same eigenvalue may be distributed in different U_i's, V_i's, and at the same time in both U and V. In what follows we assume that all vertices in U_s (resp. V_s) of a nested split graph G are neutral for some s with respect to some λ_i ≠ 0, −1. If so, we will show that this assumption implies some restrictions on the position of λ_i in the spectrum of G. If Spec(G′) = {λ′_1, . . . , λ′_{n′}}, then λ_i = λ′_j for some j ∈ {1, . . . , n′}. Moreover, j < i < n − n′ + j, and if i ≤ n′ then λ_i ≠ λ′_{n′}.

Proof. Let n′ = |V(G′)| = Σ_{j=s+1}^{h}(m_j + n_j) and let x be a λ-eigenvector of G. Denote by x′ the vector obtained from x by deleting all entries corresponding to the deleted vertices of G. Since 0 = λ_i x(u_s) = Σ_{j=1}^{s} n_j x(v_j),
for any k ≥ s + 1, we obtain
λ_i x′(u_k) = λ_i x(u_k) = Σ_{j=1}^{k} n_j x(v_j) = Σ_{j=s+1}^{k} n_j x(v_j) = Σ_{j=s+1}^{k} n_j x′(v_j),

λ_i x′(v_k) = λ_i x(v_k) = Σ_{j=1}^{h} n_j x(v_j) − x(v_k) + Σ_{j=k}^{h} m_j x(u_j) = Σ_{j=s+1}^{h} n_j x′(v_j) − x′(v_k) + Σ_{j=k}^{h} m_j x′(u_j),
and therefore x′ is an eigenvector of G′ for λ_i, i.e. λ_i ∈ Spec(G′). Suppose λ_i = λ′_j for some j ∈ {1, . . . , n′}. From interlacing it follows that

(6) λ_{n−n′+i} ≤ λ′_i ≤ λ_i = λ′_j, if i ≤ n′,

as well as

(7) λ_{n−n′+j} ≤ λ_i = λ′_j ≤ λ_j.

If in (6) at least one of the inequalities holds as an equality then, by Lemma 1.1, G′ has an eigenvector y′ for λ′_i such that (0, y′) is an eigenvector of G for λ′_i. By the sum rule, for any vertex in V_s, we obtain that the sum of all entries of y′ is 0, and accordingly λ′_i is a non-main eigenvalue of G′. Hence λ′_i = 0 or λ′_i = −1, which implies λ′_i < λ_i. Similarly, from (7) we conclude that λ′_j = λ_i would be a non-main eigenvalue of G′, a contradiction by Theorem 2.1. Therefore, the interlacing in these cases reads

(8) λ_{n−n′+i} ≤ λ′_i < λ_i, if i ≤ n′,

(9) λ_{n−n′+j} < λ′_j = λ_i < λ_j.
Moreover, (9) implies j < i < n − n′ + j. Also, if i ≤ n′, then λ_i ≠ λ′_{n′} holds: otherwise λ_{n−n′+i} ≤ λ′_i < λ_i = λ′_{n′} ≤ λ′_i, a contradiction.
If all vertices in V_s for some s are neutral for λ_i ≠ 0, −1, then bearing in mind that G − V_s = NSG(m_1, . . . , m_{s−1} + m_s, . . . , m_h; n_1, . . . , n_{s−1}, n_{s+1}, . . . , n_h), we can similarly conclude the following.

Proof (of Corollary 2.1). If λ_n = −1, then G is a complete graph and all vertices are downers for it. So, we assume that λ_n ≠ 0, −1. Suppose on the contrary that there exists at least one neutral vertex u for λ_n. If u ∈ U_s, then x(u) = 0, where x is a λ-eigenvector of G. As shown in the proof of Theorem 2.4, λ_n = λ′_j ∈ Spec(G′) for some j ∈ {1, . . . , n′} and λ_{n−n′+j} < λ′_j < λ_j, i.e. λ_{n−n′+j} < λ_n < λ_j, a contradiction. The proof is similar if u ∈ V_s for some s, and hence omitted here. Then
λ_{n″}(G″) < λ < λ_1(G″), where n″ = |V(G″)| = Σ_{i=1}^{s}(m_i + n_i).

Proof. The graph G″ is an induced subgraph of G with vertex set V(G″) = ∪_{j=1}^{s}(U_j ∪ V_j).
The adjacency matrix A of the whole graph is equal to the block matrix

A = [A″ B; Bᵀ A′],

where A′, A″ are the adjacency matrices of G′ and G″, respectively (see below). With the eigenvector partitioned accordingly as x = (x_1, x_2), the eigenvalue system reads:

(10) A″x_1 + Bx_2 = λx_1,
(11) Bᵀx_1 + A′x_2 = λx_2.
As we have seen in the proof of Theorem 2.4, λ is an eigenvalue of A′ with corresponding eigenvector x_2, and moreover x_1 ≠ 0. From (11) it follows that Bᵀx_1 = 0, i.e. the sum of some entries of x_1 is 0. From (10) we obtain (λI − A″)x_1 = Bx_2, and then by multiplying by x_1ᵀ from the left we obtain
x_1ᵀ(λI − A″)x_1 = 0, and consequently

min_{y≠0} yᵀ(λI − A″)y / yᵀy ≤ x_1ᵀ(λI − A″)x_1 / x_1ᵀx_1 ≤ max_{y≠0} yᵀ(λI − A″)y / yᵀy.

Hence λ_{n″}(λI − A″) ≤ 0 ≤ λ_1(λI − A″), where n″ = |V(G″)| = M_s + N_s. Since λ_{n″}(λI − A″) = λ − λ_1(G″) and λ_1(λI − A″) = λ − λ_{n″}(G″), it follows that

(12) λ_{n″}(G″) ≤ λ ≤ λ_1(G″).
Moreover, λ ≠ λ_1(G″): equality would hold if and only if x_1 is an eigenvector of G″ for λ_1(G″), which is not possible due to the condition (11) and the positivity of x_1 as an eigenvector corresponding to the largest eigenvalue of a connected graph. Similarly, if λ = λ_{n″}(G″), then x_1 is the corresponding eigenvector, and from (10) it follows that Bx_2 = 0. This implies that λ is a non-main eigenvalue of the nested split graph G′, a contradiction by Theorem 2.1.
Vertex types in chain graphs
Chain graphs can be defined as follows: a graph is a chain graph if and only if it is bipartite and the neighborhoods of the vertices in each color class form a chain with respect to inclusion. For this reason, a connected chain graph (as was the case with threshold graphs) is also called a double nested graph [5].
Non-zero eigenvalues of chain graphs are simple (see Theorem 3.1 below). As the subgraphs of any chain graph are also chain graphs, it follows that there is no Parter vertex in any chain graph with respect to non-zero eigenvalues. The question arises whether they can have neutral vertices. In [1] it is conjectured that this cannot be the case.
Conjecture 3.1. ([1]) In any chain graph, every vertex is a downer with respect to every non-zero eigenvalue.

We disprove Conjecture 3.1 in this section. Indeed, Theorems 3.4 and 3.5 will show that there are infinitely many counterexamples to this conjecture. In spite of that, a couple of weak versions of the conjecture are true.
Remark 3.1. (Structure of chain graphs) As it was observed in [5], the color classes of any chain graph G can be partitioned into h nonempty cells U_1, . . . , U_h and V_1, . . . , V_h such that N(u) = V_1 ∪ · · · ∪ V_{h−i+1} for any u ∈ U_i, 1 ≤ i ≤ h. If m_i = |U_i| and n_i = |V_i|, then we write G = DNG(m_1, . . . , m_h; n_1, . . . , n_h) (see Fig. 2).

Figure 2. The chain graph G = DNG(m_1, . . . , m_h; n_1, . . . , n_h).
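Analogously to the threshold case, the double nesting of Remark 3.1 can be encoded directly. A possible helper (ours, not the paper's), checked on DNG(1,1; 1,1), which is the path P_4, and on the half graph H(3):

```python
def dng(m, n):
    """Adjacency matrix of the chain graph DNG(m_1..m_h; n_1..n_h):
    each vertex of U_i is adjacent to every vertex of V_1 u ... u V_{h-i+1}."""
    h = len(m)
    uc = [i for i in range(h) for _ in range(m[i])]   # 0-based cell per U-vertex
    vc = [i for i in range(h) for _ in range(n[i])]   # 0-based cell per V-vertex
    M, N = len(uc), len(vc)
    A = [[0] * (M + N) for _ in range(M + N)]
    for i in range(M):
        for j in range(N):
            # cell uc[i]+1 of U is joined to V-cells 1 .. h - uc[i]
            if vc[j] + 1 <= h - uc[i]:
                A[i][M + j] = A[M + j][i] = 1
    return A

print(sorted(sum(row) for row in dng([1, 1], [1, 1])))        # P4: [1, 1, 2, 2]
print(sorted(sum(row) for row in dng([1, 1, 1], [1, 1, 1])))  # H(3): [1, 1, 2, 2, 3, 3]
```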
The spectrum of any chain graph has the following properties (see [1]):
• h positive simple eigenvalues greater than 1/2;
• h negative simple eigenvalues less than −1/2;
• the eigenvalue 0 of multiplicity M_h + N_h − 2h.
Remark 3.2. In contrast to threshold graphs, non-zero eigenvalues of chain graphs need not be main. For more information see [3].

Proof (of Theorem 3.2). Let x be any λ-eigenvector of G. Assume that u_1 ∈ U_1 and v_h ∈ V_h. By the sum rule, λx(v_h) = m_1 x(u_1). Since λ ≠ 0, u_1 and v_h are both downer or both neutral. Let X = Σ_{w∈V} x(w) and assume on the contrary that x(u_1) = x(v_h) = 0. By the sum rule, X = λx(u_1) = 0, and hence λx(u_2) = X − n_h x(v_h) = 0; consequently x(u_2) = 0 for any u_2 ∈ U_2, as well as x(v_{h−1}) = 0 for any v_{h−1} ∈ V_{h−1}. Next, for any u_3 ∈ U_3, λx(u_3) = X − n_{h−1} x(v_{h−1}) − n_h x(v_h) = 0.
It follows that x is zero on U 3 , too. Continuing this argument, it follows that x = 0, a contradiction.
The following proposition states some facts related to vertex types in chain graphs. The proofs are similar to those in Section 2 and therefore omitted here.
1 < s < h, λ_i ∈ Spec(G) \ {0}, n′_s = Σ_{j=1}^{s−1}(m_j + n_{h−j+1}), n″_s = n − n′_s, Spec(G′_s) = {λ′_1, . . . , λ′_{n′_s}}. Then:
• For any j = 1, . . . , h − 1, at least one of U_j, U_{j+1} contains only downer vertices for λ_i.
• If all vertices in U_s for some 2 < s < h − 1 are neutral for λ_i, then:
  - λ_i is an eigenvalue of G′_s and λ_i = λ′_j for some j ∈ {1, . . . , n′_s}. If λ_i is main, then j < i < n − n′_s + j. If i ≤ n′_s then λ_i ≠ λ′_{n′_s}.
  - λ_i ∈ [λ_{n″_s}(G″_s), λ_1(G″_s)).
  - If λ_i is a main eigenvalue then λ_i ∈ (λ_{n″_s}(G″_s), λ_1(G″_s)).
  - If λ_i ∉ ∪_{s=2}^{h−1} [λ_{n″_s}(G″_s), λ_1(G″_s)), then all vertices in V(G) are downer vertices for λ_i.
A chain graph for which |U_1| = · · · = |U_h| = |V_1| = · · · = |V_h| = 1 is called a half graph. Here we denote it by H(h). As we will see in what follows, specific half graphs provide counterexamples to Conjecture 3.1. In what follows, for convenience, we will use row vectors instead of column vectors, especially for eigenvectors. Let x := (x_1, . . . , x_h), where x_i = a_s if i ≡ s (mod 6), so that (since Σ_{i=1}^{6} a_i = 0) Σ_{i=1}^{ℓ} x_i = Σ_{i=1}^{s} a_i whenever ℓ ≡ s (mod 6).
Let {u_1, . . . , u_h} and {v_1, . . . , v_h} be the color classes of H(h). Let h = 6t + 4. We show that (x, x) satisfies the sum rule for λ = −1. By symmetry, we only need to show this for the u_i's. Let i = 6t′ + s for some 1 ≤ s ≤ 6. Then h − i + 1 = 6(t − t′) + 5 − s, and

Σ_{j: v_j∼u_i} x_j = Σ_{j=1}^{h−i+1} x_j = Σ_{j=1}^{5−s} a_j = −a_s = −x_i.
Now, let h = 6t + 1. We show that in this case (x, x) satisfies the sum rule for λ = 1. Let i = 6t′ + s for some 1 ≤ s ≤ 6. Then h − i + 1 = 6(t − t′) + 2 − s.

Proof (of Theorem 3.5). From Table 2, we observe that for 1 ≤ s ≤ 10:

s  | b_s | 8−s | Σ_{i=1}^{8−s} b_i | 3−s | Σ_{i=1}^{3−s} b_i
1  |  ω  |  7  | 1−ω | 2  | ω−1
2  | −1  |  6  | −ω  | 1  | ω
3  |  0  |  5  | 0   | 10 | 0
4  |  1  |  4  | ω   | 9  | −ω
5  | −ω  |  3  | ω−1 | 8  | 1−ω
6  | −ω  |  2  | ω−1 | 7  | 1−ω
7  |  1  |  1  | ω   | 6  | −ω
8  |  0  | 10  | 0   | 5  | 0
9  | −1  |  9  | −ω  | 4  | ω
10 |  ω  |  8  | 1−ω | 3  | ω−1

Note that, since Σ_{i=1}^{10} b_i = 0, if 1 ≤ ℓ ≤ h, 1 ≤ s ≤ 10 and ℓ ≡ s (mod 10), then Σ_{i=1}^{ℓ} x_i = Σ_{i=1}^{s} b_i.
Let h = 10t + 7. Then (x, x) satisfies the sum rule for λ = ω. Let i = 10t′ + s for some 1 ≤ s ≤ 10. Then h − i + 1 = 10(t − t′) + 8 − s, and

Σ_{j: v_j∼u_i} x_j = Σ_{j=1}^{h−i+1} x_j = Σ_{j=1}^{8−s} b_j = ω b_s = ω x_i.
Now, let h = 10t + 2. Assume that i = 10t′ + s for some 1 ≤ s ≤ 10. Then h − i + 1 = 10(t − t′) + 3 − s, and

Σ_{j: v_j∼u_i} x_j = Σ_{j=1}^{h−i+1} x_j = Σ_{j=1}^{3−s} b_j = −ω b_s = −ω x_i.
It follows that in this case (x, x) satisfies the sum rule for λ = −ω. (ii) Let x be an eigenvector for an eigenvalue λ ≠ 0 of a graph G with x_v = 0 for some vertex v. If we add a new vertex u with N(u) = N(v) and add a zero component to x corresponding to u, then the new vector is an eigenvector of the resulting graph for λ. So, we can extend any graph presented in Theorems 3.4 or 3.5 to construct infinitely many more counterexamples for Conjecture 3.1.
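The half-graph counterexamples can be verified independently with exact arithmetic. The sketch below (our own check; the vertex ordering and rank-based multiplicity computation are our choices) builds H(7), so h ≡ 1 (mod 6), confirms that λ = 1 is an eigenvalue (necessarily simple, by Theorem 3.1), and confirms that deleting u_2 leaves the multiplicity unchanged — i.e. u_2 is neutral for λ = 1 — while deleting u_1 ∈ U_1 (a downer, in line with Theorem 3.2) drops it to 0:

```python
from fractions import Fraction

def rank(rows):
    # Exact rank over the rationals via Gaussian elimination.
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def mult(lam, A):
    n = len(A)
    return n - rank([[A[i][j] - (lam if i == j else 0) for j in range(n)]
                     for i in range(n)])

def half_graph(h):
    # H(h): u_1..u_h, v_1..v_h with u_i ~ v_j iff j <= h - i + 1.
    A = [[0] * (2 * h) for _ in range(2 * h)]
    for i in range(h):
        for j in range(h):
            if i + j <= h - 1:          # 0-indexed form of j + 1 <= h - i
                A[i][h + j] = A[h + j][i] = 1
    return A

def delete(A, v):
    keep = [i for i in range(len(A)) if i != v]
    return [[A[i][j] for j in keep] for i in keep]

H7 = half_graph(7)
print(mult(1, H7))             # 1: lambda = 1 is a (simple) eigenvalue of H(7)
print(mult(1, delete(H7, 1)))  # 1: u_2 is neutral for lambda = 1
print(mult(1, delete(H7, 0)))  # 0: u_1 is a downer for lambda = 1
```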
From this and Lemma 1.1 it follows that, if mult(λ, G) = 1, then there exists a λ-eigenvector x with x(v) = 0 if and only if v is not a downer vertex for λ.
Remark 2.1. If λ ≠ 0, −1, then any vertex of a threshold graph is either a downer or neutral. Parter vertices may arise only for λ = 0 or −1. Any vertex-deleted subgraph G − v of a threshold graph G is a threshold graph as well. By Theorem 2.1 one can easily determine the multiplicities of 0 and −1 in both G and G − v and, consequently, the vertex type of v for λ = 0 or −1.
Theorem 2.2. Let G = NSG(m_1, . . . , m_h; n_1, . . . , n_h) and let λ ≠ 0, −1 be an eigenvalue of G other than the largest one. Then all vertices in U_1 ∪ V_1 are downers for λ. The same holds for vertices in U_h, and also in V_h unless λ = −m_h and m_h ≥ 2.
Theorem 2.3. Let G = NSG(m_1, . . . , m_h; n_1, . . . , n_h) and let λ ≠ 0, −1 be its eigenvalue. Then, for any
Example 2.1. If G = NSG(4, 1, 3, 1, 1; 1, 1, 1, 2, 1), then all vertices in U_2 and U_4 are neutral vertices for λ_3 = 1. If G = NSG(2, 4, 4, 2; 1, 1, 1, 2), then all vertices in V_2 and V_4 are neutral for λ_{16} = −2.
Example 2.2. In G = NSG(2, 2, 5, 1; 1, 1, 1, 1), all vertices in U_3 and in V_2 are neutral vertices for λ_2 = 1.
Theorem 2.4. Let G = NSG(m_1, . . . , m_h; n_1, . . . , n_h) be such that all vertices in U_s for some 2 ≤ s ≤ h − 1 are neutral for λ_i ≠ 0, −1. If G′ = NSG(m_{s+1}, . . . , m_h; n_{s+1}, . . . , n_h), n′ = |V(G′)| and Spec(
the induced subgraph of G obtained by deleting all vertices in U 1 , . . . , U s , V 1 , . . . , V s i.e. G ′ = NSG(m s+1 , . . . , m h ; n s+1 , . . . , n h ).
Theorem 2.5. Let G = NSG(m_1, . . . , m_h; n_1, . . . , n_h) be such that all vertices in V_s for some 2 ≤ s ≤ h are neutral for λ_i ≠ 0, −1. If H_s = NSG(m_1, . . . , m_{s−1} + m_s, . . . , m_h; n_1, . . . , n_{s−1}, n_{s+1}, . . . , n_h) and Spec(H_s) = {λ′_1, . . . , λ′_{n−n_s}}, then λ_i = λ′_j for some j ∈ {1, . . . , n − n_s}. Moreover, j < i < n_s + j, and if i ≤ n − n_s then λ_i ≠ λ′_{n−n_s}.

Corollary 2.1. Let G = NSG(m_1, . . . , m_h; n_1, . . . , n_h) be of order n. Then all vertices in V(G) are downer vertices for λ_n.
Theorem 2.6. Let G = NSG(m_1, . . . , m_h; n_1, . . . , n_h) be such that all vertices in U_s are neutral for λ ≠ 0, −1, and let G″ = NSG(m_1, . . . , m_s; n_1, . . . , n_s).
G′ = NSG(m_{s+1}, . . . , m_h; n_{s+1}, . . . , n_h) and G″, respectively, and B = [O_{M_s,n′}; J_{N_s,n′}] (a zero block stacked on an all-ones block), where n′ = |V(G′)|. The corresponding eigenvector x can be represented as x =
Corollary 2.2. Let G = NSG(m_1, . . . , m_h; n_1, . . . , n_h), G″_s = NSG(m_1, . . . , m_s; n_1, . . . , n_s), I_s = (λ_{n″_s}(G″_s), λ_1(G″_s)), where n″_s = |V(G″_s)|, and λ ∈ Spec(G). If λ ∉ ∪_{s=2}^{h−1} I_s, then all vertices in U are downer vertices for λ.

Example 2.3. Let G = NSG(1, 1, 5; 1, 1, 8). Then I_2 = (−1.48, 2.17), and besides λ_1 and λ_n, all vertices in U are downers for λ_{n−2} and λ_{n−1} as well.
Theorem 3.1. Let G = DNG(m_1, . . . , m_h; n_1, . . . , n_h). Then the spectrum of G is symmetric about the origin and it contains:
Theorem 3.2. Let G = DNG(m_1, . . . , m_h; n_1, . . . , n_h) be a chain graph. Then the vertices in U_1 ∪ U_h ∪ V_1 ∪ V_h are downers for any non-zero eigenvalue.
G = DNG(m_1, . . . , m_h; n_1, . . . , n_h), G′_s = DNG(m_1, . . . , m_{s−1}; n_{h−s+2}, . . . , n_h), G″_s = DNG(m_s, . . . , m_h; n_1, . . . , n_{h−s+1}),
= a_s, where we consider 5 − s and 2 − s modulo 6 as elements of {1, . . . , 6}.

= a_s = x_i. Now we give another class of counterexamples to Conjecture 3.1. For this, let ω² + ω − 1 = 0, and (b_1, . . . , b_{10}) := (ω, −1, 0, 1, −ω, −ω, 1, 0, −1, ω). Let x := (x_1, . . . , x_h), where x_i = b_s if i ≡ s (mod 10).
Theorem 3.5. In any half graph H(h), the vector (x, x) is an eigenvector for λ = ω if h ≡ 7 (mod 10), and it is an eigenvector for λ = −ω if h ≡ 2 (mod 10).
= −ωb s , where we consider 8 − s and 3 − s modulo 10 as elements of {1, . . . , 10}.
Remark 3.3. The following two facts deserve to be mentioned: (i) Given (x, x) as an eigenvector of H(h) for λ ∈ {±1, ±ω}, the vector (x, −x) is an eigenvector of H(h) for −λ. This gives more eigenvalues of H(h) with eigenvectors containing zero components.
then x(u_h) = 0 and we reach a contradiction as above. Consequently, all vertices in V_h are downers.

Remark 2.2. The following example shows that in the unresolved case, when λ = −m_h and m_h ≥ 2, vertices in V_h may be neutral. Let G = NSG(2, 2, 2; 2, 3, 2). Then all vertices in U_3 are downers, while all vertices in V_3 are neutral for λ = −2. So the unresolved case from Theorem 2.2 can be an exceptional one, and the following question arises: can we find an example when λ = −m_h and m_h ≥ 2 such that each vertex in V_h is a downer?
s (mod 6). In the next theorem, we show that the vector (x, x) (each x corresponds to a color class) is an eigenvector of a non-zero eigenvalue of H(h) for some h. In view of Remark 1.2, this disproves Conjecture 3.1.

Theorem 3.4. In any half graph H(h), the vector (x, x) is an eigenvector for λ = 1 if h ≡ 1 (mod 6), and it is an eigenvector for λ = −1 if h ≡ 4 (mod 6).

Proof. From Table 1, we observe that for 1 ≤ s ≤ 6,
Table 1. The values of Σ_{i=1}^{5−s} a_i and Σ_{i=1}^{2−s} a_i.

Note that, since Σ_{i=1}^{6} a_i = 0, if 1 ≤ ℓ ≤ h, 1 ≤ s ≤ 6 and ℓ ≡ s (mod 6), then
Table 2. The values of Σ_{i=1}^{8−s} b_i and Σ_{i=1}^{3−s} b_i.
Acknowledgments. The research of the second author was in part supported by a grant from IPM.
[1] A. Alazemi, M. Anđelić, S.K. Simić, Eigenvalue location for chain graphs, Linear Algebra Appl. 505 (2016), 194-210.
[2] M. Anđelić, S.K. Simić, Some notes on the threshold graphs, Discrete Math. 310 (2010), 2241-2248.
[3] M. Anđelić, E. Andrade, D.M. Cardoso, C.M. da Fonseca, S.K. Simić, D.V. Tošić, Some new considerations about double nested graphs, Linear Algebra Appl. 483 (2015), 323-341.
[4] M. Anđelić, F. Ashraf, C.M. da Fonseca, S.K. Simić, Vertex types in some lexicographic products of graphs, submitted.
[5] F.K. Bell, D. Cvetković, P. Rowlinson, S.K. Simić, Graphs for which the least eigenvalue is minimal, II, Linear Algebra Appl. 429 (2008), 2168-2179.
[6] A. Bhattacharya, S. Friedland, U.N. Peled, On the first eigenvalue of bipartite graphs, Electron. J. Combin. 15 (2008), #R144.
[7] A. Brandstädt, V.B. Le, J.P. Spinrad, Graph Classes: A Survey, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1999.
[8] A.E. Brouwer, W.H. Haemers, Spectra of Graphs, Springer, New York, 2012.
[9] D.M. Cardoso, M.A.A. Freitas, E.A. Martins, M. Robbiano, Spectra of graphs obtained by a generalization of the join graph operation, Discrete Math. 313 (2013), 733-741.
[10] D. Cvetković, P. Rowlinson, S. Simić, An Introduction to the Theory of Graph Spectra, London Mathematical Society Student Texts 75, Cambridge University Press, Cambridge, 2010.
[11] C.D. Godsil, B.D. McKay, A new graph product and its spectrum, Bull. Austral. Math. Soc. 18(1) (1978), 21-28.
[12] P.L. Hammer, U.N. Peled, X. Sun, Difference graphs, Discrete Appl. Math. 28 (1990), 35-44.
[13] F. Harary, The structure of threshold graphs, Riv. Mat. Sci. Econom. Social. 2 (1979), 169-172.
[14] R.A. Horn, C.R. Johnson, Matrix Analysis, Cambridge University Press, 2013.
[15] D. Jacobs, V. Trevisan, F. Tura, Eigenvalues and energy in threshold graphs, Linear Algebra Appl. 465 (2015), 412-425.
[16] P. Manca, On a simple characterisation of threshold graphs, Riv. Mat. Sci. Econom. Social. 2 (1979), 3-8.
[17] N.V.R. Mahadev, U.N. Peled, Threshold Graphs and Related Topics, Annals of Discrete Mathematics, North-Holland Publishing Co., Amsterdam, 1995.
[18] I. Sciriha, S. Farrugia, On the spectrum of threshold graphs, ISRN Discrete Mathematics 2011 (2011), Article ID 108509, 21 pages.
[19] S.K. Simić, M. Anđelić, C.M. da Fonseca, D. Živković, On the multiplicities of eigenvalues of graphs and their vertex deleted subgraphs: old and new results, Electron. J. Linear Algebra 30 (2015), 85-105.
|
[] |
[
"A New Loss Function for CNN Classifier Based on Pre-defined Evenly-Distributed Class Centroids",
"A New Loss Function for CNN Classifier Based on Pre-defined Evenly-Distributed Class Centroids"
] |
[
"Qiuyu Zhu \nSchool of Communication and Information Engineering\nShanghai University\nShanghaiCHINA\n",
"Pengju Zhang \nSchool of Communication and Information Engineering\nShanghai University\nShanghaiCHINA\n",
"Xin Ye \nSchool of Communication and Information Engineering\nShanghai University\nShanghaiCHINA\n"
] |
[
"School of Communication and Information Engineering\nShanghai University\nShanghaiCHINA",
"School of Communication and Information Engineering\nShanghai University\nShanghaiCHINA",
"School of Communication and Information Engineering\nShanghai University\nShanghaiCHINA"
] |
[] |
With the development of convolutional neural networks (CNNs) in recent years, the network structure has become more and more complex and varied, and has achieved very good results in pattern recognition, image classification, object detection and tracking. For CNNs used for image classification, in addition to the network structure, more and more research now focuses on improving the loss function, so as to enlarge the inter-class feature differences and reduce the intra-class feature variations as much as possible. Besides the traditional Softmax, typical loss functions include L-Softmax, AM-Softmax, ArcFace, and Center loss, etc. Based on the concept of predefined evenly-distributed class centroids (PEDCC) in the CSAE network, this paper proposes a PEDCC-based loss function called PEDCC-Loss, which can make the inter-class distance maximal and the intra-class distance small enough in the hidden feature space. Multiple experiments on image classification and face recognition have proved that our method achieves the best recognition accuracy, and network training is stable and easy to converge. Code is available at https://github.com/ZLeopard/PEDCC-Loss
|
10.1109/access.2019.2960065
|
[
"https://arxiv.org/pdf/1904.06008v2.pdf"
] | 119,295,412 |
1904.06008
|
e730b89ca182e3f3e5f1c4e1d8300bf3f8f564d5
|
A New Loss Function for CNN Classifier Based on Pre-defined Evenly-Distributed Class Centroids
Qiuyu Zhu
School of Communication and Information Engineering
Shanghai University
ShanghaiCHINA
Pengju Zhang
School of Communication and Information Engineering
Shanghai University
ShanghaiCHINA
Xin Ye
School of Communication and Information Engineering
Shanghai University
ShanghaiCHINA
A New Loss Function for CNN Classifier Based on Pre-defined Evenly-Distributed Class Centroids
Image Classification, Softmax, PEDCC, Loss Function
With the development of convolutional neural networks (CNNs) in recent years, network structures have become more and more complex and varied, and have achieved very good results in pattern recognition, image classification, object detection and tracking. For CNNs used for image classification, in addition to the network structure, more and more research now focuses on improving the loss function, so as to enlarge the inter-class feature differences and reduce the intra-class feature variations as much as possible. Besides the traditional Softmax, typical loss functions include L-Softmax, AM-Softmax, ArcFace, and Center loss, etc. Based on the concept of predefined evenly-distributed class centroids (PEDCC) in the CSAE network, this paper proposes a PEDCC-based loss function called PEDCC-Loss, which can make the inter-class distance maximal and the intra-class distance small enough in the hidden feature space. Multiple experiments on image classification and face recognition have proved that our method achieves the best recognition accuracy, and network training is stable and easy to converge. Code is available at https://github.com/ZLeopard/PEDCC-Loss
Introduction
In the past few years, convolutional neural networks (CNNs) have delivered excellent performance in many areas such as image classification, object detection, and face recognition. CNNs extract features from complex datasets through stacks of convolutional and pooling layers, and a final linear layer then performs the classification. Owing to the powerful feature expression and learning ability of CNNs, a variety of visual recognition tasks can be solved.
In order to address the drawbacks currently faced by CNNs, many researchers have proposed very effective solutions, such as data augmentation, regularization, dropout, batch normalization and various activation functions. Network architectures have also developed rapidly, from the early AlexNet [1] to VGGNet [2], and on to the deeper ResNet [3], ResNeXt [4], DenseNet [5] and SE-ResNet [6], so the advantages of CNNs are constantly being magnified.
Recent research has gradually extended to the design of the loss function, aiming for a more discriminative feature distribution, that is, one with compact intra-class clusters and well-separated inter-class clusters. Owing to the strong fitting ability of CNNs, such methods work well and improve classification accuracy, so more and more researchers have set out to optimize the loss function. Thanks to its clear theory, easy training, and good performance, the traditional cross-entropy loss is widely used in image classification, but it does not guarantee the optimized feature distribution mentioned above. The contrastive loss [7] and triplet loss [8] were proposed to strengthen the constraints on features; they can train on large-scale datasets without being limited by GPU memory, but they focus on local (pairwise or triplet) relations, leading to training difficulties and long convergence times. L-Softmax [9] introduces a margin parameter and modifies the original Softmax decision boundary, replacing ‖W_j‖‖x‖cos(θ_j) with ‖W_j‖‖x‖cos(mθ_j); this increases the learning difficulty, alleviates over-fitting, and produces a decision margin that makes the distribution more discriminative. AM-Softmax [10] sets ‖W‖ = ‖x‖ = 1, normalizing the last-layer weights and output features to reduce the influence of differences in image resolution and class sample counts; the Euclidean feature space is thereby converted into a cosine feature space, and cos(mθ) is replaced by cos θ − m, which makes back-propagation easier. The core of Center loss [11] is that, in each batch, a center is computed for every class and the distance between each sample and its class center is minimized; the resulting mean-square error is combined with the cross-entropy loss, and the class centers themselves are also trained by stochastic gradient descent.
However, the centers of similar classes are not well separated, and the distribution in Euclidean space is not uniform. For example, the class centers of classes '0' and '6' in MNIST [12] lie relatively close to each other.
In this paper, the PEDCC proposed in CSAE (Zhu et al., 2019) [13] is used to generate evenly distributed normalized class centroids, called PEDCC weights. We replace the weights of the final classification (linear) layer in the CNN with the PEDCC weights and keep them fixed during training, which maximizes the inter-class distance. We thus obtain a new loss function, called PEDCC-Loss, by applying the fixed PEDCC weights to AM-Softmax. At the same time, we add a constraint similar to Center loss [11] by computing the mean-square-error loss (MSE loss) between each sample feature and its PEDCC centroid. This optimizes the feature embedding to enforce higher similarity for intra-class samples and greater diversity for inter-class samples. The two losses are summed, and back-propagation updates the parameters of the layers before the classification layer. Compared with Center loss [11], the class centroids are fixed and evenly distributed, and the PEDCC weights are applied to the AM-Softmax loss [10]. The method makes the feature distribution optimal in terms of intra-class compactness and inter-class separation.
The overall system diagram is shown in Figure 1. Details of the proposed method are given in Section 3. Our main contributions are as follows: (1) The PEDCC introduced by CSAE [13] is used as the fixed weight of the classification layer in the convolutional neural network, which also reduces the parameter count of the whole network and improves training speed and accuracy.
(2) The PEDCC weights are applied to the AM-Softmax [10] loss, and an improved MSE loss between the feature vector and the predefined class centroid is calculated. The two losses are added to form PEDCC-Loss. In the final stage of training, an optional finetuning trick is adopted to further improve classification accuracy.
(3) For image recognition and face recognition tasks, multiple datasets (EMNIST [14], CIFAR100 [15], FaceScrub [16] and LFW [17]) are evaluated. Compared with the latest related work, our method achieves the best recognition accuracy, and network training is stable and easy to converge.
Related Work
Various loss functions are used in CNNs. Traditional ones include the hinge loss, the contrastive loss [7], the triplet loss [8], and the most commonly used Softmax loss. The Softmax loss, however, is not good at reducing intra-class variation. To address this problem, L-Softmax [9] introduces a margin parameter that multiplies the angle to the target class in order to increase the learning difficulty; but because of the cos(mθ) term, training is difficult to converge. A-Softmax [18] introduced a conceptually appealing angular margin to push the classification boundary closer to the weight vector of each class. AM-Softmax [10] and CosFace [19] instead add a cosine margin penalty directly to the target logit, which obtains better performance than A-Softmax [18] and is easier to implement and converge. ArcFace [20] moved the cosine margin to an angular margin by changing cos θ − m to cos(θ + m), and also discussed the impact of different decision boundaries; however, our experiments show that it does not generalize well across different classification tasks. Center loss [11] innovatively considered the distance between each sample and its class center: a mean-square error combined with the cross entropy is used to compress the intra-class distance, although the center of each class is continuously re-estimated during training.
Let us review the Softmax loss. Define the i-th input feature as x_i with label y_i. The loss can be expressed as follows:
L_S = (1/N) ∑_i L_i = −(1/N) ∑_i log( e^{f_{y_i}} / ∑_j e^{f_j} )    (1)
where f_j denotes the j-th element of the class score vector produced by the final fully connected layer, and N is the number of training samples. Since f_j can be expressed as f_j = W_j^T x_i, the final loss function can be written as:
L_i = −log( e^{‖W_{y_i}‖‖x_i‖ cos(θ_{y_i})} / ∑_j e^{‖W_j‖‖x_i‖ cos(θ_j)} )    (2)
The purpose of the original Softmax is to make W_1^T x > W_2^T x, that is, ‖W_1‖‖x‖cos(θ_1) > ‖W_2‖‖x‖cos(θ_2), which gives the correct classification result for a sample x from class 1. The motivation of the L-Softmax loss [9] is to generate a decision margin by introducing a positive integer m, which constrains the above inequality more strictly, as follows:
‖W_1‖‖x‖cos(θ_1) ≥ ‖W_1‖‖x‖cos(mθ_1) > ‖W_2‖‖x‖cos(θ_2), where 0 ≤ θ_1 ≤ π/m.
AM-Softmax [10] rewrites cos(mθ) as ψ(θ) = cos(θ) − m, which is simpler than the ψ(θ) of L-Softmax [9] in both form and computation. In addition, on top of L-Softmax [9], the constraints b = 0 and ‖W‖ = 1 are imposed. Compared with the L-Softmax loss [9], the difference between classes then depends only on the angle θ, and m acts as an additive angular margin. After normalizing the weights and input features, the loss function can be expressed as:
L_AMS = −(1/N) ∑_i log( e^{s·(cos θ_{y_i} − m)} / ( e^{s·(cos θ_{y_i} − m)} + ∑_{j=1, j≠y_i}^{c} e^{s·cos θ_j} ) )    (3)
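As a concrete illustration of eq. (3), the following minimal Python sketch (a toy, single-sample version with hypothetical helper names, not the authors' code) computes the AM-Softmax loss from the cosine similarities between a normalized feature and the normalized class weights:

```python
import math

def am_softmax_loss(cos_theta, target, s=30.0, m=0.5):
    """AM-Softmax loss of eq. (3) for a single sample.

    cos_theta : cosine similarities between the normalized feature and
                each normalized class weight vector W_j.
    target    : index of the ground-truth class y_i.
    s, m      : scale factor and additive cosine margin.
    """
    logits = [s * (c - m) if j == target else s * c
              for j, c in enumerate(cos_theta)]
    # log-sum-exp trick for numerical stability
    mx = max(logits)
    log_z = mx + math.log(sum(math.exp(v - mx) for v in logits))
    return -(logits[target] - log_z)
```

For a well-classified sample (large target cosine) the loss is close to zero; shrinking the target cosine, or raising the margin m, increases it.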
Center loss [11] computes the class center from the samples of each class in every batch, and then calculates the MSE loss between each sample and its class center:
L_C = (1/2) ∑_{i=1}^{N} ‖x_i − c_{y_i}‖²    (4)
where c_{y_i} denotes the center of class y_i. Finally, the joint loss function is L = L_S + λ L_C.
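Eq. (4) and the joint objective can be sketched in a few lines of plain Python (a toy illustration, not the paper's implementation; `lam` is the hypothetical balancing weight λ):

```python
def center_loss(features, labels, centers):
    """Center loss (eq. 4): half of the summed squared distances between
    each sample feature x_i and the center c_{y_i} of its class."""
    total = 0.0
    for x, y in zip(features, labels):
        total += sum((xi - ci) ** 2 for xi, ci in zip(x, centers[y]))
    return 0.5 * total

def joint_loss(l_softmax, l_center, lam=0.01):
    """Joint objective L = L_S + lambda * L_C."""
    return l_softmax + lam * l_center
```

When every feature already sits on its class center the center term vanishes and only the Softmax term remains.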
In this paper, PEDCC is used to generate evenly distributed class centroids, replacing the centers computed in Center loss [11], and an MSE loss is used to further reduce the distance between each sample and its class center. Second, the fixed, evenly distributed PEDCC weights are used directly as the classification-layer weights and are not updated during training. Finally, the two losses are combined and optimized simultaneously, thus approaching the theoretically optimal distribution. We visualize the features of the MNIST [12] dataset, comparing various loss functions implemented in PyTorch 1.0 [21], and show the feature distributions in two-dimensional and three-dimensional space after 30 epochs, as shown in Figure 2:
Figure 2. Feature visualization of different methods (Softmax, Center loss, AM-Softmax, PEDCC-loss) in 2-D and 3-D space.

It can be seen that, in Euclidean space, the PEDCC-loss features are distributed on the 2-D or 3-D spherical surface, and each cluster is approximately evenly distributed and compact, whereas Center loss [11] clusters randomly in the feature space while reducing the intra-class distance, and AM-Softmax [10] behaves similarly. In cosine space, PEDCC-loss not only uses the margin to separate the inter-class space, but also makes the clusters more evenly distributed, with each sample closer to its predefined class center.
Proposed Method
The Distance of Intra-class and Inter-class
From the perspective of statistical pattern recognition and image classification, the original image can be understood as a high-dimensional feature vector. Through traditional machine learning methods, dimensionality reduction is performed on these high-dimensional features. The main goal of dimensionality reduction is to generate low-dimensional representations with high similarity for intra-class samples and high diversity for inter-class samples, ensuring intra-class compactness and inter-class separation, as in the classic LDA method. Finally, the Euclidean distance (for image recognition) or the cosine distance (for face recognition) is used to classify the samples.
The within-class scatter (intra-class distance) is

S_w = ∑_{i=1}^{c} P_i E[(x − m_i)(x − m_i)^T | ω_i]    (5)

where m_i is the center of class i, P_i is the prior probability of class i, m_i = E[x | ω_i], and m = E[x]. The between-class scatter (inter-class distance) is

S_b = ∑_{i=1}^{c} P_i (m_i − m)(m_i − m)^T    (6)
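For a toy 2-D dataset, the scatter matrices of eqs. (5)-(6) can be computed directly. The sketch below (illustrative only) takes P_i = n_i / N and replaces the expectations by sample averages:

```python
def _mean(vectors):
    n = len(vectors)
    return [sum(v[k] for v in vectors) / n for k in range(len(vectors[0]))]

def scatter_matrices(classes):
    """classes: one list of feature vectors per class.
    Returns (S_w, S_b) as in eqs. (5)-(6), with P_i = n_i / N and the
    expectations estimated by sample averages."""
    N = sum(len(c) for c in classes)
    d = len(classes[0][0])
    m = _mean([x for c in classes for x in c])            # global mean
    S_w = [[0.0] * d for _ in range(d)]
    S_b = [[0.0] * d for _ in range(d)]
    for c in classes:
        P = len(c) / N
        m_i = _mean(c)                                    # class mean
        for x in c:                                       # within-class scatter
            dev = [xi - mi for xi, mi in zip(x, m_i)]
            for r in range(d):
                for t in range(d):
                    S_w[r][t] += P * dev[r] * dev[t] / len(c)
        db = [a - b for a, b in zip(m_i, m)]              # between-class scatter
        for r in range(d):
            for t in range(d):
                S_b[r][t] += P * db[r] * db[t]
    return S_w, S_b
```

A good feature embedding makes the within-class scatter small and the between-class scatter large.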
L-Softmax [9] introduced the concept of margin to increase the learning difficulty; it is more concerned with inter-class distance than the traditional cross-entropy loss. After adding the margin, the distribution of each class becomes slender, which widens the gap between classes. In some visual tasks, such as image classification, it also improves recognition accuracy, but the intra-class distance is still not minimal.
Center loss [11] learns a center for the deep features of each class and penalizes the distances between the deep features and their corresponding class centers. Increasing the degree of aggregation within a class is not difficult for a neural network with powerful learning ability; increasing the inter-class distance, however, is a hard problem. Different classification tasks may yield different distances, and the class centers may lie relatively close together. If the intra-class distance is large, samples of different classes will overlap, leading to misclassification, and there is currently no effective way to avoid this problem. This paper creatively makes use of predefined, evenly distributed class centroids, which fixes the inter-class distances at their maximal separation while simultaneously forcing the samples as close as possible to their predefined centers.
PEDCC
In this paper, by predefining the optimal clustering centers, the cluster centers of the classes are set artificially and distributed evenly on the hypersphere surface of the feature space, so that the class spacing is maximized.
In this way, we learn a mapping function through CNNs that maps samples of different classes to these predefined class centers and clusters them there, so that the distances between different classes are maximally separated.
The method for generating the predefined class centers in this paper is based on the physical model of the lowest-energy configuration of identical charges on a sphere: the charged points on the hypersphere are assumed to repel one another, with a repulsive force that decreases as the distance between points increases. Driven by this repulsion, the points spread out until the motion stops; when the equilibrium state is reached, the n points on the hypersphere are maximally far apart and evenly distributed. The details of the algorithm can be found in [13].
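The charge-repulsion model described above can be sketched as follows. This is a toy re-implementation of the idea only: the official algorithm and its parameters are in [13], and the force law, step size, and iteration count used here are illustrative assumptions.

```python
import math
import random

def generate_pedcc(n_classes, dim, steps=2000, lr=0.05, seed=0):
    """Evenly distribute n_classes point charges on the unit hypersphere
    in R^dim by simulating mutual Coulomb-like repulsion (force ~ 1/r^2)
    and re-projecting onto the sphere after every step."""
    rng = random.Random(seed)

    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]

    pts = [normalize([rng.gauss(0, 1) for _ in range(dim)])
           for _ in range(n_classes)]
    for _ in range(steps):
        forces = [[0.0] * dim for _ in range(n_classes)]
        for i in range(n_classes):
            for j in range(n_classes):
                if i == j:
                    continue
                diff = [a - b for a, b in zip(pts[i], pts[j])]
                r = math.sqrt(sum(d * d for d in diff)) + 1e-9
                for k in range(dim):
                    forces[i][k] += diff[k] / r ** 3   # unit vector / r^2
        pts = [normalize([p + lr * f for p, f in zip(pts[i], forces[i])])
               for i in range(n_classes)]
    return pts
```

For 2 classes the two centroids settle at antipodal points (cosine similarity near −1); for 4 classes in 3-D they approach the vertices of a regular tetrahedron.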
As shown in Figure 3, in order to visualize the PEDCC weight distribution, we set the output dimension to 3 and display PEDCCs with 4 classes and with 20 classes.
Figure 3 Visualization of the PEDCC weight distribution
PEDCC-Loss
The previous section introduced the concepts of inter-class and intra-class distance in pattern recognition, which are important both in traditional machine learning and in deep convolutional neural networks. The essence of machine learning is to learn a good feature distribution, and PEDCC gives the theoretically optimal distribution of cluster centers. Based on these two concepts, this section presents a new loss function for CNNs, called PEDCC-Loss.
The classification-layer parameters of a traditional CNN are trained together with the rest of the network, with the weights updated by back-propagation to minimize the loss. In Euclidean space, the score of each sample for class j is computed as f_j = ‖W_j‖‖x‖cos(θ_j). The scores are then converted to probabilities by the Softmax function to obtain the classification result. Because the number of samples per class and the quality of the images may differ across the dataset, the weight vectors W_j differ as well. Geometrically, W_j is the vector from the origin to the class center, and x is the vector from the origin to a sample point of the corresponding color (one color per class; see Softmax 2D in Figure 2). The classification-layer weights are thus vectors trained by the CNN to have sufficient discriminative ability.
The PEDCC artificially provides a set of evenly distributed class centers, which are evenly spread sample points on the unit hypersphere, or equivalently a set of maximally scattered vectors. The global optimum of the classification layer's objective is essentially such a set of sufficiently discriminative, scattered vectors. We therefore replace the weights of the last layer of the convolutional neural network with the predefined class centers (PEDCC weights) and, during the training phase, update only the weights of the preceding layers.
At the end of the training phase, in order to obtain better recognition performance, a finetuning of the PEDCC weights in the last linear classification layer can optionally be adopted, depending on the dataset. PEDCC-Loss is given as follows:
L_{PEDCC-AM} = −(1/N) ∑_i log( e^{s·(cos θ_{y_i} − m)} / ( e^{s·(cos θ_{y_i} − m)} + ∑_{j=1, j≠y_i}^{c} e^{s·cos θ_j} ) )    (7)

L_{PEDCC-MSE} = (1/2) ∑_{i=1}^{N} ‖x_i − w_{y_i}‖²    (8)

L_{PEDCC-Loss} = L_{PEDCC-AM} + (L_{PEDCC-MSE})^{1/n}    (9)
where s and m follow the settings of [20], w_{y_i} in eq. (8) denotes the predefined PEDCC centroid of class y_i, and n ≥ 1 is a constraint factor on the PEDCC-MSE term. On the unit hypersphere, the distance from a sample to its predefined class center is less than 1, so taking the n-th root adds a nonlinear decision margin to the MSE and increases the difficulty of reducing the intra-class distance.
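Eqs. (7)-(9) combine into a single objective. A minimal per-sample sketch in plain Python (illustrative only; the actual training code operates on batches inside the network):

```python
import math

def pedcc_loss(cos_theta, target, feature, pedcc_center,
               s=30.0, m=0.5, n=1):
    """PEDCC-Loss (eqs. 7-9) for one sample: AM-Softmax term computed
    against the fixed PEDCC weights, plus the n-th root of the MSE to
    the predefined class centroid."""
    logits = [s * (c - m) if j == target else s * c
              for j, c in enumerate(cos_theta)]
    mx = max(logits)
    log_z = mx + math.log(sum(math.exp(v - mx) for v in logits))
    l_am = -(logits[target] - log_z)                      # eq. (7)
    l_mse = 0.5 * sum((x - c) ** 2
                      for x, c in zip(feature, pedcc_center))  # eq. (8)
    return l_am + l_mse ** (1.0 / n)                      # eq. (9)
```

Since the MSE term is below 1 on the unit hypersphere, a larger n makes its n-th root larger, which is exactly the "nonlinear decision margin" effect described above.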
Experiment Result
Implementation Details
Our experiments are implemented in PyTorch 1.0 [21] for image classification and face recognition tasks. PEDCC normalized weights are generated according to the number of classes in each dataset. The network structure for image classification is the same as in [9], where VGG [2] is used, with a batch size of 256. The network structure for face recognition is the same as in [20], where ResNet18 (IR blocks) [3] with 512-dimensional features is used, with a batch size of 128. During training, the initial learning rate is 0.1, the weight decay is 0.0005, and the momentum is 0.9. The SGD algorithm is used to train both models.
Since the number of samples per class may be unbalanced in different datasets, and different classes may have slightly different clustering properties, a completely fixed PEDCC weight may not reach the globally optimal state. We therefore allow the PEDCC weights to be fine-tuned within a small range: after a certain number of training epochs, the PEDCC weights are given a very small learning rate to fine-tune the class centers. In this paper, training lasts 120 epochs, and we begin to finetune the PEDCC weights with learning rate 1e-3 at epoch 70 to obtain a globally optimal distribution.
Image Classification Tasks
For the image classification task, the EMNIST [14] dataset is used first. The dataset has six splits: ByClass, ByMerge, Balanced, Letters, Digits, and MNIST. We use the Balanced split for training; it contains 131,600 character images evenly divided into 47 classes, with 2,800 characters per class. The experimental results are shown in Table 1. We then test on the more representative CIFAR100 [15] dataset, which has 100 classes of natural images with 500 training images and 100 test images per class. For this dataset, standard data augmentation [9] is performed: the training images are padded with 4 pixels and then randomly cropped to 32×32, and a horizontal flip with probability 0.5 is applied, while the test set is left unprocessed. The test results are shown in Table 2. On CIFAR100 [15], our method predefines 100 class centers of dimension 512 distributed on the hypersphere. After the classification-layer parameters are fixed, the training loss is also lower than that of AM-Softmax [10] with the same parameters. This shows the effectiveness of our method, and the addition of PEDCC-MSE further compresses the intra-class distance. In terms of accuracy, PEDCC-loss obtains the best classification results.
Face Recognition Tasks
Since L-Softmax [9], many studies have focused on loss functions for face recognition, because face recognition depends heavily on the validity of the feature vector, and a larger number of classes better reflects the effectiveness of the loss function. Here we train ResNet18 on the FaceScrub [16] dataset, which contains more than 100,000 face-aligned images of 530 people (265 men and 265 women). After training, the extracted 512-dimensional feature vectors are used to test on the LFW [17] dataset. The training images are 144 × 144, randomly cropped to 128 × 128 and horizontally flipped with probability 0.5. LFW [17] is evaluated on 6,000 test pairs. The experiments above show that, compared with randomly initialized weights, the proposed PEDCC weights yield a better weight distribution and a more accurate model, and the nonlinear factor added to the MSE also increases accuracy. Because the sample numbers of the various classes are unbalanced, completely fixed PEDCC weights are not optimal; using the finetuning strategy, the accuracy is effectively improved further.
Conclusion
We propose a new loss function for convolutional neural networks based on predefined evenly distributed class centroids. The fixed PEDCC weights substitute for the parameters of the classification layer in the network, and an improved cross-entropy loss is combined with the mean square error to the predefined class centers, where a nonlinear factor is also added to the MSE to increase the learning difficulty. Experimental results show that PEDCC-Loss achieves the best results on image classification and face recognition tasks, and that network training is stable and easy to converge.
Figure 1. The PEDCC-loss.
Table 1. Accuracy with various loss functions on EMNIST

Loss Function                          Accuracy (%)
Hinge Loss                             88.22
Cross Entropy Loss                     88.42
L-Softmax (m=2)                        88.69
L-Softmax (m=4)                        88.81
A-Softmax (m=4)                        88.83
Center Loss                            89.21
AM-Softmax (m=0.5)                     89.45
ArcFace (m=0.5)                        89.52
PEDCC-Loss (m=0.5, n=1)                89.60
PEDCC-Loss, finetuning (m=0.5, n=1)    89.83
Table 2. Accuracy with various loss functions on CIFAR100

Loss Function                          Accuracy (%)
Hinge Loss                             67.10
Cross Entropy Loss                     67.26
L-Softmax (m=2)                        70.05
L-Softmax (m=4)                        70.47
A-Softmax (m=4)                        70.86
Center Loss                            71.01
AM-Softmax (m=0.5)                     71.43
ArcFace (m=0.5)                        71.76
PEDCC-Loss (m=0.5, n=1)                72.22
PEDCC-Loss, finetuning (m=0.5, n=1)    72.66
PEDCC-Loss (m=0.5, n=2)                71.87
PEDCC-Loss, finetuning (m=0.5, n=2)    72.13
PEDCC-Loss (m=0.5, n=3)                71.59
PEDCC-Loss, finetuning (m=0.5, n=3)    71.89
Table 3. Accuracy with various loss functions on LFW

Loss Function                          Accuracy (%)
Cross Entropy Loss                     91.07
L-Softmax (m=4)                        91.22
A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Commun. ACM, vol. 60, no. 6, pp. 84-90, May 2017.
K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," arXiv:1409.1556, Sep. 2014.
K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," arXiv:1512.03385, Dec. 2015.
S. Xie, R. Girshick, P. Dollar, Z. Tu, and K. He, "Aggregated Residual Transformations for Deep Neural Networks," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, 2017, pp. 5987-5995.
G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, "Densely Connected Convolutional Networks," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, 2017, pp. 2261-2269.
J. Hu, L. Shen, S. Albanie, G. Sun, and E. Wu, "Squeeze-and-Excitation Networks," arXiv:1709.01507, Sep. 2017.
R. Hadsell, S. Chopra, and Y. LeCun, "Dimensionality Reduction by Learning an Invariant Mapping," in 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), 2006, vol. 2, pp. 1735-1742.
F. Schroff, D. Kalenichenko, and J. Philbin, "FaceNet: A Unified Embedding for Face Recognition and Clustering," in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 815-823.
W. Liu, Y. Wen, Z. Yu, and M. Yang, "Large-Margin Softmax Loss for Convolutional Neural Networks."
F. Wang, W. Liu, H. Liu, and J. Cheng, "Additive Margin Softmax for Face Verification," IEEE Signal Process. Lett., vol. 25, no. 7, pp. 926-930, Jul. 2018.
Y. Wen, K. Zhang, Z. Li, and Y. Qiao, "A Discriminative Feature Learning Approach for Deep Face Recognition," in Computer Vision - ECCV 2016, vol. 9911, B. Leibe, J. Matas, N. Sebe, and M. Welling, Eds. Cham: Springer International Publishing, 2016, pp. 499-515.
L. Deng, "The MNIST Database of Handwritten Digit Images for Machine Learning Research [Best of the Web]," IEEE Signal Process. Mag., vol. 29, no. 6, pp. 141-142, Nov. 2012.
Q. Zhu and R. Zhang, "A Classification Supervised Auto-Encoder Based on Predefined Evenly-Distributed Class Centroids," arXiv:1902.00220, Feb. 2019.
G. Cohen, S. Afshar, J. Tapson, and A. van Schaik, "EMNIST: an extension of MNIST to handwritten letters," Feb. 2017.
A. Krizhevsky, "Learning multiple layers of features from tiny images," 2009.
H. Ng and S. Winkler, "A data-driven approach to cleaning large face datasets," in 2014 IEEE International Conference on Image Processing (ICIP), 2014, pp. 343-347.
G. B. Huang, M. Mattar, T. Berg, and E. Learned-Miller, "Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments," 2008.
W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, and L. Song, "SphereFace: Deep Hypersphere Embedding for Face Recognition," arXiv:1704.08063, Apr. 2017.
H. Wang et al., "CosFace: Large Margin Cosine Loss for Deep Face Recognition," in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, 2018, pp. 5265-5274.
J. Deng, J. Guo, N. Xue, and S. Zafeiriou, "ArcFace: Additive Angular Margin Loss for Deep Face Recognition," arXiv:1801.07698, Jan. 2018.
A. Paszke et al., "Automatic differentiation in PyTorch."
|
[
"https://github.com/ZLeopard/PEDCC-Loss"
] |
[
"arXiv:astro-ph/0507501v1 21 Jul 2005 Eclipsing Light-Curve Asymmetry for Black-Hole Accretion Flows",
"arXiv:astro-ph/0507501v1 21 Jul 2005 Eclipsing Light-Curve Asymmetry for Black-Hole Accretion Flows"
] |
[
"Publ Astron ",
"Soc ",
"Japan \nAstronomical Institute\nOsaka-Kyoiku University\n582-8582KashiwaraOsakaJapan\n",
"-?? ",
"Ken-Ya Watarai [email protected] \nAstronomical Institute\nOsaka-Kyoiku University\n582-8582KashiwaraOsakaJapan\n\nPromotion of Science\nJapan Society\n",
"Rohta Takahashi \nGraduate School of Arts and Sciences\nUniversity of Tokyo\n153-8902TokyoJapan\n\nPromotion of Science\nJapan Society\n",
"Jun Fukue \nAstronomical Institute\nOsaka-Kyoiku University\n582-8582KashiwaraOsakaJapan\n"
] |
[
"Astronomical Institute\nOsaka-Kyoiku University\n582-8582KashiwaraOsakaJapan",
"Astronomical Institute\nOsaka-Kyoiku University\n582-8582KashiwaraOsakaJapan",
"Promotion of Science\nJapan Society",
"Graduate School of Arts and Sciences\nUniversity of Tokyo\n153-8902TokyoJapan",
"Promotion of Science\nJapan Society",
"Astronomical Institute\nOsaka-Kyoiku University\n582-8582KashiwaraOsakaJapan"
] |
[] |
We propose an eclipsing light-curve diagnosis for black-hole accretion flows. When emission from an inner accretion disk around a black hole is occulted by a companion star, the observed light curve becomes asymmetric at ingress and egress on a time scale of 0.1-1 seconds. This light-curve analysis provides a means of verifying the relativistic properties of the accretion flow, based on the special/general relativistic effects of black holes. The "skewness" for the eclipsing light curve of a thin disk is ∼ 0.08, whereas that of a slim disk is ∼ 0, since the innermost part is self-occulted by the disk's outer rim.
|
10.1093/pasj/57.5.827
|
[
"https://arxiv.org/pdf/astro-ph/0507501v1.pdf"
] | 119,354,869 |
astro-ph/0507501
|
d9a0dc47ec3e38e53c19556ba09fee871cb101ab
|
arXiv:astro-ph/0507501v1 21 Jul 2005 Eclipsing Light-Curve Asymmetry for Black-Hole Accretion Flows
Publ Astron
Soc
Japan
Astronomical Institute
Osaka-Kyoiku University
582-8582KashiwaraOsakaJapan
-??
Ken-Ya Watarai [email protected]
Astronomical Institute
Osaka-Kyoiku University
582-8582KashiwaraOsakaJapan
Promotion of Science
Japan Society
Rohta Takahashi
Graduate School of Arts and Sciences
University of Tokyo
153-8902TokyoJapan
Promotion of Science
Japan Society
Jun Fukue
Astronomical Institute
Osaka-Kyoiku University
582-8582KashiwaraOsakaJapan
arXiv:astro-ph/0507501v1 21 Jul 2005 Eclipsing Light-Curve Asymmetry for Black-Hole Accretion Flows
(Received 2005 April 15; accepted 2005 July 11) accretion: accretion disks, black holes-stars: X-rays
We propose an eclipsing light-curve diagnosis for black-hole accretion flows. When emission from an inner accretion disk around a black hole is occulted by a companion star, the observed light curve becomes asymmetric at ingress and egress on a time scale of 0.1-1 seconds. This light-curve analysis provides a means of verifying the relativistic properties of the accretion flow, based on the special/general relativistic effects of black holes. The "skewness" for the eclipsing light curve of a thin disk is ∼ 0.08, whereas that of a slim disk is ∼ 0, since the innermost part is self-occulted by the disk's outer rim.
Introduction
Several methods have been proposed for obtaining physical information about conditions near black holes. Direct imaging of black holes is an extremely promising method of investigation. For example, the VSOP2 and MAXIM projects are currently underway, and one aim of these projects is the direct imaging of black hole shadows (see also Hirabayashi et al. 2005; and the MAXIM Web page http://maxim.gsfc.nasa.gov/). In particular, Sgr A* and M87 are good targets for observing a black hole shadow because of their proximity and large apparent sizes. However, even with the use of VSOP2 or MAXIM, direct imaging studies of stellar-mass black holes are difficult because the size of the emitting region is extremely small. The characteristic size of the emitting region is roughly the radius of the black hole. If we assume a non-rotating black hole, the Schwarzschild radius is r_g = 2.95 × 10^6 (M/10M_⊙) cm. The radius of maximum temperature for a standard thin accretion disk is located at ∼ 3r_g, and therefore the size of the inner emitting region is ∼ 100 km for a 10M_⊙ black hole. Observations on such small scales are extremely difficult even for proposed future missions. Consequently, timing analysis and spectroscopic study are currently more useful methods for investigating stellar-mass black holes than imaging studies. Timing analysis of quasi-periodic oscillations (QPOs) is a very popular and powerful tool for examining Galactic black hole candidates (van der Klis 2000, and references therein). The QPO frequency can be used to derive the physical parameters of black holes, i.e., the black hole mass and spin (Abramowicz & Kluźniak 2001).
In this paper, we propose a new method to detect accreting gas falling into a black hole using light curves obtained during eclipse by a companion star. Light curves obtained at the time of an eclipse contain information about the region around the compact star, that is, about the curvature of spacetime and the angular momentum of the compact star. Therefore, provided the observational instrument has sufficient time resolution and sensitivity, these data can be used to constrain black hole physics. The basic idea for this light-curve analysis was first proposed by Fukue (1987). In this paper we discuss the idea and describe methods for statistical analysis. Our analysis is a first step towards an "eclipse mapping" method (Horne 1985), a well-known technique for light-curve analysis of cataclysmic variables. This light-curve analysis method is potentially a strong tool for studying and obtaining physical information about black holes.
In the next section, we examine several timescales during an eclipse in a binary system. In section 3, we present the results of our light-curve calculations for the eclipsing period. In section 4, we discuss, from various viewpoints, the feasibility of observationally detecting asymmetry in light curves. The final section contains our concluding remarks.
Time Scales During an Eclipse
A light curve obtained during the eclipse of an accretion disk by a companion star contains physical information about the accreting gas near the black hole: for example, the position of the marginally stable circular orbit, the transonic nature of the flow, and details of the radiation processes. This physical information is likely to provide important clues as to the nature of black holes. Therefore, light-curve analysis can be a strong tool for studying black holes. A rough estimate of the eclipsing time was derived by Fukue (1987); the eclipsing time scale of ingress and egress is ∆t ∼ 2r/v_orb, where r is the radius of the disk region with an asymmetric emission distribution, and v_orb is the orbital velocity.
∆t ∼ 2r/v_orb = 0.85 (r/20r_g) (a/R_⊙)^{1/2} (M/10M_⊙) [(M+m)/10M_⊙]^{-1/2} s
             = 4.24 (r/20r_g) (M/10M_⊙) [(M+m)/10M_⊙]^{-1/3} (P/2.62 d)^{1/3} s.    (1)
Here, the orbital velocity was simply evaluated by Kepler's law, v_orb = [G(M + m)/a]^{1/2}, where G is the gravitational constant, M and m are the black hole and companion masses, respectively, and a is the binary separation. Finally, we adopted P = 2.62 day, the orbital period of GRO J1655-40 (Bailyn et al. 1995), as a sample observational value for an eclipsing binary. Clearly, a larger black hole mass or longer orbital period will increase ∆t. The duration of the total eclipse of the disk is roughly given by the following equation:
t_eclipse ∼ 2r_*/v_orb = 10^3 (r_*/R_⊙) (a/R_⊙)^{1/2} [(M+m)/10M_⊙]^{-1/2} s
                       = 4.15 × 10^3 (r_*/R_⊙) [(M+m)/10M_⊙]^{-1/3} (P/2.62 d)^{1/3} s.    (2)
Here, r_* is the radius of the companion star.
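As a quick numerical check of Eqs. (1) and (2), the timescales can be evaluated directly from ∆t = 2r/v_orb. The following is an illustrative sketch (the function names and cgs constants are ours, not from the paper):

```python
import math

G = 6.674e-8          # gravitational constant [cgs]
C = 2.998e10          # speed of light [cm s^-1]
M_SUN = 1.989e33      # solar mass [g]
R_SUN = 6.96e10       # solar radius [cm]

def schwarzschild_radius(m_bh_msun):
    """r_g = 2GM/c^2, ~2.95e6 (M / 10 M_sun) cm."""
    return 2.0 * G * m_bh_msun * M_SUN / C**2

def orbital_velocity(m_bh_msun, m_comp_msun, a_rsun):
    """Keplerian relative orbital velocity, v_orb = [G(M+m)/a]^(1/2) [cm s^-1]."""
    return math.sqrt(G * (m_bh_msun + m_comp_msun) * M_SUN / (a_rsun * R_SUN))

def separation(p_day, m_bh_msun, m_comp_msun):
    """Kepler's third law: a = [G(M+m)P^2 / (4 pi^2)]^(1/3) [cm]."""
    p = p_day * 86400.0
    return (G * (m_bh_msun + m_comp_msun) * M_SUN * p**2
            / (4.0 * math.pi**2)) ** (1.0 / 3.0)

def ingress_time(r_over_rg, m_bh_msun, m_comp_msun, a_rsun):
    """Eq. (1): Delta t ~ 2r / v_orb [s], with r given in units of r_g."""
    r = r_over_rg * schwarzschild_radius(m_bh_msun)
    return 2.0 * r / orbital_velocity(m_bh_msun, m_comp_msun, a_rsun)

def total_eclipse_time(r_star_rsun, m_bh_msun, m_comp_msun, a_rsun):
    """Eq. (2): t_eclipse ~ 2 r_* / v_orb [s]."""
    return 2.0 * r_star_rsun * R_SUN / orbital_velocity(m_bh_msun, m_comp_msun, a_rsun)
```

With r = 20 r_g, M + m = 10 M_⊙, and a = R_⊙, this reproduces the 0.85 s and 10^3 s normalizations of Eqs. (1) and (2); for P = 2.62 day and M + m = 10 M_⊙, `separation` gives a ≈ 17 R_⊙.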
To detect this eclipse time period, ∆t, a high time resolution of at least ∼ 0.01 − 1 seconds is required. This is not easy to obtain in the optical band; in the X-ray band, however, current instruments already have sufficiently high time resolution (for example, the RXTE Proportional Counter Array (PCA) has a time resolution of milliseconds). Observations of eclipsing black hole X-ray binaries therefore provide an opportunity to study the physics around black holes. We show a schematic diagram of our calculation in figure 1. Our present study focuses on eclipse light curves for the region very close to the black hole (≲ 30 r_g). In addition, we implicitly assume that the binary system has a relatively high inclination angle, i > 60°.
Asymmetric Light-Curve Analysis
Calculation Method
Some assumptions were required to calculate a light curve during an eclipse. First, we assumed an optically thick relativistic accretion disk around a Schwarzschild black hole as the background radiation source. We numerically solved a set of hydrodynamical equations including the transonic nature of the flow (Watarai et al. 2000). From the numerical data we obtained more realistic temperature profiles, velocity fields, and disk geometrical thicknesses than simple thin-disk solutions provide. For an accretion rate exceeding the Eddington rate, Ṁ_E = 16 L_E/c², the calculated solutions became advection-dominated. Such solutions correspond to so-called "slim disks" (Abramowicz et al. 1988), and the scale height of the disk increases with the accretion rate: as the mass-accretion rate increases, energy generation via viscous heating increases, the disk pressure becomes proportional to the mass-accretion rate, and H ∝ Ṁ^{1/2}. Hence, the disk becomes geometrically thick for high mass-accretion rates. Using the calculated numerical solutions, we computed bolometric flux images including the geometrical and relativistic effects (figure 2). Throughout these calculations, we used the normalized accretion rate ṁ = Ṁ/(L_E/c²).
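The normalization ṁ = Ṁ/(L_E/c²) can be made concrete with the standard Eddington luminosity. A minimal sketch (helper names and cgs values are ours):

```python
import math

G = 6.674e-8           # gravitational constant [cgs]
C = 2.998e10           # speed of light [cm s^-1]
M_SUN = 1.989e33       # solar mass [g]
M_P = 1.673e-24        # proton mass [g]
SIGMA_T = 6.652e-25    # Thomson cross section [cm^2]

def eddington_luminosity(m_bh_msun):
    """L_E = 4 pi G M m_p c / sigma_T, ~1.26e38 (M / M_sun) erg s^-1."""
    return 4.0 * math.pi * G * m_bh_msun * M_SUN * M_P * C / SIGMA_T

def mdot_normalized(mdot_cgs, m_bh_msun):
    """The paper's normalization: mdot = Mdot / (L_E / c^2).
    In these units the paper's critical rate Mdot_E = 16 L_E/c^2 is mdot = 16."""
    return mdot_cgs / (eddington_luminosity(m_bh_msun) / C**2)
```

For a 10 M_⊙ black hole, L_E ≈ 1.26 × 10^39 erg s⁻¹, so ṁ = 1 corresponds to Ṁ ≈ 1.4 × 10^18 g s⁻¹.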
Second, to obtain an image around a black hole, we applied the ray-tracing method commonly used in astrophysics (see also Fukue & Yokoyama 1988). A large number of rays were traced from the observer's screen to the black hole, each integrated along a null geodesic. Accordingly, our calculation automatically included the special/general relativistic effects, such as the relativistic Doppler effect, photon redshift, light bending, etc. More details of the calculation method are described in Watarai et al. (2005). In figure 2, the left column shows i = 60°, the center column i = 70°, and the right column i = 80°.
Although the calculation size was 60r_g × 60r_g, in order to zoom in on the flux distribution we plotted only the 30r_g × 30r_g region. Here, r_g is the Schwarzschild radius, r_g = 2.95 × 10^6 (M/10M_⊙) cm. The bottom right panel (for i = 80°, ṁ = 100) shows that the emission from the disk's inner region was completely blocked by the disk's outer region. Figure 3 shows the light curves at ingress and egress of an eclipse. We assumed an orbital velocity of 200 km s⁻¹. The total eclipse phase (∼ 10^3 s) was removed (see figure 1), and the ingress phase (∼ 1 s) was joined directly to the egress phase to analyze the eclipse profiles. The calculated region was 60r_g × 60r_g, which is very small relative to the radius of the companion star; we therefore ignored the curvature of the edge of the companion star in calculating the eclipse light curves.
Calculated Light Curves
All of the light curves showed asymmetric profiles (figure 3). These light-curve features are explained as follows. At ingress, initially the brighter part was blocked so that the light curve first decreased rapidly and then more gradually. On the other hand, at egress, the brighter part appeared first, and thus the light curve increased rapidly and then more gradually. However, there are several differences in these asymmetric light curves. In particular, there is a dependence on inclination angle, i: the asymmetry strengthens as i increases. This is due to the emission from high-i accretion disks becoming amplified by relativistic Doppler beaming.
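The inclination dependence can be illustrated with the special-relativistic Doppler factor alone. This is not the paper's full ray-tracing calculation (it ignores gravitational redshift and light bending); the δ⁴ bolometric boost and the Newtonian Keplerian speed are standard textbook approximations we use for illustration:

```python
import math

def doppler_factor(beta, cos_theta):
    """delta = 1 / (gamma * (1 - beta * cos_theta)); cos_theta > 0 means
    the emitting gas moves toward the observer."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return 1.0 / (gamma * (1.0 - beta * cos_theta))

def bolometric_boost(beta, cos_theta):
    """Observed/emitted bolometric flux ratio ~ delta**4 for a moving patch."""
    return doppler_factor(beta, cos_theta) ** 4

# Newtonian Keplerian speed at r = 3 r_g: v/c = sqrt(r_g / (2r)) ~ 0.41
beta = math.sqrt(1.0 / 6.0)
# line-of-sight projection of the azimuthal velocity at the disk ansae, i = 80 deg
proj = math.sin(math.radians(80.0))
asymmetry = bolometric_boost(beta, proj) / bolometric_boost(beta, -proj)
# the approaching side outshines the receding side by a factor of tens
```

In this crude estimate the approaching side of the inner disk is roughly 30 times brighter than the receding side at i = 80°, which is the origin of the asymmetric brightness distribution discussed above.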
As the accretion rate increased, the shape of the light curve became symmetric because the emission from the disk's inner region suffered from self-occultation by the disk's outer rim. The light curves therefore followed a normal distribution, due to the absence of an asymmetric emitting region. The degree of asymmetry of these light curves is discussed in the next subsection.
Skewness and Kurtosis
Asymmetric light curves will be observed when the object is sufficiently bright and when a companion star crosses in front of a black hole during an eclipse. Here, we seek a simplified observational indicator that expresses the degree of asymmetry. We introduce the statistical quantities of skewness, S, and kurtosis, K, as indicators of light-curve asymmetry. The skewness and kurtosis represent the deviation of the observational data from the Gaussian (normal) distribution. The skewness measures the asymmetry of the distribution. The kurtosis measures how sharply peaked a distribution is relative to its width.
First, we inverted the light-curve data to obtain f_i. Then we used the modified data f_i for the skewness-kurtosis analysis. The statistical quantities were defined as follows:
N = Σ_{i=1}^{n} f_i,    (3)
x̄ = (1/N) Σ_{i=1}^{n} x_i f_i,    (4)
σ = [(1/N) Σ_{i=1}^{n} (x_i − x̄)² f_i]^{1/2},    (5)
S = [1/(Nσ³)] Σ_{i=1}^{n} (x_i − x̄)³ f_i,    (6)
K = [1/(Nσ⁴)] Σ_{i=1}^{n} (x_i − x̄)⁴ f_i.    (7)
Here, N is the total number of data points, x_i is the time of each mesh (the sample number of an observation), f_i is the frequency of the data, x̄ is the average, and σ is the standard deviation. In our analysis, x_i and x̄ are the observation time and its average (which equals half of the total eclipse time), and f_i corresponds to the amplitude of the observed/calculated flux. When the data follow the normal distribution, the distribution is symmetric, i.e., S equals zero. A positively skewed distribution (S > 0) has a longer tail to the right, whereas a negatively skewed distribution (S < 0) has a longer tail to the left. The calculated skewness and kurtosis are plotted in figure 4. A large value of skewness indicates a high degree of asymmetry. The skewness for small accretion rates, ṁ = 1–10, remained constant even for different inclination angles. This is because in this regime we can clearly observe the asymmetric brightness distribution around the black hole: the ratio between blue shift and red shift does not alter significantly, so the inclination angle does not affect the skewness value. If this characteristic skewness value, S ∼ 0.08, were detected, it would be an observational effect associated with the black hole. On the other hand, the kurtosis did not change significantly with accretion rate, i.e., its variation amplitude was very small. This means that the sharpness of the profiles is almost the same over a wide range of accretion rates. Consequently, the kurtosis is not a useful indicator of relativistic effects.
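Eqs. (3)–(7) are straightforward to implement. A minimal sketch (the function name is ours), treating the inverted flux values f_i as weights on the time bins x_i:

```python
import math

def light_curve_moments(t, f):
    """Weighted moments of an (inverted) eclipse profile, following Eqs. (3)-(7).
    t: time bins x_i; f: inverted flux values f_i acting as weights.
    Returns (skewness S, kurtosis K)."""
    n_tot = sum(f)                                                  # Eq. (3)
    xbar = sum(x * w for x, w in zip(t, f)) / n_tot                 # Eq. (4)
    sigma = math.sqrt(
        sum((x - xbar) ** 2 * w for x, w in zip(t, f)) / n_tot)     # Eq. (5)
    skew = sum((x - xbar) ** 3 * w
               for x, w in zip(t, f)) / (n_tot * sigma ** 3)        # Eq. (6)
    kurt = sum((x - xbar) ** 4 * w
               for x, w in zip(t, f)) / (n_tot * sigma ** 4)        # Eq. (7)
    return skew, kurt
```

A symmetric profile gives S ≈ 0, while a profile with more weight piled up early and a longer tail to the right gives S > 0, matching the sign convention stated above.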
Let us consider the causes of the skewness variation in more detail. When the accretion rate of the disk is less than the sub-critical rate (ṁ ≲ 10), the rotational velocity is much higher than the radial velocity, v_ϕ ≫ v_r. Therefore, Doppler boosting via the rotational velocity component is dominant, and the asymmetry of the light curves becomes large. As a result, the skewness attains relatively large values, ∼ 0.08. On the other hand, when the accretion rate increases, the radial velocity of the flow approaches the same order as the rotational velocity. However, Doppler boosting via the radial velocity component does not contribute to the asymmetry of the light curves; it mainly amplifies the observed flux. For supercritical accretion flows, the rotational velocity decreases while the radial velocity increases, since angular momentum loss via viscosity becomes effective. As a result, the flux distribution on the X-Y plane approaches symmetry. Accordingly, the degree of light-curve asymmetry decreases for supercritical accretion flows, and the skewness approaches zero.
A small increase in skewness can be seen for (ṁ, i) = (100, 70°) and (316, 60°) in figure 4. These parameter sets are marginal values at which the self-occultation of the disk begins to be effective. For these parameters, the radiating area on the side nearer to the observer was concealed by the disk's outer rim, but the radiating area on the far side was not. As a result, a small increase in skewness is observed.
Meanwhile, as the mass-accretion rate increases further, all light curves become symmetric. This is because the inner region of the disk is entirely unobservable at high inclination angles. In other words, due to the self-occultation of the disk, we cannot observe the relativistic Doppler beaming that causes the asymmetric flux distribution, and so the skewness approaches zero. This geometrical effect occurs only for high mass-accretion rates (ṁ ≳ 32). Hence, the degree of skewness can be an indicator of the accretion rate.
Discussion
Is this Diagnosis Useful for the Verification of Relativistic Effects?
In our study, we adopted a set of statistical quantities, i.e., skewness and kurtosis, as indicators of relativistic effects. The skewness S ∼ 0.08 did not alter significantly for small accretion rates and relatively high inclination angles (i = 60°–80°). This means that it is possible to observe relativistic effects, i.e., an asymmetric brightness distribution.
For high-luminosity objects, it will be possible to observe such asymmetric features in the X-ray band. For example, the RXTE PCA has sufficient time resolution to observe the eclipse phenomenon, which changes the flux on a timescale of 0.01 seconds. It would be difficult to observe this asymmetry in the optical band even if optical telescope detectors had sufficient time resolution: although the temperature of the inner-disk region is higher than that of the outer disk, the surface area of the inner region is much smaller than that of the outer region, so the dominant emission in the optical band comes from the outer region. With an observational instrument having a spatial resolution of less than ∼ 100 r_g, the light-curve asymmetry could in principle be detected even in the optical band.
The emission from the vicinity of the event horizon contains information about the physical parameters of a black hole. In particular, the shape and the position of a black hole can be deformed by its rotation and its charge (Takahashi 2004). In principle, light curves of the occultation of the black hole may also contain information about these physical parameters. Using the light-curve analysis described here, it may be possible to measure the black-hole spin. This is because a rotating black hole transfers angular momentum to the accreting gas via the frame-dragging effect, so that the radius of the marginally stable circular orbit decreases below 3 r_g. Hence, light curves for rotating black holes have a gentler slope at ingress or egress than those for non-rotating black holes. Verification of the expected difference in the light-curve slopes of rotating and non-rotating black holes is an intriguing topic for future study.
Finding an Eclipsing Black Hole
A major problem remains regarding the light-curve analysis described here: finding suitable observational targets. Can we identify a black hole that is eclipsed by its companion star? How many eclipsing black hole X-ray binaries exist? To date, these questions remain unanswered.
To confirm the predicted asymmetric features of light curves, it is necessary to find an X-ray eclipse in a black hole candidate system. Recently, an eclipsing black hole X-ray binary was observed in M33 X-7 (Pietsch et al. 2004). This object does not show pulsations, which are commonly seen in X-ray pulsars, and the mass of the compact object derived from the orbital parameters exceeds 2.1–3.0 M_⊙ for i = 90°. For these reasons the compact object is suspected to be a black hole. If a large number of X-ray eclipsing black hole candidates are discovered in our galaxy and in external galaxies in the near future, it will be possible to extend the search for stellar-mass black holes using the light-curve analysis presented here. Table 1 summarizes the eclipsing black-hole candidates identified to date. The ingress and egress times (∆t) for these objects are observable with the time resolution of current X-ray telescopes. In particular, ∆t in GRS 1915+105 is about 10 seconds, which is sufficiently large to be observable not only with X-ray telescopes but also with optical telescopes. To date, no X-ray eclipse has been observed in GRS 1915+105; however, the limited observations of this object do not allow us to decide whether it is an eclipsing system. We therefore look forward to long-term observations covering the whole orbital period of these binary systems, or to observations by next-generation X-ray telescope missions.
Observational Constraints for Accretion Disks
When light-curve asymmetry is undetected, what can we infer about the binary system? One possibility is self-occultation of the disk. As in figure 2, when the mass-accretion rate is high (ṁ = 100), the asymmetric intensity pattern does not appear because of the self-occultation effect. Under this scenario, we can interpret an object having a symmetric light curve at eclipse as a super-critical accretor. For example, SS433 is thought to be undergoing supercritical accretion, ṁ ≳ 32 (Cherepashchuk et al. 1982; Gies et al. 2002; Revnivtsev et al. 2004). We therefore predict that the skewness of the light curve for SS433 will be close to zero. Using the result in figure 4, we can constrain the minimum value of the accretion rate. Unfortunately, the X-ray emission from SS433 contains other contaminants, e.g., X-ray emission from the jet and/or corona; accordingly, an exact measurement of the skewness will be difficult to obtain. However, we again note that if a black hole eclipse occurs in a binary system, an asymmetric light curve is expected for small accretion rates: since the disk gas rotates at relativistic speeds around the black hole, the emitted radiation must be beamed by the Doppler effect.
Conclusion
We have shown that it should be possible to observe the relativistic effects of a black hole using the expected asymmetry of its light curve at eclipse. Specifically, we propose the use of skewness and kurtosis analysis as indicators of relativistic effects. In particular, we predict a skewness of S ∼ 0.08, which can be compared with observations.
If asymmetry is not detected in the light curves at eclipse, one possibility is that the emission from the inner region of the accretion disk is hidden by the disk's outer region, i.e., a self-occultation effect by a geometrically thick disk. For this to occur, the accretion rate must be very high relative to the critical accretion rate. Therefore, our analysis technique can be reversed to use the skewness measurements to constrain the accretion rate.
Even for neutron stars, an asymmetry should appear in the light curves. In some eclipsing neutron star binaries, X-ray eclipses have been clearly seen in the light curves (Homan et al. 2003). In general, the apparent size of a neutron star is an order of magnitude smaller than that of a black hole; thus, the time scales of the eclipse light-curve features will be short (∆t_NS ∼ 0.1 ∆t_BH). However, the time scale also depends on the binary parameters, in particular the orbital period. For an X-ray binary with a relatively long orbital period, the eclipse light curves can be analyzed with present X-ray telescopes, i.e., RXTE, XMM-Newton, or ASTRO-E2.
Moreover, if high-speed photometry on timescales of 0.01-10 seconds becomes available, our analysis can be applied in the optical band to further increase understanding of black holes. Investigations of black holes using high-speed photometry will provide an interesting challenge in the near future.
We are grateful to Drs. H. Negoro, K. Matsumoto, and A. Yonehara for useful comments and discussions.
Fig. 1. Schematic diagram of our calculation.
Fig. 2. Bolometric flux distribution for different accretion rates, ṁ = Ṁ/(L_E/c²), and inclination angles, i. The right-hand legend of each panel gives the bolometric flux level [erg/cm²/s]. Normalized accretion rates are 1, 10, and 100 from top to bottom.
Fig. 3. Light curves before and during (left side of dashed line) and after (right side of dashed line) eclipse by a companion star. Calculation parameters are the same as those of figure 2; that is, the normalized accretion rates are 1, 10, and 100 from top to bottom, and the inclination angles are i = 60°, i = 70°, and i = 80° from left to right. Solid lines represent the calculated light curves. Dotted lines are inverted Gaussian distributions, shown for comparison of axial symmetry. Dashed lines at time = 0 s indicate the boundary of complete eclipse. We assumed an orbital velocity of 200 km s⁻¹.
Fig. 4. The skewness and kurtosis for different accretion rates. Solid lines represent the i = 60° case, dashed lines i = 70°, and dotted lines i = 80°.
Table 1. Eclipsing Black Hole X-ray Binary Candidates. (The spectral type of the companion star is given in parentheses.)

Object          Black Hole Mass [M_⊙]   P [day]   ∆t [s]   t_eclipse [s]       References
GRS 1915+105    ∼ 14                    ∼ 33.5    12.0     5.9 × 10^3 (K-M)    Greiner et al. (2001)
GRO J1655-40    ∼ 7                     ∼ 2.62    3.04     5.5 × 10^3 (F3)     Orosz & Bailyn (1997)
V4641 Sgr       ∼ 9.6                   ∼ 2.82    3.56     1.1 × 10^4 (B9)     Orosz et al. (2001)
M33 X-7         ∼ 2.5                   ∼ 3.45    0.78     4.6 × 10^4 (O-B)    Pietsch et al. (2004)
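The ∆t column of Table 1 can be cross-checked against the period form of Eq. (1). The companion masses below are our illustrative guesses for the listed spectral types (they are not given in the paper); with r = 20 r_g they reproduce the tabulated ∆t values to within a few percent:

```python
# Delta t from the period form of Eq. (1):
#   Delta t = 4.24 (r / 20 r_g) (M / 10 M_sun)
#             * [(M + m) / 10 M_sun]^(-1/3) * (P / 2.62 d)^(1/3)  [s]
def ingress_time_scaling(m_bh, m_comp, p_day, r_over_rg=20.0):
    return (4.24 * (r_over_rg / 20.0) * (m_bh / 10.0)
            * ((m_bh + m_comp) / 10.0) ** (-1.0 / 3.0)
            * (p_day / 2.62) ** (1.0 / 3.0))

# Table 1 systems: (black hole mass, GUESSED companion mass, period) in
# (M_sun, M_sun, day); companion masses are not from the paper.
systems = {
    "GRS 1915+105": (14.0, 1.2, 33.5),   # K-M giant companion
    "GRO J1655-40": (7.0, 2.3, 2.62),    # F3 companion
    "V4641 Sgr":    (9.6, 6.5, 2.82),    # B9 companion
    "M33 X-7":      (2.5, 30.0, 3.45),   # O-B companion
}
for name, (m_bh, m_comp, p_day) in systems.items():
    print(f"{name}: {ingress_time_scaling(m_bh, m_comp, p_day):.2f} s")
```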
This research was partially supported by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for JSPS Fellows, (16004706, KW; 15052631, 17010519, RT). This work was also supported in part by the Grants-in Aid of the Ministry of Education, Science, Sports, and Culture of Japan (15540235, JF).
References

Abramowicz, M. A., Czerny, B., Lasota, J. P., & Szuszkiewicz, E. 1988, ApJ, 332, 646
Abramowicz, M. A., & Kluźniak, W. 2001, A&A, 374, L19
Bailyn, C. D., Orosz, J. A., McClintock, J. E., & Remillard, R. A. 1995, Nature, 378, 157
Cherepashchuk, A. M., Aslanov, A. A., & Kornilov, V. G. 1982, SvA, 26, 697
Fukue, J. 1987, Nature, 327, 600
Fukue, J., & Yokoyama, T. 1988, PASJ, 40, 15
Greiner, J., Cuby, J. G., & McCaughrean, M. J. 2001, Nature, 414, 522
Gies, D. R., McSwain, M. V., Riddle, R. L., et al. 2002, ApJ, 566, 1069
Hirabayashi, H., et al. 2005, astro-ph/0501020
Homan, J., Wijnands, R., & van den Berg, M. 2003, A&A, 412, 799
Horne, K. 1985, MNRAS, 213, 129
Pietsch, W., et al. 2004, A&A, 413, 879
Revnivtsev, M., et al. 2004, A&A, 424, L5
Takahashi, R. 2004, ApJ, 611, 996
Takahashi, R. 2005, PASJ, 57, 273
Takahashi, R. 2005, Ph.D. thesis, Kyoto University
Tanaka, Y., et al. 1995, Nature, 375, 659
van der Klis, M. 2000, ARA&A, 38, 717
Watarai, K., Fukue, J., Takeuchi, M., & Mineshige, S. 2000, PASJ, 52, 133
Watarai, K., Ohsuga, K., Takahashi, R., & Fukue, J. 2005, PASJ, 57, 513
|
[] |
[
"Prospects for indirect detection of sneutrino dark matter with IceCube",
"Prospects for indirect detection of sneutrino dark matter with IceCube"
] |
[
"Rouzbeh Allahverdi \nDepartment of Physics & Astronomy\nUniversity of New Mexico\n87131AlbuquerqueNMUSA\n",
"Sascha Bornhauser \nDepartment of Physics & Astronomy\nUniversity of New Mexico\n87131AlbuquerqueNMUSA\n",
"Bhaskar Dutta \nDepartment of Physics\nTexas A&M University\n77843-4242College StationTXUSA\n",
"Katherine Richardson-Mcdaniel \nDepartment of Physics & Astronomy\nUniversity of New Mexico\n87131AlbuquerqueNMUSA\n"
] |
[
"Department of Physics & Astronomy\nUniversity of New Mexico\n87131AlbuquerqueNMUSA",
"Department of Physics & Astronomy\nUniversity of New Mexico\n87131AlbuquerqueNMUSA",
"Department of Physics\nTexas A&M University\n77843-4242College StationTXUSA",
"Department of Physics & Astronomy\nUniversity of New Mexico\n87131AlbuquerqueNMUSA"
] |
[] |
We investigate the prospects for indirect detection of right-handed sneutrino dark matter at the IceCube neutrino telescope in a U (1)B−L extension of the MSSM. The capture and annihilation of sneutrinos inside the Sun reach equilibrium, and the flux of produced neutrinos is governed by the sneutrino-proton elastic scattering cross section, which has an upper bound of 8 × 10 −9 pb from the Z ′ mass limits in the B − L model. Despite the absence of any spin-dependent contribution, the muon event rates predicted by this model can be detected at IceCube since sneutrinos mainly annihilate into leptonic final states by virtue of the fermion B − L charges. These subsequently decay to neutrinos with 100% efficiency. The Earth muon event rates are too small to be detected for the standard halo model irrespective of an enhanced sneutrino annihilation cross section that can explain the recent PAMELA data. For modified velocity distributions, the Earth muon events increase substantially and can be greater than the IceCube detection threshold of 12 events km −2 yr −1 . However, this only leads to a mild increase of about 30% for the Sun muon events. The number of muon events from the Sun can be as large as roughly 100 events km −2 yr −1 for this model.
|
10.1103/physrevd.80.055026
|
[
"https://arxiv.org/pdf/0907.1486v2.pdf"
] | 118,638,948 |
0907.1486
|
13df08a8ec104872b34ce26dfc3cc3e43aa85c13
|
Prospects for indirect detection of sneutrino dark matter with IceCube
September 24, 2009
Rouzbeh Allahverdi
Department of Physics & Astronomy
University of New Mexico
87131AlbuquerqueNMUSA
Sascha Bornhauser
Department of Physics & Astronomy
University of New Mexico
87131AlbuquerqueNMUSA
Bhaskar Dutta
Department of Physics
Texas A&M University
77843-4242College StationTXUSA
Katherine Richardson-Mcdaniel
Department of Physics & Astronomy
University of New Mexico
87131AlbuquerqueNMUSA
PACS numbers: 12.60.Jv, 95.35.+d, 14.60.Lm
I. INTRODUCTION
There are various lines of evidence supporting the existence of dark matter in the universe, but its identity remains a major problem whose solution likely rests at the interface of particle physics and cosmology. It is well established that particle physics can explain dark matter in the form of weakly interacting massive particles (WIMPs) [1]. In the standard scenario, the dark matter relic abundance, as precisely measured by cosmic microwave background (CMB) experiments [2], is determined by the thermal freeze-out of dark matter annihilation in the early universe. There are currently major experimental efforts for direct and indirect detection of dark matter particles. Indirect detection investigates annihilation of dark matter to various final states (photons, anti-particles, neutrinos) through astrophysical observations, while direct detection probes the scattering of the dark matter particle off nuclei inside dark matter detectors.
Supersymmetry is a front-runner candidate to address the hierarchy problem of the standard model (SM). The minimal supersymmetric standard model (MSSM) has been the focus of major theoretical and experimental activities for the past two decades. It has a natural dark matter candidate, namely the lightest supersymmetric particle (LSP), which can have the correct thermal relic abundance [3]. It is also believed that there are gauge symmetries beyond those of the SM. A minimal extension of the SM gauge group, motivated by the nonzero neutrino masses, includes a gauged U(1)_{B−L} symmetry [4] (B and L are baryon and lepton number, respectively). Anomaly cancellation then implies the existence of three right-handed (RH) neutrinos and allows us to write Dirac and Majorana mass terms for the neutrinos to explain the light neutrino masses and mixings.
The B − L extended MSSM also provides new dark matter candidates: the lightest neutralino in the B − L sector [5,6] and the lightest RH sneutrino [7]. In this work we will focus on the sneutrino as the dark matter candidate 1 . The candidate is made stable by invoking a discrete R-parity, but in the context of a B−L symmetry, a discrete matter parity can arise once the U (1) B−L is spontaneously broken [9]. The B − L gauge interactions can yield the correct relic abundance of sneutrinos if the U (1) B−L is broken around the TeV scale.
Recently, it has been shown that it is possible to explain the positron excess observed in the PAMELA data [10] in the context of a low scale B − L extension of the MSSM [6,11,12]. Due to a factor of 3 difference between the B − L charges of the quarks and leptons, the anti-proton flux is naturally suppressed in this model in agreement with the PAMELA anti-proton data. Furthermore, the U (1) B−L gauge coupling unifies with those of the SM symmetries, and the B − L symmetry can be broken radiatively. The B −L breaking around a TeV results in a Z ′ gauge boson with around a TeV mass that can be probed at the LHC along with the other new states of this model.
The RH sneutrino of this B − L extended model can be detected when it elastically scatters off a nucleus. The sneutrino-proton scattering cross section is large enough to be probed in the ongoing and upcoming dark matter direct detection experiments [7]. In addition, annihilation of sneutrinos at the present time produces LH neutrinos. It is interesting to investigate the possibility of indirect detection of sneutrino dark matter by using final state neutrinos in the IceCube neutrino telescope. This ongoing experiment plans to probe the neutrino flux arising from the annihilation of gravitationally trapped dark matter particles in the Sun and the Earth. We will examine the status of the U (1) B−L model in two cases. In case 1, the sneutrinos annihilate mostly into RH neutrinos that subsequently decay into LH neutrinos and the MSSM Higgs. In case 2, the sneutrinos annihilate mostly into the lightest Higgs boson in the B − L sector, which decays into τ + τ − pairs and bb quarks that subsequently produce LH neutrinos via three-body decays. The recent PAMELA data [10] can be explained in case 2, where the final state taus give rise to the positron excess in the cosmic ray flux without producing a significant number of antiprotons [6,11,12]. The large cross section required to explain the data arises from Sommerfeld enhancement [13] or from the non-thermal production of dark matter [12].
Since the source of neutrinos are different in the two cases, two-body versus three-body decay, the energy spectrum of the neutrinos can be used to distinguish the cases. We will estimate the muon neutrino flux as well as the muon flux in both scenarios as a function of sneutrino mass. Since the Large Hadron Collider (LHC) is on the verge of producing physics results, it will enable us to measure the mass of the dark matter candidate. Therefore, using the LHC measurements and the IceCube results in tandem, we hope to discern the B − L model. We will present predictions of this model using the standard dark matter halo model as well as the modified velocity distributions obtained in recent galaxy simulations. This paper is organized as follows. In section II, we discuss the low scale U (1) B−L model. In section III, we give a general discussion of the indirect detection of sneutrino dark matter via neutrino final states. In section IV, we present our results and discuss the prospect of detection of sneutrino dark matter at IceCube in case 1 and case 2. In section V, we show the results obtained for the modified velocity distributions. In section VI, we compare predictions for the sneutrino dark matter in the U (1) B−L model with those for the neutralino dark matter in the minimal supergravity model. Finally, we close by concluding in section VII.
II. THE U (1) B−L MODEL
Since this B − L is a local gauge symmetry, we have a new gauge boson Z ′ (and its supersymmetric partner). In the minimal model, we also have two new Higgs fields H ′ 1 and H ′ 2 (that are SM singlets) and their supersymmetric partners. The vacuum expectation values (VEVs) of these Higgs fields break the B − L symmetry. We can write the superpotential of the model as follows (the boldface characters denote superfields)
W = W_{\rm MSSM} + W_{B-L} + y_D\, \mathbf{N}\, \mathbf{H}_u\, \mathbf{L}\,, \qquad (1)
where H u and L are the superfields containing the Higgs field that gives mass to up-type quarks and the LH leptons respectively. For simplicity, we have omitted the family indices. The W B−L term contains H ′ 1 , H ′ 2 and N [11]. Its detailed form depends on the charge assignments of the new Higgs fields.
The U (1) B−L is broken by the VEVs of H ′ 1 and H ′ 2 , which we denote by v ′ 1 and v ′ 2 respectively. This results in a mass m Z ′ for the Z ′ gauge boson. We have three physical Higgs fields φ, Φ (scalars) and A (a pseudoscalar). The Higgs masses satisfy m^2_φ < cos^2(2β′) m^2_{Z′} (where tan β′ ≡ ⟨H ′ 2 ⟩/⟨H ′ 1 ⟩) and m Φ , m A ∼ m Z ′ . The B − L charges of the various fields are:

Fields:   Q     Q c    L     L c    H ′ 1   H ′ 2
Q B−L :   1/6   −1/6   −1/2  1/2    1       −1

A natural dark matter candidate in this model is the lightest sneutrino N . We note that it has fewer gauge interactions than other supersymmetric particles, and its mass receives the smallest contribution from the gaugino loops. Based on the dominant channel for sneutrino annihilation we therefore consider the following two cases:
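As a cross-check, the charge assignment in the table above can be verified to be anomaly-free per generation. The sketch below assumes the RH neutrino N^c sits alongside e^c in L^c with charge +1/2 (the assignment that makes three RH neutrinos mandatory, as noted in the introduction); multiplicities count Weyl components (SU(2) doublet times color).

```python
from fractions import Fraction as F

# Per-generation B-L charges from the table; multiplicities count Weyl
# components (SU(2) doublet x SU(3) color). The RH neutrino N^c is
# assumed to carry charge +1/2 alongside e^c in L^c.
fields = {
    "Q":  (F(1, 6),  6),   # quark doublet: 2 (SU(2)) x 3 (color)
    "Qc": (F(-1, 6), 6),   # u^c and d^c: 2 flavors x 3 colors
    "L":  (F(-1, 2), 2),   # lepton doublet
    "Lc": (F(1, 2),  2),   # e^c and N^c
}

cubic  = sum(q**3 * n for q, n in fields.values())   # [U(1)_{B-L}]^3 anomaly
linear = sum(q * n for q, n in fields.values())      # mixed gravitational anomaly

print(cubic, linear)   # both vanish: 0 0
```

Dropping N^c (leaving only e^c in L^c) would give a cubic anomaly of −1/8 per generation, which is exactly why the RH neutrinos are required. Note also that the charges are vectorial (Q and Q^c opposite), consistent with the vanishing spin-dependent cross section discussed in section VI.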
• Case 1: A generic case where a solution to the positron excess observed by PAMELA is not sought. In this case the dominant annihilation channels are the S-wave processes N N → N N and N * N * → N * N * via t-channel exchange of Z ′ . There are also N N * → N N * , ff annihilation modes via s-channel exchange of a Z ′ or B − L Higgs fields, but these are P -wave suppressed and can be completely neglected (particularly at the present time). In this case the annihilation cross section has the nominal value ∼ 3 × 10 −26 cm 3 /sec (dictated by thermal freeze out) at all times. The RH neutrinos produced from dark matter annihilation quickly decay to LH neutrinos and the MSSM Higgs.
• Case 2: In this case the PAMELA puzzle is addressed via Sommerfeld enhancement of sneutrino annihilation at the present time [11]. In this part of the model parameter space the lightest B −L Higgs φ is much lighter than the Z ′ . The dominant annihilation channel is N * N → φφ via the s-channel exchange of the φ or Φ, the t or u-channel exchange of a N , and the contact term | N | 2 φ 2 . The interactions for these processes arise from the D-term part of the potential, and their strength is proportional to m Z ′ . There are other S-wave processes with Higgs final states N * N → φΦ, φA, ΦΦ, AA, but they are kinematically suppressed and/or forbidden. The annihilation modes N N → N N and N * N * → N * N * are also subdominant in this case. As in the previous case, annihilations to ff final states are P -wave suppressed and hence totally negligible. The cross section for annihilation to the φφ final state at the present time is required to be 3×10 −23 cm 3 /sec in order to explain the PAMELA data. Sufficient Sommerfeld enhancement is obtained as a result of the attractive force between sneutrinos due to the φ exchange provided that the mass of φ is small (< 20 GeV) 2 . The φ subsequently decays into fermion-antifermion pairs very quickly via a one-loop diagram, and it mostly produces τ + τ − final states by virtue of the fermion B − L charges [11].
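Case 2 relies on Sommerfeld enhancement to boost the annihilation cross section by roughly three orders of magnitude (3 × 10 −26 → 3 × 10 −23 cm 3 /sec) at low velocities. As a rough illustration of the velocity scaling, the sketch below evaluates the enhancement factor in the Coulomb (massless-mediator) limit; the effective coupling α_eff is a hypothetical placeholder, conventions for the velocity differ by factors of 2 between references, and the full factor of ~10 3 quoted in the text requires the finite-m_φ (Yukawa) treatment with its resonances.

```python
import math

def sommerfeld_coulomb(alpha_eff, v):
    """Sommerfeld factor for an attractive Coulomb-like potential,
    S = x / (1 - exp(-x)) with x = pi * alpha_eff / v; valid only in
    the massless-mediator limit m_phi << alpha_eff * m_DM."""
    x = math.pi * alpha_eff / v
    return x / (1.0 - math.exp(-x))

alpha_eff = 1e-2       # hypothetical effective phi-sneutrino coupling
v_freezeout = 0.3      # typical relative velocity at freeze-out (units of c)
v_today = 1e-3         # typical halo velocity today

S_fo = sommerfeld_coulomb(alpha_eff, v_freezeout)
S_now = sommerfeld_coulomb(alpha_eff, v_today)
print(S_fo, S_now)     # ~1 at freeze-out, ~pi*alpha_eff/v ~ 31 today
```

The key point is that S ≈ 1 at freeze-out (so the thermal relic abundance is untouched) while S grows like 1/v in the halo today, which is the mechanism the text invokes.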
The sneutrino-proton scattering cross section for this model can be in the 10 −11 − 10 −8 pb range for a reasonable choice of parameters that satisfy the relic density constraint, cf. [7,11]. This opens up the prospect for direct detection with the help of the next generation of experiments [14]. The current upper bound for the spinindependent cross section is 4.6 × 10 −8 − 2 × 10 −7 pb for a dark matter mass of 60 − 1200 GeV; this is just above the highest possible values for our model 3 .
III. PROSPECTS FOR INDIRECT DETECTION AT ICECUBE
A. The Neutrino Signal
The B − L model also shows great promise for indirect detection, and we focus in particular on the potential neutrino signal at the IceCube experiment. In case 1, the sneutrinos annihilate to produce RH neutrinos that subsequently decay into a LH neutrino and a neutral Higgs boson 4 . We assume for most of this paper that the total LH neutrino flux branches into every neutrino flavor equally (see subsection IV A for a discussion). Assuming that the mass difference between the RH sneutrinos and RH neutrinos is small 5 , the RH neutrinos are produced non-relativistically, and hence each LH neutrino and Higgs receives an energy equal to half of the sneutrino mass.
In case 2, RH neutrinos constitute about 10% of the annihilation final states. Two of the lightest B − L Higgses φ compose the remaining 90% of the branching fraction. This branching fraction is necessary to provide a high enough leptonic particle rate to fit the PAMELA data. As mentioned in the previous section we need m φ < 20 GeV. For 4 GeV < m φ < 20 GeV, the final states are mostly taus (74%) and b quarks (16%), where the dominance of tau final states is a result of the fermion B − L charges. The LH neutrinos in this case arise from the three-body decay of taus and bottom quarks. For m φ < 4 GeV, we would have mostly muons and charm quarks.
Both the case 1 and case 2 scenarios of our model display a crucial signature difference when compared to the standard neutralino LSP in the MSSM. The energy distribution of the produced LH neutrinos from the RH neutrino decay is a delta function occurring at half of the sneutrino mass. Other annihilation channels in this model, as well as those available in the MSSM, produce additional neutrino signal via three-body decays such as τ − → e − ν̄ e ν τ . This difference opens up a significant possibility to differentiate between the B − L model and the MSSM with the help of the differential energy spectrum of the detector event rates. This is discussed further in section IV.
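The contrast between the two spectral shapes can be made concrete with a toy comparison: a monochromatic line at E_ν = m/2 (case 1) versus a Michel-like three-body spectrum (a proxy for the τ decays in case 2). The Michel shape dN/dx = 2x²(3 − 2x) used below is the standard ν_τ spectrum in τ → e ν ν̄ with x = E_ν/E_max; treating it as representative of all case-2 channels is an approximation.

```python
import numpy as np

# Case 1: monochromatic line, all neutrinos at x = E/E_max = 1 (up to
# the factor-of-two kinematics). Case 2 proxy: Michel-like three-body
# spectrum dN/dx = 2 x^2 (3 - 2x), broad and soft instead of a line.
x = np.linspace(0.0, 1.0, 100001)
dx = x[1] - x[0]
michel = 2.0 * x**2 * (3.0 - 2.0 * x)

norm = michel.sum() * dx          # total neutrinos per decay -> 1
mean_x = (x * michel).sum() * dx  # mean energy fraction -> 0.7

print(norm, mean_x)
```

The three-body spectrum carries only 70% of the endpoint energy on average and has support all the way down to x = 0, which is why case 2 yields softer neutrinos at the detector.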
B. Neutrino Flux
Sneutrino annihilation in the Sun and the Earth produces an expected neutrino flux through IceCube. This flux is modeled by calculating the number of gravitationally captured sneutrinos and then considering the propagation and detection of the produced neutrinos. The number of captured dark matter particles as a function of time is governed by a differential equation the solution to which is
N(t) = \sqrt{C/A}\, \tanh\!\left(\sqrt{CA}\, t\right), \qquad (2)
where C is the total capture rate, which depends on the scattering cross sections off nucleons, and A is related to the annihilation cross section; see Ref. [17] for details. The total rate of annihilation is given by
\Gamma_A = \frac{C}{2}\, \tanh^2\!\left(\frac{t}{\tau_{\rm eq}}\right). \qquad (3)
The number of captured sneutrinos will saturate as long as the length of time for the process has exceeded the equilibration time, τ eq ≡ ( √ CA) −1 . In equilibrated systems, the rate of annihilation is entirely dominated by the capture rate C, Γ A ≈ C/2. We can explain equilibration in the B − L model by considering some example cross sections. Since the age of the solar system is 4.5 Gyr, for a 1 TeV sneutrino with an annihilation cross section of 3 × 10 −23 cm 3 /sec (3 × 10 −26 cm 3 /s), a spin-independent cross section σ SI of at least 10 −11 pb (10 −8 pb) is needed to reach equilibration in the Sun. This assumes no spin-dependence as the B − L model has none. The scattering cross section needed to achieve equilibration in the Earth is already excluded by direct detection bounds.
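Equations (2)–(3) and the equilibration condition can be sketched numerically. The capture and annihilation constants below are assumed placeholders; only the ratio t/τ_eq matters for whether Γ_A saturates at C/2.

```python
import math

def annihilation_rate(C, A, t):
    """Gamma_A = (C/2) * tanh^2(t / tau_eq) with tau_eq = 1/sqrt(C*A),
    following Eqs. (2)-(3)."""
    tau_eq = 1.0 / math.sqrt(C * A)
    return 0.5 * C * math.tanh(t / tau_eq) ** 2

# Illustrative (assumed) numbers -- only t/tau_eq matters here.
C = 1.0e20    # captures per second (hypothetical)
A = 1.0e-53   # s^-1, encodes <sigma v> over the effective volume (hypothetical)

tau_eq = 1.0 / math.sqrt(C * A)   # ~3e16 s, about 1 Gyr
t_sun = 4.5e9 * 3.156e7           # age of the solar system in seconds

ratio = annihilation_rate(C, A, t_sun) / (0.5 * C)
print(tau_eq, ratio)              # ratio -> 1 once t >> tau_eq
```

With these numbers t_sun ≈ 4.5 τ_eq, so tanh²(t/τ_eq) is already within a fraction of a percent of its saturated value and Γ_A ≈ C/2, the equilibrated regime described in the text.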
Alternatively we can fix the scattering cross section and place a limit on the annihilation cross section. In the B − L model, the cross section for sneutrino-proton elastic scattering follows
\sigma_{\rm SI} \propto \left(\frac{g_{B-L}\, Q_L}{m_{Z'}}\right)^{\!4} m_p^2\,, \qquad (4)
where g B−L and Q L are the U (1) B−L gauge coupling and B − L charge of leptons, respectively, and m p is the proton mass. The limits on the Z ′ mass from LEP and Tevatron are given by [15,16],
\frac{m_{Z'}}{g_{B-L}\, Q_L} > 6~{\rm TeV}\,. \qquad (5)
This results in an upper limit on σ SI of 8 × 10 −9 pb. Assuming this bound is realized, an annihilation cross section ≥ 4 × 10 −26 cm 3 /s (1 × 10 −18 cm 3 /s) needs to be achieved to reach equilibrium in the Sun (Earth). Note that we can always choose the B − L gauge coupling and scale the B − L charges in accordance with anomaly cancellation such that the bound on σ SI is saturated while obtaining the correct relic density for sneutrino dark matter. This is possible since a different combination of g B−L and Q B−L appears in the relic density calculation. This is in contrast to the MSSM case, where the SM gauge couplings and charges are fixed.
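The equilibration thresholds quoted here and in the previous paragraph can be cross-checked against each other: since τ_eq = 1/√(CA) with C ∝ σ_SI and A ∝ ⟨σv⟩, requiring τ_eq to fit within the fixed age of the solar system pins down the product σ_SI⟨σv⟩ (a scaling sketch; the full capture calculation carries additional mass dependence).

```python
# tau_eq = 1/sqrt(C*A) with C ∝ sigma_SI and A ∝ <sigma v>, so Sun
# equilibration within a fixed age corresponds to a constant product
# sigma_SI * <sigma v>. Cross-check the thresholds quoted in the text
# for a 1 TeV sneutrino.
anchors = [
    (1e-11, 3e-23),   # (sigma_SI in pb, <sigma v> in cm^3/s)
    (1e-8,  3e-26),
]
products = [s * sv for s, sv in anchors]
assert all(abs(p / products[0] - 1.0) < 0.1 for p in products)

# Scaling to the Z'-limit bound sigma_SI = 8e-9 pb:
sv_required = products[0] / 8e-9
print(sv_required)    # ~3.8e-26 cm^3/s, matching the quoted >= 4e-26
```

Both quoted anchor points give the same product (3 × 10 −34 pb cm 3 /s), and scaling to σ SI = 8 × 10 −9 pb reproduces the ≥ 4 × 10 −26 cm 3 /s threshold up to rounding.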
Since equilibrium is easily achieved in the Sun, the neutrino signal will depend solely on C, or equivalently σ SI , so the increased annihilation rate in case 2 of our model confers no advantage compared to typical MSSM cases for annihilation in the Sun. On the other hand, choosing reasonable values for either of the relevant cross sections demonstrates that equilibrium is nearly impossible to reach for the Earth without significant deviation from the assumptions made in [17]. Consequently, the neutrino signal from the Earth will depend on both C and A. Therefore one expects a much larger signal for case 2 as compared to either case 1 or the neutralino dark matter models [18].
The annihilation of sneutrinos in the Sun and Earth yields neutrinos that can be detected by the IceCube experiment. IceCube can distinguish between neutrino signals from the Earth and Sun with the help of an angle cut. This cut restricts the detection to an angle range of 90° < Θ < 113° in the case of the Sun, where Θ is the Earth zenith angle. One has to measure below the horizon to be able to distinguish the background of atmospheric neutrinos from the signal, and the Sun cannot be more than 23.5° below the horizon at the South Pole [27,28]. In the case of a search for a potential Earth signal one looks at a zenith angle of about 180°, i.e., directly to the core of the Earth [27].
Muon neutrinos create muons via charged current interactions in the detector. The qualitative behavior of the muon flux depends on the corresponding neutrino muon flux, and the differential neutrino spectrum is given by
\frac{dN_\nu}{dE_\nu} = \frac{\Gamma_A}{4\pi D^2} \sum_f B^f_{\tilde N}\, \frac{dN^f_\nu}{dE_\nu}\,, \qquad (6)
see for example Ref. [21]. Appendix A contains a detailed discussion about the mass dependence of this equation. The IceCube detector records the Cerenkov light from relativistic charged particles in its volume. Cosmic ray showers create a muon background signal that can be controlled by selecting for upward-going and contained muon events. The atmospheric neutrino background is well understood and may be subtracted away from the signal.
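The overall normalization of Eq. (6) is just the annihilation rate diluted over the sphere at the detector distance. A minimal sketch for the Sun, with an assumed (hypothetical) capture rate standing in for the equilibrated Γ_A = C/2:

```python
import math

AU_CM = 1.496e13    # Earth-Sun distance in cm

def flux_prefactor(gamma_a, D_cm=AU_CM):
    """Overall normalization Gamma_A / (4 pi D^2) of Eq. (6)."""
    return gamma_a / (4.0 * math.pi * D_cm**2)

# Hypothetical equilibrated Sun: Gamma_A = C/2 with an assumed capture
# rate C = 1e20 s^-1; the branching fractions B_f and per-channel
# spectra dN^f/dE multiply this prefactor.
pref = flux_prefactor(0.5 * 1e20)
print(pref)   # annihilations cm^-2 s^-1, before branching and spectra
```

Because the Sun is equilibrated, this prefactor scales linearly with σ SI through C, which is why the signal depends only on the scattering cross section in that case.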
In addition to the muon flux through IceCube, electromagnetic and hadronic cascades inside the detector might also allow sneutrino dark matter detection. Electromagnetic cascades occur via charged current interactions. By depositing some of the incoming neutrino energy in taus and electrons, Bremsstrahlung radiation produces a localized cascade of energy that the digital optical modules of IceCube can record. In the results of Appendix B we have ignored any contribution from the charged current electromagnetic cascades of the muons, since their contribution has already been considered in the form of Cerenkov radiation from the muon tracks. Hadronic cascades occur for both neutral current and charged current interactions. As the neutrino scatters off of a nucleus in the detector, the nucleus breaks up and produces products such as pions that in turn decay into detectable photons. Note that for neutral current interactions the energy of the outgoing neutrino is lost and is not recorded in any cascade. The energy from localized electromagnetic and hadronic cascades is much harder to reconstruct compared to muon tracks but still might produce an interesting signal in the detector, see Appendix B.
IV. MODEL RESULTS
The annihilation of sneutrinos in the Sun and Earth results in a flux of particle events through the IceCube detector that are calculated using DarkSUSY, which uses results from WimpSim [21,22]. The calculations account for neutrinos produced via decays, as well as neutrino oscillation, loss via charged current interactions and scattering via neutral current interactions. DarkSUSY default parameters are used, which include a Gaussian dark matter velocity distribution and an NFW halo profile. Realistic Sun and Earth density profiles are integrated over numerically according to [23]. For both case 1 and case 2, the maximum spin-independent cross section allowed by the Z ′ limits is used. Similarly, the annihilation cross section is fixed at 3 × 10 −26 cm 3 /s (3 × 10 −23 cm 3 /s) for case 1 (case 2). Finally, the results presented in the subsections below use the convention of a detector energy threshold of 1 GeV. IceCube effective areas have not been calculated for our model, but we anticipate that they would be slightly larger than those used for the MSSM scenarios since we have a slightly harder spectrum. This is especially true in case 1.
A. Sensitivity to Neutrino Flavor
For the results that follow we have considered equal branching to the three flavors of LH neutrinos, but in principle this need not be the case. The exact flavor composition of LH neutrinos produced from sneutrino annihilation in the Sun depends on the detailed structure of Majorana and Dirac couplings in the neutrino sector. In Fig. 1, the resulting muon neutrino flux for a 100% branching ratio to a single flavor is compared to equal flavor ratios in both case 1 and case 2 (upper and lower panels respectively).
It is seen from the upper panel that in case 1 for sneutrino masses below 300 GeV (LH neutrino energy below 150 GeV) flavor composition of produced neutrinos does not matter since oscillations are very efficient at low energies and easily mix the neutrino flavors. Therefore 100% ν e , ν µ , or ν τ each leads to the same ν µ signal at the detector. However at high energies oscillation length L osc ∝ E ν /∆m 2 elongates, and oscillations become less efficient. Here ∆m 2 is the difference between (mass) 2 of neutrino mass eigenstates. This effect is most important for ν e 's since they oscillate to ν µ 's via the small mass splitting responsible for solar neutrino oscillations ∆m 2 sol . This is why the ν µ flux at the detector falls quickly for 100% ν e branching ratio at high energies. The effect is less pronounced for 100% ν µ and ν τ branching ratios because the relevant mass splitting is the one responsible for atmospheric neutrino oscillations ∆m 2 atm , which is much larger. However, it is seen that the ν µ flux for 100% ν µ branching ratio is less than that for 100% ν τ branching ratio at high energies. This is because of charged current interactions inside the Sun whose cross section is proportional to the neutrino energy. These interactions convert muon neutrinos to muons that are quickly stopped in the Sun due to electromagnetic interactions that result in attenuation of the neutrino flux. Charged current interactions also convert tau neutrinos to taus. However, due to their much shorter lifetime, they decay back to ν τ before any significant energy loss. Nevertheless, for sneutrino masses up to 1.5 TeV, the result for equal branching ratios to three flavors is within a factor of a few compared with the 100% branching ratio to a single neutrino flavor. Moreover, for a typical model, it is unlikely that sneutrino annihilation produces only one flavor of RH neutrinos. Therefore equal branching to the three flavors is a good approximation in case 1.
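The energy dependence of the oscillation length invoked above can be illustrated directly: L_osc = 4πE/Δm², i.e., roughly 2.48 km × (E/GeV)/(Δm²/eV²). The splitting values below are standard global-fit numbers, inserted here as an assumption.

```python
AU_KM = 1.496e8    # astronomical unit in km

def osc_length_km(E_GeV, dm2_eV2):
    """Vacuum oscillation length L_osc = 4*pi*E/dm^2,
    roughly 2.48 km * (E/GeV) / (dm^2/eV^2)."""
    return 2.48 * E_GeV / dm2_eV2

dm2_sol = 7.6e-5   # solar mass-squared splitting, eV^2 (assumed fit value)
dm2_atm = 2.4e-3   # atmospheric splitting, eV^2 (assumed fit value)

# E_nu = m_sneutrino / 2 for the monochromatic case-1 line
for E in (50.0, 150.0, 750.0):
    print(E, osc_length_km(E, dm2_sol) / AU_KM, osc_length_km(E, dm2_atm) / AU_KM)
```

The solar-splitting length is ~30 times longer than the atmospheric one at any energy, and both grow linearly with E, which is the trend behind the flavor-dependence at high sneutrino masses described above.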
In case 2 (the lower panel 6 ), there is virtually no difference between various flavor compositions. This is because sneutrino annihilation mainly produces taus in this case (the branching ratio for production of RH neutrinos is only 10%). Hence equal branching to the three flavors is a nearly perfect approximation in this case. We conclude that our results do not depend critically on the choice of neutrino flavor branching ratios in either case.

6 The effect of the 1 GeV conventional energy threshold in the spectrum can be seen at low masses as more of the neutrino signal is lost under the threshold; this causes the maximum event rate to move to the right from the edge of the graph. This effect is not evident in case 1 since the majority of the neutrino flux arrives at higher energies and is unaffected by the small threshold.

B. Contributions to Muon Flux

It is worth emphasizing that case 1 and case 2 yield different neutrino signals. In case 1, LH neutrinos are produced from two-body decay of (almost nonrelativistic) RH neutrinos. This produces a delta function in the energy of the LH neutrinos at one-half the mass of the sneutrino dark matter 7 . On the other hand, in case 2, the sneutrino dominantly annihilates to φφ final states, and each φ decays to a fermion-antifermion pair via a one-loop diagram. The partial decay rate of φ is proportional to the square of the mass of the resulting fermion and the fourth power of its B − L charge [6,11]. As a result, the largest contribution to the annihilation is from taus (≈ 74%) and bottom quarks (≈ 16%), where the quark signal is suppressed due to the B − L charge. Both of these final states produce neutrinos via three-body decay, which results in a spread in the energy signal.

7 There is one additional potential source for neutrinos: the Higgs produced from the decay of the RH neutrinos can itself decay to a bb pair. We checked that this contribution gives only a few percent change in the signal. We therefore neglect it in our numerical calculation for the sake of simplicity.

Fig. 2(a) shows the muon neutrino flux energy spectrum through a square kilometer of IceCube in one year for a 300 GeV sneutrino in case 1. The delta function at half the mass of the sneutrino can be seen clearly. A small portion of muon neutrinos from this initial annihilation state are scattered via neutral current interactions inside the Sun to lower energies. This produces the slight bump in the spectrum at low energies. Fig. 2(b) plots the resulting muon flux from the charged current interactions inside the IceCube detector. As expected for a monochromatic incident neutrino, the spectrum of muons has a linear dependence on energy.

For case 2, the delta function from the neutrino channel at the detector is subdominant to the other annihilation channels, see Fig. 3(a). First, the sneutrino annihilation mainly produces taus and bottom quarks that subsequently produce LH neutrinos via three-body decays. Second, due to the larger sneutrino mass of 1 TeV (needed to explain the PAMELA data), the LH neutrinos produced from two-body decays have a higher energy than in the 300 GeV case. Therefore they lose energy via neutral current interactions and get absorbed via charged current interactions inside the Sun more efficiently. As a result of both of these facts, there are more neutrinos with low energies at the detector from each channel in this case than in case 1. This is also reflected in the spectrum of the muon flux, shown in Fig. 3(b), which does not show a linear dependence on energy due to the presence of three-body decays. This is in contrast to Fig. 2(b).
The muon event signal from annihilation in the Earth for case 1 and case 2 is too small to detect since the dark matter population has not reached equilibrium; therefore, the production of neutrinos depends on both the scattering cross section and the annihilation cross section, which is small in this scenario. The plots of the muon rates as a function of the sneutrino mass have two characteristics: an increase at lower masses culminating in a peak, followed by a general decrease in event rates at higher masses.
C. Mass Dependence of Muon Flux
The decrease of the event rates at higher sneutrino masses reflects the decrease of the neutrino flux due to the kinematic suppression of sneutrino capture (the capture rate scales approximately like the inverse of the sneutrino mass for large masses 9 ). The linear increase at low masses is explained by the linear dependence of the cross section for charged current interactions on the energy of the neutrinos at the detector (which is proportional to the sneutrino mass). The case 1 signal is larger than the case 2 signal for lower values of sneutrino mass. LH neutrinos are produced in two-body decays in case 1 versus three-body decays in case 2, and hence have a higher energy. As a result, the cross section for conversion of neutrinos to muons at the detector is larger in case 1. However, for large sneutrino masses case 1 has a smaller signal than case 2. The produced LH neutrinos, 100% of the case 1 products, get absorbed via charged current interactions or lose energy via neutral current interactions inside the Sun more efficiently because of their larger energy; thus a smaller number of neutrinos arrive at the detector.

8 The apparent discrete nature of these plots occurs because only a few values of sneutrino mass are recorded in the WimpSim tables used by DarkSUSY; the program interpolates between these points. The effect is numerical and not physical.

9 See Appendix A for a more detailed definition of "large".
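The rise-then-fall shape of the rate curve can be sketched with a toy scaling: detection grows like the CC cross section (∝ E_ν ∝ m), while capture is roughly flat at low mass and kinematically suppressed at high mass. The specific functional form and the turnover scale m0 below are assumed for illustration only, not extracted from the actual capture calculation.

```python
import numpy as np

# Toy parametrization (assumed): detection ~ m from the CC cross
# section; capture roughly flat at low mass and suppressed ~ 1/m^2
# well above an assumed kinematic scale m0.
m = np.linspace(50.0, 2000.0, 2000)    # sneutrino mass in GeV
m0 = 300.0                              # assumed turnover scale
rate = m / (1.0 + (m / m0) ** 2)        # arbitrary units

m_peak = m[np.argmax(rate)]
print(m_peak)   # linear rise below m0, peak near m0, ~1/m fall above
```

The product of a linearly rising detection factor and a high-mass-suppressed capture factor necessarily peaks at the turnover scale, reproducing the qualitative shape described above.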
Refs. [19,20] display sensitivity plots for the detection of a muon signal in the case of standard neutralino dark matter annihilation in the Sun and Earth respectively. In the case of the Earth, more than 12 events are needed for a DM mass between 70 GeV and 4 TeV. In the case of the Sun the number of events needed drops linearly as a function of mass starting from 300 events at 70 GeV down to 70 events at 300 GeV. Beyond 300 GeV up to 4 TeV, the number of events needed remains fixed at 70. This provides a hint that one could detect the event rates caused by sneutrinos despite some differences between the sneutrino and neutralino dark matter spectra. These differences are due to unequal numbers and weighting of neutrino production channels, but the somewhat harder spectrum of the sneutrino model will make IceCube slightly more sensitive to the model. Hence, we can expect that it might be possible to detect muon neutrinos produced by sneutrino annihilation for sneutrino masses around 300 GeV for the Sun, cf. Fig. 4. Note that a large range of masses would be accessible with only an order of magnitude improvement in sensitivity.
In summary, if the dark matter mass is determined from measurements at the LHC, then we can read off the maximum number expected for the Sun muon rate in the B − L model from Fig. 4 10 . Thus, for a known sneutrino mass, observation of a muon signal exceeding the number given in Fig. 4 will rule out the B − L model. The largest number of muon events from the Sun in the entire depicted mass range is 58 km −2 yr −1 (36 km −2 yr −1 ) for case 1 (case 2). Therefore detection of a muon signal larger than this will rule out the B − L model regardless of the sneutrino mass.

10 Since we have used the upper bound on the sneutrino-proton scattering cross section in our calculations, the number of muon events cannot be larger than that given in Fig. 4.
In the case of the Earth, as mentioned in the previous subsection, there is no prospect for a potential detection at IceCube for the standard halo model. The number of muon events is 6 orders of magnitude below the minimum measurable Earth rate of 12 km −2 yr −1 in this case.
V. DARK MATTER DISC IN THE MILKY WAY
In our analysis so far, we have assumed a Gaussian-like velocity distribution for dark matter particles with a typical value for the three-dimensional velocity dispersion of σ v = 270 km sec −1 and |v Sun | = 220 km sec −1 for the velocity of the solar system with respect to the halo. However, there are recent speculations about the existence of a dark matter thick disc in the Milky Way in addition to the baryonic one, see, e.g., [24,25]. This dark matter disc is caused by the accretion of Milky Way satellite galaxies and their corresponding baryonic and dark matter. As dynamical friction causes the satellite galaxies to accrete onto the disc, tidal forces disrupt the satellites [25]. Galaxy formation simulations find the density of the dark matter disc ρ dark to be in the range ≈ 0.25−1.5 times the local halo dark matter density ρ halo [25]. Possible ranges for the solar system velocity and velocity dispersion of the dark matter disc are |v Sun | ≈ 0 − 150 km sec −1 and σ v ≈ 87 − 156 km sec −1 . Fig. 5 shows the Earth muon rate when we scan over the relevant parameter space of allowed values of |v Sun | and σ v in case 1 and case 2. We used the fixed ratio ρ dark /ρ halo = 1. Case 2 has a sufficient total event rate (≥ 12 km −2 yr −1 ) for nearly the whole allowed parameter space. The constraint on the parameter space is more pronounced for case 1. The allowed combinations are roughly given by a triangle with maximal values of |v Sun | = 47 km/s and σ v = 100 km/s. The difference in the allowed parameter space for the two cases reflects the fact that the Earth is not in equilibrium yet. Thus the muon neutrino signal and the corresponding muon flux still depend on the annihilation cross section, which is three orders of magnitude larger for case 2.
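Why the disc parameters boost capture so dramatically can be seen from a simple kinematic sketch: capture favors slow particles, and lowering both |v_Sun| and σ_v greatly increases the population at low speeds in the Sun's frame. The 100 km/s threshold below is an arbitrary illustrative cut, not the actual capture kinematics, and σ_v is treated as the three-dimensional dispersion, as in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def low_speed_fraction(v_sun, sigma_3d, u_max=100.0, n=200_000):
    """Fraction of dark matter particles with speed < u_max km/s in the
    Sun's frame, for an isotropic Gaussian with 3D dispersion sigma_3d
    (per-axis sigma_3d/sqrt(3)) boosted by the solar velocity v_sun."""
    v = rng.normal(0.0, sigma_3d / np.sqrt(3.0), size=(n, 3))
    v[:, 0] += v_sun                 # Sun's motion through the population
    speed = np.linalg.norm(v, axis=1)
    return float(np.mean(speed < u_max))

f_halo = low_speed_fraction(220.0, 270.0)   # standard halo model
f_disc = low_speed_fraction(50.0, 100.0)    # a dark-disc point in the scan
print(f_halo, f_disc)   # disc parameters give far more slow particles
```

Even this crude cut shows an order-of-magnitude enhancement of the slow population for disc-like parameters, consistent with the several-orders-of-magnitude increase in the Earth rates reported below.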
However, we see that in both cases the Earth rates have increased to detectable levels, several orders of magnitude higher than for the standard halo model with its higher |v Sun | and σ v used in the previous section 11 .

11 The usage of a free-space Gaussian velocity distribution means that our calculated event rates are an upper bound. There are many proposed parameterizations for the dark matter velocity distribution, and a Gaussian distribution belongs to the scenarios with the highest resulting event rates, see [26].

A change in the velocities and dispersions also modifies the corresponding total Sun event rates. This change is comparatively modest for neutralino dark matter, where it is at most one order of magnitude, see [26]. Fig. 6 shows a band of allowed total Sun muon rates for the sneutrino dark matter. These rates are given again as a function of the sneutrino mass and under the requirement that we
have a measurable Earth rate of at least 12 events km −2 yr −1 . Any variation of the event numbers for a fixed mass arises as a result of the use of velocities |v Sun | and dispersions σ v within the required parameter ranges of Fig. 5(a) and 5(b). A comparison between Fig. 4 and 6 shows an increase of ≈ 30% in the Sun muon rates for the sneutrino dark matter. The band of total muon rates for case 1 is noticeably thinner than for case 2. It is seen from Figs. 5(a) and 5(b) that case 1 has a smaller allowed parameter range with more than 12 events km −2 yr −1 . Thus the corresponding ratio between the minimal and maximal values within the allowed range is much smaller than that in case 2, and the possible change in the total Sun rates is comparatively small. Even for case 2 the difference between the highest and lowest rates for a fixed mass is about 40% or less.
To summarize, a modified velocity distribution can substantially enhance the Earth muon rate for the sneutrino dark matter beyond the detection threshold of 12 km −2 yr −1 . It also raises the maximum Sun muon rate to 78 events km −2 yr −1 (48 km −2 yr −1 ) in case 1 (case 2). Observation of the Sun muon rates larger than these will rule out the B − L model regardless of the sneutrino mass or Earth rates.
VI. COMPARISON WITH MSUGRA
Minimal supergravity (mSUGRA) is a constrained version of the MSSM that depends only on four parameters and one sign. These are m 0 (the universal soft breaking mass at the grand unification scale), m 1/2 (the universal gaugino soft breaking mass at the grand unification scale), A (the universal trilinear soft breaking mass at the grand unification scale), tan β (the ratio of MSSM Higgs VEVs at the electroweak scale) and the sign of µ (the MSSM Higgs mixing parameter). The mSUGRA dark matter candidate is the lightest neutralino.
The parameter space of the mSUGRA model has three distinct regions allowed by the dark matter constraints [29]: (i) the co-annihilation region where both m 0 and m 1/2 can be small, (ii) the hyperbolic branch/focus point region where the dark matter has a large Higgsino component and m 0 is very large but m 1/2 is small, and (iii) the funnel region where both m 0 and m 1/2 are large and the dark matter annihilation occurs through heavy Higgs bosons in the s-channel. We note that a bulk region (where none of the above properties hold) is now almost ruled out due to other experimental constraints. Among these three regions, the neutralino has a large capture rate in the hyperbolic branch/focus point region due to a large Higgsino component that results in a large spin-dependent scattering cross section via Z exchange. In this section we compare mSUGRA hyperbolic branch/focus point scenarios with the B − L model. Fig. 7 shows the total Sun muon rate as a function of the neutralino mass for mSUGRA hyperbolic branch/focus points. A comparison with Fig. 4 shows that these scenarios always have a higher total muon rate in the plotted mass range than the B − L model. The hyperbolic branch/focus point models yield larger muon rates by between more than one order of magnitude and a factor of 1.5 for dark matter masses in the 100 − 800 GeV range. Even for masses up to 400 GeV the hyperbolic branch/focus point scenarios provide rates higher than 100 events km −2 yr −1 . These higher rates are explained by the bigger spin-dependent scattering cross sections, which are a few orders of magnitude larger than the upper bound on the spin-independent cross section for the sneutrino dark matter. The spin-dependent scattering cross section for the B−L model is zero because U (1) B−L is a vectorial symmetry. Since the Sun mainly consists of hydrogen, the spin-dependent piece contributes dominantly for the mSUGRA case.
However, it is interesting that, despite having a much smaller scattering cross section, the B − L model can yield muon rates that are roughly comparable to those of the mSUGRA scenarios. Sneutrino annihilation dominantly produces leptons, i.e., RH neutrinos in case 1 and taus in case 2, which subsequently decay to LH neutrinos 100% of the time. On the other hand, neutralino annihilation in the hyperbolic branch/focus point scenarios dominantly produces quark final states that have a small branching ratio for decay to neutrinos.
Furthermore, despite lower event rates, sneutrino dark matter still produces a distinctive linear spectrum in the muon flux. As illustrated in subsection IV B, this feature is caused by the delta function in energy of the neutrino spectrum, and can be used to distinguish between the B − L model and the hyperbolic branch/focus point scenarios as long as the energy binning of the differential muon rate is precise enough at IceCube.

Fig. 8 shows the counterpart of Fig. 6 for mSUGRA hyperbolic branch/focus point scenarios. The range of velocities and dispersions for which the corresponding Earth rates are at least 12 events km −2 yr −1 yields a band for the total Sun muon rates. We see that the range between the highest and lowest rates for a fixed mass does not exceed a factor of two, even for masses below 200 GeV.
A scan over the whole parameter space of the modified velocity distribution yields a maximum of 13 events km −2 yr −1 from the Sun for a 1000 GeV neutralino in the hyperbolic branch/focus point scenario. The B − L model with sneutrino masses of 1000 GeV and 1500 GeV gives rise to maximum values of 18 and 6 (25 and 14) events km −2 yr −1 for case 1 (case 2). In contrast, for a dark matter mass of 300 GeV, the maximum rates in events km −2 yr −1 are 158 (hyperbolic branch/focus point), 79 (case 1) and 48 (case 2). Thus the hyperbolic branch/focus point rates are larger than the B − L rates for low masses, but both are in the detectable range at IceCube. At high masses it becomes more difficult to distinguish between the hyperbolic branch/focus point and the B − L models using maximal Sun rates; we would then have to rely on the spectral features mentioned in Section IV B.
In the stau co-annihilation and Higgs resonance regions the lightest neutralino has a high gaugino fraction and therefore a much smaller spin-dependent cross section, which leads to much lower event rates than in the B − L model. For example, even if we assume a modified velocity distribution without any minimal Earth event rate condition, the maximum total Sun rate is less than 1 event km −2 yr −1 for a 300 GeV neutralino (compared with the maximum Sun rate of 158 events km −2 yr −1 for a hyperbolic branch/focus point scenario with the same mass). This is far below any detection threshold.
It is also important to note that the hyperbolic branch/focus point region of the mSUGRA model is incompatible with the g − 2 data, where there exists a 3σ deviation from the SM value if the e + e − data is used to calculate the leading order hadronic contribution [30]. In the context of the B − L model, case 2, which can address the PAMELA puzzle, is also incompatible with the g − 2 data; the generic B − L model, i.e. case 1, however, is still compatible.
VII. CONCLUSION
We have considered the prospects for indirect detection of RH sneutrino dark matter in a U (1) B−L extension of the MSSM at the IceCube neutrino telescope. The sneutrinos captured in the Sun and Earth dominantly annihilate through S-wave processes at the present time. In a generic situation (called case 1) the sneutrinos annihilate to RH neutrinos (annihilation cross section of 3 × 10 −26 cm 3 /sec) that quickly decay to a LH neutrino and the MSSM Higgs. If one seeks an explanation for the recently observed positron excess from the PAMELA data (called case 2), the sneutrinos with a mass ≥ 1 TeV dominantly annihilate to the lightest Higgs in the B − L sector (with an enhanced annihilation cross section of 3 × 10 −23 cm 3 /sec), which rapidly decays to fermion-antifermion pairs (74% taus, 16% bottom quarks, and 10% RH neutrinos). LH neutrinos are produced mainly from the three-body decay of taus. The muon neutrinos from sneutrino annihilation are converted to muons via charged current interactions at IceCube.
In both cases, sneutrino capture and annihilation inside the Sun reaches equilibrium. Consequently, the flux of neutrinos from the Sun is governed by the cross section for sneutrino-proton elastic scattering, which has an upper bound of 8 × 10 −9 pb from the LEP and Tevatron limits on the Z ′ mass (due to the vectorial nature of the B − L symmetry, there is no spin-dependent piece). In Fig. 4 we have shown the number of Sun muon events at IceCube as a function of the sneutrino mass for case 1 and case 2 (using the upper bound on the sneutrino-proton scattering cross section). In both cases the rates are potentially detectable by IceCube, owing to the harder neutrino spectrum. Thus, once the dark matter mass is found from measurements at the LHC, observation of a muon rate larger than that given in Fig. 4 will rule out the B − L model.
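The equilibrium statement can be made quantitative with the textbook capture-annihilation balance dN/dt = C − C_A N², which gives Γ_A = (C/2) tanh²(t/τ) with equilibration time τ = 1/√(C C_A); once t ≫ τ the annihilation rate saturates at C/2 and is controlled entirely by the capture rate. The sketch below illustrates this standard result; the numerical values are made up and the function name is ours (this is not the paper's Eq. (3) verbatim):

```python
import math

def annihilation_rate(C, C_A, t):
    """Standard capture-annihilation evolution:
    dN/dt = C - C_A * N^2  =>  Gamma_A = (C/2) * tanh^2(t / tau),
    with equilibration time tau = 1 / sqrt(C * C_A)."""
    tau = 1.0 / math.sqrt(C * C_A)
    return 0.5 * C * math.tanh(t / tau) ** 2

# Toy numbers, illustrative only: when t >> tau the rate saturates at C/2,
# so the signal depends only on the capture rate C.
C, C_A = 1.0e20, 1.0e-55          # captures/s, effective annihilation constant
tau = 1.0 / math.sqrt(C * C_A)    # ~ 3e17 s for these toy numbers
print(annihilation_rate(C, C_A, 10 * tau) / (0.5 * C))   # ~1.0: equilibrated
print(annihilation_rate(C, C_A, 0.1 * tau) / (0.5 * C))  # ~0.01: far from eq.
```

The tanh² saturation is why, in the text above, the Sun signal is governed by the scattering (capture) cross section alone.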
For the standard halo model the capture and annihilation of sneutrinos inside the Earth does not reach equilibrium for either case 1 or case 2, resulting in an event rate that is too small to be detected at IceCube. However, modified velocity distributions within the range allowed by recent simulations of the galaxy can lead to a substantially larger rate that exceeds the IceCube detection threshold of 12 events km −2 yr −1 for annihilation in the Earth. Nevertheless, the Sun-annihilation muon rate can increase by at most 30% for a modified velocity distribution, as shown in Fig. 6. This implies that observation of a muon event rate larger than roughly 100 events km −2 yr −1 from the Sun will all but rule out the B − L model, regardless of the dark matter mass.
We compared predictions of the sneutrino dark matter in the B − L model with those of the neutralino dark matter in the mSUGRA model. Only hyperbolic branch/focus point scenarios in mSUGRA, which have a Higgsino-type dark matter candidate and thus large spin-dependent contributions to the neutralino-proton elastic scattering cross section, give rise to Sun muon event rates that can be detected at IceCube. Even though the scattering cross sections can be two to three orders of magnitude larger than in the B − L case, the muon rates do not scale directly with the cross section. This is because sneutrinos mainly annihilate into lepton final states (by virtue of the B − L symmetry) that decay to neutrinos with 100% efficiency, while neutralino annihilation dominantly produces quark final states that have a small branching ratio for decay to neutrinos. Moreover, the linear dependence of the muon spectrum on the energy in the case of the sneutrino dark matter (particularly case 1), a common feature for neutrinos produced from two-body decays, can be used to distinguish between the B − L model and the hyperbolic branch/focus point scenarios. This will be feasible with a sufficiently precise energy binning of the differential muon rate at IceCube.

where the sum runs over all species i of nuclei in the Sun or Earth, F_i are the corresponding form factors, S is the kinematic suppression factor for capture of a sneutrino and the σ_i are the individual scalar cross sections for scattering from nucleus i. The effect of the F_i dependence on mass is negligible because most of these form factors vary little from unity. Furthermore, σ_i does not depend on the sneutrino mass in the B − L model since we have chosen a constant sneutrino-proton scattering cross section of 8 × 10 −9 pb (the upper bound implied by the Z ′ mass limits). Thus the overall shape of the curves in Fig. 4 can be understood by looking at S(m_Ñ). S can be parameterized by
S(x) = \left[ \frac{A(x)^{3/2}}{1 + A(x)^{3/2}} \right]^{2/3} , \qquad (A2)

A(x) = \frac{3}{2} \, \frac{x}{(x-1)^2} \, \frac{\langle v_{\rm esc} \rangle^2}{\bar{v}^2} , \qquad (A3)
where v̄ = 270 km s −1 is the velocity dispersion of the dark matter particles and ⟨v_esc⟩ is the escape velocity, 1156 km s −1 for the Sun and 13.2 km s −1 for the Earth. S(x) is bounded between zero and one. Moreover, it scales like 1.5(⟨v_esc⟩²/v̄²)/x for x → ∞, and it peaks at one for x = 1. Therefore, the exact location of the peak for each scattering element i is determined by its corresponding nucleus mass m_Ni. As mentioned in subsection IV C, S scales approximately like 1/m_Ñ for large masses. The meaning of "large" in this context depends on the value of the ratio (⟨v_esc⟩/v̄)² in comparison to x. For example, a value of m_Ñ with m_Ñ/m_Ni > (⟨v_esc⟩/v̄)² is considered large.
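The quoted limits of S(x) are easy to verify numerically. The sketch below implements the suppression factor with the outer 2/3 exponent that reproduces the stated x → ∞ asymptote 1.5(⟨v_esc⟩²/v̄²)/x; the function names are ours, and the velocities are the values given in the text:

```python
import math

V_BAR = 270.0            # km/s, halo velocity dispersion (value from the text)
V_ESC_SUN = 1156.0       # km/s, solar escape velocity (value from the text)

def A(x, v_esc=V_ESC_SUN, v_bar=V_BAR):
    # A(x) = 1.5 * x / (x - 1)^2 * <v_esc>^2 / v_bar^2   -- Eq. (A3)
    return 1.5 * x / (x - 1.0) ** 2 * (v_esc / v_bar) ** 2

def S(x, v_esc=V_ESC_SUN, v_bar=V_BAR):
    # S(x) = [A^{3/2} / (1 + A^{3/2})]^{2/3}, bounded between 0 and 1 -- Eq. (A2)
    a32 = A(x, v_esc, v_bar) ** 1.5
    return (a32 / (1.0 + a32)) ** (2.0 / 3.0)

# Peak: S -> 1 when the dark matter mass matches the nucleus mass (x -> 1):
print(round(S(1.001), 6))   # -> 1.0
# Tail: for large x, S approaches the asymptote 1.5 (v_esc/v_bar)^2 / x:
x = 1.0e4
print(S(x) / (1.5 * (V_ESC_SUN / V_BAR) ** 2 / x))   # -> close to 1
```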
APPENDIX B: CASCADE SIGNAL

Fig. 9 plots the total energy spectrum from all cascades, both hadronic and electromagnetic (excluding the electromagnetic muon signal), per kilometer squared of detector per year for case 1 and case 2. The general downward trend of the plot occurs because the hadronic signal dominates, as it is produced by both charged current and neutral current interactions, while the upward-trending electromagnetic signal only receives contributions from the charged current interactions and excludes the muon signal altogether. The cross sections for hadronic processes decrease as the energy transferred to the nucleus goes up, hence creating the decreasing trend (high energy in the hadronic cascades corresponds to low energies in the electromagnetic cascades). We note that in the lower panel of the figure (case 2) the cascade signal is depleted at high energies. This is because the produced neutrinos have higher energies (as a result of the higher sneutrino mass in this case), and therefore absorption and scattering effects inside the Sun are more important. This explains why the signal in case 2 is more steeply curved than the case 1 signal.
It is important to remember that it is not clear at this time whether IceCube will be able to distinguish between electromagnetic and hadronic cascades. As a result, while a single charged current interaction will result in both a hadronic and an electromagnetic cascade, these may be recorded as a single event with the total energy of the incoming neutrino. Meanwhile, the hadronic cascade of neutral current events would be recorded correctly as a single event with only part of the energy of the incoming neutrino. While we have assumed in the above that individual cascade signals are separable, this may not reflect experimental reality.
III. ACKNOWLEDGEMENT
FIG. 1: Total muon neutrino rates received at the Earth for the U (1) B−L model as a function of the sneutrino mass in the case of sneutrino dark matter capture and annihilation in the Sun. The results are for one year of detection with IceCube. The B − L model is robust to changes in the neutrino branching ratios. 100% branching to νe, νµ and ντ is shown in orange (bottom line in case 1), green (second line from the bottom in case 1) and blue (top line in case 1), respectively. Results for equal branching to all neutrino flavors are in red (second line from the top in case 1).
FIG. 2: In the upper (lower) panel, muon neutrino (muon) flux through IceCube from annihilation of 300 GeV sneutrinos in the Sun for case 1.
Fig. 4 shows our results for the total muon rate integrated over energy as a function of the sneutrino mass m_Ñ for annihilation in the Sun. The figure shows both the case 1 and case 2 rates in events km −2 yr −1 .

FIG. 3: The same as Fig. 2, but with a 1 TeV sneutrino in case 2. Individual annihilation channels are shown: neutrino (red, dotted), tau (green, dashed), bottom quark (purple, dot-dashed) and all channels (blue, solid).
FIG. 4: Total muon rates detected at the Earth from annihilation of sneutrino dark matter in the Sun as a function of the sneutrino mass. The results are for one year of detection with IceCube. Case 1 (case 2) is the highest (lowest) peaked line. The dotted line denotes the mass range where one can no longer explain the PAMELA data using case 2.
FIG. 5: Total Earth-annihilation muon event rates inside the detector per kilometer squared per year for a 300 GeV (case 1) and 1000 GeV (case 2) sneutrino.
FIG. 6: Total Sun-annihilation muon rates inside the detector for the sneutrino dark matter with modified velocity distributions that yield Earth-annihilation rates of at least 12 events per year per km 2 . The upper (lower) curve shows case 1 (case 2). The dotted lines denote the mass range where one can no longer explain the PAMELA data using case 2.
FIG. 7: Total Sun-annihilation muon rates inside the detector for mSUGRA hyperbolic branch/focus point scenarios as a function of the neutralino mass. The results are for one year of detection with IceCube.
FIG. 8: Total muon rates detected inside the Earth for mSUGRA focus point scenarios as a function of the neutralino mass in the case of neutralino DM capture and annihilation in the Sun. Rates for a range of velocities and dispersions for which the corresponding Earth rates are at least 12 events per year per km 2 are shown in the shaded region. The results are for one year of detection with IceCube.
APPENDIX A: MASS DEPENDENCE OF Γ_A

We analyze in detail here the contribution from Eq. (3) to the mass behavior of Fig. 4. Any mass dependence from dN^f_ν/dE_ν is ultimately washed out of the muon signal by the linear dependence of σ_CC,NC on the neutrino energy, which dominates at low energy. The distance D between the detector and the source and the branching fraction B^f_Ñ into the final state f are independent of m_Ñ. Thus, the annihilation rate at high energies is governed by the dependence of Γ_A on the kinematic suppression factor. Eq. (3) shows that this annihilation rate is proportional to the capture rate C. Ref. [17] provides a parameterization of C as a function of the energy:

C \propto \sum_i F_i(m_{\tilde{N}}) \, S(m_{\tilde{N}}/m_{N_i}) \, \sigma_i(m_{\tilde{N}}) , \qquad (A1)
FIG. 9: Total electromagnetic and hadronic cascades inside the detector volume from sneutrino annihilation in the Sun.
charge assignments are allowed by anomaly cancellation. We choose the charge assignment shown in Table I. In this case H′2 couples to the RH neutrinos and gives rise to a Majorana mass upon spontaneous breakdown of the U (1) B−L . Choosing these Majorana masses in the 100 GeV − 1 TeV range, we have three (dominantly RH) heavy neutrinos and three (dominantly LH) light neutrinos. The masses of the light neutrinos are obtained via the see-saw mechanism.
TABLE I: The B − L charges of the fields for the minimal model. Here Q and L represent quarks and leptons respectively, while H′1 and H′2 are the two new Higgs fields. The MSSM Higgs fields have zero B − L charges.
It is also possible to have successful inflation in the context of the U (1) B−L model [8]. In this case the dark matter candidate (the RH sneutrino) can become a part of the inflaton field and thereby give rise to a unified picture of dark matter, inflation and the origin of neutrino masses [7].
It is possible to invoke a non-thermal scenario where the sneutrinos are created from the decay of heavy moduli or gravitinos [12]. In this case we do not need Sommerfeld enhancement to satisfy the PAMELA data, and the annihilation cross section will be large, 3 × 10 −23 cm 3 /sec, at all times.
Since the B − L symmetry is vectorial, the spin-dependent cross section is zero in this model.
RH neutrino decay to a charged lepton and a charged Higgs is typically forbidden.
This is the case when the soft supersymmetry breaking mass of the sneutrino is similar to or smaller than the supersymmetry conserving Majorana mass of the (s)neutrino. A rather small soft mass term is motivated if the B − L symmetry is to break radiatively, and is needed to keep the lightest B − L Higgs φ light, as in case 2 [11].
The electromagnetic cascade from a muon signal is excluded from the graph since it is accompanied by a more discernible muon track, the subject of the body of this paper.
The authors wish to thank Spencer Klein and Carsten Rott for valuable discussions. The work of BD is supported in part by DOE grant DE-FG02-95ER40917.
[1] H. Goldberg, Phys. Rev. Lett. 50, 1419 (1983).
[2] E. Komatsu et al., arXiv:0803.0547.
[3] J. R. Ellis et al., Nucl. Phys. B 238, 453 (1984).
[4] R. N. Mohapatra and R. E. Marshak, Phys. Rev. Lett. 44, 1316 (1980) [Erratum-ibid. 44, 1643 (1980)].
[5] S. Khalil and H. Okada, arXiv:0810.4573 [hep-ph]; S. Khalil and A. Masiero, Phys. Lett. B 665, 374 (2008).
[6] R. Allahverdi, B. Dutta, K. Richardson-McDaniel and Y. Santoso, Phys. Rev. D 79, 075005 (2009).
[7] R. Allahverdi, B. Dutta and A. Mazumdar, Phys. Rev. Lett. 99, 261301 (2007).
[8] R. Allahverdi, A. Kusenko and A. Mazumdar, JCAP 0707, 018 (2007).
[9] S. P. Martin, Phys. Rev. D 54, 2340 (1996) [arXiv:hep-ph/9602349].
[10] O. Adriani et al., arXiv:0810.4995; arXiv:0810.4994.
[11] R. Allahverdi, B. Dutta, K. Richardson-McDaniel and Y. Santoso, arXiv:0902.3463 [hep-ph] (to appear in Phys. Lett. B).
[12] B. Dutta, L. Leblond and K. Sinha, arXiv:0904.3773 [hep-ph].
[13] A. Sommerfeld, Annalen der Physik 403, 257 (1931).
[14] L. Baudis, arXiv:0711.3788.
[15] T. Aaltonen et al. [CDF Collaboration], Phys. Rev. Lett. 99, 171802 (2007) [arXiv:0707.2524 [hep-ex]].
[16] M. S. Carena, A. Daleo, B. A. Dobrescu and T. M. P. Tait, Phys. Rev. D 70, 093009 (2004) [arXiv:hep-ph/0408098].
[17] G. Jungman, M. Kamionkowski and K. Griest, Phys. Rept. 267, 195 (1996) [arXiv:hep-ph/9506380].
[18] C. Delaunay, P. J. Fox and G. Perez, JHEP 0905, 099 (2009).
[19] C. De Clercq for the IceCube Collaboration, "Search for Dark Matter with the AMANDA and IceCube Neutrino Detectors", presented at the Identification of Dark Matter 2008, Stockholm, Sweden, 18-22 August 2008; Proceedings of Science PoS(idm2008) 034.
[20] R. Abbasi et al. [IceCube Collaboration], Phys. Rev. Lett. 102, 201302 (2009) [arXiv:0902.2460].
[21] P. Gondolo, J. Edsjo, P. Ullio, L. Bergstrom, M. Schelke and E. A. Baltz, JCAP 0407, 008 (2004) [arXiv:astro-ph/0406204].
[22] M. Blennow, J. Edsjo and T. Ohlsson, JCAP 0801, 021 (2008) [arXiv:0709.3898 [hep-ph]].
[23] A. Gould, Astrophys. J. 321, 571 (1987).
[24] J. I. Read, G. Lake, O. Agertz and V. P. Debattista, arXiv:0803.2714 [astro-ph].
[25] J. I. Read, L. Mayer, A. M. Brooks, F. Governato and G. Lake, arXiv:0902.0009 [astro-ph.GA].
[26] T. Bruch, A. H. G. Peter, J. Read, L. Baudis and G. Lake, arXiv:0902.4001 [astro-ph.HE].
[27] A. Rizzo [IceCube Collaboration], "Search For Neutralino Dark Matter With The AMANDA Neutrino Telescope And Prospects For IceCube," in Heidelberg 2007, Dark matter in astroparticle and particle physics, pp. 122-131.
[28] M. Ackermann et al. [AMANDA Collaboration], Astropart. Phys. 24, 459 (2006) [arXiv:astro-ph/0508518].
[29] J. Ellis, K. Olive, Y. Santoso and V. Spanos, Phys. Lett. B 565, 176 (2003); R. Arnowitt, B. Dutta and B. Hu, arXiv:hep-ph/0310103; H. Baer, C. Balazs, A. Belyaev, T. Krupovnickas and X. Tata, JHEP 0306, 054 (2003); B. Lahanas and D. V. Nanopoulos, Phys. Lett. B 568, 55 (2003); U. Chattopadhyay, A. Corsetti and P. Nath, Phys. Rev. D 68, 035005 (2003); E. Baltz and P. Gondolo, JHEP 0410, 052 (2004).
[30] F. Jegerlehner and A. Nyffeler, Phys. Rept. 477, 1 (2009) [arXiv:0902.3360 [hep-ph]].
Multiple object tracking with context awareness

Dissertation / Doctoral Thesis
Fakultät für Elektrotechnik und Informatik der Gottfried Wilhelm Leibniz Universität Hannover

Autor / Author: Laura Leal-Taixé, geboren am 28. Juni 1984 in Barcelona (born June 28, 1984, in Barcelona)
Institute for Information Processing (TNT), Appelstr. 9A, 13. Etage, 30167 Hannover, Germany

Doktorvater / Supervisor: Prof. Dr.-Ing. Bodo Rosenhahn, Gottfried Wilhelm Leibniz Universität Hannover, Germany
Gutachter / Reviewers: Prof. Dr. Daniel Cremers, Technical University of Munich (TUM), Germany; Prof. Dr.-Ing. Jörn Ostermann, Gottfried Wilhelm Leibniz Universität Hannover, Germany
Datum des Kolloquiums / Date of Defense: 11.02.2014
arXiv:1411.7935, 24 Nov 2014
Acknowledgements

I would first like to thank my advisor, for always believing in my ideas, pushing me to reach my limits and for allowing me to follow my own research path. Special thanks to all the members of my thesis committee, for dedicating the time to read my thesis and evaluate my work. A warm thanks to all the members of the Hannover group, both the ones who have already moved on and the ones who are still there. I want to thank them for the fun memories, the fruitful collaborations, the late deadline nights, the amazing Doktorhut you created, and the many hours of discussion and games during coffee time! I will never forget my time on the 13th floor. Thanks to the people from the Michigan laboratory, who gave me the warmest welcome during Midwest winter. I learned so much from all you guys and had tons of fun during my internship. Thanks to the many friends I made in the computer vision community during conferences, for the many interesting discussions, suggestions, collaborations, opportunities offered, and also for all the fun we had.

Abstract

Multiple people tracking is a key problem for many applications such as surveillance, animation or car navigation, and a key input for tasks such as activity recognition. In crowded environments occlusions and false detections are common, and although there have been substantial advances in recent years, tracking is still a challenging task. Tracking is typically divided into two steps: detection, i.e., locating the pedestrians in the image, and data association, i.e., linking detections across frames to form complete trajectories. For the data association task, approaches typically aim at developing new,
more complex formulations, which in turn put the focus on the optimization techniques required to solve them. However, they still utilize very basic information such as distance between detections. In this thesis, I focus on the data association task and argue that there is contextual information that has not been fully exploited yet in the tracking community, mainly social context and spatial context coming from different views. As tracking framework I use a global optimization method that finds the best solution for all pedestrian trajectories and all frames using Linear Programming. This is the perfect setup to include contextual information that can be used to improve all trajectories.
Firstly, I present an efficient way to include social and grouping behavior to improve monocular tracking. Incorporating this source of information leads to much more accurate tracking results, especially in crowded scenarios. Secondly, I present a formulation to perform 2D-3D assignments (reconstruction) and temporal assignments (tracking) in a single global optimization. I show that linking the reconstruction and tracking processes in a tight formulation leads to a significant boost in tracking accuracy. Overall, I show that context is an extremely rich source of information that can be exploited to obtain more accurate tracking results.
Introduction
Motivation
Video cameras are increasingly present in our daily lives: webcams, surveillance cameras and other imaging devices are being used for multiple purposes. As the number of data streams increases, it becomes more and more important to develop methods to automatically analyze this type of data. People are usually the central characters of most videos; it is therefore particularly interesting to develop techniques to analyze their behavior. Either for surveillance, animation or activity recognition, multiple people tracking is a key problem to be addressed. In crowded environments occlusions and false detections are common, and although there have been substantial advances in recent years, tracking is still a challenging task. The task is typically divided into two steps: detection and data association. Detectors are nowadays very robust and provide extremely good detection rates for normal scenes, but still struggle with partial and full occlusions common in crowded scenes. Data association or tracking, on the other hand, is also extremely difficult in crowded scenarios, especially due to the high rate of missing data and common false alarms. In this thesis, we argue that there are two main sources of pedestrian context that have not been fully exploited in the tracking community, namely social context and spatial context coming from different views.
Typically, matching is solely based on appearance and distance information, i.e., the closest detection in the following frame is matched to the detection in the current frame.
But this can be completely wrong: let us imagine a queue of people waiting at a coffee shop and a low frame rate camera, as is typical for surveillance scenarios. In one frame we might have 4 persons waiting, while in the next the first person is already out of the queue and a new person entered the queue. In this case, if we only use distance information, the 4 persons of the first frame might be matched to the 4 persons of the second frame, although they are completely different pedestrians. Though this is an extreme case, it represents an error that is common while tracking in crowded scenarios, and this is only caused by the assumption that people do not move from one frame to the next, which is clearly inaccurate.
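The coffee-queue failure mode can be reproduced in a few lines: even the globally optimal distance-only assignment (Hungarian algorithm) links every detection to the wrong identity when the whole queue shifts by one slot between frames. The 1-D positions below are hypothetical:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# 1-D positions (metres, hypothetical) of the queue in two consecutive frames.
# Person A has left and a new person E joined at the back, so everyone
# shuffled forward by one slot.
frame1 = {'A': 0.0, 'B': 1.0, 'C': 2.0, 'D': 3.0}
frame2 = {'B': 0.0, 'C': 1.0, 'D': 2.0, 'E': 3.0}

ids1, ids2 = list(frame1), list(frame2)
cost = np.array([[abs(frame1[i] - frame2[j]) for j in ids2] for i in ids1])

rows, cols = linear_sum_assignment(cost)     # minimum total distance
matches = {ids1[r]: ids2[c] for r, c in zip(rows, cols)}
print(matches)   # every identity is wrong: A->B, B->C, C->D, D->E
```

The total matching cost is zero, so no distance-based tracker, greedy or optimal, can recover the true identities without additional context.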
It is therefore more natural to take into account the context of the pedestrian, which can be the activity they are performing (e.g., queueing) or the interactions that take place in a crowded scenario. It is clear that if a person is walking alone, he/she will follow a straight path towards his/her destination. But what if the environment becomes more crowded, and suddenly the straight path is no longer an option? The pedestrian will then try to find a rather short path to get to the same destination by avoiding other pedestrians and obstacles. All these pedestrian movements and reactions to the environment are ruled by what is called the Social Force Model.
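A minimal sketch of a Helbing-style social force update illustrates the idea: a driving term steers the pedestrian towards the goal, and exponential repulsion terms push away from other pedestrians, so a blocked path reduces the forward acceleration. Parameter values and function names are ours, not the thesis':

```python
import numpy as np

def social_force_step(pos, vel, goal, others, dt=0.1,
                      v0=1.3, tau=0.5, A_rep=2.0, B_rep=0.3):
    """One Euler step of a simplified social force model:
    a driving term pulls the pedestrian towards the goal at preferred
    speed v0, and exponential terms push away from other pedestrians."""
    e = (goal - pos) / (np.linalg.norm(goal - pos) + 1e-9)  # desired direction
    force = (v0 * e - vel) / tau                            # driving force
    for q in others:                                        # repulsion forces
        d = pos - q
        dist = np.linalg.norm(d) + 1e-9
        force += A_rep * np.exp(-dist / B_rep) * (d / dist)
    vel = vel + dt * force
    return pos + dt * vel, vel

# A pedestrian at rest, heading for a goal 10 m away along +x:
start, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
_, v_free = social_force_step(start, np.zeros(2), goal, others=[])
_, v_blocked = social_force_step(start, np.zeros(2), goal,
                                 others=[np.array([0.3, 0.0])])
# With a person standing 0.3 m ahead, the forward acceleration is reduced:
print(v_free[0] > v_blocked[0] > 0)   # -> True
```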
Another source of information that has not been fully exploited in the literature is the spatial context coming from different camera views. It is typical for many applications to observe the same scenario from different viewpoints. In this case, object locations in the images are temporally correlated by the system dynamics and are geometrically constrained by the spatial configuration of the cameras. These two sources of structure have been typically exploited separately, but splitting the problem in two steps has obviously several disadvantages, because the available evidence is not fully exploited.
For example, if one object is temporarily occluded in one camera, both data association for reconstruction and tracking become ambiguous and underconstrained when considered separately. If, on the other hand, evidence is considered jointly, temporal correlation can potentially resolve reconstruction ambiguities and vice versa.
In this thesis, we will show that pedestrian context is an incredibly rich source of information that should be included in the tracking procedure.
Contributions and Organization
As we motivated in the previous section, tracking methods still fail to capture and fully exploit much of the context of a pedestrian and his/her environment. In this thesis we address these shortcomings; the publications that form the core of the thesis are summarized below.

In this work, we present a method for multiple people tracking that leverages a generalized model for capturing interactions among individuals. At the core of our model lies a learned dictionary of interaction feature strings which capture relationships between the motions of targets. These feature strings, created from low-level image features, lead to a much richer representation of the physical interactions between targets compared to hand-specified social force models that previous works have introduced for tracking. One disadvantage of using social forces is that all pedestrians must be detected in order for the forces to be applied, while our method is able to encode the effect of undetected targets, making the tracker more robust to partial occlusions. The interaction feature strings are used in a Random Forest framework to track targets according to the features surrounding them. This is the 2-page abstract version of [1], presented at the same conference as invited paper in the Scene Understanding workshop.

In this work, we present an approach for multiple people tracking in semi-crowded environments including interactions between pedestrians in two ways: first, considering social and grouping behavior, and second, using a global optimization scheme to solve the data association problem. This is an extended text of the conference paper [4] in book chapter format. It is intended to be an exhaustive introduction to Linear Programming for multiple people tracking, providing the necessary background on both graphical models and optimization to allow students to start programming such a tracking system.
In this work, we present an approach for multiple people tracking in semi-crowded environments including interactions between pedestrians in two ways: first, considering social and grouping behavior, and second, using a global optimization scheme to solve the data association problem. This is an extended text of the conference paper [4], which includes more experiments, detailed evaluation of the effect of the method's parameters, detailed implementation details and extended theoretical background on graphical models.

In this work, we present a new algorithm to jointly track multiple objects in multiview images. While this has been typically addressed separately in the past, we tackle the problem as a single global optimization. We formulate this assignment problem as a min-cost problem by defining a graph structure that captures both temporal correlations between objects as well as spatial correlations enforced by the configuration of the cameras. This leads to a complex combinatorial optimization problem that we solve using Dantzig-Wolfe decomposition and branching. Our formulation allows us to solve the problem of reconstruction and tracking in a single step by taking all available evidence into account. In several experiments on multiple people tracking and 3D human pose tracking, we show that our method outperforms state-of-the-art approaches.

Many existing approaches assume that each pedestrian's motion is independent, thereby ignoring the complex and important interaction between subjects. On the contrary, our method includes the interactions between pedestrians in two ways: first, considering social and grouping behavior, and second, using a global optimization scheme to solve the data association problem. Results are presented on three challenging, publicly available datasets to show that our method outperforms several state-of-the-art tracking systems.
The five publications related to the appendix section of the thesis are detailed below. Digital in-line holography provides an efficient way of measuring 3D microscopic data over time. In the following works, we explore detection, tracking and motion analysis on this challenging data, as well as ways for extending the method to a multiple camera system.

In this work, we present a low-cost transportable stereoscopic system consisting of two consumer camcorders. We apply this novel apparatus to behavioral analysis of barnacle larvae during surface exploration and extract and analyze the three-dimensional patterns of movement. The resolution of the system and the accuracy of position determination are characterized. In order to demonstrate the biological applicability of the system, three-dimensional swimming trajectories of the cypris larva of the barnacle Semibalanus balanoides are recorded in the vicinity of a glass surface. Parameters such as swimming direction, swimming velocity and swimming angle are analyzed.

In this work, we describe a stereoscopic system to track barnacle cyprids and an algorithm to extract 3D swimming patterns for a common marine biofouling organism, Semibalanus balanoides. The details of the hardware setup and the calibration object are presented and discussed. In addition we describe the algorithm for the camera calibration, object matching and stereo triangulation. Several trajectories of living cyprids are presented and analyzed with respect to statistical swimming parameters.

In this work, we present a complete system for the automatic analysis of digital in-line holographic data; we detect the 3D positions of the microorganisms, compute their trajectories over time and finally classify these trajectories according to their motion patterns. This work includes the contributions presented in [10] and [11], extended experiments, theoretical background and implementation details.
In this work, we present an approach for automatically classifying complex microorganism motions observed with digital in-line holography. Our main contribution is the use of Hidden Markov Models (HMMs) to classify four different motion patterns of a microorganism and to separate multiple patterns occurring within a trajectory. We perform leave-one-out experiments with the training data to prove the accuracy of our method and to analyze the importance of each trajectory feature for classification. We further present results obtained on four full sequences, a total of 2500 frames. The obtained classification rates range between 83.5% and 100%.
In this work, we approach the challenges of a high throughput analysis of holographic microscopy data and present a system for detecting particles in 3D reconstructed holograms and their 3D trajectory estimation over time. Our main contribution is a robust method, which evolves from the Hungarian bipartite weighted graph matching algorithm and allows us to deal with newly entering and leaving particles and compensate for missing data and outliers. In the experiments we compare our fully automatic system with manually labeled ground truth data and we can report an accuracy between 76% and 91%.
Aside from the previous publications related to multiple object tracking and motion analysis, the author was also involved in other projects, mainly related to pose estimation:

[12] A. Kuznetsova. In this work, we propose a precise method to recognize static hand gestures from depth data provided by a depth sensor. Hand sign recognition is performed using a multi-layered random forest (MLRF), which requires less training time and memory compared to a simple random forest with equivalent precision.
We evaluate our algorithm on synthetic data, on a publicly available Kinect dataset containing 24 signs from American Sign Language (ASL) and on a new dataset, collected using the Intel Creative Gesture Camera.

In this work, we propose a method for learning a class representation that can return a continuous value for the pose of an unknown class instance, using only 2D data and weak 3D labelling information. Our method is based on generative feature models, i.e., regression functions learnt from local descriptors of the same patch collected under different viewpoints. We evaluate our approach on two state-of-the-art datasets showing that our method outperforms other methods by 9-10%.

In this work, we present a feature-based framework that combines spatial feature clustering, guided sampling for pose generation and model updating for 3D object recognition and pose estimation. We propose to spatially separate the features before matching to create smaller clusters containing the object. Then, hypothesis generation is guided by exploiting cues collected off- and on-line, such as feature repeatability, 3D geometric constraints and feature occurrence frequency. The evaluation of our algorithm on challenging video sequences shows the improvement provided by our contribution.

In this work, we present a human motion capturing system that combines video input with sparse inertial sensor input under a particle filter optimization scheme.
It is an extension of the work presented in [16], which includes a thorough theoretical introduction, an extended experimental section and implementation details.

In this paper, we introduce a novel hybrid human motion capturing system that combines video input with sparse inertial sensor input. Employing an annealing particle-based optimization scheme, our idea is to use orientation cues derived from the inertial input to sample particles from the manifold of valid poses. Then, visual cues derived from the video input are used to weigh these particles and to iteratively derive the final pose. Our method can be used to sample poses that fulfill arbitrary orientation or positional kinematic constraints. In the experiments, we show that our system can track even highly dynamic motions in an outdoor environment with changing illumination, background clutter and shadows.
In this work, we present an approach for Markerless Motion Capture using string matching. We find correspondences between the model predictions and image features using global bipartite graph matching on a pruned cost matrix. Extracted features such as contour, gradient orientations and the turning function of the shape are embedded in a string comparison algorithm. The information is used to prune the association cost matrix discarding unlikely correspondences. This results in significant gains in robustness and stability and reduction of computational cost. We show that our approach can stably track fast human motions where standard articulated Iterative Closest Point algorithms fail. This work was done by a Master's student whom the author co-supervised.
The following publication resulted from the author's Master's Thesis, completed at Northeastern University in Boston, USA:

The topic of the meeting was Large-Scale Outdoor Scene Analysis, which covers all aspects, applications and open problems regarding the performance or design of computer vision algorithms capable of working in outdoor setups and/or large-scale environments. Developing these methods is important for driver assistance, city modeling and reconstruction, virtual tourism, telepresence and outdoor motion capture. After the meeting, this post-proceedings book was edited with the collaboration of all participants, each of whom sent a paper that was peer-reviewed by three reviewers.
Chapter 2
Tracking-by-Detection
Tracking is commonly divided into two steps: object detection and data association.
First, objects are detected in each frame of the sequence and second, the detections are matched to form complete trajectories. This is called the tracking-by-detection paradigm, and is the framework that will be used throughout the thesis. In this chapter we introduce the paradigm and give a brief overview of some of the most popular state-of-the-art detectors, while the main content of the thesis lies in the data association part. Here we also discuss the type of scenarios we are working with and give an overview of the literature that deals with high-density scenarios where people cannot be individually detected.
The scale of tracking
Videos of walking pedestrians can vary in an infinite number of ways. Camera position, camera distance and type of environment are a few of the characteristics that define the type of video that will be created. Before introducing the problem that we are dealing with in this thesis, we first need to introduce the types of scenarios and the types of videos we will be working with.
In Figure 2.1, we can see four examples of different scenarios with varying crowdness levels. The first example in Figure 2.1(a), from the well-known PETS2009 dataset [20], shows a scene with few pedestrians. The small size of the pedestrians, similar clothing and occlusions behind the pole or among the pedestrians themselves make this a challenging scenario. Nonetheless, recent methods have shown excellent results on this video, which is why more difficult datasets have been introduced.
One example from the Town Center dataset [21] is shown in Figure 2.1(b). This semi-crowded environment is challenging due to the high number of occlusions, but it is well-suited for the study of social behaviors as we will see in Chapter 5. Pedestrian detection is very challenging in these scenarios, since pedestrians are almost never fully visible and tracking is difficult due to the high amount of crossing trajectories.
Even more crowded scenarios, like the one shown in Figure 2.1(c), can still be analyzed with special methods which take into account the high density of the crowd [22]. For this category of videos, either the target to follow is manually initialized [23] or only head positions are tracked, since other parts of the body are rarely visible for a detector to work. Other approaches rely on feature tracking and motion factorization [24], conveying the idea that if two points move in a similar way they belong to the same object.
In this last case, there is no need for a detection step.
Finally, we have extremely crowded scenarios like marathons, demonstrations, etc. which are filmed from an elevated point of view, as in Figure 2.1(d). In these cases, individuals cannot be detected and identified, and therefore the task changes from individual person tracking towards analysis of the overall flow of the crowd [25,26].
Therefore, depending on the amount of people present in the scene, we can perform two types of tasks for video analysis:
• Microscopic tracking focuses on the detection and tracking of individuals. Behavior analysis is centered around each individual and possibly their interactions. It uses individual motion and appearance features and is not too concerned with the overall motion in the scene.
• Macroscopic tracking, on the other hand, focuses on capturing the "flow" of the crowd, the global behavior and motion tendencies. It is not focused on observing individual behavior but rather network behavior. Individual tracking can be performed if a target is manually initialized, since detection is not possible in this type of videos.
Throughout this thesis, we work on sparse and semi-crowded scenarios as shown in Figures 2.1(a) and 2.1(b).

Figure 2.1 (caption, continued): (c) Crowded: tracking full-body pedestrians is no longer possible, but detection and tracking of heads is still performed. Person counting is a common task for videos of this crowdness level. Image from [22]. (d) Macroscopic scenario: individuals cannot be properly detected, therefore the goal in these scenarios is typically to find the overall flow of the crowd. Image from [25].
Tracking-by-detection paradigm
As mentioned in the previous section, there are several approaches for pedestrian video analysis. We focus on microscopic tracking, which means we are interested in detecting and tracking individuals. For this task, the tracking-by-detection paradigm has become increasingly popular in recent years, driven by the progress in object detection. Such methods involve two independent steps: (i) object detection on all individual frames and (ii) tracking or association of those detections across frames.
We can see a diagram of the tracking-by-detection paradigm in Figure 2.2. Detections are inevitably noisy, with both missed targets and false alarms, which makes tracking or data association a challenging task. Some of the most important challenges include:
• Missed detections: long-term occlusions are usually present in semi-crowded scenarios, where a detector might lose a pedestrian for 1-2 seconds. In this case, it is very hard for the tracker to re-identify the pedestrian without distinctive appearance information, and therefore, the track is usually lost. That is why in recent literature, researchers are opting for global optimization methods [4,27,28], which are very good at dealing with long-term occlusions.
• False alarms: the detector can be triggered by regions in the image that actually do not contain any pedestrian, creating false positives. A tracker might follow these false alarms and create what is called a ghost trajectory.
• Similar appearance: one source of information commonly used for pedestrian identification is appearance. However, in some videos similar clothing can lead to virtually identical appearance models for two different pedestrians. Many methods in recent literature focus on the motion of the pedestrian rather than his/her appearance [4,29].
• Groups and other special behaviors: when dealing with semi-crowded scenarios, it is very common to observe social behaviors like grouping, waiting at a bus stop or stopping to talk to a person. All these behaviors do not fit classic tracking models like the Kalman filter [30], which assume pedestrian motion to be rather constant.
These are a few of the challenges that tracking has to address. In this thesis, we make the observation that there is a lot of context that is not being used for tracking, especially social context or pedestrian interaction and spatial context coming from multiple views of the same scene. The proper use of these two sources of context in a global optimization scheme will be the center of the thesis.
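To make the data association step concrete, the sketch below matches the detections of two consecutive frames by minimising the total Euclidean distance, with a gating threshold so that implausibly distant pairs stay unmatched (a missed detection or a new track). This brute-force frame-to-frame version is purely illustrative; real systems, including the methods discussed in this thesis, use the Hungarian algorithm or global optimization over whole sequences, and as argued in the introduction, distance alone is a weak cue:

```python
from itertools import permutations
import math

def _dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def associate(dets_prev, dets_next, gate=50.0):
    """Brute-force optimal bipartite matching between two detection
    sets, minimising total Euclidean distance (feasible only for a
    handful of targets).  Pairs farther apart than `gate` pixels are
    discarded, so those detections stay unmatched.  Returns a list of
    (prev_index, next_index) pairs."""
    if not dets_prev or not dets_next:
        return []
    swapped = len(dets_prev) > len(dets_next)
    small, big = (dets_next, dets_prev) if swapped else (dets_prev, dets_next)
    best, best_cost = None, float("inf")
    # Try every injective assignment of the smaller set into the larger.
    for perm in permutations(range(len(big)), len(small)):
        cost = sum(_dist(small[i], big[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    matches = []
    for i, j in enumerate(best or ()):
        if _dist(small[i], big[j]) <= gate:  # gating: reject distant pairs
            matches.append((j, i) if swapped else (i, j))
    return matches
```

With two well-separated targets that barely move, both are matched; if one detection jumps far away (an identity switch or false alarm), the gate leaves it unmatched instead of forcing a wrong association.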
Detectors
There are many pedestrian detectors in literature. Even though this thesis is focused on the data association part of multiple object tracking, we want to give a brief overview of three of the most used methods for pedestrian detection. Such methods can be classified in many ways; we detail one possible classification scheme:
Model-based detectors. A model of the background is created, and then a pixel-wise or block-wise comparison of a new image against the background model is performed to detect regions that do not fit the model [31]. This method is commonly used for video surveillance, since the camera is static and therefore the background model can be learned accurately. The drawback of these techniques is that they are very sensitive to changes in illumination and occlusions.
Template-based detectors. These detectors use a pre-learned set of templates, based for example on image features such as edges [32]. The detector is triggered if the image features inside the local search window meet certain criteria. The drawback of this approach is that its performance can be significantly affected by background clutter and occlusions; if a person is partly occluded, the overall detection score will be very low because part of the image will be completely different from the learned examples.
Part-based detectors. One downside of holistic detectors like the one presented in [32] is that they are easily affected by occlusions and local deformations. We would need a lot of training data to cover all the deformations that a body can undergo. In order to reduce the amount of data needed, recent works [33,34] have proposed to use part-based methods, in which a template for each body part is learned separately. This way, deformations can be learned locally for each part and later combined. Another advantage of this method is that it is more robust to occlusions, since if one part is occluded, all the others can still be detected and combined for an overall high detection score. Other detectors based on parts have also been presented and created specifically to address occlusions [35,36].
Similar to these are block-based detectors, based either on HOG features [37] or SIFT features [38]. The objective is to learn the appearance of blocks inside the bounding box of a detection. At testing time, each block votes for the position of the center of the object to be detected.
There is a different family of detectors, namely online detectors, that formulate the problem of tracking as that of re-detection. The combination of both types of detectors can be very beneficial as shown in [39], specially to account for appearance variations which might not be captured by the learned templates.
We refer the reader to the following survey [40] regarding Adaboost and HOG-based pedestrian detectors for monocular videos; [31] for background subtraction techniques based on a mixture of gaussian background modeling and [41] for a detailed description of part-based models for object detection.
Background modeling using Mixture-of-Gaussians
While the most basic background subtraction methods are based on a frame-by-frame image difference, we detail here the model-based method presented in [42] and used in the OpenCV implementation [43]. The basic idea is to model each pixel's intensity by using a Gaussian Mixture Model (GMM). A simple heuristic determines which intensities most probably belong to the background, and pixels which do not match these are called foreground pixels. Foreground pixels are grouped using 2D connected component analysis.
An example of this process is shown in Figure 2.3. As we can see, the background model is not perfect, which often leads to spurious foreground pixels around the scene; the method is thus prone to false detections. In the experiments for this thesis, we use the homography provided by the camera calibration to determine the approximate size of a pedestrian at each pixel position. This allows us to determine a rough bounding box size and to discard groups of foreground pixels that are too small to be a pedestrian.
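As an illustration, the per-pixel idea can be reduced to a single running Gaussian per pixel. The method of [42] keeps a mixture of several Gaussians per pixel plus a matching heuristic; this simplified sketch (function name and parameters are illustrative) only conveys the threshold-and-update loop:

```python
import numpy as np

def update_background(frame, mean, var, alpha=0.01, k=2.5):
    """Single-Gaussian simplification of the per-pixel model of [42].

    frame, mean, var: float arrays of identical shape (grayscale image,
    per-pixel background mean and variance).  A pixel is foreground if
    it deviates from the mean by more than k standard deviations; the
    model is updated (learning rate alpha) only on background pixels.
    Returns (foreground_mask, mean, var)."""
    frame = np.asarray(frame, dtype=float)
    d = frame - mean
    foreground = np.abs(d) > k * np.sqrt(var)
    bg = ~foreground
    # Running update of the background statistics on background pixels.
    mean[bg] += alpha * d[bg]
    var[bg] += alpha * (d[bg] ** 2 - var[bg])
    return foreground, mean, var
```

The resulting binary mask would then be cleaned up and grouped with 2D connected component analysis, as described above.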
Histogram of Oriented Gradients (HOG)
The essential thought behind the Histogram of Oriented Gradients (HOG) descriptor is that local object appearance and shape within an image can be described by the distribution of intensity gradients or edge directions. An overview of the pedestrian detection process as described in [32] is shown in Figure 2.4. The first step is to compute the gradients, then divide the image into small spatial windows or cells. For each cell we accumulate a local 1-D histogram of gradient directions or edge orientations over the pixels of the cell. The histograms can also be contrast-normalized for better invariance to changes in illumination or shadowing. Normalization is done over larger spatial regions called blocks. The detection window is covered with an overlapping grid of HOG descriptors, and the resulting feature vector is used in a conventional SVM classifier [44,45] that learns the appearance of a pedestrian vs. non-pedestrian.
The HOG descriptor is particularly suited for human detection in images. This is because coarse spatial sampling, fine orientation sampling, and strong local photometric normalization allow the individual body movement of pedestrians to be ignored so long as they maintain a roughly upright position.
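The cell-level histogram computation described above can be sketched as follows. This toy function (a hypothetical helper, not the implementation of [32]) computes gradients with central differences and builds one unsigned 9-bin orientation histogram for a single cell, leaving out the block normalization and the sliding detection window:

```python
import numpy as np

def cell_histogram(patch, n_bins=9):
    """Orientation histogram for one HOG cell.

    patch: 2-D grayscale array (e.g. an 8x8 cell).  Each pixel votes
    into one of n_bins unsigned-orientation bins (0-180 degrees),
    weighted by its gradient magnitude."""
    gy, gx = np.gradient(patch.astype(float))      # central differences
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m                               # magnitude-weighted vote
    # L2 normalisation (in full HOG this is done over blocks of cells).
    return hist / (np.linalg.norm(hist) + 1e-6)
```

A vertical edge in the cell produces a horizontal gradient and so dominates the 0-degree bin, while a horizontal edge dominates the bin containing 90 degrees; the full descriptor concatenates such histograms over all cells of the detection window.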
(a) (b) (c) (d) (e) (f) (g)
Centre-surround normalization. We also investigated an alternative centre-surround style cell normalization scheme, in which the image is tiled with a grid of cells and for each cell the total energy in the cell and its surrounding region (summed over orientations and pooled using Gaussian weighting) is used to normalize the cell. However as fig. 4(c) ("window norm") shows, this decreases performance relative to the corresponding block based scheme (by 2% at 10 −4 FPPW, for pooling with σ=1 cell widths). One reason is that there are no longer any overlapping blocks so each cell is coded only once in the final descriptor. Including several normalizations for each cell based on different pooling scales σ provides no perceptible change in performance, so it seems that it is the existence of several pooling regions with different spatial offsets relative to the cell that is important here, not the pooling scale.
To clarify this point, consider the R-HOG detector with overlapping blocks. The coefficients of the trained linear SVM give a measure of how much weight each cell of each block can have in the final discrimination decision. Close examination of fig. 6(b,f) shows that the most important cells are the ones that typically contain major human contours (especially the head and shoulders and the feet), normalized w.r.t. blocks lying outside the contour. In other wordsdespite the complex, cluttered backgrounds that are common in our training set -the detector cues mainly on the contrast of silhouette contours against the background, not on internal edges or on silhouette contours against the foreground. Patterned clothing and pose variations may make internal regions unreliable as cues, or foreground-to-contour transitions may be confused by smooth shading and shadowing effects. Similarly, fig. 6(c,g) illustrate that gradients inside the person (especially vertical ones) typically count as negative cues, presumably because this suppresses false pos-itives in which long vertical lines trigger vertical head and leg cells.
Detector Window and Context
Our 64×128 detection window includes about 16 pixels of margin around the person on all four sides. Fig. 4(e) shows that this border provides a significant amount of context that helps detection. Decreasing it from 16 to 8 pixels (48×112 detection window) decreases performance by 4% at 10 −4 FPPW. Keeping a 64×128 window but increasing the person size within it (again decreasing the border) causes a similar loss of performance, even though the resolution of the person is actually increased.
Classifier
By default we use a soft (C=0.01) linear SVM trained with SVMLight [10] (slightly modified to reduce memory usage for problems with large dense descriptor vectors). Using a Gaussian kernel SVM increases performance by about 3% at 10 −4 FPPW at the cost of a much higher run time.
Discussion
Overall, there are several notable findings in this work. The fact that HOG greatly out-performs wavelets and that any significant degree of smoothing before calculating gradients damages the HOG results emphasizes that much of the available image information is from abrupt edges at fine scales, and that blurring this in the hope of reducing the sensitivity to spatial position is a mistake. Instead, gradients should be calculated at the finest available scale in the current pyramid layer, rectified or used for orientation voting, and only then blurred spatially. Given this, relatively coarse spatial quantization suffices (6-8 pixel wide cells / one limb width). On the other hand, at least for human detection, it pays to sample orientation rather finely: both wavelets and shape contexts lose out significantly here.
Secondly, strong local contrast normalization is essential for good results, and traditional centre-surround style results are insensitive to 's value over a large range.
(a) (a) (b) (c) (d) (e) (f) (g)
Centre-surround normalization. We also investigated an alternative centre-surround style cell normalization scheme, in which the image is tiled with a grid of cells and for each cell the total energy in the cell and its surrounding region (summed over orientations and pooled using Gaussian weighting) is used to normalize the cell. However as fig. 4(c) ("window norm") shows, this decreases performance relative to the corresponding block based scheme (by 2% at 10 −4 FPPW, for pooling with σ=1 cell widths). One reason is that there are no longer any overlapping blocks so each cell is coded only once in the final descriptor. Including several normalizations for each cell based on different pooling scales σ provides no perceptible change in performance, so it seems that it is the existence of several pooling regions with different spatial offsets relative to the cell that is important here, not the pooling scale.
To clarify this point, consider the R-HOG detector with overlapping blocks. The coefficients of the trained linear SVM give a measure of how much weight each cell of each block can have in the final discrimination decision. Close examination of fig. 6(b,f) shows that the most important cells are the ones that typically contain major human contours (especially the head and shoulders and the feet), normalized w.r.t. blocks lying outside the contour. In other wordsdespite the complex, cluttered backgrounds that are common in our training set -the detector cues mainly on the contrast of silhouette contours against the background, not on internal edges or on silhouette contours against the foreground. Patterned clothing and pose variations may make internal regions unreliable as cues, or foreground-to-contour transitions may be confused by smooth shading and shadowing effects. Similarly, fig. 6(c,g) illustrate that gradients inside the person (especially vertical ones) typically count as negative cues, presumably because this suppresses false pos-itives in which long vertical lines trigger vertical head and leg cells.
Detector Window and Context
Our 64×128 detection window includes about 16 pixels of margin around the person on all four sides. Fig. 4(e) shows that this border provides a significant amount of context that helps detection. Decreasing it from 16 to 8 pixels (48×112 detection window) decreases performance by 4% at 10 −4 FPPW. Keeping a 64×128 window but increasing the person size within it (again decreasing the border) causes a similar loss of performance, even though the resolution of the person is actually increased.
Classifier
By default we use a soft (C=0.01) linear SVM trained with SVMLight [10] (slightly modified to reduce memory usage for problems with large dense descriptor vectors). Using a Gaussian kernel SVM increases performance by about 3% at 10 −4 FPPW at the cost of a much higher run time.
Discussion
Overall, there are several notable findings in this work. The fact that HOG greatly out-performs wavelets and that any significant degree of smoothing before calculating gradients damages the HOG results emphasizes that much of the available image information is from abrupt edges at fine scales, and that blurring this in the hope of reducing the sensitivity to spatial position is a mistake. Instead, gradients should be calculated at the finest available scale in the current pyramid layer, rectified or used for orientation voting, and only then blurred spatially. Given this, relatively coarse spatial quantization suffices (6-8 pixel wide cells / one limb width). On the other hand, at least for human detection, it pays to sample orientation rather finely: both wavelets and shape contexts lose out significantly here.
Secondly, strong local contrast normalization is essential for good results, and traditional centre-surround style results are insensitive to 's value over a large range.
(b) (a) (b) (c) (d) (e) (f) (g)
Centre-surround normalization. We also investigated an alternative centre-surround style cell normalization scheme, in which the image is tiled with a grid of cells and for each cell the total energy in the cell and its surrounding region (summed over orientations and pooled using Gaussian weighting) is used to normalize the cell. However as fig. 4(c) ("window norm") shows, this decreases performance relative to the corresponding block based scheme (by 2% at 10 −4 FPPW, for pooling with σ=1 cell widths). One reason is that there are no longer any overlapping blocks so each cell is coded only once in the final descriptor. Including several normalizations for each cell based on different pooling scales σ provides no perceptible change in performance, so it seems that it is the existence of several pooling regions with different spatial offsets relative to the cell that is important here, not the pooling scale.
To clarify this point, consider the R-HOG detector with overlapping blocks. The coefficients of the trained linear SVM give a measure of how much weight each cell of each block can have in the final discrimination decision. Close examination of fig. 6(b,f) shows that the most important cells are the ones that typically contain major human contours (especially the head and shoulders and the feet), normalized w.r.t. blocks lying outside the contour. In other words, despite the complex, cluttered backgrounds that are common in our training set, the detector cues mainly on the contrast of silhouette contours against the background, not on internal edges or on silhouette contours against the foreground. Patterned clothing and pose variations may make internal regions unreliable as cues, or foreground-to-contour transitions may be confused by smooth shading and shadowing effects. Similarly, fig. 6(c,g) illustrate that gradients inside the person (especially vertical ones) typically count as negative cues, presumably because this suppresses false positives in which long vertical lines trigger vertical head and leg cells.
Detector Window and Context
Our 64×128 detection window includes about 16 pixels of margin around the person on all four sides. Fig. 4(e) shows that this border provides a significant amount of context that helps detection. Decreasing it from 16 to 8 pixels (48×112 detection window) decreases performance by 4% at 10 −4 FPPW. Keeping a 64×128 window but increasing the person size within it (again decreasing the border) causes a similar loss of performance, even though the resolution of the person is actually increased.
Part-based model
Recent works have shown that modeling objects as a deformable configuration of parts [34,46] leads to increased detection performance compared to rigid templates [32]. In the case of human detection, this is especially useful as the body can assume a large number of different poses. This model can also be used to estimate the 2D pose of humans [47].
Figure 2.6: Part-based model of [33]: (b) root filter, (c) part filters, (d) spatial model for the location of each part. Images from [33].
The basic idea is to have a model based on several HOG feature filters. The model for each object consists of one global root filter (see Figure 2.6(b)), which is equivalent to the rigid template as presented before, and several part models. The features of the part filters are computed at twice the spatial resolution of the root filter in order to capture smaller details. Each part model specifies a spatial model (see Figure 2.6(d)) and a part filter (see Figure 2.6(c)). The spatial model defines a set of allowed placements for a part relative to the detection window and a deformation cost for each placement.
Detection is done using a sliding window approach. The score is computed by adding the score of the root filter and the sum over all parts, taking into account the placement of each part, the filter score and the deformation cost. Both part-based and rigid template-based approaches are usually prone to double detections; therefore, a non-maxima suppression step is necessary to avoid too many false detections around one pedestrian. We will show examples of this phenomenon in Section 2.4.
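The score computation described above can be sketched for a single root location, assuming precomputed filter response maps and a simple isotropic quadratic deformation cost. This is only a didactic sketch with our own names; the actual system of [33] uses a richer deformation model and computes the per-part maxima efficiently with generalized distance transforms:

```python
import numpy as np

def detection_score(root_resp, part_resps, anchors, def_cost):
    """Part-based score at one root location: root filter response plus,
    for each part, the best (response - deformation penalty) over all
    placements. `anchors[k]` is part k's ideal placement."""
    score = root_resp
    placements = []
    for k, resp in enumerate(part_resps):
        ai, aj = anchors[k]
        best, best_pos = -np.inf, None
        for i in range(resp.shape[0]):
            for j in range(resp.shape[1]):
                d = def_cost * ((i - ai) ** 2 + (j - aj) ** 2)  # deformation cost
                if resp[i, j] - d > best:
                    best, best_pos = resp[i, j] - d, (i, j)
        score += best
        placements.append(best_pos)
    return score, placements
```

A part may thus be placed away from its anchor when its filter response there outweighs the deformation penalty, which is exactly how the model accommodates pose variation.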
Training is done using a set of images with an annotated bounding box around each instance of an object (a pedestrian in our case). Learning proceeds similarly to [32], except that, in addition to the model parameters, the part placements also need to be learned. These are treated as latent variables and therefore a Latent SVM is used to learn the model.
Detection results
In this section, we discuss some detection results, show common failure cases and present some further methods proposed in recent literature. In Figure 2.7, we plot some results of the three methods referenced in the previous section on the publicly available dataset PETS2009 [20].
In Figure 2.7(a), we show a common failure case of background subtraction methods. As we can see, the three pedestrians in the center of the image are very close to each other, which means the background subtraction method obtains a single blob in that region.
In some cases it is possible to determine the presence of more than one pedestrian based on blob size and a knowledge of the approximate size of the pedestrian in pixels. In this case, though, the partial occlusion of one of the pedestrians makes it hard to determine exactly how many pedestrians belong to the foreground blob. The resulting detection is therefore placed in the middle of the group of pedestrians, which means we do not only have missed detections but also an incorrect position of the detection, which in fact will be considered as a false alarm. A similar situation is shown for the two pedestrians on the right side of the image. The final detection is positioned in the middle between them. We show results on the Town Center dataset [21] in Figure 2.8. This is a high resolution dataset of a busy town center, where partial occlusions and false alarms are very common. As we can see, double detections are especially problematic, for both the simple HOG detector and the part-based detector. It is common that two pedestrians trigger a single detection with a bigger bounding box, which means non-maxima suppression is a key step in this case. Nonetheless, these methods still present two key advantages for this dataset: (i) most false detections can be easily removed using camera calibration and an approximate size of a pedestrian; (ii) there are few missed pedestrians. As we will see in Chapter 5, the Linear Programming algorithms for tracking are capable of handling false alarms better than missing data.
It is common to see pedestrians walking with objects, either pushing a bicycle, carrying a bag or pushing a trolley or a stroller; such objects frequently cause misdetection of the pedestrian. In recent works, researchers have proposed to include those objects in the tracking system. In [48], a tracker for unknown shapes was proposed in order to deal not only with pedestrians but also with carried objects: 3D information was used to create a model of unknown shapes which was then tracked through time.
Furthermore, in [49] pedestrian-object interactions were included to support tracking hypotheses. This confirms the argument presented in this thesis: context from a pedestrian's environment (in this case, pedestrian-object interaction) can be extremely useful to improve tracking. Finally, tracking systems for complete scene understanding are becoming more and more important in the literature [50].
Chapter 3
Introduction to Linear Programming
In this chapter, we give an introduction to the theory of Linear Programming (LP), defining all the basic concepts used in further chapters. We start by formally defining a Linear
Program and its geometry. We then put special focus on the Simplex method, the most common LP solver and finally we introduce the concept of duality and the relation between LP and graphical models. We refer the interested reader to two books on Linear
Programming [51,52] and one on Network Flows [53] to delve deeper into the subject.
What is Linear Programming?
A linear program consists of a linear objective function
c 1 x 1 + c 2 x 2 + . . . + c n x n (3.1)
subject to linear constraints
a 11 x 1 + a 12 x 2 + . . . + a 1n x n ≤ b 1 (3.2)
a 21 x 1 + a 22 x 2 + . . . + a 2n x n ≤ b 2
. . .
a m1 x 1 + a m2 x 2 + . . . + a mn x n ≤ b m .
Solving the program means finding the x 1 , . . . , x n ∈ R that maximize (or minimize) the objective function while satisfying the linear constraints. The linear program can be expressed as
max {c x : x ∈ R n , Ax ≤ b} (3.3)
where A ∈ R m×n is the matrix of coefficients and b ∈ R m the vector that defines the constraints of the LP. The problem constraints can be written as equalities or inequalities (≤, ≥), as these can always be converted to a standard form without changing the semantics of the problem.
A point x ∈ R n is called feasible if it satisfies all linear constraints, see Figure 3.1(a). A problem can be infeasible if its constraints are contradictory, e.g., x 1 > 1 and x 1 < −1.

A feasible x ∈ R n is an optimal solution to a linear program if c x ≥ c y for all feasible y ∈ R n , see Figure 3.1(b).
A linear program is bounded if there exists a constant M ∈ R such that c x ≤ M for all feasible x ∈ R n . An example of an unbounded problem can be seen in Figure 3.1(c).
Figure 3.1: (a) the space of feasible solutions of a linear program; (b) the optimal solution for the objective max x 2 ; (c) an unbounded problem with objective max x 1 .
Linear Programming forms
A Linear Program can be expressed in different forms, namely, Standard Form 1, Inequality Form, Standard Form 2 and General Form. In order to solve a problem with the Simplex method, for example, we need to have the problem in Standard Form 1, therefore, it is useful to know how to easily go from one form to another. All forms share the same objective function, which is a minimization, but change the way in which the constraints are expressed. Remember that we have n variables, x ∈ R n , and m constraints,
A ∈ R m×n , b ∈ R m .
Standard Form 1. The constraints are expressed as equalities and it is implied that the variables are nonnegative.
min c x (3.4) s.t. Ax = b x ≥ 0.
Inequality Form. The constraints are expressed as inequalities and we need to explicitly define the non-negativity constraints (if any).
min c x (3.5) s.t. Ax ≤ b
Standard Form 2. The constraints are expressed as inequalities and it is implied that the variables are nonnegative.
min c x (3.6) s.t. Ax ≤ b x ≥ 0.
General Form. The constraints are expressed both as equalities and inequalities. We need to explicitly define the non-negativity constraints (if any).
min c x (3.7) s.t. Ax ≤ b Gx = f
Once we have all the forms defined, we are interested in knowing how to go from one form to the other. We are especially interested in converting a problem to Standard Form 1, which is the one we need in order to use the Simplex algorithm. Any LP can be converted into Standard Form 1 by performing a series of operations. Let us consider the following example of an LP problem:
max x 1 + x 2 + 2x 3 (3.8) s.t. 2x 1 + 3x 2 ≤ 12 x 2 + x 3 ≥ 5 x 1 ≥ 4 x 2 ≥ 0
In order to express this problem in Standard Form 1, we can follow a set of simple transformations:
• To convert a maximization problem into a minimization one, we simply negate the objective function:
max x 1 + x 2 + 2x 3 → min −x 1 − x 2 − 2x 3 (3.9)
• To convert inequalities into equalities, we introduce a set of slack variables which represent the difference between the two sides of the inequality and are assumed to be nonnegative. The cost on the objective function for these variables is zero:
2x 1 + 3x 2 ≤ 12 → 2x 1 + 3x 2 + s 1 = 12 , s 1 ≥ 0 (3.10) x 2 + x 3 ≥ 5 → x 2 + x 3 − s 2 = 5 , s 2 ≥ 0
• If the lower bound of a variable is not zero, we introduce another variable and perform substitution:
x 1 ≥ 4 → y 1 = x 1 − 4 , y 1 ≥ 0 (3.11)
• We can replace unrestricted variables by the difference of two restricted variables:
x 3 → x 3 = x 4 − x 5 , x 4 ≥ 0 , x 5 ≥ 0 (3.12)
After all the transformations, we obtain the following LP in standard form:
min − y 1 − 4 − x 2 − 2x 4 + 2x 5 (3.13)
s.t. 2y 1 + 8 + 3x 2 + s 1 = 12
x 2 + x 4 − x 5 − s 2 = 5 y 1 , x 2 , x 4 , x 5 , s 1 , s 2 ≥ 0
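As a sanity check of the transformation, we can verify numerically that a point which is feasible for the original problem of Equation (3.8) remains feasible for the standard form of Equation (3.13), with a negated objective value. The following Python sketch does this for the feasible point x = (4, 0, 5), chosen here purely for illustration:

```python
# Sanity check of the standard-form conversion: take a point that is feasible
# for the original LP of Equation (3.8) and verify that, after the
# substitutions, it satisfies the Standard Form 1 of Equation (3.13) with a
# negated objective value. The point (4, 0, 5) is chosen for illustration.

def original(x1, x2, x3):
    feasible = (2*x1 + 3*x2 <= 12) and (x2 + x3 >= 5) and (x1 >= 4) and (x2 >= 0)
    return feasible, x1 + x2 + 2*x3

x1, x2, x3 = 4, 0, 5
ok, value = original(x1, x2, x3)
assert ok

# Substitutions: y1 = x1 - 4, x3 = x4 - x5, plus slack variables s1, s2.
y1 = x1 - 4
x4, x5 = max(x3, 0), max(-x3, 0)
s1 = 12 - (2*x1 + 3*x2)          # slack of the first constraint
s2 = (x2 + x3) - 5               # surplus of the second constraint

# All standard-form variables are nonnegative ...
assert all(v >= 0 for v in (y1, x2, x4, x5, s1, s2))
# ... the equality constraints of Equation (3.13) hold ...
assert 2*y1 + 8 + 3*x2 + s1 == 12
assert x2 + x4 - x5 - s2 == 5
# ... and the minimization objective equals the negated original objective.
assert -y1 - 4 - x2 - 2*x4 + 2*x5 == -value

print(value)  # 14
```

The same check can be repeated for any feasible point, since the transformations are exact substitutions.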
Geometry of a Linear Program
The geometry of a Linear Program (LP) is important since most solvers exploit this geometry in order to obtain the optimal solution of an LP efficiently. A region defined by the LP, like the yellow striped region in Figure 3.1(a), has a set of corners, also called vertices. If an LP is feasible and bounded, then the optimal solution lies on a vertex.
More formally, a set P of vectors in R n is a polyhedron if P = {x ∈ R n : Ax ≤ b} for some matrix A and some vector b. P defines the set of feasible solutions, as shown in Figure 3.1(a). Considering x ∈ R n , a ∈ R n \ {0} and β ∈ R, an inequality a x ≤ β is valid for a polyhedron P if each x * ∈ P satisfies a x * ≤ β; geometrically, it defines a half-space containing P . The inequality is active at x * ∈ R n if a x * = β.

Let us now consider the notion of a vertex. Looking at Figure 3.1(b), we can see that the optimal solution in yellow is a point inside P where the green and red constraints are active. In this 2D space, we need two constraints to define a vertex. More formally, a point x * ∈ P is a vertex of P if there exist n or more inequalities a x ≤ β that are valid for P and active at x * and not all active at any other point in P .
Another interpretation of the definition of vertices is that the point x * ∈ R n is a basic solution if rank(A I ) = n, where A I x = b I is the sub-system of inequalities active at x * . If x * ∈ P , then it is a basic feasible solution. In this case, x * is a vertex of P iff it is a basic feasible solution.
Theorem 3.1. If a linear program max{c x : x ∈ R n , Ax ≤ b} is feasible and bounded and if rank(A) = n, the LP has an optimal solution that is a vertex.
Recall from linear algebra that a system of equations with m constraints and n variables is directly solvable if m = n and A is full-rank, i.e., invertible. If m < n, we have an underdetermined system, which leads to more than one optimal solution. For example, we can have several solutions that lie on an edge instead of only one solution on a vertex. Finally, if m > n, we have an overdetermined system, in which case it is possible that no solution exists. Such problems are usually solved using least-squares (see [45]).
We can draw an important consequence from Theorem 3.1, which is that an LP can be solved by enumerating all vertices and picking the best one. As the dimensionality of our search space and the number of constraints increase, enumerating all solutions quickly becomes unmanageable. In the following section, we present the Simplex algorithm developed by George B. Dantzig in 1947, which drastically reduces the number of possible optimal solutions that must be checked.
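To make this consequence concrete, the following Python sketch enumerates all candidate vertices of a small 2D toy instance (chosen for illustration) by intersecting every pair of constraint lines, keeps the feasible intersections, and picks the best one. This is exactly the exhaustive strategy that the Simplex method avoids:

```python
from itertools import combinations

# Brute-force illustration of Theorem 3.1: enumerate all candidate vertices of
# {x : Ax <= b} in 2D by intersecting every pair of constraint lines, keep the
# feasible ones, and pick the best. Hypothetical toy instance:
# max x1 + x2  s.t.  x1 <= 3,  x2 <= 2,  -x1 <= 0,  -x2 <= 0.
A = [(1, 0), (0, 1), (-1, 0), (0, -1)]
b = [3, 2, 0, 0]
c = (1, 1)
EPS = 1e-9

def intersect(r1, b1, r2, b2):
    """Solve the 2x2 system r1.x = b1, r2.x = b2 by Cramer's rule."""
    det = r1[0]*r2[1] - r1[1]*r2[0]
    if abs(det) < EPS:                      # parallel constraints: no vertex
        return None
    return ((b1*r2[1] - b2*r1[1]) / det, (r1[0]*b2 - r2[0]*b1) / det)

vertices = []
for (i, j) in combinations(range(len(A)), 2):
    x = intersect(A[i], b[i], A[j], b[j])
    if x and all(ai[0]*x[0] + ai[1]*x[1] <= bi + EPS for ai, bi in zip(A, b)):
        vertices.append(x)

best = max(vertices, key=lambda x: c[0]*x[0] + c[1]*x[1])
print(best)  # (3.0, 2.0), the optimal vertex
```

With 4 constraints this means checking at most 6 intersections, but the number of candidate subsystems grows combinatorially with m and n.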
The Simplex method
If we know that the optimal solution lies on a vertex, we could simply evaluate the objective function on each of the vertices and just pick the optimum one. Nonetheless, the number of vertices of an LP is typically too large, therefore we need to find a clever way to move towards the optimum vertex.
The Simplex method is an iterative method to efficiently solve an LP. The basic intuition behind the algorithm is depicted in Figure 3.3. Starting from a vertex in the feasible region, the idea is to move along the edges of the polyhedron until the optimum solution is reached. Each move from one vertex to another shall increase the objective function (in case we have a maximization problem), so that convergence is guaranteed.
In other words, the Simplex algorithm maintains a basic feasible solution at every step.
Given a basic feasible solution, the method first applies an optimality criterion to test its optimality. If the current solution does not fulfill this condition, the algorithm obtains another solution with a higher value of the objective function (which is closer to the optimum in the case of a maximization problem). Let us now define some more useful concepts.
Let us assume we start with a solution vertex x * . While x * is not optimal, the algorithm finds another vertex x' adjacent to x * with c x' > c x * , and updates x * := x' . If no such vertex can be found, we can assert that the LP is unbounded. This is summarized in Algorithm 1.
As we can see, there are two key aspects to be defined: firstly, how to assert that a vertex is optimal, and secondly, how to find an adjacent vertex with a better cost. Both will be detailed in the next subsections.
Algorithm 1 Basic idea of the Simplex algorithm
Start with vertex x *
while x * is not optimal do
    if we find a vertex x' adjacent to x * with c x' > c x * then
        x * := x'
    else
        Assert that the LP is unbounded.
    end if
end while
Optimality criteria
Again, let us start by defining some concepts, namely bases and degeneracy.
A subset B ⊆ {1, . . . , m} of the row indices of A, with |B| = n and A B invertible, is called a basis of the LP. If in addition the point A −1 B b B is feasible, then B is called a feasible basis. If a vertex x * ∈ P is represented by a basis B, then x * = A −1 B b B .
But a vertex can be represented by many bases. Let us consider the LP problem max c x depicted in Figure 3.4, where x ∈ R 2 . There are 4 constraints in this LP, identified by their coefficients {a 1 , a 2 , a 3 , a 4 } and depicted by green lines. Since we are in a 2D space, each pair of constraints forms a basis for x * . A possible set of feasible solutions created by constraints a 3 and a 4 is painted in light green. In total, we have 6 bases that represent x * , namely,
{{a 1 , a 2 }, {a 2 , a 3 }, {a 1 , a 3 }, {a 3 , a 4 }, {a 1 , a 4 }, {a 2 , a 4 }}.
An LP max{c x : x ∈ R n , Ax ≤ b} is degenerate if there exists an x * ∈ R n such that there are more than n constraints of Ax ≤ b active at x * . The LP depicted in Figure 3.4 is degenerate, since n = 2 and there are 4 active constraints at x * .
A basis B is optimal if it is feasible and the unique λ ∈ R m with λ A = c and λ i = 0, ∀i ∉ B, satisfies λ ≥ 0.
If all components of λ outside of B are zero, then we can write the following equality: λ B A B = c .
Theorem 3.4. Suppose the LP is non-degenerate and B is a feasible but not optimal basis. Then x * = A −1 B b B is not an optimal solution.
Basically, for every vertex x * , we can quickly check if it is an optimal solution by checking if the basis B that represents this vertex is optimal or not. The proof of the theorem will help us see how to move closer to the optimal solution.
Proof. Let us assume that B is a feasible but not optimal basis. We can split the constraints of the LP into active and inactive ones with respect to B.
max c x , s.t. Ax ≤ b, splitting the rows of Ax ≤ b into A B x ≤ b B (active at x * ) and the remaining rows (inactive at x * ). (3.14)

For the unique λ ∈ R m with λ A = c , we have that λ j = 0, ∀j ∉ B. Since B is feasible but not optimal, we know that there will be some λ i < 0 for some i ∈ B.
We now compute a d ∈ R n such that A B\{i} d = 0 and a i d = −1. That means d is orthogonal to all rows of A B except the one that represents constraint i. Let us first take a look at what happens to the objective function if we move along d:

c d = λ B A B d = λ i (a i d), with λ i < 0 and a i d = −1. (3.15)

Given λ i < 0 and the definition of d, for which a i d = −1, we can see that c d > 0. This means that if we move in the direction d, we will improve our objective function value.

Let us now consider a given quantity ε > 0, which represents how much we move along direction d. We are interested in knowing whether the new point x * + εd is feasible, i.e., whether it satisfies Equation (3.14). We can see that the active inequalities are certainly satisfied:
A B (x * + εd) ≤ b B ,(3.16)
since εA B d ≤ 0: the rows in B \ {i} satisfy A B\{i} d = 0, and the remaining row gives a i d = −1. Moreover, since the LP is non-degenerate, the inactive constraints have positive slack at x * , so there exists an ε * > 0 such that x * + ε * d is feasible, because it satisfies all inequalities expressed in Equation (3.14). But the value of the objective function at this new point will be

c (x * + εd) = c x * + ε c d > c x * , since ε > 0 and c d > 0, (3.17)

which is greater than the objective value of x * , proving this is not an optimal solution.
Moving to a better neighbor
Now we have an ε > 0 with which we can move from x * in the direction d to a vertex close to the optimum, namely, to a better neighbor. The question now is how large can ε be. We need to find out how far we can go before we hit a constraint for the first time, because past a constraint, the feasible region ends. This is depicted in Figure 3.5 where the constraint is represented in orange.
Remember we had m constraints in Ax ≤ b. We denote K as the set of indices that represent the constraints that might be hit by x * + εd, and it is formally defined as
K = {k : 1 ≤ k ≤ m, a k d > 0}. (3.18)
a k d needs to be larger than zero, otherwise we would never hit the constraint a k x ≤ b k .
The set of constraints K will contain constraints not in basis B, since all A B d ≤ 0. There are now two cases:
1. K = ∅, which means we can move indefinitely in direction d, and therefore the LP is unbounded.
2. K ≠ ∅, which means there is a constraint with index k which we will hit while moving x * in the direction d, as depicted in Figure 3.5(b). Let us now compute the value of ε k for which we hit constraint k:
a k (x * + ε k d) = b k ⇐⇒ ε k = (b k − a k x * ) / (a k d) (3.19)
We know this division can be done because the denominator is greater than zero.
The optimal ε * will be the smallest of all the ε k :
ε * = min k∈K ε k ,(3.20)
where k * ∈ K is the index for which we find ε * . The optimal ε * must be the minimum, because any greater ε k violates at least the constraint k * , and therefore leaves the feasible region. To know that there is, in fact, a new vertex

x' = x * + ε * d,

which is adjacent to x * and has a higher objective value, we have to prove that B' defined as

B' = B \ {i} ∪ {k * } (3.21)

is a basis. Note that we are incorporating the new constraint k * and taking out the index i that kept the basis B from being optimal (recall that λ i < 0). Remember that d ⊥ a j for all j ∈ B \ {i}, but not d ⊥ a k * , since a k * d > 0. This means that a k * is not a linear combination of the rows a j , j ∈ B \ {i}, proving that B' is a basis. Furthermore, the inequalities A B' x ≤ b B' are active at x' , which means x' is a vertex and in fact adjacent to x * .
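The step-length computation of Equations (3.19) and (3.20) is often called the ratio test. A minimal Python sketch of it (the numbers below form a hypothetical toy instance) could look as follows:

```python
# A sketch of the ratio test of Equations (3.19)-(3.20): given the current
# vertex x*, the direction d, and the constraints Ax <= b, find the largest
# feasible step eps* and the blocking constraint k*.

def ratio_test(A, b, x, d):
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    K = [k for k in range(len(A)) if dot(A[k], d) > 0]   # constraints we can hit
    if not K:
        return None, None          # unbounded: we can move forever along d
    eps = {k: (b[k] - dot(A[k], x)) / dot(A[k], d) for k in K}
    k_star = min(K, key=lambda k: eps[k])
    return eps[k_star], k_star

# Move from x* = (0, 0) along d = (1, 0) under  x1 <= 3  and  x1 + x2 <= 4.
eps_star, k_star = ratio_test([(1, 0), (1, 1)], [3, 4], (0, 0), (1, 0))
print(eps_star, k_star)  # 3.0 0 -> constraint x1 <= 3 is hit first
```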
We have seen so far that the concepts of basic feasible solution and feasible basis are interchangeable, therefore we can rewrite the Simplex algorithm in basis notation, as shown in Algorithm 2.
Algorithm 2 The Simplex algorithm
Start with a feasible basis B
while B is not optimal do
    Let i ∈ B be the index with λ i < 0 (remember λ A = c and λ j = 0, ∀j ∉ B)
    Compute d ∈ R n with A B\{i} d = 0 and a i d = −1
    Determine K = {k : 1 ≤ k ≤ m, a k d > 0}
    if K = ∅ then
        Assert that the LP is unbounded.
    else
        Let k * ∈ K be the index where min k∈K (b k − a k x * ) / (a k d) is attained
        Update B := B \ {i} ∪ {k * }
    end if
end while
Theorem 3.5. If the Linear Program is non-degenerate, then the Simplex algorithm terminates.
The idea of the Simplex algorithm is to jump from one base to another (equivalently from vertex to vertex), making sure no base is revisited. We have proven before that when we move in direction d from point x * to x , we obtain c x > c x * , which means that we are making progress at each iteration of the Simplex, proving it will eventually terminate.
The degenerate case: Bland's pivot rule
The Simplex algorithm as described in Algorithm 2 can be applied to degenerate Linear
Programs, but we can encounter the problem of cycling, which is when we move from one basis to another without progress and end up returning to one of the bases we already visited. This means that the algorithm would never terminate. In order to avoid this, we need to carefully choose the indices that are leaving and entering the basis at each iteration, an operation that is called pivoting. In Algorithm 3, we highlight in orange the changes to the Simplex algorithm according to Bland's pivot rule [54], which allows Simplex to solve degenerate LP.
Algorithm 3 The Simplex algorithm with Bland's pivot rule
Start with a feasible basis B
while B is not optimal do
    Let i ∈ B be the smallest index with λ i < 0 (λ A = c and λ j = 0, ∀j ∉ B)
    Compute d ∈ R n with A B\{i} d = 0 and a i d = −1
    Determine K = {k : 1 ≤ k ≤ m, a k d > 0}
    if K = ∅ then
        Assert that the LP is unbounded.
    else
        Let k * ∈ K be the smallest index where min k∈K (b k − a k x * ) / (a k d) is attained
        Update B := B \ {i} ∪ {k * }
    end if
end while
Theorem 3.6. If Bland's rule is applied, the Simplex algorithm terminates.
For the interested reader, the proof of the theorem can be found in [55].
Finding an initial vertex
In all descriptions of Simplex in Algorithms 1, 2 and 3, it always starts by choosing a feasible initial vertex or basis. But how do we find this initial vertex? Finding a feasible solution of a Linear Program is almost as difficult as finding an optimal solution.
Fortunately, by using a simple technique, we can find a feasible solution of a related auxiliary LP and use it to initialize the Simplex method on our LP. Let us consider our initial LP to be in the standard form 2:
max c x (3.22) s.t. Ax ≤ b x ≥ 0.
We can split the conditions according to whether b i has a positive or negative value:
Ax ≤ b A 1 x ≤ b 1 , b 1 ≥ 0, b 1 ∈ R m 1 A 2 x ≤ b 2 , b 2 < 0, b 2 ∈ R m 2 (3.23)
and define a new artificial variable y. We now create an auxiliary LP where we minimize the sum of the new artificial variables:
min y 1 + · · · + y m 2 (3.24)
s.t. A 1 x ≤ b 1
A 2 x ≤ b 2 + y
x, y ≥ 0
y ≤ |b 2 |.
We can show that this auxiliary problem is always feasible, since we can always find an initial feasible solution such as x * = 0, y * = |b 2 |, i.e., each y i is set to the absolute value of the corresponding component of b 2 . This point fulfills all conditions of the auxiliary LP of Equation (3.24), and is therefore a feasible initial vertex. From here, we can apply the Simplex as described in Algorithm 3 to find the optimal solution. If we find an optimal solution with variables x * , y * which yields a value of zero for the objective function of Equation (3.24), we can assert that the vertex x * is a feasible solution of the original LP problem in Equation (3.22). We will then use this initial vertex to start the Simplex algorithm to solve the original LP. On the other hand, if we find that the minimum value of the auxiliary problem is larger than zero, we can assert that the original LP is infeasible.
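The feasibility of the starting point x * = 0, y * = |b 2 | can be checked mechanically. The following Python sketch does so for a hypothetical right-hand side b = (3, −2, −1):

```python
# The auxiliary problem of Equation (3.24) is always feasible: x* = 0 together
# with y* = |b2| satisfies every constraint. A quick check on a hypothetical
# right-hand side b = (3, -2, -1), split into b1 = (3) and b2 = (-2, -1):

b = [3, -2, -1]
b1 = [bi for bi in b if bi >= 0]          # rows kept in  A1 x <= b1
b2 = [bi for bi in b if bi < 0]           # rows moved to A2 x <= b2 + y
y = [abs(bi) for bi in b2]                # starting point y* = |b2|

# With x* = 0 we have A1 x* = 0 and A2 x* = 0, so feasibility reduces to:
assert all(bi >= 0 for bi in b1)                    # A1 x* <= b1
assert all(bi + yi >= 0 for bi, yi in zip(b2, y))   # A2 x* <= b2 + y*
assert all(yi <= abs(bi) for yi, bi in zip(y, b2))  # y <= |b2|

print(sum(y))  # auxiliary objective at the starting vertex: 3
```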
The final complete description of the Simplex algorithm is found in Algorithm 4. Finding the initial vertex is commonly called Phase I of the Simplex algorithm, while the optimization towards the final solution through pivoting is commonly referred to as Phase II. A hands-on example on how to solve a problem practically with Simplex will be presented in Section 3.5.
Algorithm 4 The complete Simplex algorithm with Bland's pivot rule
Create the auxiliary problem of the LP and find the optimal solution with basis B and objective function value z.
if z = 0 then
    B is a feasible basis of the initial LP. Start with the feasible basis B.
    while B is not optimal do
        Let i ∈ B be the smallest index with λ i < 0 (λ A = c and λ j = 0, ∀j ∉ B)
        Compute d ∈ R n with A B\{i} d = 0 and a i d = −1
        Determine K = {k : 1 ≤ k ≤ m, a k d > 0}
        if K = ∅ then
            Assert that the LP is unbounded.
        else
            Let k * ∈ K be the smallest index where min k∈K (b k − a k x * ) / (a k d) is attained
            Update B := B \ {i} ∪ {k * }
        end if
    end while
else
    Assert that the LP is infeasible.
end if
Complexity
The Simplex method is remarkably efficient, especially compared to earlier methods such as Fourier–Motzkin elimination. However, in 1972 it was proven that the Simplex method has exponential worst-case complexity [56]. Nonetheless, following the observation that the Simplex algorithm is efficient in practice, it has been shown that it has polynomial-time average-case complexity under various probability distributions.
In order for the Simplex to perform in polynomial time, we have to use certain pivoting rules that allow us to go from one vertex of the polyhedron to another in a small number of steps. We will better understand this concept when we introduce the graphical model representation of a polyhedron in Section 3.6.
The dual Linear Program
In this section, we introduce a very important property of Linear Programs: duality.
Given any general optimization problem, or primal problem, we can always convert it to a dual problem. For LPs the dual problem is also an LP. The motivation to use dualization, depicted in Figure 3.6, is that the dual problem gives us an upper bound on the objective function of the primal problem. As we saw in Section 3.3, the Simplex algorithm starts from a suboptimal solution and performs gradient ascent to iteratively find solutions with increasing objective value, until the optimum is reached. In the case of dual linear programs, we can find an upper bound and iteratively make it more stringent until it reaches the optimum. It is guaranteed for LPs that the smallest upper bound will correspond to the optimum solution z * of the primal problem.
Let us consider the following LP:
max x 1 + 2x 2
s.t. −2x 1 + x 2 ≤ −2
x 2 ≤ 4
x 1 − 2x 2 ≤ −2
x 1 ≤ 4
x 1 , x 2 ≥ 0
We can try to find an upper bound on the value of the objective function. One way to do this, is by linearly combining the constraints of the problem, to obtain an expression of the form c x ≤ y b, where y are the coefficients of this linear combination. Let us multiply the first constraint by 2, the fourth by 5 and sum them up:
−2x 1 + x 2 ≤ −2 =⇒ ×2 =⇒ −4x 1 + 2x 2 ≤ −4 x 1 ≤ 4 =⇒ ×5 =⇒ 5x 1 ≤ 20 x 1 + 2x 2 ≤ 16
Note how we obtained our objective function after the sum, and therefore we can say that 16 is an upper bound. We can also try another combination, summing the fourth constraint and the second multiplied by 2:
x 1 ≤ 4 =⇒ ×1 =⇒ x 1 ≤ 4 x 2 ≤ 4 =⇒ ×2 =⇒ 2x 2 ≤ 8 x 1 + 2x 2 ≤ 12
In this case, we obtain an upper bound of 12, which turns out to be the smallest upper bound and therefore corresponds to the optimum of the objective function.
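Both combinations can be verified numerically. The Python sketch below reproduces the two upper bounds and, using the primal feasible point (4, 4) found by inspection, confirms that the bound 12 is attained and therefore optimal:

```python
# Numerical check of the two bounds derived above: each nonnegative multiplier
# vector y whose combination y^T A reproduces c^T yields the upper bound y^T b.

A = [(-2, 1), (0, 1), (1, -2), (1, 0)]   # constraint rows of the example LP
b = [-2, 4, -2, 4]
c = (1, 2)

def bound(y):
    # Combined row y^T A must equal c^T; the bound is then y^T b.
    row = tuple(sum(yi * ai[j] for yi, ai in zip(y, A)) for j in range(2))
    assert row == c
    return sum(yi * bi for yi, bi in zip(y, b))

print(bound((2, 0, 0, 5)))   # first combination  -> 16
print(bound((0, 2, 0, 1)))   # second combination -> 12

# The point (4, 4) is primal feasible and attains 12, so 12 is optimal.
x = (4, 4)
assert all(ai[0]*x[0] + ai[1]*x[1] <= bi for ai, bi in zip(A, b))
print(c[0]*x[0] + c[1]*x[1])  # 12
```

The multiplier vectors (2, 0, 0, 5) and (0, 2, 0, 1) are exactly the two combinations written out above.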
The general principle to find the dual problem is to multiply each of the constraints by a new positive variable, namely the dual variable and sum the constraints up:
max c 1 x 1 + c 2 x 2 +· · ·+ c n x n s.t. a 11 x 1 + a 12 x 2 +· · ·+ a 1n x n ≤ b 1 −→ e 1 ≤ b 1 −→ y 1 a 21 x 1 + a 22 x 2 +· · ·+ a 2n x n ≤ b 2 −→ e 2 ≤ b 2 −→ y 2 . . . a m1 x 1 +a m2 x 2 +· · ·+a mn x n ≤ b m −→ e m ≤ b m −→ y m −x 1 ≤ 0 −→ y m+1 −x 2 ≤ 0 −→ y m+2 . . . −x n ≤ 0 −→ y m+n .
Note that we already used this trick in Section 3.3.1, with λ as our new variables. The variables have to be positive in order not to change the inequality sign. Now we want to make this sum equal to our objective function:
z = c 1 x 1 + c 2 x 2 + · · · + c n x n ≡ y 1 e 1 + · · · + y m e m − y m+1 x 1 − · · · − y m+n x n
which, by the constraints of the primal problem, is upper bounded by
z ≤ y 1 b 1 + y 2 b 2 + . . . + y m b m .
Recall that our objective is to find the smallest upper bound. Let us express this in a matrix notation. To make the notation clearer, we separate the new variables between the ones associated to the constraints of the primal y = {y 1 , . . . , y m } and the ones associated with the implicit positivity constraints, y s = {y m+1 , . . . , y m+n }.
min b y s.t. A y − y s = c y ≥ 0 y s ≥ 0
We can eliminate y s by substitution, y s = A y − c, obtaining the final equations for the primal and dual problems:
PRIMAL max c x s.t. Ax ≤ b x ≥ 0 DUAL min b y s.t. A y ≥ c y ≥ 0
So far, we have seen the relationship between a Linear Program and its dual. This is summarized in the following theorem:
Theorem 3.7. Weak Duality. Consider a Linear Program max{c x : x ∈ R n , Ax ≤ b, x ≥
0} and its dual min{b y : y ∈ R m , A y ≥ c, y ≥ 0}. If x * ∈ R n and y * ∈ R m are primal and dual feasible respectively, then c x * ≤ b y * .
This can be easily seen from the chain of inequalities c x ≤ y Ax ≤ y b = b y, where the first inequality comes from the constraints of the dual problem (A y ≥ c) together with x ≥ 0, and the second from the constraints of the primal (Ax ≤ b), provided that y ≥ 0.
An even more important theorem is:
Theorem 3.8. Strong Duality. Consider a Linear Program max{c x : x ∈ R n , Ax ≤ b, x ≥ 0} and its dual min{b y : y ∈ R m , A y ≥ c, y ≥ 0}.
If the primal is feasible and bounded, then there exist a primal feasible x * and a dual feasible y * with c x * = b y * .
This means that with the dual we can find an upper bound that is tight at the optimal solution of the primal. This can be used to prove optimality of primal solutions and, as a consequence, optimality of dual solutions.
Proof. The proof of Theorem 3.8 is divided into two cases.
1. A has full column rank.
If this is the case, then we can use the Simplex algorithm to obtain an optimal basis B ⊆ {1, . . . , m}. By the optimality of B, there is a unique y ∈ R m with y B A B = c , y i = 0 for all i ∉ B, and y ≥ 0. The condition y ≥ 0 is therefore fulfilled, and y is dual feasible.
Now consider that x * = A −1 B b B is the current primal solution returned by the Simplex. We can compare the value of the objective function at x * with the value of the dual objective function at y to check that they are, in fact, equal:

c x * = y B A B x * = y B A B A −1 B b B = y B b B = y b.

2. rank(A) < n.
First, we need to make sure our constraint matrix has full column rank, which is why we replace the vector of variables x with x 1 − x 2 . Now the Linear Program looks like:
max c (x 1 − x 2 ) A(x 1 − x 2 ) ≤ b x 1 , x 2 ≥ 0
Note that the new LP will be equivalent to the old one in the sense that any solution will also be a solution of the initial LP with the same objective value. If we consider the new variable to be x' = (x 1 , x 2 ) , the constraint matrix that also incorporates the positivity constraints is

A' = (  A  −A
       −I   0
        0  −I ) ,

where I is an n × n identity matrix, the objective function vector is c' = (c , −c ) , and the right-hand side term is b' = (b , 0 , 0 ) .
The new constraint matrix does have full column rank, since its columns are now all independent thanks to the placement of the new identity matrices. We can now use the Simplex algorithm to find a solution. Let us denote the primal solution returned as (x * 1 , x * 2 ), while y = (y 1 , y 2 , y 3 ) is the dual returned by the Simplex to verify the optimality of the primal solution. Let us write the conditions that should be verified by the dual, taking into account that y ≥ 0:

y 1 A − y 2 = c ⇒ y 1 A ≥ c
y 1 (−A) − y 3 = −c ⇒ −y 1 A ≥ −c
⇒ y 1 A = c .
We have just proven that y 1 is dual feasible. Now the Simplex algorithm can check the condition of optimality for the primal solution by verifying that:
c x * 1 − c x * 2 = y 1 b + y 2 0 + y 3 0 = y 1 b
And this proves the theorem, because we have found one possible primal feasible solution x * 1 − x * 2 and one dual feasible solution y 1 whose objective function values coincide.
Proving optimality and infeasibility
So far we have seen that there is a close relationship between the dual and primal problems and between the dual and primal optimum solutions. But what happens, for example, if the dual problem is infeasible? Let us consider the following example:
PRIMAL:
max x 1 + 2x 2 + x 3
s.t. x 1 + x 2 ≤ 1
x 1 + x 3 ≤ 4

DUAL:
min y 1 + 4y 2
s.t. y 1 + y 2 = 1
y 1 = 2
y 2 = 1
y 1 , y 2 ≥ 0

If we check the primal problem carefully, we can identify c = (1, 2, 1), the constraint matrix A with rows (1, 1, 0) and (1, 0, 1), and b = (1, 4), and therefore the dual problem is defined as shown. Nonetheless, we
can quickly see that the dual problem is infeasible, since the conditions set y 1 = 2 and y 2 = 1 which means y 1 + y 2 will never be 1. An infeasible dual implies that we cannot determine a bound for the primal. If we take a closer look at the primal problem we see that it is, in fact, unbounded. For any α ≥ 0 that we choose, if we assign x = (−α, α, α), the problem is feasible and the objective value is 2α, which means the objective function can be maximized to infinity, making the problem unbounded.
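This unbounded ray can also be checked numerically; the following sketch evaluates x = (−α, α, α) for growing α:

```python
# Certificate of unboundedness for the primal above: the ray x = (-a, a, a)
# stays feasible for every a >= 0 while the objective grows as 2a.

def feasible(x1, x2, x3):
    return (x1 + x2 <= 1) and (x1 + x3 <= 4)

for a in (0, 10, 1000, 10**6):
    x = (-a, a, a)
    assert feasible(*x)
    assert x[0] + 2*x[1] + x[2] == 2*a   # objective value 2a, unbounded above

print("feasible along the whole ray; objective grows without bound")
```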
We summarize the relationship between primal and dual problems in Table 3.1. In the first case, by the strong duality Theorem 3.8, if a primal has an optimal solution, the dual will also have an optimal solution. The second case is when the primal is unbounded. In that case, by the weak duality Theorem 3.7, the dual problem is infeasible.
Table 3.1: Possible combinations of primal and dual outcomes.

Primal \ Dual    Optimal    Unbounded    Infeasible
Optimal             X
Unbounded                                    X
Infeasible                      X            X
We can then ask ourselves what would happen if we dualized the dual. It can be proven that the dual of the dual is a Linear Program that is equivalent to the primal. By symmetry, when the primal is infeasible, the dual is either unbounded or infeasible: both cases appear in Table 3.1.
Now to recap what the Simplex algorithm does: the algorithm returns a primal solution x * and a dual solution y * , and the only thing that needs to be done to prove the optimality of the solution is to check whether the equality c x * = b y * is fulfilled. The optimality proof is thus clear; additionally, infeasibility can be proven by Farkas' lemma, which states that the system Ax ≤ b has no solution if and only if there exists a vector λ ≥ 0 with λ A = 0 and λ b = −1. Intuitively, if such a vector λ existed, then the inequality (λ A)x ≤ λ b would be valid for every feasible point, but since λ A = 0 and λ b = −1, there would be no point x ∈ R n that could satisfy the inequality Ax ≤ b, making the problem infeasible.
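As a concrete illustration of such a certificate, consider the one-variable system x 1 ≤ 0, −x 1 ≤ −1 (a hypothetical example, clearly infeasible, since it requires x 1 ≥ 1 and x 1 ≤ 0 at once). The vector λ = (1, 1) is a Farkas certificate for it:

```python
# A minimal Farkas certificate. The system x1 <= 0 and -x1 <= -1 (i.e. x1 >= 1)
# is infeasible; lambda = (1, 1) certifies this, since lambda >= 0,
# lambda^T A = 0 and lambda^T b = -1: no x can satisfy
# 0 = (lambda^T A) x <= lambda^T b = -1.

A = [(1,), (-1,)]   # rows of the one-variable system
b = [0, -1]
lam = (1, 1)

assert all(l >= 0 for l in lam)                      # lambda >= 0
assert sum(l * a[0] for l, a in zip(lam, A)) == 0    # lambda^T A = 0
assert sum(l * bi for l, bi in zip(lam, b)) == -1    # lambda^T b = -1
print("infeasibility certified")
```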
Simplex in practice
So far, we have presented the theory behind Linear Programming and the Simplex algorithm, and a step-by-step explanation of the initialization and optimization phases that lead to the complete algorithm described in Algorithm 4. But how does Simplex work in practice? How can we implement a Simplex solver?
If we want to code a Simplex solver, we need, first of all, a convenient data structure for Linear Programs and their solutions. Such data structure is called a Dictionary of an LP.
Dictionaries
A dictionary is a simple way to represent an LP. We can obtain it starting from the Standard Form 2 presented in Section 3.1.1, and performing a series of simple steps. Let us recall Standard Form 2 of an LP:
max c x
s.t. Ax ≤ b x ≥ 0.
The first thing we do it add slack variables on the constraint equations to convert them to equalities:
max c x
s.t. Ax + x B = b x ≥ 0.
Now we basically rearrange the equations into the following dictionary form:
x B = b − A x I
z = c 0 + c x I
which brings us to the dictionary form we will use throughout this section:
x B1 = b 1 + a 11 x I1 + · · · + a 1n x In x B2 = b 2 + a 21 x I1 + · · · + a 2n x In . . . x Bm = b m + a m1 x I1 + · · · + a mn x In z = c 0 + c 1 x I1 + · · · + c n x In .
The variables x B on the left-hand side are called basic variables, while the variables x I on the right-hand side are called non-basic variables. The solution associated with a dictionary is obtained by setting all non-basic variables to zero, so that each basic variable takes the value of its constant term and the objective function takes the value c 0 . Once we have a way to structure the data of our problem, we can define how the Simplex method works with dictionaries. An overview is presented in Figure 3.7.
The main operation we perform on dictionaries is pivoting, which is a way to go from one dictionary to another. As we mentioned before, Simplex is divided into two phases: • Phase I, or Initialization Phase: we start with an infeasible dictionary and pivot until we reach a feasible dictionary or determine the problem is infeasible.
• Phase II, or Optimization Phase: we optimize our feasible dictionary and our solution until we reach the optimum or determine the problem is unbounded.
In the following subsections, we describe both phases and how they work with dictionaries.
Phase II: Pivoting
The idea of the pivoting operation is, given a feasible initial dictionary, to obtain a new dictionary which has a corresponding solution with a higher objective value. Recall that the solution associated with a dictionary is represented by the basic variables. During pivoting, we consider whether inserting some of the non-basic variables to the basis would actually lead to an objective value increase. Of course, if a variable enters the basis, another variable has to leave it. But how do we choose the entering and leaving variables?
Let us consider the following example:
max 5x 1 + 4x 2 + 3x 3 s.t. 2x 1 + 3x 2 + x 3 ≤ 5 4x 1 + x 2 + 2x 3 ≤ 11 3x 1 + 4x 2 + 2x 3 ≤ 8 x 1 , x 2 , x 3 ≥ 0
which has the following corresponding dictionary:
x 4 = 5 − 2x 1 − 3x 2 − x 3 x 5 = 11 − 4x 1 − x 2 − 2x 3 x 6 = 8 − 3x 1 − 4x 2 − 2x 3 z = 0 + 5x 1 + 4x 2 + 3x 3 .
We can immediately read the solution associated with this dictionary, which is x_1 = 0, x_2 = 0, x_3 = 0, x_4 = 5, x_5 = 11, x_6 = 8, with objective value z = 0. As entering variable we choose a non-basic variable with a positive coefficient in the objective row, for instance x_1, since increasing it increases z. The basic variables, however, must remain nonnegative to respect the constraints. Therefore, we can intuitively see that the basic variables will limit how much we can increase the entering variable. The basic variable that puts the tightest restriction will be chosen as leaving variable.
Let us see which variable is limiting the increase of the value of x_1. If we increase x_1, then variable x_4 will decrease, since the coefficient associated with x_1 is negative. We can only increase x_1 up to 5/2, in which case x_4 = 0. x_5 limits x_1 ≤ 11/4 and x_6 limits x_1 ≤ 8/3. In this case, x_4 limits x_1 to the lowest value, and therefore will be chosen as leaving variable.
The next step is to modify the dictionary according to the entering and leaving variables.
In order to do that, we first solve the equation of the leaving variable for the entering variable. In our example:
x_4 = 5 − 2x_1 − 3x_2 − x_3  −→  x_1 = 5/2 − (3/2)x_2 − (1/2)x_3 − (1/2)x_4
Now we substitute x_1 by the obtained expression in all the other equations to obtain a new dictionary:

x_1 = 5/2 − (3/2)x_2 − (1/2)x_3 − (1/2)x_4
x_5 = 1 + 5x_2 + 2x_4
x_6 = 1/2 + (1/2)x_2 − (1/2)x_3 + (3/2)x_4
z = 25/2 − (7/2)x_2 + (1/2)x_3 − (5/2)x_4.

We can read the new solution associated with this new dictionary, which is x_1 = 5/2, x_2 = 0, x_3 = 0, x_4 = 0, x_5 = 1, x_6 = 1/2, z = 25/2. As we can see, pivoting has brought us to a new dictionary with a higher objective value than the initial one.
We can pivot one more time, with entering variable x 3 and leaving variable x 6 . Note that x 5 imposes no constraint on the increase of x 3 because they are not related by any equation. The new dictionary we will obtain after pivoting is:
x_3 = 1 + x_2 + 3x_4 − 2x_6
x_1 = 2 − 2x_2 − 2x_4 + x_6
x_5 = 1 + 5x_2 + 2x_4
z = 13 − 3x_2 − x_4 − x_6.
If we look at the last dictionary obtained, we see that all the coefficients c_j ≤ 0, which means we do not have a choice for entering variable. There is no non-basic variable we can choose that will increase the value of the objective function, which means we have reached the optimum at z = 13 with x_1 = 2, x_2 = 0, x_3 = 1, x_4 = 0, x_5 = 1, x_6 = 0.
In Algorithm 5 we present an overview of Simplex Phase II:
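The Phase II loop can also be sketched in code (a minimal version of our own on the dictionary representation, using exact rational arithmetic; the function name and the Bland's-rule tie-breaking are our choices, not prescribed by the text):

```python
from fractions import Fraction

def simplex_phase2(A, b, c):
    """Phase II of Simplex on the dictionary x_B = b + D x_N, z = z0 + c x_N.

    Solves max c^T x subject to A x <= b, x >= 0, assuming b >= 0 so that the
    all-slack starting dictionary is feasible.  Bland's rule (smallest variable
    index) picks the entering variable, so the loop cannot cycle on degenerate
    dictionaries.
    """
    m, n = len(A), len(c)
    B = list(range(n, n + m))        # basic variables: the slacks
    N = list(range(n))               # non-basic variables: the originals
    D = [[Fraction(-A[i][j]) for j in range(n)] for i in range(m)]
    b = [Fraction(v) for v in b]
    c = [Fraction(v) for v in c]
    z0 = Fraction(0)
    while True:
        # Entering variable: positive objective coefficient (Bland's rule).
        cand = [j for j in range(n) if c[j] > 0]
        if not cand:                 # optimal dictionary reached
            x = [Fraction(0)] * (n + m)
            for i, v in enumerate(B):
                x[v] = b[i]
            return "optimal", z0, x[:n]
        j = min(cand, key=lambda j: N[j])
        # Leaving variable: tightest bound b_i / (-D_ij) over rows with D_ij < 0.
        rows = [i for i in range(m) if D[i][j] < 0]
        if not rows:                 # nothing limits the entering variable
            return "unbounded", None, None
        i = min(rows, key=lambda i: (b[i] / -D[i][j], B[i]))
        # Pivot: solve row i for the entering variable ...
        piv = D[i][j]
        b[i] = b[i] / -piv
        D[i] = [Fraction(1) / piv if k == j else D[i][k] / -piv
                for k in range(n)]
        # ... and substitute it into every other row and the objective.
        for r in range(m):
            if r != i and D[r][j] != 0:
                f = D[r][j]
                b[r] += f * b[i]
                D[r] = [f * D[i][k] if k == j else D[r][k] + f * D[i][k]
                        for k in range(n)]
        f = c[j]
        z0 += f * b[i]
        c = [f * D[i][k] if k == j else c[k] + f * D[i][k] for k in range(n)]
        B[i], N[j] = N[j], B[i]
```

On the example above, `simplex_phase2([[2,3,1],[4,1,2],[3,4,2]], [5,11,8], [5,4,3])` reproduces the pivots of the text and returns the optimum z = 13 at x = (2, 0, 1).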
In the following subsection, we will discuss the feasibility of dictionaries after pivoting as well as degenerate dictionaries. Most importantly, we will discuss how to identify unbounded problems.
Proving feasibility
One could ask if by pivoting we will always end up with a feasible dictionary. In other words, does pivoting maintain feasibility of the dictionaries? Here is a small proof. Let us again consider the general dictionary:
x_B1 = b_1 + a_11 x_I1 + · · · + a_1j x_Ij + · · · + a_1n x_In   → x_Ij ≤ b_1/(−a_1j)
x_B2 = b_2 + a_21 x_I1 + · · · + a_2j x_Ij + · · · + a_2n x_In   → x_Ij ≤ ∞
. . .
x_Bi = b_i + a_i1 x_I1 + · · · + a_ij x_Ij + · · · + a_in x_In   → x_Ij ≤ b_i/(−a_ij)
. . .
x_Bm = b_m + a_m1 x_I1 + · · · + a_mj x_Ij + · · · + a_mn x_In   → x_Ij ≤ b_m/(−a_mj)
z = c_0 + c_1 x_I1 + · · · + c_j x_Ij + · · · + c_n x_In
where x Ij is the entering variable and x Bi the leaving variable. Let us now analyze what happens with variable x B1 , assuming it is not the leaving variable. Given the new entering variable x Ij , it will be assigned a value of
x_B1 = b_1 + a_1j · b_i/(−a_ij).
In order for the new dictionary to be feasible, x B1 ≥ 0. Can we prove that it will never be negative?
Firstly, we can see that b_1 ≥ 0, because the current dictionary is feasible, i.e. x_B1 = b_1 when all non-basic variables are set to zero. Secondly, we know a_ij < 0, otherwise x_Bi would not be the leaving variable related to the entering variable x_Ij, because it would not constrain the increase of x_Ij. The only thing we need to determine now is the value of a_1j, for which we have two possibilities:
• a_1j ≥ 0: we can directly determine that x_B1 ≥ 0.
• a_1j < 0: we cannot directly determine if x_B1 will be nonnegative, but we do know that b_i/(−a_ij) ≤ b_1/(−a_1j), otherwise x_B1 would be the leaving variable. From this, we can derive:

b_i/(−a_ij) ≤ b_1/(−a_1j)  −→  a_1j · b_i/(−a_ij) ≥ −b_1  −→  b_1 + a_1j · b_i/(−a_ij) ≥ 0.
Degeneracy
We have established that the pivoting operation maintains feasibility. The only question we need to answer now is what happens to the value of the objective function during pivoting. We know that the entering variable will take value x_Ij = b_i/(−a_ij) after pivoting, while the leaving variable will take value x_Bi = 0. Given that all other non-basic variables will remain zero, the objective value of the new dictionary will be:
z = c_0 + c_j · b_i/(−a_ij).
We know c_j > 0, otherwise x_Ij would not be the entering variable. On the other hand, a_ij < 0, otherwise x_Bi would not be the leaving variable. If b_i > 0, the objective value z can only increase; however, it can happen that b_i = 0, in which case the objective value remains constant. A dictionary with these characteristics is called a degenerate dictionary. We can see an example below:
x_3 = 1/2 + (1/2)x_4
x_5 = 0 − 2x_1 + 4x_2 + 3x_4
x_6 = 0 + x_1 − 3x_2 + 2x_4
z = 4 + 2x_1 − x_2 − 4x_4.
In this case, pivoting could bring us from one dictionary to another without ever increasing the objective value, which means the algorithm might cycle and never terminate.
In order to avoid cycling, we can apply Bland's rule as explained in Section 3.3.3.
Unbounded problems
There is only one case that needs to be analyzed by the Phase II algorithm, and that is the case of unbounded LPs. Let us consider the following dictionary:
x_4 = 5 − x_1 + x_2
x_5 = 6 + x_1 − x_3
x_6 = 2 + 2x_1 − x_3
x_7 = 4 + x_1 − x_2
z = 0 + 2x_1 + 3x_2 − 5x_3.
At first glance, we cannot say if the problem is unbounded or not, so we just start pivoting. We choose x 2 as entering variable, which means x 7 is the leaving variable. The new dictionary we obtain is therefore:
x_2 = 4 + x_1 − x_7
x_4 = 9 − x_7
x_5 = 6 + x_1 − x_3
x_6 = 2 + 2x_1 − x_3
z = 12 + 5x_1 − 3x_7 − 5x_3.
The new entering variable of this dictionary should be x 1 , but let us look at what happens with the leaving variable. Remember that the leaving variable should limit the increase of x 1 , but in this case, when we increase x 1 , x 2 , x 5 and x 6 all increase without limits, and x 4 does not depend on x 1 . This means that we could arbitrarily increase x 1 and the non-negativity constraints would still be respected. When we cannot find any leaving variable, we can conclude that the problem is unbounded. Alternatively, we can say that the problem is unbounded when all entries of the column corresponding to the entering variable are nonnegative.
Phase I: Initialization
Up to now we described how to solve an LP given an initial feasible dictionary. Let us now consider the following LP
max x_1 + 2x_2
s.t. −2x_1 + x_2 ≤ −2
     x_2 ≤ 4
     x_1 − 2x_2 ≤ −2
     x_1 ≤ 4
     x_1, x_2 ≥ 0
and corresponding dictionary
x_3 = −2 + 2x_1 − x_2
x_4 = 4 − x_2
x_5 = −2 − x_1 + 2x_2
x_6 = 4 − x_1
z = 0 + x_1 + 2x_2.
If we analyze the corresponding solution, x_1 = 0, x_2 = 0, x_3 = −2, x_4 = 4, x_5 = −2, x_6 = 4, we see that variables x_3 and x_5 do not respect the non-negativity constraints, and therefore the initial solution is infeasible. In Figure 3.8, we plot the feasible region of the LP in orange. As we can see, the solution associated with the dictionary, x_1 = 0, x_2 = 0, is outside of the feasible region.
The question is, what do we do when the initial dictionary is infeasible? The strategy we follow is to slightly modify the initial LP and create the auxiliary problem. We then perform a pivoting on the auxiliary problem, and given the solution we find, we can draw a conclusion about the feasibility of the original LP.
Let us first describe how to construct the auxiliary problem. If we look at the previous LP, we can see that the reason why the solution was not feasible was that x_3 and x_5 had negative values. We could make those values positive by adding a certain quantity x_0, which would have to be at least 2. Let us forget about the objective function for now and add x_0 to each constraint:
[Figure 3.8: Feasible region of the LP (orange), with vertices (2,2), (3,4), (4,4) and (4,3) in the (x_1, x_2) plane; the infeasible initial solution (0,0) lies outside it and must be moved to a feasible initial solution.]
−2x_1 + x_2 ≤ −2 + x_0
x_2 ≤ 4 + x_0
x_1 − 2x_2 ≤ −2 + x_0
x_1 ≤ 4 + x_0
x_1, x_2, x_0 ≥ 0
If we set x_0 = 2, the new problem has the solution x_1 = 0, x_2 = 0, x_3 = 0, x_4 = 6, x_5 = 0, x_6 = 6, which is feasible. In fact, one can prove that the auxiliary problem will always be feasible. Intuitively, we can also see that if the initial problem is feasible, then x_0 = 0, and the solution of the auxiliary problem will correspond to the solution of the initial LP. Nonetheless, if the initial problem is infeasible, then x_0 > 0. This is the intuition behind Phase I of the Simplex. We are going to work with the new constraints, and the auxiliary problem's objective will be to find a minimum for x_0. If the final solution of the auxiliary problem is x_0 = 0, we will conclude that the original problem is feasible. An initial solution to the original problem will be obtained, so we can then start with Phase II of Simplex. Formally, the auxiliary problem has the following form:
max −x_0
s.t. Ax + x_s − x_0 1 = b
     x, x_s ≥ 0
     x_0 ≥ 0,
where x s is the slack variable vector we added to form the dictionary, and x 0 is the new variable we use to create the auxiliary problem. Before, we said that the auxiliary problem is always feasible, and we can see that by looking at the solution associated with the initial dictionary of the auxiliary problem:
x = 0,   x_0 = −min(min_i b_i, 0),   x_s = b + x_0 1.
The value of x_0 is chosen so as to make the problem feasible, and therefore it must bring all the variables at least up to zero. In our previous example b_min = −2, and therefore x_0 = 2 in order to make x_3 and x_5 nonnegative. If we take a look at the slack variables, we can see that x_s^i = b_i − b_min. Since by definition b_i ≥ b_min, we know that the slack variables will also be nonnegative, and therefore the auxiliary problem will always be feasible.
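The construction of the auxiliary problem is mechanical; a small helper of our own (not part of the text) sketches it:

```python
def auxiliary_problem(A, b):
    """Build the Phase I auxiliary LP: max -x0  s.t.  A x - x0 * 1 <= b, x, x0 >= 0.

    x0 is appended as an extra last column.  Also returns the feasible starting
    point described in the text: x = 0, x0 = -min(b_min, 0), slacks x_s = b + x0 * 1.
    """
    A_aux = [row + [-1] for row in A]      # subtract x0 from every row
    c_aux = [0] * len(A[0]) + [-1]         # objective: maximise -x0
    x0 = max(0, -min(b))                   # x0 = -min(b_min, 0)
    slack = [bi + x0 for bi in b]          # all nonnegative by construction
    return A_aux, c_aux, x0, slack
```

On the example above (A with rows (−2, 1), (0, 1), (1, −2), (1, 0) and b = (−2, 4, −2, 4)), this yields x_0 = 2 and slacks (0, 6, 0, 6), matching x_3 = 0, x_4 = 6, x_5 = 0, x_6 = 6.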
Coming back to our example, this is the complete auxiliary problem corresponding to the previous LP:
max −x_0
s.t. −2x_1 + x_2 ≤ −2 + x_0
     x_2 ≤ 4 + x_0
     x_1 − 2x_2 ≤ −2 + x_0
     x_1 ≤ 4 + x_0
     x_1, x_2, x_0 ≥ 0
Let us now construct the initial dictionary for the auxiliary problem:
x_3 = −2 + x_0 + 2x_1 − x_2
x_4 = 4 + x_0 − x_2
x_5 = −2 + x_0 − x_1 + 2x_2
x_6 = 4 + x_0 − x_1
w = 0 − x_0.
The pivoting of the auxiliary dictionary has a couple of special rules that we need to follow: (i) the initial move will always be to make x 0 the entering variable, and the leaving variable will be the one with the least value b i , (ii) whenever x 0 is one of the possible leaving variables, preferentially choose it. In our example, if x 0 enters, the leaving variable can either be x 3 or x 5 . We choose x 5 and obtain the following dictionary:
x_0 = 2 + x_1 − 2x_2 + x_5
x_3 = 0 + 3x_1 − 3x_2 + x_5
x_4 = 6 + x_1 − 3x_2 + x_5
x_6 = 6 − 2x_2 + x_5
w = −2 − x_1 + 2x_2 − x_5.
The next pivot is done with x 2 as entering and x 3 as leaving variable, leading to the following dictionary:
x_0 = 2 − x_1 + (2/3)x_3 + (1/3)x_5
x_2 = 0 + x_1 − (1/3)x_3 + (1/3)x_5
x_4 = 6 − 2x_1 + x_3
x_6 = 6 − 2x_1 + (2/3)x_3 + (1/3)x_5
w = −2 + x_1 − (2/3)x_3 − (1/3)x_5.
Finally, after choosing x_1 as entering variable, we see that x_0 is the leaving variable, which leads to the final dictionary of the auxiliary problem:

x_1 = 2 − x_0 + (2/3)x_3 + (1/3)x_5
x_2 = 2 − x_0 + (1/3)x_3 + (2/3)x_5
x_4 = 2 + 2x_0 − (1/3)x_3 − (2/3)x_5
x_6 = 2 + 2x_0 − (2/3)x_3 − (1/3)x_5
w = 0 − x_0.

As we can see, the solution associated with the final auxiliary dictionary is x_0 = 0, x_1 = 2, x_2 = 2, x_3 = 0, x_4 = 2, x_5 = 0, x_6 = 2, and the final objective value is w = 0, which means that the original LP is feasible. As we can see, the point x_1 = 2, x_2 = 2 is inside the feasible region depicted in Figure 3.8.

The question now is, how do we construct a feasible dictionary for the original LP, so we can start Phase II of Simplex? The answer is simple: we just eliminate x_0 from the constraints and rewrite the objective function z with respect to the new non-basic variables. Here is the resulting dictionary:
x_1 = 2 + (2/3)x_3 + (1/3)x_5
x_2 = 2 + (1/3)x_3 + (2/3)x_5
x_4 = 2 − (1/3)x_3 − (2/3)x_5
x_6 = 2 − (2/3)x_3 − (1/3)x_5
z = 6 + (4/3)x_3 + (5/3)x_5.
Recall that the original objective function was z = x 1 + 2x 2 , which we just rewrite by substituting x 1 and x 2 . The solution associated with this dictionary is x 1 = 2, x 2 = 2, x 3 = 0, x 4 = 2, x 5 = 0, x 6 = 2, which is a feasible solution. Therefore, now we can use this dictionary to start Phase II of the Simplex algorithm to find the optimal solution.
Infeasible problems
In this section we just want to present an infeasible problem, and how the auxiliary problem helps us determine its infeasibility. Let us consider the following LP:
max 2x_1 − 3x_2
s.t. −x_1 + x_2 ≤ −3
     2x_1 + x_2 ≤ 10
     x_1 − 2x_2 ≤ −2
     x_1, x_2 ≥ 0
with associated initial dictionary:
x_3 = −3 + x_1 − x_2
x_4 = 10 − 2x_1 − x_2
x_5 = −2 − x_1 + 2x_2
z = 0 + 2x_1 − 3x_2.
The solution associated with this dictionary is x_1 = 0, x_2 = 0, x_3 = −3, x_4 = 10, x_5 = −2, which is infeasible. We therefore start Phase I of the Simplex algorithm by constructing the auxiliary problem:
x_3 = −3 + x_0 + x_1 − x_2
x_4 = 10 + x_0 − 2x_1 − x_2
x_5 = −2 + x_0 − x_1 + 2x_2
w = 0 − x_0.
We start the pivoting for auxiliary problems by choosing x_0 as entering variable and x_3 as leaving, obtaining the following dictionary:

x_0 = 3 − x_1 + x_2 + x_3
x_4 = 13 − 3x_1 + x_3
x_5 = 1 − 2x_1 + 3x_2 + x_3
w = −3 + x_1 − x_2 − x_3.

We continue pivoting by making x_1 enter and x_5 leave the basis:

x_0 = 5/2 − (1/2)x_2 + (1/2)x_3 + (1/2)x_5
x_1 = 1/2 + (3/2)x_2 + (1/2)x_3 − (1/2)x_5
x_4 = 23/2 − (9/2)x_2 − (1/2)x_3 + (3/2)x_5
w = −5/2 + (1/2)x_2 − (1/2)x_3 − (1/2)x_5.

Next, we make x_2 enter and x_4 leave:

x_0 = 11/9 + (5/9)x_3 + (1/9)x_4 + (1/3)x_5
x_1 = 13/3 + (1/3)x_3 − (1/3)x_4
x_2 = 23/9 − (1/9)x_3 − (2/9)x_4 + (1/3)x_5
w = −11/9 − (5/9)x_3 − (1/9)x_4 − (1/3)x_5.
Once we reach this dictionary, we see that there are no possible entering variables, and we reach a solution where x_0 = 11/9 > 0. We can only conclude that the original problem is infeasible.
If we take another look at the constraints of the original problem, and we sum the first and the third constraints, we get:
−x_1 + x_2 + x_1 − 2x_2 ≤ −3 − 2  −→  −x_2 ≤ −5  −→  x_2 ≥ 5.
And if we take the second equation and subtract the first one, we get:
2x_1 + x_2 + x_1 − x_2 ≤ 10 + 3  −→  3x_1 ≤ 13  −→  x_1 ≤ 13/3.
If we now substitute these two bounds into the first constraint, its left-hand side satisfies

−x_1 + x_2 ≥ −13/3 + 5 = 2/3 > −3.

x_2 has a value of 5 or larger, while x_1 cannot be larger than 13/3, which means that the minimum value of the first constraint −x_1 + x_2 will be 2/3, which is clearly larger than −3. Therefore, we can see that the problem is, indeed, infeasible.

Graph model representation

A walk from node i_1 ∈ V to i_t ∈ V is a sequence i_1, i_2, . . . , i_t of nodes such that (i_k, i_{k+1}) ∈ E for k = 1, . . . , t − 1. A walk is called a path if it has no repeated nodes. In Figure 3.9(b) we show an example of a walk 1, 2, 3, 4, 5, 3 in green; note that it is not a path since node 3 is repeated. We depict a path 1, 2, 3, 5 in orange.
The distance between u, v ∈ V is the smallest t such that there exists a path i_1, . . . , i_t in G with i_1 = u and i_t = v. The diameter of G is the largest distance between two nodes of G. In Figure 3.9(c) we show a case where the longest distance between any two nodes in the graph is 2; therefore, the diameter of the graph is 2. Now that we have defined some basic concepts of graphs, we can proceed by representing the polyhedra used to describe Linear Programs as graphs.
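These notions are straightforward to compute; a sketch using breadth-first search (here distances count edges, the common convention, and the helper names are ours):

```python
from collections import deque

def bfs_distances(adj, source):
    """Hop distances from source; adj maps each node to its list of neighbours."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:            # first visit gives the shortest hop count
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def diameter(adj):
    """Largest distance between any two nodes of a connected graph."""
    return max(max(bfs_distances(adj, s).values()) for s in adj)
```

For the path graph 1–2–3, `diameter({1: [2], 2: [1, 3], 3: [2]})` returns 2.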
A polyhedron P = {x ∈ R^n : Ax ≤ b} with vertices defines a graph G_P = (V, E) as follows: the set of nodes V is the set of vertices of P, and (v_1, v_2) ∈ E ⇐⇒ v_1 and v_2 are adjacent in P (see Figure 3.10). Considering the previously defined Linear Program max{c^T x : x ∈ R^n, Ax ≤ b}, we know that the Simplex algorithm walks along the edges of the graph G_P of P = {x ∈ R^n : Ax ≤ b}. The question we asked ourselves in previous sections was whether there is a version of Simplex requiring a polynomial number of iterations. A necessary condition for this is that the diameter of G_P must be polynomial. We define ∆(n, m) as the diameter of a graph G_P of a polyhedron P ⊆ R^n described by m inequalities. The best bound for ∆(n, m) found so far was presented in 1992 by Kalai and Kleitman [57]: ∆(n, m) ≤ m^{1+log n}. This bound belongs to a family of functions called quasi-polynomial, which grow much slower than exponential functions, but not as slowly as polynomial functions.
Now we introduce an important property of the graph G_P related to the polyhedron P = {x ∈ R^n : Ax ≤ b}: G_P is connected. Furthermore, for each pair of vertices u, v there exists a path connecting u and v such that each inequality of Ax ≤ b active at both u and v is also active at each vertex of that path.
Let us look at the example shown in Figure 3.11, where we have our graph G P drawn in black and two vertices u and v marked in red. Inequality x 1 ≤ 1 is active at both vertices and there is a path (marked in red) along which this constraint is also active.
[Figure 3.11: A polyhedron in (x_1, x_2, x_3) with the graph G_P drawn in black; the constraint x_1 ≤ 1 is active at u and v and along the red path between them.]

The most interesting thing is that vertices and feasible bases are equivalent concepts: the node set of the graph G_P = (V, E) can be viewed as a set of feasible bases.
Let us look at the following example shown in Figure 3.12. There are six constraints active for this polyhedron (blue edges); they are written and numbered on the right side of the Figure. For each vertex, we can determine the active constraints, and therefore form the basis of that vertex. We show some of the vertex–basis correspondence in Figure 3.12.

[Figure 3.12: Unit cube in (x_1, x_2, x_3) with constraints 1. x_1 ≥ 0, 2. x_2 ≥ 0, 3. x_3 ≥ 0, 4. x_1 ≤ 1, 5. x_2 ≤ 1, 6. x_3 ≤ 1; vertex (0, 0, 0) corresponds to basis {1, 2, 3}, vertex (1, 0, 0) to {4, 2, 3}, and vertex (1, 1, 1) to {4, 5, 6}.]
Matchings and vertex covers
A graph G = (V, E) is bipartite if one can partition V into V = A ∪ B such that each edge (u, v) ∈ E satisfies u ∈ A, v ∈ B.
As we can see in Figure 3.13, edges within a set are not allowed (marked in red), while edges that connect a node of one set with a node of the other set are allowed (marked in green). Each edge has a weight w ∈ R_{≥0}, which can be used to represent costs, distances, etc., depending on what the graph is modeling.
FIGURE 3.13: Bipartite graph: two sets A and B; edges between sets are allowed (green) while edges within the same set are not allowed (red). Edges are assigned a weight w.
A matching is a subset M ⊆ E of the edges such that each pair e 1 , e 2 ∈ M, e 1 = e 2 satisfies e 1 ∩ e 2 = ∅.
The maximum weight (bipartite) matching problem can be defined as follows: given a (bipartite) graph G = (V, E) and edge weights w ∈ R_{≥0}, determine a matching M ⊆ E such that w(M) = Σ_{e∈M} w_e is maximal.
Let us consider a typical example of a matching problem in bipartite graphs: the job assignment problem. The problem is very simple: we have four job openings and four applicants. Each applicant has a performance score for each job, and we want to maximize the total performance score for the company. This can be translated to a bipartite graph as shown in Figure 3.14(a). One possible match is shown with green edges in Figure 3.14(b), but now the question is whether this match is maximal or not. In this case, the sum of the weight of the edges used in the matching is equal to 15.
In order to find out if this matching is optimal, we use the concept of w-vertex covers.
The relation between w-vertex covers and the maximum weight matching problem is similar to the relation between primal and dual problems.
A w-vertex cover is a vector y ∈ Z^V_{≥0} such that ∀(u, v) ∈ E : y_u + y_v ≥ w_{uv}. The value of a w-vertex cover is Σ_{v∈V} y_v. An example of a w-vertex cover for the job assignment bipartite graph can be found in Figure 3.14(c). In this case, we need to assign a value to each node such that the sum of the values at the endpoints of every edge is at least the weight of that edge.
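The weak-duality relation between matchings and w-vertex covers can be checked directly on a tiny instance (brute force, for illustration only; the graph and weights below are our own toy example, not the job assignment instance from the figure):

```python
from itertools import combinations

def is_matching(edges):
    """No two chosen edges share an endpoint."""
    return all(set(e1).isdisjoint(e2) for e1, e2 in combinations(edges, 2))

def max_matching_weight(w):
    """w maps edge (u, v) -> weight; exhaustive search over edge subsets."""
    best = 0
    for k in range(1, len(w) + 1):
        for sub in combinations(w, k):
            if is_matching(sub):
                best = max(best, sum(w[e] for e in sub))
    return best

def is_w_vertex_cover(y, w):
    """y_u + y_v >= w_uv must hold for every edge (u, v)."""
    return all(y.get(u, 0) + y.get(v, 0) >= wuv for (u, v), wuv in w.items())
```

For w = {("a1","b1"): 3, ("a1","b2"): 1, ("a2","b1"): 2}, the maximum matching weight is 3, and the cover y = {"a1": 1, "b1": 2} is feasible with value 3, so the two optima meet, as strong duality predicts for bipartite graphs.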
Back to Linear Programs and towards Integer Programs
Now we can return to Linear Programs to find another way to prove the weak duality on bipartite graphs, move towards strong duality and later on discuss integer programs and linear relaxation. These last concepts are extremely important for the multiple people tracking formulation we present in Chapter 4.
Let us first convert our graph to a Linear Programming formulation. We want to describe the matchings by linear constraints. We start by enumerating all edges and describing matchings as vectors x ∈ {0, 1}^{|E|}, with one entry per edge and one constraint per node. For a triangle graph with edges e_1 = (v_1, v_2), e_2 = (v_1, v_3) and e_3 = (v_2, v_3), for instance, the constraints read

[1 1 0]        [1]
[1 0 1]  x  ≤  [1] ,   where x ∈ {0, 1}^3.
[0 1 1]        [1]
These constraints have a similar expression to the ones we have seen before for Linear Programs,

Ax ≤ b,   x ∈ Z^n,     (3.26)
except that now x can now only take integer values. This defines an Integer Program, a problem like the one depicted in Figure 3.16, where the conditions form the red polyhedron. The black points represent integer solutions, and those within the polyhedron are feasible. The green arrow is the direction of maximization of our optimization problem.
As we can see, if the program were a Linear Program like the ones defined in previous sections, the optimal solution would be the vertex marked by the green dot. But since our problem has decision variables which can only take integer values, we have to find the best integer-valued solution, which is marked by the orange dot. The biggest drawback of Integer Programs is that they are N P-hard, which means that they cannot be solved in polynomial time unless P = N P. Though N P complexity is not a central topic of this thesis, we give a short explanation in the following lines. We refer the interested reader to [53] for more details.
A note on N P complexity

• Class P: a decision problem P belongs to P if it can be solved by a deterministic Turing machine in polynomial time.
• Class N P: a decision problem P is in N P if for every instance of P that has a positive result, there is a certificate proving the positive result, which can be verified in polynomial time.
• Class N P-complete: a decision problem P is said to be N P-complete if: (i) P ∈ N P and (ii) all other problems in the class N P are reducible to P in polynomial time. This implies that, if there is an efficient algorithm for some N P-complete problem, there is an efficient algorithm for every problem in the class N P. As a result, an N P-complete problem is at least as hard as every other problem in the class N P.
• Class N P-hard: a problem P is said to be N P-hard if all other problems in the class N P are reducible to P in polynomial time. Informally, an N P-hard problem is at least as hard as the hardest problems in N P.

Coming back to the matching LP above, the relaxation allows fractional values: the point x_1 = x_2 = x_3 = 1/2 satisfies all three constraints with objective value 3/2. To prove that this is indeed the optimal solution of the relaxation, we can add the three constraints and obtain 2x_1 + 2x_2 + 2x_3 ≤ 3, which brings us to

x_1 + x_2 + x_3 ≤ 3/2.

Note that the same bound forces any integral solution to have value at most 1, so the relaxed optimum strictly exceeds the integral one.
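Assuming the three constraints come from a triangle graph (one constraint per vertex, consistent with the bound 2x_1 + 2x_2 + 2x_3 ≤ 3 above), the gap between the integer program and its relaxation can be verified by enumeration:

```python
from fractions import Fraction
from itertools import product

# One constraint per triangle vertex: each vertex touches two of the three edges.
A = [[1, 1, 0], [1, 0, 1], [0, 1, 1]]

def feasible(x):
    return all(sum(a * xi for a, xi in zip(row, x)) <= 1 for row in A)

# Integer optimum over x in {0,1}^3: at most one edge fits in a matching.
int_opt = max(sum(x) for x in product((0, 1), repeat=3) if feasible(x))

# The fractional point (1/2, 1/2, 1/2) is LP-feasible with value 3/2.
half = [Fraction(1, 2)] * 3
```

Here int_opt evaluates to 1, while the fractional point has value 3/2: the LP relaxation genuinely exceeds every integral matching on this graph.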
We can express the minimum w-vertex cover problem as an Integer Program as well, as shown in Equation (3.29), and its LP-relaxed version in Equation (3.30). In this case, since we are working with minimization, the LP relaxation version is a lower bound of the integer solution.
INTEGER PROGRAM (3.29):
min Σ_{v∈V} y_v
s.t. ∀(u, v) ∈ E : y_u + y_v ≥ w_{uv}
     ∀v ∈ V : y_v ≥ 0
     y ∈ Z^{|V|}

LP RELAXATION (3.30):
min Σ_{v∈V} y_v
s.t. ∀(u, v) ∈ E : y_u + y_v ≥ w_{uv}
     ∀v ∈ V : y_v ≥ 0
     y ∈ R^{|V|}
With these Linear Programming representations, we can again prove the weak duality between the maximum weight matching and the w-vertex cover, a relationship summarized by the following chain:

max { Σ_{e∈E} w_e x_e : ∀v ∈ V : Σ_{e∈δ(v)} x_e ≤ 1, x ≥ 0, x ∈ Z^{|E|} }
≤ max { Σ_{e∈E} w_e x_e : ∀v ∈ V : Σ_{e∈δ(v)} x_e ≤ 1, x ≥ 0, x ∈ R^{|E|} }
≤ min { Σ_{v∈V} y_v : ∀(u, v) ∈ E : y_u + y_v ≥ w_{uv}, y ≥ 0, y ∈ R^{|V|} }
≤ min { Σ_{v∈V} y_v : ∀(u, v) ∈ E : y_u + y_v ≥ w_{uv}, y ≥ 0, y ∈ Z^{|V|} }

The only thing left to prove is that the LP relaxation of the maximum weight matching is equal to the LP relaxation of the minimum w-vertex cover, which would give the proof of strong duality.
Total unimodularity and strong duality
If we again take a look at the LP relaxed versions of the problems, we can see that they have the general forms described in Equations (3.31) and (3.32). In this section we will take a closer look at the matrix A G and see that these two problems are, in fact, duals of each other.
max Σ_{e∈E} w_e x_e
s.t. ∀v ∈ V : Σ_{e∈δ(v)} x_e ≤ 1
     ∀e ∈ E : x_e ≥ 0, x ∈ R^{|E|}

or, in matrix form, max w^T x s.t. A_G x ≤ 1, x ≥ 0.     (3.31)

min Σ_{v∈V} y_v
s.t. ∀(u, v) ∈ E : y_u + y_v ≥ w_{uv}
     ∀v ∈ V : y_v ≥ 0, y ∈ R^{|V|}

or, in matrix form, min 1^T y s.t. A_G^T y ≥ w, y ≥ 0.     (3.32)
Let G = (V, E) be a graph and suppose the nodes and edges are ordered as v 1 , . . . , v n and e 1 , . . . , e m , respectively. The matrix A G ∈ {0, 1} n×m with
A_G^{ij} = 1 if v_i ∈ e_j, and 0 otherwise,
is the node-edge incidence matrix of G. By using Linear Programming strong duality, we will be able to prove that the optimal solutions of the LP relaxation versions of the problems are in fact equal to the solutions of their integer counterparts. This is true for bipartite graphs and, in general, for Linear Programs defined by a node-edge incidence matrix A which is totally unimodular.
A matrix A with entries in {0, ±1} is totally unimodular if the determinant of each square submatrix of A is equal to 0, 1 or −1.
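Total unimodularity can be checked directly from the definition (exponential in the matrix size, so only practical for small examples; the helper names are ours):

```python
from itertools import combinations

def det(M):
    """Integer determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def is_totally_unimodular(A):
    """Every square submatrix must have determinant in {-1, 0, 1}."""
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                sub = [[A[i][j] for j in cols] for i in rows]
                if det(sub) not in (-1, 0, 1):
                    return False
    return True
```

The incidence matrix of a bipartite path v_1–v_2–v_3 passes the check, while that of a triangle (an odd cycle, hence non-bipartite) fails with a 3 × 3 determinant of ±2.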
Theorem 3.12. Let G = (V, E) be a bipartite graph. The node-edge incidence matrix A G of G is totally unimodular.
The proof of Theorem 3.12 is interesting since we can use it to prove the total unimodularity of the matrix that will define our tracking problem, as we will see in Chapter 4.
Proof. We will give a proof depending on the value of k, where B is a k × k submatrix of A_G. Remembering that the columns of A_G represent edges and the rows represent nodes, we can see that each column of A_G contains two 1's, for the two nodes that the edge connects, while the rest of its entries are 0's.
• k = 1. If we only take a matrix of one element, this element can be B = 0, ±1, which means det(B) = 0, ±1.
• k > 1, B has a column with at most one entry equal to 1. If that column is all zeros, then det(B) = 0 immediately. If it has exactly one entry equal to 1, we can expand the determinant along that column: det(B) = ±det(B′), where B′ is the (k − 1) × (k − 1) submatrix obtained by deleting the corresponding row and column. Since B′ is again a submatrix of A_G, we can use induction on k to conclude det(B) = 0, ±1.

• k > 1, each column of B contains exactly two entries equal to 1. We can order the rows according to the set they belong to. Remember that a bipartite graph has its nodes partitioned into two sets, and every edge has one endpoint in each set, so each column of B has one 1 in a row of the first set and one 1 in a row of the second set. Summing the rows of the first set therefore gives the same vector as summing the rows of the second set; the rows of B are linearly dependent, and det(B) = 0.

Going back to integer and linear programs, we can state the following theorem.

Theorem 3.13. If A ∈ Z^{m×n} is totally unimodular and b ∈ Z^m, then every vertex of the polyhedron {x ∈ R^n : Ax ≤ b} is integral.

This basically tells us that, as long as our matrix A is totally unimodular, even if the indicator variable x is not specified to be integer-valued, the optimal solution will always be integral.
Therefore, we can state the following corollary.
Corollary 3.14. If A ∈ Z^{m×n} is totally unimodular, b ∈ Z^m, and max{c^T x : x ∈ R^n, Ax ≤ b, x ≥ 0} is bounded, then

max{c^T x : x ∈ R^n, Ax ≤ b, x ≥ 0} = max{c^T x : x ∈ Z^n, Ax ≤ b, x ≥ 0}.
From all this, we can conclude that the maximum weight of a matching is equal to the minimum value of a w-vertex cover, which is the strong duality in a bipartite graph. This is all summarized in the following theorem by Egerváry [58].

Theorem 3.15 (Egerváry). Let G = (V, E) be a bipartite graph with edge weights w ∈ Z^E_{≥0}. Then the maximum weight of a matching in G equals the minimum value of a w-vertex cover.

If we take a look at Figure 3.18, we see that we have proven all inequalities to be equal when A_G is totally unimodular, which is in fact true for bipartite graphs, so we have proven Theorem 3.15.
A similar but less general theory was developed independently by König in 1931 [59]. It defines a vertex cover of a graph G = (V, E) to be a subset U ⊆ V such that e ∩ U ≠ ∅ for each e ∈ E. This is the same as a w-vertex cover in the special case when w = 1, i.e. an all-ones vector.
The shortest path problem
After explaining the main concepts of Linear Programming and their relationship to graph theory, in this section we focus on solvers for graphs, namely shortest paths.
Though we do not use this particular algorithm to solve our multiple people tracking problem, it has been widely used in the literature [28], [60] and therefore we consider them to be a valuable concept to be included in this Chapter. Towards the end of the section, we will see again the connection of this method to Linear Programming.
A graph G = (V, E) on n nodes can be represented by its adjacency matrix M ∈ {0, 1}^{n×n}, defined as

M_{ij} = 1 if (i, j) ∈ E, and 0 otherwise.

We can see an example of this representation in Figure 3.20(a). For a directed graph D = (V, A) with arc set A, the adjacency matrix is defined analogously:

M_{uv} = 1 if (u, v) ∈ A, and 0 otherwise.
This means that a 1 represents not an undirected edge but a directed arc, so we have to pay attention to the beginning and end point, since the direction of the arc changes where the 1 is placed within the adjacency matrix. We can see the same example as before in Figure 3.20(b), but this time as a directed graph. As we can see, the adjacency matrix is no longer symmetric.
The cost of a walk W = v_0, v_1, . . . , v_k with respect to arc costs c is

c(W) = Σ_{i=1}^{k} c(v_{i−1}, v_i).
The distance between two nodes s and t is the cost of a shortest path from s to t. For the multiple people tracking problem, we can use a solver based on Dijkstra's algorithm or Bellman-Ford algorithm that will find a series of k-shortest paths, each one representing a valid pedestrian trajectory [28,60].
The Bellman-Ford method
In this section, we describe a method to compute minimum length walks given a weighted directed graph D = (V, A) with no cycles of negative length and a designated node s ∈ V . The goal of the method is to compute shortest path distances from s to all other nodes, assuming that each node is reachable from s.
For k ≥ 0 and t ∈ V , we can define d k (t) to be the minimum length of any s − t walk, traversing at most k arcs. For example, d 0 (s) = 0, since the length of a walk from s to s traversing at most 0 arcs is, in fact, also 0. d 0 (t) = ∞ unless t = s, since we cannot reach any other node from s by traversing at most 0 arcs.
Let us assume d_i(t) is known for each i ≤ k and each t ∈ V, and now we want to compute d_{k+1}(t) for each t ∈ V. We can encounter two cases: the first one is when a shortest walk traversing at most k + 1 arcs traverses exactly k + 1 arcs; the second one is when the shortest walk traversing at most k + 1 arcs actually traverses at most k arcs, i.e. d_{k+1}(t) = d_k(t). Both of these are upper bounds of d_{k+1}(t).
To sum up, for k ≥ 0 and t ∈ V:

d_{k+1}(t) = min{ d_k(t), min_{(u,t)∈A} ( d_k(u) + c(u, t) ) }.

As an example, consider a graph with nodes s, a, b, c, d, e, where we initialize d_0(s) = 0 and ∞ for all other nodes. We start the first iteration with k = 0, where we compute d_1 accordingly. By traversing at most 1 arc from s, we can reach nodes a and c with lengths 3 and 4, respectively. We therefore obtain the distances shown in the table below. Then, for each t ∈ V, the computed d_{n−1}(t) is the distance between s and t.

       s    a    b    c    d    e
d_0    0    ∞    ∞    ∞    ∞    ∞
d_1    0    3    ∞    4    ∞    ∞
d_2    ·    ·    ·    4    ∞    ∞
If we do not encounter any negative length cycles, we just need to perform n − 1 iterations, where n is the number of vertices of the graph, and we are guaranteed to find the shortest path solution from s to all vertices in the graph.
The Bellman-Ford algorithm runs in time O(|V||A|). While for most graphs the algorithm needs far fewer than |V| − 1 iterations, it still does not scale well.
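The iteration above translates almost line by line into code (a minimal version; the node labels, the in-place relaxation and the early exit are our choices):

```python
def bellman_ford(nodes, arcs, s):
    """Distances from s; arcs is a list of (u, v, cost) with no negative cycles.

    Implements d_{k+1}(t) = min(d_k(t), min over arcs (u, t) of d_k(u) + c(u, t));
    relaxing in place can only converge faster than the strict round-based form.
    """
    INF = float("inf")
    d = {v: INF for v in nodes}
    d[s] = 0
    for _ in range(len(nodes) - 1):      # n - 1 rounds always suffice
        updated = False
        for u, v, cost in arcs:
            if d[u] + cost < d[v]:
                d[v] = d[u] + cost
                updated = True
        if not updated:                  # distances settled early
            break
    return d
```

For instance, `bellman_ford(["s", "a", "b", "c"], [("s", "a", 3), ("s", "c", 4), ("a", "c", -2), ("c", "b", 5)], "s")` returns {"s": 0, "a": 3, "c": 1, "b": 6}, the arc a → c with negative cost being handled correctly.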
Shortest path expressed as a Linear Program
There is a natural linear programming formulation for the shortest path problem.
Given a directed graph D = (V, A) with source node s, target node t, and cost c(u, v) for each arc (u, v) ∈ A, consider the program with variables f(u, v):

min Σ_{(u,v)∈A} c(u, v) f(u, v)     (3.33)

subject to

f ≥ 0     (3.34)

and, for all u ∈ V,

Σ_{v∈V} f(u, v) − Σ_{v∈V} f(v, u) = 1 if u = s; −1 if u = t; 0 otherwise.     (3.35)
This LP has the special property that it is integral; more specifically, the decision variables of every basic optimal solution (when one exists) assume values of 0 or 1. This is because the condition matrix is totally unimodular, as explained in Section 3.7.2.
The shortest path problem can be seen from a network flow point of view [53], where we are interested in sending a commodity through a network at the smallest cost possible (see Equation (3.33)). In this case, each commodity sent through the network is one unit of "flow", represented by the variable f . The capacity of an arc is defined as the amount of flow that can be sent through that arc; this is the first condition of the LP, shown in Equation (3.34). The mass balance constraints are defined for each node and make sure that all flow that enters a node also exits that node as expressed in Equation (3.35).
k-shortest paths
To conclude this chapter, we will introduce the k-shortest paths algorithm. There are several types of k-shortest paths problems that can be solved, e.g., finding k paths with decreasing costs [61], but for the multiple object tracking problem we are interested in the problem of finding k shortest disjoint paths [53, 62]. This problem is based on the assumption that we are interested in finding edge-disjoint paths, i.e. paths that do not share common edges. This exclusion property is key to the multiple people tracking problem, since a node represents a detected person and therefore we cannot assign it to two trajectories.
Node potentials.
Let us first start by introducing some useful concepts. In many network flow algorithms it is useful to measure the cost of an arc relative to "imputed" costs associated with its incident nodes. These costs are typically intermediate data that is computed within the context of an algorithm. Let D = (V, A) be a directed graph; we associate to each node i ∈ V a number π(i), which we refer to as the potential of that node. We can define a reduced cost (or length) of an arc as:
c^π(i, j) = c(i, j) − π(i) + π(j).
Often algorithms work with these reduced costs, since they have an interesting property: minimum cost flow problems with arc costs c(i, j) or c π (i, j) have the same optimal solutions since their objective functions only differ by a constant.
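A quick numerical illustration of why the optima coincide: along any s–t path the potentials telescope, so every path's cost is shifted by the same constant −π(s) + π(t). The graph and potentials below are arbitrary examples.

```python
def path_cost(costs, path):
    """Total cost of a path given as a list of nodes."""
    return sum(costs[(u, v)] for u, v in zip(path, path[1:]))

def reduced(costs, pi):
    """Apply c^pi(i, j) = c(i, j) - pi(i) + pi(j) to every arc."""
    return {(u, v): c - pi[u] + pi[v] for (u, v), c in costs.items()}

costs = {(0, 1): 3.0, (1, 3): 2.0, (0, 2): 1.0, (2, 3): 5.0}
pi = {0: 0.0, 1: 3.0, 2: 1.0, 3: 5.0}        # arbitrary node potentials
rc = reduced(costs, pi)
for p in ([0, 1, 3], [0, 2, 3]):
    # every s-t path is shifted by the same constant -pi[0] + pi[3] = 5
    print(path_cost(rc, p) - path_cost(costs, p))   # → 5.0 both times
```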
Residual network.
Sometimes it is convenient to measure flow f not in absolute terms, but rather in terms of increments with respect to a given flow f_0. Given an original graph G, we define a residual network G(f_0) with respect to flow f_0 as follows. We replace arc (i, j) in the original network with two arcs: (i, j) with cost c(i, j) and residual capacity r(i, j) = u(i, j) − f_0(i, j), and another arc (j, i) that has cost −c(i, j) and residual capacity r(j, i) = f_0(i, j), as shown in Figure 3.23. The residual network consists of only the arcs with a positive residual capacity. This provides us with the flexibility of working with a residual network: once we determine an optimal solution for it, we can convert it to an optimal solution of the original network.
FIGURE 3.23: An arc (i, j) with cost c(i, j), capacity u(i, j) and flow f_0(i, j) in the original network (left), and the two arcs that replace it in the residual network (right).
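The construction of Figure 3.23 can be sketched directly (costs and capacities below are hypothetical):

```python
def residual_network(arcs, flow):
    """arcs: {(i, j): (cost, capacity)}; flow: {(i, j): f0}.
    Returns residual arcs {(i, j): (cost, residual capacity)}, keeping
    only arcs with positive residual capacity."""
    res = {}
    for (i, j), (c, u) in arcs.items():
        f0 = flow.get((i, j), 0)
        if u - f0 > 0:
            res[(i, j)] = (c, u - f0)      # remaining forward capacity
        if f0 > 0:
            res[(j, i)] = (-c, f0)         # undo the flow at negative cost
    return res

arcs = {(0, 1): (4.0, 2), (1, 2): (3.0, 1)}
flow = {(0, 1): 1, (1, 2): 1}
print(residual_network(arcs, flow))
# → {(0, 1): (4.0, 1), (1, 0): (-4.0, 1), (2, 1): (-3.0, 1)}
```

Note that the saturated arc (1, 2) disappears from the residual network, while its reversal (2, 1) appears with negated cost.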
Shortest paths for multiple people tracking.
The idea of the successive shortest path algorithm [53] is to maintain optimality of the solution at every step while trying to achieve feasibility. It maintains a solution f that satisfies the nonnegativity and capacity constraints but violates the mass balance constraints of the nodes.
The details of the general k-shortest paths algorithm are given in Algorithm 7. The last steps of each of its k iterations are:

4. Create a residual graph from sending flow δ along P.

5. Compute the reduced costs c^π(i, j) = c(i, j) − π(i) + π(j).

end for
Let us look at the example of Figure 3.24, where we start with the initial graph depicted there. The algorithm is described as an edge-disjoint successive shortest path algorithm, which means that a node might be used by two or more different paths, as nodes a and b are in the example. Trajectories found previously can be changed if the algorithm pushes the flow back.
Computational complexity.
The Bellman-Ford algorithm has a complexity of O(|V||A|); while it can be applied to graphs with a wider range of inputs (in particular, negative arc costs), it is slower than Dijkstra's algorithm, which runs in O(|A| log |V|) when all arc costs are nonnegative. For the multiple people tracking problem, there exist negative costs in the network and therefore we cannot directly apply Dijkstra to find the shortest paths. Fortunately, we can convert this initial graph into an equivalent one by running Bellman-Ford once at the beginning and creating a graph with reduced costs using node potentials. For the remaining iterations, we can use Dijkstra's algorithm to find the shortest paths. This procedure is described in [60] for multiple people tracking with the network structure that we will present in Chapter 4.
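The Bellman-Ford-then-Dijkstra procedure can be sketched as follows. This is a simplified unit-capacity version with illustrative data, not the exact implementation of [53, 60]; it assumes every node is reachable from the source and that no negative cycle exists.

```python
import heapq
import math

def k_disjoint_paths(n, arcs, s, t, k):
    """Edge-disjoint successive shortest paths: one Bellman-Ford pass for
    node potentials, then up to k Dijkstra runs on reduced costs.
    arcs: (u, v, cost) triples, each with unit capacity."""
    p = [math.inf] * n                      # potentials from Bellman-Ford
    p[s] = 0.0
    for _ in range(n - 1):
        for u, v, c in arcs:
            if p[u] + c < p[v]:
                p[v] = p[u] + c
    res = {(u, v): c for u, v, c in arcs}   # residual arcs
    orig = set(res)
    flow = set()                            # original arcs carrying flow
    for _ in range(k):
        dist = [math.inf] * n               # Dijkstra with reduced costs
        prev = {}
        dist[s] = 0.0
        pq = [(0.0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for (a, b), c in list(res.items()):
                if a == u and d + c + p[u] - p[b] < dist[b]:
                    dist[b] = d + c + p[u] - p[b]   # reduced cost is >= 0
                    prev[b] = u
                    heapq.heappush(pq, (dist[b], b))
        if math.isinf(dist[t]):
            break                           # no further disjoint path exists
        v = t                               # augment one unit along the path
        while v != s:
            u = prev[v]
            res[(v, u)] = -res.pop((u, v))  # reverse the arc in the residual
            if (u, v) in orig:
                flow.add((u, v))
            else:
                flow.discard((v, u))        # flow pushed back off an old path
            v = u
        for v in range(n):                  # keep reduced costs nonnegative
            if not math.isinf(dist[v]):
                p[v] += dist[v]
    succ = {}                               # decompose the flow into paths
    for u, v in flow:
        succ.setdefault(u, []).append(v)
    paths = []
    while succ.get(s):
        path, v = [s], succ[s].pop()
        while True:
            path.append(v)
            if v == t:
                break
            v = succ[v].pop()
        paths.append(path)
    return paths

arcs = [(0, 1, 1.0), (0, 2, 2.0), (1, 2, 1.0), (1, 3, 2.0), (2, 3, 1.0)]
print(sorted(k_disjoint_paths(4, arcs, 0, 3, 2)))  # → [[0, 1, 3], [0, 2, 3]]
```

Extracting the paths from the final flow (rather than per iteration) is what allows earlier trajectories to be changed when flow is pushed back.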
Programming Linear Programs
There are several Linear Programming solvers available online. In this section, we give a quick overview of a C library which includes several Linear and Integer Programming solvers as well as the MatLab functions that allow us to solve LPs using Simplex.
GLPK Library
The GNU Linear Programming Kit (GLPK) package is intended for solving large-scale Linear Programs, Mixed Integer Programs (MIP), and other related problems. It is a set of routines written in C and organized in the form of a callable library. It can be downloaded from http://www.gnu.org/software/glpk/, where installation instructions can be found.
As an example, we will present how to write up the following LP:
max 4x_1 − 6x_2 + 3x_3
s.t. x_1 + 10x_2 + x_3 ≤ 5
2x_1 − x_2 = 0
x_1, x_2, x_3 ≥ 0
All GLP API data types and routines are defined in a header that should be included in all source files:
#include <glpk.h>
The problem object contains all the information of the LP, i.e. the objective function, the constraint matrix, the parameters of the solver, etc. It is created with glp_create_prob(); the objective direction, rows, columns and constraint matrix are then set through the corresponding glp_* routines, and the problem is solved with glp_simplex().
MatLab
MatLab provides a simple way to define and solve Linear Programs using the function linprog. The inputs are the cost vector c, the constraint matrix A and the right-hand-side vector b. If the problem has equality constraints, they can be specified separately through A_eq and b_eq; the LP of the previous section can be passed to linprog in this form directly.
Chapter 4 Linear Programming for Tracking
We have seen in previous chapters that tracking is commonly divided into object detection and data association. First, objects are detected in each frame of the sequence and second, the detections are matched to form complete trajectories. In Chapter 2, we presented an introduction to several state-of-the-art detection methods. In this chapter, we focus on data association, which is the core of this thesis. We define the data association problem formally and describe how to convert it to a minimum-cost network flow problem, which can be efficiently solved using Linear Programming. The idea is to build a graph in which nodes represent pedestrian detections. These nodes are fully connected to past and future observations by edges, which determine the relation between two observations with a cost. Thereby, the matching problem is equivalent to a minimum-cost network flow problem: finding the optimal set of trajectories is equivalent to sending flow through the graph so as to minimize the cost. This can be efficiently computed using the Simplex algorithm [64] or k-shortest paths [53] as presented in the previous chapter. In this chapter, we define the multiple object tracking problem using the Linear Programming formulation, which will be the basis for the contributions introduced in Chapters 5 and 6.
Related work: from local to global matching
The data association problem deals with keeping the identity of tracked objects given available detections. False alarms and missed detections, mainly due to occlusions, are two sources of inaccuracies in the data association problem, and these become more apparent as the density of objects to be tracked increases. Typically, data association is performed on a frame-by-frame basis, predicting pedestrians' motion from one frame to the next with, e.g., a Kalman filter [30] or particle filter [65][66][67], and then matching them with the detections using, e.g., the Hungarian algorithm [68] or the Auction algorithm [69]. While this type of approach is very useful for real-time applications [70], the matching decisions are made individually for each pedestrian and with only the information of the previous frame, which makes it difficult to distinguish targets in crowded environments and leaves the tracker defenseless against occlusions. Joint particle approaches, such as the joint probabilistic data association filter (JPDAF) [71], can be used to make a joint motion prediction for all pedestrians at the same time. Sampling can be done using, e.g., Markov Chain Monte Carlo (MCMC) [72,73], but matching is still limited to be frame-by-frame.
In order to include more information from previous frames, researchers have proposed several solutions: multi-hypothesis (MHT) approaches [74,75], which extend the prediction of a pedestrian's motion to several frames, thereby creating several hypotheses of what path the pedestrian might have followed; solving the matching problem for a small fixed number of frames [10]; using Bayesian networks to reason about how trajectories split and merge [76]; or dealing with difficult matching situations, such as matching people in groups, using the Nash Equilibrium of game theory [77]. Nonetheless, for most of these techniques computational time increases exponentially as more and more frames and objects are taken into account, since the search space of hypotheses quickly grows.
In contrast, in [78] an efficient approximative Dynamic Programming (DP) scheme was presented in which trajectories are estimated in succession. The advantage is that tracking each individual is done using the information of all frames. On the other hand, if a trajectory is formed using a certain detection, the other trajectories which are computed later will not be able to use that detection anymore. This obviously does not guarantee a global optimum for all trajectories.
Recent works show that global optimization can be more reliable in crowded scenes, as it solves the matching problem jointly for all tracks. The multiple object tracking problem is defined as a linear constrained optimization flow problem and Linear Programming (LP) [64] is commonly used to find the global optimum. Linear Programming is widely used for Computer Vision applications such as 3D shape matching [79,80], image segmentation [81] or pose estimation [82]. The idea to use it for people tracking was first published in [83], although this method requires a priori the number of targets to track, which limits its application in real tracking situations. In [28], the scene is divided into equally-sized cells, each represented by a node in the constructed graph. Using the information from the Probability Occupancy Map, the problem is formulated either as a max-flow and solved with Simplex, or as a min-cost flow and solved using k-shortest paths, which is a more efficient solution. In [84], the problem is also defined as a maximum flow on a hexagonal grid, but instead of matching individual detections, they make use of tracklets. There also exist continuous solutions which do not work with a discrete state space, i.e. a finite set of possible detection locations, but with a continuous state space which provides a more accurate pedestrian location. In [85] the authors propose a well-designed local optimization scheme in a continuous state space. Mixed solutions have also been presented [86] where tracking is performed in the discrete domain but trajectory estimation is performed continuously. In [87], global and local methods are combined to match trajectories across cameras and across time.
Finally, in [4,27,60] the tracking problem is formulated as a Maximum A-Posteriori (MAP) problem which is mapped to a minimum-cost network flow and then efficiently solved using LP. In this case, each node represents a detection, which means the graph is much smaller compared to [28,84]. In this chapter we detail the graph construction and creation of the system of linear equations as proposed in [4,27,60].
Multiple object tracking: Problem statement
Let O = {o_j^t} be a set of object detections with o_j^t = (p_j^t, t), where p_j^t = (x, y, z) is the 3D position and t is the time stamp. A trajectory is defined as a list of ordered object detections T_k = {o_{k_1}^{t_1}, o_{k_2}^{t_2}, ..., o_{k_N}^{t_N}}, with t_1 ≤ t_2 ≤ ... ≤ t_N. Optimizing Eq. (4.2) directly is intractable since the space of T is huge. Nonetheless, we make the assumption that trajectories cannot overlap (i.e., a detection cannot belong to two trajectories), which allows us to treat each trajectory independently and therefore decompose the equation as:

T* = arg max_T ∏_j P(o_j) ∏_{T_k ∈ T} P(T_k)    (4.3)
where P(o_j) is the likelihood of detection o_j and the trajectories are represented by a Markov chain:
P(T) = ∏_{T_k ∈ T} P_in(o_{k_1}^{t_1}) P(o_{k_2}^{t_2} | o_{k_1}^{t_1}) · · · P(o_{k_m}^{t_m} | o_{k_{m−1}}^{t_{m−1}}) · · · P(o_{k_n}^{t_n} | o_{k_{n−1}}^{t_{n−1}}) P_out(o_{k_n}^{t_n})    (4.4)
where P_in(o_{k_1}^{t_1}) is the probability that a trajectory k is initiated with detection o_{k_1}^{t_1}, P_out(o_{k_n}^{t_n}) the probability that the trajectory is terminated at o_{k_n}^{t_n}, and P(o_{k_m}^{t_m} | o_{k_{m−1}}^{t_{m−1}}) the probability that detection o_{k_{m−1}}^{t_{m−1}} is followed by o_{k_m}^{t_m} within the same trajectory.
Tracking with Linear Programming
In this section, we explain how to convert the MAP problem into a Linear Program, which is particularly interesting, since it can be efficiently solved in polynomial time, as explained in Chapter 3.
Let us recall the definition of a linear programming problem. It consists in minimizing or maximizing a linear function in the presence of linear constraints which can be both equalities and inequalities.
Minimize
c_1 f_1 + c_2 f_2 + ... + c_n f_n    (4.5)
Subject to

a_{11} f_1 + a_{12} f_2 + ... + a_{1n} f_n ≥ b_1    (4.6)
a_{21} f_1 + a_{22} f_2 + ... + a_{2n} f_n ≥ b_2
...
a_{m1} f_1 + a_{m2} f_2 + ... + a_{mn} f_n ≥ b_m
where Eq. (4.5) is the linear objective function and the inequalities of Eq. (4.6) are the linear constraints. In a minimum cost network flow problem, the objective is to find the values of the variables that minimize the total cost of the flows through the network. Defining the costs as negative log-likelihoods, and combining Equations (4.3) and (4.4), the following objective function is obtained:
T* = arg min_T Σ_{T_k ∈ T} − log P(T_k) − Σ_j log P(o_j)    (4.10)

= arg min_f Σ_i C_in(i) f_in(i) + Σ_{i,j} C_t(i, j) f_t(i, j) + Σ_i C_det(i) f_det(i) + Σ_i C_out(i) f_out(i)    (4.11)
subject to the following constraints:
• Edge capacities: assuming each detection can only correspond to one trajectory, the edge capacities have an upper bound of 1. Furthermore, two conditions have to be fulfilled in order to make sure that, if an observation is active, this is either the start or end of a trajectory, or it is in the middle of a trajectory:
f_in(i) + f_det(i) ≤ 1
f_out(i) + f_det(i) ≤ 1    (4.12)
• Flow conservation at the nodes:
f_in(i) + f_det(i) = Σ_j f_t(i, j)
Σ_j f_t(j, i) = f_out(i) + f_det(i)    (4.13)
• Exclusion property:
f ∈ {0, 1} (4.14)
The condition in Eq. 4.14 requires solving an integer program, which is known to be NP-complete. Nonetheless, we can relax the condition to obtain the following linear constraint:
0 ≤ f ≤ 1. (4.15)
Now the problem is defined and can be solved as a linear program. If certain conditions are fulfilled, namely that the constraint matrix A is totally unimodular, as explained in Chapter 3, the solution T * will still be integer, and therefore it will also be the optimal solution to the initial integer program. If the unimodularity condition is not fulfilled, as we will see for the graph structure of Chapter 6, we can always use branching [53] to transform fractional solutions into integers.
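The resulting min-cost flow can also be solved combinatorially. The sketch below (a toy instance, not the exact construction or solver of [4, 27, 60]) keeps sending one unit of flow, i.e. one trajectory, along the cheapest s–t path of the residual graph while that path has negative total cost; this is how the optimal number of trajectories emerges from the negative detection costs. Node names, costs, and the stopping rule are illustrative assumptions.

```python
import math

def min_cost_tracking(nodes, arcs, s, t):
    """Toy solver for the objective of Eq. (4.11): augment one trajectory
    at a time along the cheapest s-t path while that path has negative
    cost. arcs: {(u, v): cost}, all capacities 1; Bellman-Ford is used
    because costs can be negative."""
    res = dict(arcs)
    total, trajectories = 0.0, 0
    while True:
        dist = {v: math.inf for v in nodes}
        prev = {}
        dist[s] = 0.0
        for _ in range(len(nodes) - 1):
            for (u, v), c in res.items():
                if dist[u] + c < dist[v]:
                    dist[v], prev[v] = dist[u] + c, u
        if dist[t] >= 0:          # a new trajectory no longer lowers the cost
            break
        total += dist[t]
        trajectories += 1
        v = t
        while v != s:             # reverse the path arcs in the residual graph
            u = prev[v]
            res[(v, u)] = -res.pop((u, v))
            v = u
    return trajectories, total

# Hypothetical 3-detection example: each detection is split into two nodes
# joined by a negative detection edge; entrance/exit edges cost 0.6 each,
# link edges cost 1.0.
arcs = {('s', 'a1'): 0.6, ('s', 'b1'): 0.6, ('s', 'c1'): 0.6,
        ('a1', 'a2'): -2.0, ('b1', 'b2'): -2.0, ('c1', 'c2'): -0.5,
        ('a2', 'b1'): 1.0, ('b2', 'c1'): 1.0,
        ('a2', 't'): 0.6, ('b2', 't'): 0.6, ('c2', 't'): 0.6}
nodes = ['s', 'a1', 'a2', 'b1', 'b2', 'c1', 'c2', 't']
print(min_cost_tracking(nodes, arcs, 's', 't'))
# one trajectory linking a and b; the weak detection c does not pay for itself
```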
Graphical model representation
To map this formulation to a cost-flow network, we define G = (V, E) to be a directed network with a cost C(i, j) and a capacity u(i, j) associated with every edge (i, j) ∈ E, as explained in Chapter 3. Such a network contains several types of edges.

Link edges. These edges connect detections in following frames, with cost C_t(i, j). This cost represents the spatial relation between different subjects. Assuming that a subject cannot move a lot from one frame to the next, we define the costs to be an increasing function of the distance between detections in successive frames. The time gap between observations is also taken into account in order to be able to work at any frame rate, therefore velocity measures are used instead of distances. The velocities are mapped to probabilities with a Gauss error function as shown in Equation (4.16), assuming the pedestrians cannot exceed a maximum velocity V_max:

E(V_t, V_max) = 1/2 + 1/2 · erf( (−V_t + V_max/2) / (V_max/4) )    (4.16)
As we can see in Figure 4.2, the advantage of using Equation (4.16) over a linear function is that the probability of lower velocities decreases more slowly, while the probability of higher velocities decreases more rapidly. This is consistent with the probability distribution of speed learned from training data (in our case, we use the two sequences in [29] to obtain the velocity distribution).
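Equation (4.16) can be checked numerically; `math.erf` is Python's standard error function, and the value of V_max below is an assumption for illustration.

```python
import math

def E(v_t, v_max):
    """Gauss error function mapping of Eq. (4.16): close to 1 for low
    velocities and close to 0 near the maximum velocity v_max."""
    return 0.5 + 0.5 * math.erf((-v_t + v_max / 2.0) / (v_max / 4.0))

v_max = 7.0   # assumed maximum pedestrian velocity (m/s)
print([round(E(v, v_max), 3) for v in (0.0, 3.5, 7.0)])  # → [0.998, 0.5, 0.002]
```

Low velocities keep a probability near 1 for longer than a linear ramp would, while velocities approaching V_max are suppressed quickly, matching the behavior described above.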
Therefore, the cost of a link edge is defined as:
C_t(i, j) = − log P(o_j^{t_j} | o_i^{t_i}) + C(Δf)    (4.17)
          = − log E( ‖p_j^{t+Δt} − p_i^t‖ / Δt , V_max ) + C(Δf)

where C(Δf) = − log B_j^{Δf−1}
is the cost depending on the frame difference between detections; how high this cost is depends on the parameter B_j, and its effects will be analyzed later on.

Detection edges. If all edge costs were positive, the solution that minimizes the objective would be the trivial null flow. Consequently, we represent each observation with two nodes and a detection edge with negative cost:

C_det(i) = log( 1 − P_det(o_i^t) ) + log( BB_min / ‖p_BB − p_i^t‖ )    (4.18)
The higher the likelihood of a detection P_det(o_i^t), the more negative the cost of the detection edge; hence, flow is likely to be routed through edges of confident detections in order to minimize the total cost. If a map of the scene is available, we can also include this information in the detection cost. If a detection is far away from a possible entry/exit point, we add an extra negative cost to the detection edge, in order to favor the inclusion of that observation into a trajectory. The added cost depends on the distance to the closest entry/exit point p_BB, and is only computed for distances higher than BB_min = 1.5 m. This is a simple probabilistic way of including other information present in the scene, such as obstacles or attraction points (shops, doors, etc).
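The link and detection costs of Eqs. (4.17) and (4.18) can be sketched together; the numeric values (V_max, B_j, the positions) are illustrative assumptions.

```python
import math

def E(v_t, v_max):
    # Gauss error function mapping of Eq. (4.16)
    return 0.5 + 0.5 * math.erf((-v_t + v_max / 2.0) / (v_max / 4.0))

def link_cost(p_i, p_j, dt, v_max, b_j, df):
    """Eq. (4.17): -log E(velocity) plus the frame-gap penalty
    C(df) = -log b_j^(df - 1)."""
    velocity = math.dist(p_i, p_j) / dt
    return -math.log(E(velocity, v_max)) - math.log(b_j ** (df - 1))

def detection_cost(p_det, p_i, p_bb, bb_min=1.5):
    """Eq. (4.18): confident detections get a more negative cost; an extra
    negative term is added beyond bb_min metres from the closest
    entry/exit point p_bb."""
    cost = math.log(1.0 - p_det)
    d = math.dist(p_i, p_bb)
    if d > bb_min:
        cost += math.log(bb_min / d)   # extra negative cost away from exits
    return cost

# a one-frame gap is cheaper than a two-frame gap between the same detections
print(link_cost((0, 0), (1.0, 0), 1.0, 7.0, 0.3, 1) <
      link_cost((0, 0), (1.0, 0), 1.0, 7.0, 0.3, 2))   # → True
# a confident detection far from any exit gets a strongly negative cost
print(round(detection_cost(0.9, (5.0, 0.0), (0.0, 0.0)), 2))   # → -3.51
```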
Entrance and exit edges. The edges (s, e i ) connect the source s with all the end nodes e i , with cost C in (i) and flow f in (i). Similarly, (b i , t) connects the end node b i with sink t, with cost C out (i). This connection, as shown in Figure 4.3(b), was proposed in [4] so that when a track starts (or ends) it does not benefit from the negative cost of the detection edge. Setting C in = C out = 0 and taking into account the flow constraints of Eqs. (4.12) and (4.13), trajectories are only created with the information of link edges.
In contrast, the authors in [27] propose to create the opposite edges (s, b i ) and (e i , t), which means tracks entering and leaving the scene go through the detection node and therefore benefit from its negative cost (see Figure 4.3(a)). If the costs C in and C out are then set to zero, a track will be started at each detection of each frame, because it will be cheaper to use the entrance and exit edges than the link edges. On the other hand, if C in and C out are very high, it will be hard for the graph to create any trajectories. Therefore, the choice of these two costs is extremely important. In [27], the costs are set according to the entrance and exit probabilities P in and P out , which are data dependent terms that need to be calculated during optimization.
FIGURE 4.3: (a) Graph structure as used in [27], which requires the computation of P_in and P_out in an Expectation-Maximization step during optimization. (b) Graph structure as used in [4], which does not require the computation of these two parameters; the trajectories are found only with the information of the link and detection edges.
Chapter 5 Tracking with social context
If a pedestrian does not encounter any obstacles, the natural path to follow is a straight
line. But what happens when the space gets increasingly crowded and the pedestrian can no longer follow the straight path? Social interaction between pedestrians is especially important when the environment is crowded.
Though each object can be tracked separately, recent works have proven that tracking objects jointly and taking their interaction into consideration can give much better results in complex scenes. Current research is mainly focused on two aspects to exploit interaction between pedestrians: the use of a global optimization strategy as presented in Chapter 3 and a social motion model [88]. The focus of this chapter is to marry the concepts of global optimization and social and grouping behavior to obtain a robust tracker able to work in crowded scenarios.
Related work: social forces
Most tracking systems work with the assumption that the motion model for each target is independent. This simplifying assumption is especially problematic in crowded scenes: imagine the chaos if every pedestrian followed his or her chosen path and completely ignored other pedestrians in the scene. In order to avoid collisions and reach the chosen destination at the same time, a pedestrian follows a series of social rules or social forces. These have been defined in what is called the Social Force Model (SFM) [88] which has been used for abnormal crowd behavior detection [89], crowd simulation [90,91] and has only recently been applied to multiple people tracking.
Most methods include these social forces or motion contexts in a predictive tracking framework. In [92], an energy minimization approach was used to estimate the future position of each pedestrian considering all terms of the social force model. In [29] and [93], the social forces were included in the motion model of the Kalman or Extended
Kalman filter. In [94] a method was presented to detect small groups of people in a crowd, but it is only recently that grouping behavior has been included in a tracking framework [95][96][97].
Predictive approaches, though, are too local and unable to deal with trajectory changes (e.g. when people meet and stop to talk). Recently, [96] included group information in a graphical model. Nonetheless, the structure created to express these group relations is a graph which contains cycles and, therefore, Dual Decomposition [98] was needed to find the solution, which obviously is computationally much more expensive than using Linear Programming. Moreover, the results presented in [96] were only for short time windows. In [99] a solution is presented to include certain constant velocity conditions into a Linear Programming tracking framework. However, in that case the constraint matrix is no longer totally unimodular, so the authors propose to use Lagrangian relaxation in order to solve the problem. This kind of context information can also be extremely useful to track players in sports videos [100], given the great amount of interaction present in those sequences.
The authors of [84] also define the problem as a maximum flow on a hexagonal grid, but instead of matching individual detections, they make use of tracklets. This has the advantage that they can precompute the social forces for each of these tracklets, nonetheless, the fact that the tracklets are chosen locally means the overall matching is not truly global, and if errors occur during the creation of the tracklets, these cannot be overcome by global optimization. In [87], global and local methods are combined to match trajectories across cameras and across time.
In this chapter, we focus on the method presented in [4] where tracking is done by taking the interaction between pedestrians into account in two ways: first, using global optimization for data association and second, including social as well as grouping behavior. The key insight is that people plan their trajectories in advance in order to avoid collisions, therefore, a graph model which takes into account future and past frames is the perfect framework to include social and grouping behavior. The problem of multiple object tracking is formulated as a minimum-cost network flow problem as presented in Chapter 3. Instead of including social information by creating a complex graph structure which then cannot be solved using classic LP solvers, the method proposes an iterative solution relying on Expectation-Maximization. Results on several challenging, public datasets are presented to show the improvement of tracking in crowded environments. Experiments with missing data, noise and outliers are also shown to test the robustness of the approach.
The social force model
The social force model states that the motion of pedestrians can be described as if they were subject to "social forces". These forces are not directly exerted by the pedestrians' personal environment, but they are a measure for the internal motivations of the individuals to perform certain actions, in this case, movements. The idea is that there are certain sensory stimuli that cause a behavioral reaction that depends on personal aims.
This reaction is chosen among all behavioral alternatives with the objective of utility maximization. In summary, one can say that a pedestrian acts as if he/she would be subject to a set of external forces.
There are three main terms that need to be considered, visualized in Figure 5.1:
• Constant velocity: The acceleration of a pedestrian to keep a desired speed and direction.
• Collision avoidance: The term reflecting that a pedestrian keeps a comfortable distance from other pedestrians and borders.
• Group behavior: The attraction forces which occur when a pedestrian is attracted to a friend, shop, etc.
In this chapter we only consider the attractive effects of people within a group, since
we do not consider any information about the static environment such as shops, entries/exits, etc. In contrast with [29], we do not use the destination of the pedestrian as input, since we want to keep the tracking system as independent as possible from the environment.
In following sections we detail how to include this specific information into the Linear Programming multi-people tracking framework introduced in Chapter 4.
Updated MAP and Linear Programming formulation
The original social force model [88] describes a physical system that estimates the position of a pedestrian in a continuous way, which has been successfully used for crowd simulation [90,91]. Nonetheless, we use the social information within a different paradigm:
in our Linear Programming system, we have a set of hypothetical pedestrian positions (in the form of nodes) and we apply the social forces to find out the probability of a certain match (i.e., a certain trajectory being followed by a pedestrian).
When including social and grouping information in the Linear Programming formulation, we can no longer assume that the motion of each subject is independent, which means we have to deal with a much larger search space of T .
We extend this space by including the following dependencies for each trajectory T k :
• Constant velocity assumption: the observation o_{k_m}^{t_m} ∈ T_k depends on the previous two observations [o_{k_{m−1}}^{t_{m−1}}, o_{k_{m−2}}^{t_{m−2}}].

• Grouping behavior: if T_k belongs to a group, the set of members of the group T_{k,GR} has an influence on T_k.

• Avoidance term: T_k is affected by the set of trajectories T_{k,SFM} which are close to T_k at some point in time and do not belong to the same group as T_k.
The first and third dependencies are grouped into the SFM term. The sets T k,SFM and T k,GR are disjoint, i.e., for a certain pedestrian k, the set of pedestrians that have an attractive effect (the group to which pedestrian k belongs) is different from the set of pedestrians that have a repulsive effect on pedestrian k. Therefore, we can assume that these two terms are independent and decompose P (T ) as:
P(T) = ∏_{T_k ∈ T} P(T_k ∩ T_{k,SFM} ∩ T_{k,GR})    (5.1)
     = ∏_{T_k ∈ T} P(T_{k,SFM} | T_k) P(T_{k,GR} | T_k) P(T_k)
Let us assume that we are analyzing observation o t k . In Figure 5.2 we summarize which observations influence the matching of o t k . Typical approaches [27] only take into account distance (DIST) information, that is, the observation in the previous frame o t−1 k . We introduce the social dependencies (SFM) given by the constant velocity assumption (green nodes) and the avoidance term (yellow nodes). In this case, two observations, o t q and o t r that do not belong to the same group as o t k , will be considered to create a repulsion effect on o t k . On the other hand, the orange nodes which depict the grouping term (GR), are two other observations o t m and o t n which do belong to the same group as o t k and therefore have an attraction effect on o t k . Note that all these dependencies can only be modeled by high order terms, which means that either we use complex solvers [96] to find a solution in graphs with cycles, or we keep the linearity of the problem by using an iterative approach as we explain later on. The objective function is accordingly updated:
T* = arg max_T P(O | T) P(T)    (5.2)

= arg min_T Σ_{T_k ∈ T} [ − log P(T_k) − log P(T_SFM | T_k) − log P(T_GR | T_k) ] + Σ_j − log P(o_j)

= arg min_f Σ_i C_in(i) f_in(i) + Σ_i C_out(i) f_out(i) + Σ_{i,j} [ C_t(i, j) + C_SFM(i, j) + C_GR(i, j) ] f_t(i, j) + Σ_i C_det(i) f_det(i)
In the following section, we define the new cost terms according to the Social Force Model.
New costs for the social terms
Constant velocity assumption. A pedestrian tries to keep a certain speed and direction, therefore we assume that at time t + ∆t we have the same speed as at time t and we estimate the pedestrian's position accordingly.
p_{SFM,i}^{t+Δt} = p_i^t + v_i^t Δt    (5.3)
Avoidance term. Pedestrians also try to avoid collisions and keep a comfortable distance from each other. This term is modeled as a repulsion field with an exponential distance-decay function with value α learned from training data. The estimation of the pedestrian's future position is computed using also the aforementioned avoidance acceleration term:
p_{SFM,i}^{t+Δt} = p_i^t + ( v_i^t + a_i^{t+Δt} Δt ) Δt.    (5.5)
To compute the cost of the edge connecting (i, j), the distance between estimated position and real measurement is used:
C_SFM(i, j) = − log E( ‖p_{SFM,i}^{t+Δt} − p_j^{t+Δt}‖ / Δt , V_max )    (5.6)
where the function E is detailed in Eq. (4.16).
Grouping behavior. Before modeling group behavior, we need to determine which tracks form each group and at which frame the group begins and ends (to deal with splitting and formation of groups). The idea is that if two pedestrians are close to each other over a determined period of time, they are likely to belong to the same group.
From the training sequence in [29], the distance and speed probability distributions of the members of a group P_g vs. individual pedestrians P_i are learned. If m and n are two trajectories which appear on the scene at t ∈ [0, N], we compute the flags g_m and g_n, which indicate to which groups m and n belong. If pedestrian i belongs to a group, its position is estimated from the mean velocity of the members of its group:
p_{GR,i}^{t+Δt} = p_i^t + ( 1 / |{m | g_m = g_i}| ) Σ_{m : g_m = g_i} v_m^t Δt    (5.7)
The distance between this estimated position and real measurements is used in (4.16) to obtain the edge costs for the grouping term:
C_GR(i, j) = − log E( ‖p_{GR,i}^{t+Δt} − p_j^{t+Δt}‖ / Δt , V_max )    (5.8)
An example is shown in Figure 5.3(c), where we can see that the maximum probability provided by the group term keeps the group configuration. As we add the social and grouping behaviors, the set of likely matches is considerably reduced, which means we have fewer ambiguities for data association. This is especially useful to decrease the number of identity switches, as we present in Section 5.6.
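The prediction-based costs of Eqs. (5.3)–(5.8) can be sketched as follows. The avoidance acceleration a_i is taken as given (its repulsion-field computation is omitted here), and all positions and velocities are illustrative.

```python
import math

def E(v_t, v_max):
    # Gauss error function mapping of Eq. (4.16)
    return 0.5 + 0.5 * math.erf((-v_t + v_max / 2.0) / (v_max / 4.0))

def social_cost(p_i, v_i, a_i, p_j, dt, v_max):
    """Eqs. (5.5)-(5.6): constant-velocity-plus-avoidance prediction of
    pedestrian i, compared against candidate detection j."""
    pred = [p + (v + a * dt) * dt for p, v, a in zip(p_i, v_i, a_i)]
    return -math.log(E(math.dist(pred, p_j) / dt, v_max))

def group_cost(p_i, group_velocities, p_j, dt, v_max):
    """Eqs. (5.7)-(5.8): predict pedestrian i from the mean velocity of
    the members of its group."""
    n = len(group_velocities)
    mean_v = [sum(v[d] for v in group_velocities) / n for d in range(len(p_i))]
    pred = [p + mv * dt for p, mv in zip(p_i, mean_v)]
    return -math.log(E(math.dist(pred, p_j) / dt, v_max))

# A candidate near the social prediction is much cheaper than a distant one.
near = social_cost((0, 0), (1.2, 0), (0, 0.1), (1.2, 0.1), 1.0, 7.0)
far = social_cost((0, 0), (1.2, 0), (0, 0.1), (4.0, 2.0), 1.0, 7.0)
print(near < far)   # → True
```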
Optimization
To compute the SFM and grouping costs, we need information about the pedestrians' velocities and group memberships, which are not available until the data association itself has been solved; therefore, the costs and the trajectories are estimated alternately in an iterative, Expectation-Maximization fashion. Typically, only 4–6 iterations are needed for the algorithm to converge to a solution.
Computational reduction
To reduce the computational cost, the graph can be pruned by using the physical constraints represented by the edge costs. If any of the costs C(i, j), C SFM (i, j) or C GR (i, j)
is infinite, the two detections i and j are either too far away to belong to the same trajectory or they do not match according to social and grouping rules, therefore the edge (i, j) is erased from the graphical model. For long sequences, the video can be divided into several batches and optimized for each batch. For temporal consistency, the batches have an overlap of F max = 10 frames. The runtime of [4] for a sequence of 800 frames (114 seconds), 4837 detections, batches of 100 frames and 6 iterations is 30 seconds on a 3GHz machine.
Experimental results
In this section we show the tracking results of several state-of-the-art methods on three publicly available datasets and compare them using the CLEAR metrics [101], explained below.
Metrics used for performance evaluation
The CLEAR metrics were presented in [101] for detection and tracking of both single objects as well as multiple objects. The framework includes guidelines for ground truth annotation, performance metrics, evaluation protocols, and tools including scoring software and baseline algorithms. The scores for multiple people tracking are computed in 2D using pedestrian bounding boxes. They are split into accuracy and precision:
Detection Accuracy (DA). Measures how many detections were correctly found and therefore is based on the count of missed detections m t and false alarms f t for each frame t.
$DA = 1 - \frac{\sum_{t=1}^{N_f} (m_t + f_t)}{\sum_{t=1}^{N_f} N^t_G}$
where N f is the number of frames of the sequence and N t G is the number of ground truth detections in frame t. A detection is considered to be correct when the 2D bounding boxes of both ground truth and detection have some overlap. In this thesis, the overlap measure that we use is 25% which is the standard measure taken in most of the literature.
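The 25% overlap test can be sketched as follows (function names are ours; boxes are given as (x, y, w, h)):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) bounding boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))  # intersection width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))  # intersection height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def is_correct_detection(det, gt, threshold=0.25):
    """A detection counts as correct when it overlaps the ground truth
    by at least the standard 25% threshold."""
    return iou(det, gt) >= threshold
```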
Tracking Accuracy (TA). Similar to DA but also including identity switches i t . In this case, the measure does not penalize identity switches as much as missing detections or false alarms, as we use a log 10 weight. That is why in most papers the number of identity switches is explicitly shown in order to better compare performance with other methods.
$TA = 1 - \frac{\sum_{t=1}^{N_f} \left(m_t + f_t + \log_{10}(1 + i_t)\right)}{\sum_{t=1}^{N_f} N^t_G}$
Detection Precision (DP). Precision measurements represent how well bounding box
detections match the ground truth. For this, an overlap measure between bounding boxes is used:
$Ov_t = \sum_{i=1}^{N^t_{mapped}} \frac{|G^t_i \cap D^t_i|}{|G^t_i \cup D^t_i|}$
where N t mapped is the number of mapped objects in frame t, i.e., the number of detections that are matched to some ground truth object. G t i is the ith ground truth object of frame t and D t i the detected object matched to G t i . The DP measure is then expressed as:
$DP = \frac{1}{N_f} \sum_{t=1}^{N_f} \frac{Ov_t}{N^t_{mapped}}$
Tracking Precision (TP).
Measures the spatiotemporal overlap between ground truth trajectories and detected ones, considering also split and merged trajectories.
$TP = \frac{\sum_{i=1}^{N_{mapped}} \sum_{t=1}^{N_f} \frac{|G^t_i \cap D^t_i|}{|G^t_i \cup D^t_i|}}{\sum_{t=1}^{N_f} N^t_{mapped}}$
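A minimal sketch of the DA and TA computations from per-frame error counts (assuming the per-frame lists of misses, false alarms and identity switches are already available):

```python
import math

def detection_accuracy(misses, false_alarms, num_gt):
    """CLEAR DA over a sequence; all inputs are per-frame lists."""
    errors = sum(m + f for m, f in zip(misses, false_alarms))
    return 1.0 - errors / sum(num_gt)

def tracking_accuracy(misses, false_alarms, id_switches, num_gt):
    """CLEAR TA; identity switches are down-weighted with log10."""
    errors = sum(m + f + math.log10(1 + i)
                 for m, f, i in zip(misses, false_alarms, id_switches))
    return 1.0 - errors / sum(num_gt)
```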
Analysis of the effect of the parameters
All parameters defined in previous sections are learned from training data using one sequence of the publicly available dataset [29]. In this section we study the effect of the few parameters needed in [4] and show that the method works well for a wide range of values and, therefore, no parameter tuning is needed to obtain good performance. The analysis is done on two publicly available datasets: a crowded town center [21] and the well-known PETS2009 dataset [20], to observe the different effects of each parameter on each dataset. In order to apply the Social Force Model to pedestrians, we need their 3D position in world coordinates. Since all pedestrians walk on a 2D ground plane, we can transform the 2D image coordinates to 3D real world coordinates (with z=0) using a simple homography [102]. We use the calibration provided with each dataset.
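The 2D-to-3D ground-plane mapping can be sketched as follows, assuming a 3x3 homography H from the dataset calibration (the function name is ours):

```python
import numpy as np

def image_to_ground(H, uv):
    """Map a 2D image point (u, v) to ground-plane world coordinates
    (z = 0) via a 3x3 homography H from the camera calibration."""
    p = H @ np.array([uv[0], uv[1], 1.0])  # homogeneous coordinates
    return p[:2] / p[2]                    # perspective division
```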
Number of iterations
The first parameter we analyze is M i , the number of iterations allowed. This determines how many times the loop of computing social forces and trajectories is executed. On the PETS2009 dataset, we can see that after just 2 iterations the results remain very stable. Actually, the algorithm reports no changes in the trajectories after 3 iterations, and therefore stops even though the maximum number of iterations allowed is higher. The result with 1 and 2 iterations is not very different either, which means the social and grouping behavior does not affect the results significantly for this particular dataset. This is due to the fact that this dataset is very challenging from a social behavior point of view, with subjects often changing direction and groups forming and splitting frequently. More details and comments on these results can be found in Section 5.6.4.2. We observe a different effect on the TownCenter dataset, shown in Figure 5.4(a). In this case, there is a clear improvement when using social and grouping behavior (i.e., the result improves when we use more than one iteration). We also observe a pattern in how the Tracking Accuracy of the dataset evolves: there is a cycle of 3 iterations for which the accuracy increases and decreases in a similar way. This means that the algorithm is jumping between two solutions and will not converge to either one of them. This happens when pedestrians are close together for a long period of time but are not forming a group, which means that even with social forces it is hard to say which paths they will follow.
Maximum speed
This is the parameter that determines the maximum speed we assume for the pedestrians we are observing. In this case, we can see that the optimal range of maximum speed allowed is between 3 m/s and 7 m/s, which makes sense since the reported mean speed of pedestrians in a normal situation is around 2 m/s. More interestingly, we observe that the results remain constant when using higher maximum speed values. This is a positive effect of the global optimization framework, since we can use a speed limit much above average and this will still give us good results and will allow us to track, for example, a person running through the scene.
Cost for the frame difference
The last parameter, B j , appears in Eq. (4.18) and represents the penalty term we apply when the frame difference between two detections that we want to match is larger than 1. This term is used in order to give preference to matches that are close in time. Here again we can see different effects on the two datasets. In Figure 5.4(e), we see that the results are stable up to a value of 0.4. The lower the value, the higher the penalty cost for the frame difference, which means it is more difficult to match those detections which are more than 1 frame apart. When the value of B j is higher than 0.4, there are more ambiguities in the data association process, because it is easier to match detections across distant frames. In the TownCenter dataset, there is no occluding object in the scene, which means missing detections are sporadic within a given trajectory. In this scenario, a lower value for B j is better, since small gaps can be filled and there are fewer ambiguities. Nonetheless, we see different results in the PETS 2009 dataset in Figure 5.4(f) since there is an occluding object in the middle of the scene (see Figure 5.5) which occludes pedestrians for longer periods of time. In this case, a higher value of B j allows to overcome these large gaps of missing data, and that is why the best value for this dataset is around 0.6.
Evaluation with missing data, noise and outliers
We evaluate the impact of every component of the approach in [4] with one of the sequences of the dataset [29], which contains images from a crowded public place with several groups as well as walking and standing pedestrians. The sequence is 11601 frames long and contains more than 300 trajectories. First of all, the group detection method is evaluated on the whole sequence with ground truth detections: 61% are correctly detected, 26% are only partially detected and 13% are not found.
Outliers.
With an initial set of detections of GT with 2% missing data, tests are performed with [0, 10, 20, 30, 40, 50] percent outliers added in random positions over the ground plane. In Figure 5.7, the results show that the SFM is especially important when the tracker is dealing with outliers. With 50% of outliers, the SFM+GR terms reduce the number of identity switches by 70% w.r.t. the DIST results.
Noise. This test is used to determine the performance of our approach given noisy detections, which are very common mainly due to small errors in the 2D-3D mapping.
From the GT set with 2% missing data, random noise is added to every detection. These results corroborate the assumption that having good behavioral models becomes more important as observations deteriorate. In Figure 5.6 we plot the tracking results of a sequence with 12% simulated missing data. If we only use distance information, we can see the resulting identity switches as shown in Figure 5.6(a).
Tracking results
In this section, we compare results of several state-of-the-art methods on two publicly available datasets: a crowded town center [21] and the well-known PETS2009 dataset [20]. We compare results obtained with:
• [21]: using the results provided by the authors for full pedestrian detections. The HOG detections are also given by the authors and used as input for all experiments.
• [27]: globally optimum tracking based on network flow linear programming.

For a fair comparison, we do not use appearance information for any method. The methods [21, 29, 97] are online, while [4, 27] process the video in batches. For these last two methods, all experiments are performed with 6 iterations, a batch of 100 frames, V max = 7 m/s, F max = 10, α = 0.5 and B j = 0.3.
Town Center dataset
We perform tracking experiments on a video of a crowded town center [21] using one out of every ten frames (simulating 2.5 fps). We show detection accuracy (DA), tracking accuracy (TA), detection precision (DP) and tracking precision (TP) measures as well as the number of identity switches (IDsw). Note that the DP reported in [21] is about 9 percentage points higher than the input detection precision; this is because the authors use the motion estimation obtained with a KLT feature tracker to improve the exact position of the detections, while we use the raw detections. Still, our algorithm reports almost 67% fewer ID switches. As shown in Table 5.1, the algorithm of [4] outperforms [29, 97], both of which include social behavior information, by almost 4 percentage points in accuracy and reduces the number of identity switches by more than 53%. In Figure 5.8 we can see an example where [29, 97] fail.
The errors are created in the greedy phase of predictive approaches, where trajectories compete to get assigned to detections. A red trajectory is started by a false detection in the first frame. This trajectory then takes the detection in the second frame that should belong to the green trajectory (which ends in the first frame). In the third frame, the red trajectory takes over the yellow trajectory and a new blue trajectory starts where the green should have been. None of the resulting trajectories violate the SFM and GR conditions. On the other hand, a global optimization framework takes full advantage of the SFM and GR information and correctly recovers all trajectories. More results of the proposed algorithm can be seen in Figure 5.12.
Results on the PETS2009 dataset
In addition, we present results of monocular tracking on the PETS2009 sequence L1, View 1 with the detections obtained using the Mixture of Gaussians (MOG) background subtraction method. We compare the results with the previously described methods plus the monocular result of View 1 presented in [28], where the detections are obtained using the Probabilistic Occupancy Map (POM) and the tracking is done using k-shortest paths.
The first observation we make is that the linear programming methods (LP and LP+SFM+GR) clearly outperform predictive approaches in accuracy. This is because this dataset is very challenging from a social behavior point of view: subjects often change direction, and groups form and split frequently. Approaches based on a probabilistic framework [4, 27] are better suited for unexpected behavior changes (like destination changes), where other predictive approaches fail [29, 97]. We can also see that the LP+SFM+GR method has a higher accuracy than the LP method, which does not take into account social and grouping behavior. The grouping term is especially useful to avoid identity switches between members of a group (see an example in Figure 5.11, the cyan and green pedestrians who walk together). Precision is similar for all methods since the same detections have been used for all experiments and we do not apply smoothing or correction of the bounding boxes.
Conclusions
In this chapter, we presented an overview of methods that integrate pedestrian interaction into a tracking framework in two ways: using a globally optimum solver or improving the dynamic model with social forces. Furthermore, we explained how to combine the strength of both approaches by finding the MAP estimate of the trajectories' total posterior, including social and grouping models using a minimum-cost network flow with an improved novel graph structure that outperforms existing approaches. Pedestrian interaction is persistent rather than transient, hence the probabilistic formulation fully exploits the power of behavioral models, as opposed to standard predictive and recursive approaches, such as Kalman filtering. Experiments on three public datasets reveal the importance of using social interaction models for tracking in difficult conditions, such as crowded scenes with the presence of missed detections, false alarms and noise.
Chapter 6 Tracking with multiple view context
Combinatorial optimization arises in many computer vision problems such as feature correspondence, multi-view multiple object tracking, human pose estimation, segmentation, etc. In the case of multiple object tracking, object locations in images are temporally correlated by system dynamics and are geometrically constrained by the spatial configuration of the cameras (i.e., the same object seen in two different cameras satisfies the epipolar constraints).
These two sources of structure have been typically exploited separately by either Tracking-Reconstruction or Reconstruction-Tracking. Splitting the problem in two phases has, obviously, several disadvantages because the available evidence is not fully exploited.
For example, if one object is temporarily occluded in one camera, both data association for reconstruction and tracking become ambiguous and underconstrained when considered separately. If, on the other hand, evidence is considered jointly, temporal correlation can potentially resolve reconstruction ambiguities and vice versa. However, finding the joint optimal assignment is a hard combinatorial problem that is both difficult to formulate and difficult to optimize. In this chapter, we argue that it is not necessary to separate the problem into two parts, and we present a novel formulation to perform 2D-3D assignments (reconstruction) and temporal assignments (tracking) in a single global optimization. The proposed graph structure contains a huge number of constraints; therefore, it cannot be solved with typical Linear Programming (LP) solvers such as the simplex algorithm. We rely on multi-commodity flow theory and use Dantzig-Wolfe decomposition and branching to solve the linear program.

FIGURE 6.1: We jointly exploit spatial and temporal structure to solve the multiple assignment problem across multiple cameras and multiple frames. With our proposed method, both tracking and reconstruction are obtained as the solution of one single optimization problem.
Related work: reconstruction vs. tracking
As we argued in previous chapters, we divide the problem of multiple target tracking into two steps: detection and data association. When dealing with multi-view data, data association is commonly split into two optimizations, namely sparse stereo matching and tracking. While stereo matching is needed for reconstruction (obtaining 3D positions from 2D calibrated cameras), tracking is needed to obtain trajectories across time.
As we have seen in Chapter 4, solving the tracking problem as one single optimization problem using Linear Programming is more reliable, as it solves the matching problem jointly for all tracks.
The sparse stereo matching problem for reconstruction is usually formulated as a linear assignment problem, and it is well-known that for more than two cameras the problem is NP-hard [103]. In [104], a comparison of the methods Tracking-Reconstruction vs.
Reconstruction-Tracking is presented. In [28], first reconstruction is performed using Probabilistic Occupancy Map (POM), and then tracking is done globally using Linear
Programming. In [105], the assignments are found using a data-driven MCMC approach, while [87] presented a formulation with two separate optimization problems:
linking across-time is solved using network flows and linking across-views is solved using set-cover techniques. In contrast to all previous works, we formulate the problem as a single optimization problem.
In this chapter we present a graph formulation that captures the whole structure of the problem, which leads to a problem with a high number of constraints. This rules out standard Linear Programming solvers such as the simplex algorithm [4, 27] or k-shortest paths [28, 60]. In [76], interactions between objects are modeled in a multiple hypotheses fashion and heuristics are applied to make the problem practical. We define our problem as a multi-commodity flow problem, i.e., each object has its own graph with a unique source and sink. Multi-commodity flows are used in [106] in order to maintain global appearance constraints during multiple object tracking. However, the solution is found by applying several k-shortest paths steps to the whole problem, which would be extremely time consuming for our problem and lead to non-integer solutions.
By contrast, we use decomposition and branching methods, which take advantage of the structure of the problem to reduce computational time and obtain better bounds of the solution. Decomposition methods are closely related to Lagrangian Relaxation based methods such as Dual Decomposition [98,107] which was used for feature matching in [108] and for monocular multiple people tracking with groups in [96]. In our case, we make use of the Dantzig-Wolfe decomposition [51,109] which allows us to take advantage of the special block-angular structure of our problem. It is well-known in the field of traffic flow scheduling [110] as it is able to handle huge linear programs. As is usual in multi-commodity flow problems, the solutions found are not integers and therefore branch-and-bound [111] is used. The combination of column generation and branch-and-bound methods is known as branch-and-price [112].
Recently, [113] proposed a Linear Programming solution using a simplified graph structure that also includes multi-camera information, but does not constrain the problem as tightly as the formulation presented in this chapter. The advantage is that their problem can be solved in linear time.
In this chapter, we present a global optimization formulation for multi-view multiple object tracking [6]. We argue that it is not necessary to separate the problem into two parts, namely, reconstruction (finding the 2D-3D assignments) and tracking (finding the temporal assignments) and propose a new graph structure to solve the problem globally. To handle this huge integer program, we introduce decomposition and branching methods which can be a powerful tool for a wide range of computer vision problems.
Multi-view Multi-object tracking
Tracking multiple objects in several calibrated camera views can be expressed as an energy minimization problem. We define an energy function that at the 2D level (i) enforces temporal smoothness for each camera view (2D-2D), and at the 3D level (ii) penalizes inconsistent 2D-3D reconstructions from camera pairs, (iii) enforces coherent reconstructions from different camera pairs and (iv) favors temporal smoothness of the putative 3D trajectories. In the following section, we detail the proposed graph structure used for multi-view multi-object tracking.
Proposed multi-layer graph
Matching between more than two cameras (k-partite matching) is an NP-hard problem.
In order to be able to handle this problem, we propose to create a multi-layer graph. Thereby, the problem is fully defined as a single global optimization problem.
In the following lines, we define the edges characterizing each of the two layers, namely, the entrance/exit, the detection and the temporal 2D edges that define the 2D layer and the reconstruction, the camera coherence and the temporal 3D edges that form the 3D layer.
Entrance/exit edges (C in , C out ). These edges determine when a trajectory starts and ends; the cost balances the length of the trajectories with the number of identity switches.
Shown in blue in Figure 6.2(a).
Detection edges (C det ). If all costs of the edges in a graph are positive and we do not know the amount of flow that has to go through that graph (i.e., the number of objects in the scene), then the trivial solution of zero flow is found. To avoid the trivial solution, some costs have to be negative so that the solution has a total negative objective cost.
Following [4, 27], each detection $p_{i_v}$ in view $v \in \{1 \ldots V\}$ is divided into two nodes, $b$ and $e$, and a new detection edge is created with cost

$C_{det}(i_v) = \log\left(1 - P_{det}(p^t_{i_v})\right)$. (6.1)
The higher the likelihood of a detection $P_{det}(p^t_{i_v})$, the higher the negative cost of the detection edge (shown in black in Figure 6.2(a)); hence, flow is likely to be routed through edges of confident detections in order to minimize the total cost. Temporal 2D edges (C t ). The costs of these edges (shown in orange in Figure 6.2(a)) encode the temporal dynamics of the targets. Assuming temporal smoothness, we define F to be a decreasing function [4] of the distance between detections in successive frames:

$C_t(i_v, j_v) = -\log F\left(\frac{\|p^{t+\Delta t}_{j_v} - p^t_{i_v}\|}{\Delta t}, V^{2D}_{max}\right) + B_f^{\Delta f - 1}$, (6.2)

where $V^{2D}_{max}$ is the maximum allowed speed in pixels and $B_f^{\Delta f - 1}$ is a bias that depends on the frame difference $\Delta f$ and favors matching detections in consecutive frames. The function F maps a distance to a probability, which is then converted to a cost by the negative logarithm.
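As an illustration, the temporal edge cost can be sketched as follows; the exact form of the decreasing function F is not given here, so a linear falloff is assumed, and all names are ours:

```python
import math

def F(distance, v_max):
    """Hypothetical decreasing distance-to-probability mapping;
    the exact form is an assumption (a linear falloff)."""
    return max(0.0, 1.0 - distance / v_max)

def temporal_cost(p_i, p_j, dt, v2d_max, B_f, df):
    """Sketch of the temporal 2D edge cost: negative log of the speed
    probability plus the frame-gap bias B_f^(df - 1)."""
    speed = math.dist(p_i, p_j) / dt
    prob = F(speed, v2d_max)
    if prob <= 0.0:
        return math.inf  # edge exceeds the speed limit and can be pruned
    return -math.log(prob) + B_f ** (df - 1)
```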
Note that the 2D layer alone is a special case of our multi-layer graph and would be suited to find the trajectories on each camera independently. Finding a globally optimum match between trajectories for k cameras means solving a k-partite matching problem, which in the case of k > 2 is well-known to be NP-complete. We take a slightly different approach and decide to track independently on each camera, but we introduce a series of edges that bind the 2D layers with 3D information. For this, we create the 3D layer, which contains three types of edges.
Reconstruction edges (C rec ). These edges connect the 2D layer (Figure 6.2(a)) with the 3D layer (Figure 6.2(b)). For each camera pair, all plausible 2D-2D matches create new 3D hypothesis nodes (marked by squares in Figure 6.2(b)). The reconstruction edges, shown in green, connect each newly created 3D detection with the 2D detections that have originated it. The costs of these edges encode how well 2D detections match in 3D, which is implemented by computing the minimum distance between pairs of projection rays. Let $C_v$ be the set of all possible camera pairs and $m_k$ a new 3D hypothesis node generated from the 2D nodes $i_{v_1}$ and $j_{v_2}$, where $k = (v_1, v_2) \in C_v$ and $v_1, v_2$ are two different views. Given the camera calibration, each 2D point defines a line in 3D, $L(i_{v_1})$ and $L(j_{v_2})$. Now let $P_{m_k}$ define the 3D point corresponding to the 3D node, which is the midpoint between the two closest points on the lines. The reconstruction cost is

$C_{rec}(m_k) = \log\left(1 - F\left(dist(L(i_{v_1}), L(j_{v_2})), E_{3D}\right)\right)$, (6.3)

FIGURE 6.2: Proposed multi-layer graph structure: (a) 2D layer; (b) 3D layer.
where E 3D is the maximum allowed 3D error. These edges are active, i.e., have a positive flow, when both originating 2D detections are also active. This constraint can be expressed in linear form as explained in Sect. 6.2.2. Essentially, the 3D layer is a model of possible 3D events in the scene which is supported by 2D evidence (detections). The reconstruction edges are the link to that evidence.
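The midpoint construction for $P_{m_k}$ can be sketched with the standard closest-points-between-two-lines computation; this is an illustrative implementation, not the one used in the thesis:

```python
import numpy as np

def reconstruct_midpoint(o1, d1, o2, d2):
    """Closest points between two 3D projection rays (origin o, direction d).
    Returns (distance, midpoint); the midpoint is the putative 3D detection."""
    o1, o2 = np.asarray(o1, float), np.asarray(o2, float)
    d1 = np.asarray(d1, float); d1 = d1 / np.linalg.norm(d1)
    d2 = np.asarray(d2, float); d2 = d2 / np.linalg.norm(d2)
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    if abs(denom) < 1e-12:          # near-parallel rays: fix s = 0
        s, t = 0.0, e / c
    else:                           # standard closed-form solution
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    p1, p2 = o1 + s * d1, o2 + t * d2
    return float(np.linalg.norm(p1 - p2)), (p1 + p2) / 2.0
```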
Camera coherency edges (C coh ). Their purpose is to verify the evidence coming from two different cameras. Their cost is related to the 3D distance between two 3D nodes from different camera pairs. We show a few of these edges in Figure 6.2(b) in purple.
Considering two camera pairs k, l ∈ C v , two 3D nodes m k and n l and their corresponding 3D points P t m k and P t n l , we define the camera coherency edge cost as
$C_{coh}(m_k, n_l) = \log\left(1 - F\left(\|P^t_{m_k} - P^t_{n_l}\|, E_{3D}\right)\right)$. (6.4)
These edges are active when the two 3D nodes they connect are also active.
Temporal 3D edges (C t 3D ). The last type of edges are the ones that connect 3D nodes in several frames (shown in orange in Figure 6.2(b)). The connection is exactly the same as for the 2D nodes and their cost is defined as
$C_{t3D}(m_k, n_k) = \log\left(1 - F\left(\frac{\|P^{t+\Delta t}_{m_k} - P^t_{n_k}\|}{\Delta t}, V^{3D}_{max}\right)\right)$, (6.5)
where V 3D max is the maximum allowed speed in world coordinates. These edges are active when the two 3D nodes they connect are also active.
It is important to note that the 3D layer costs are always negative. To see this, recall that F maps a distance to a probability, and the lower the distance it evaluates, the higher the probability will be and hence the higher the negative cost. If the costs were positive, the solution would favor a separate trajectory for each camera and frame, because finding a common trajectory for all cameras and frames activates these edges. Instead, these edges act as prizes for the graph, so that having the same identity in 2 cameras is beneficial if the reconstruction, camera coherence and temporal 3D edges are sufficiently negative.
FIGURE 6.3: 3D layer edges: (a) The 2D nodes in each camera activate the reconstruction and camera coherency edges because they are assigned the same trajectory ID, visualized in red. The reconstruction error $C_{rec}$ is defined as the minimum distance between projection rays. The camera coherency edges $C_{coh}$ are defined by the 3D distance between putative reconstructions (illustrated as red silhouettes in 3D) from different camera pairs. (b) Graph structure of the 3D layer: active edges are shown as continuous lines. The red 2D nodes (circles) activate the 3D nodes (square nodes) since they are assigned the same ID (product of flows equals one).
Linear programming
In the literature, multiple object tracking is commonly formulated as a Maximum A-Posteriori (MAP) problem. To convert it to a Linear Program (LP), its objective function is linearized with a set of flow flags f (i) ∈ {0, 1} which indicate whether an edge i is in the path of a trajectory or not [4,27]. The proposed multi-layer graph can be expressed as an LP with the following objective function:
$T^* = \arg\min_f C^T f = \sum_i C(i) f(i)$
$= \sum_{v=1}^{V} \sum_{i_v} C_{in}(i_v) f_{in}(i_v) + \sum_{v=1}^{V} \sum_{i_v} C_{out}(i_v) f_{out}(i_v) + \sum_{v=1}^{V} \sum_{i_v} C_{det}(i_v) f_{det}(i_v)$
$+ \sum_{v=1}^{V} \sum_{i_v, j_v} C_t(i_v, j_v) f_t(i_v, j_v) + \sum_{k \in C_v} \sum_{m_k} C_{rec}(m_k) f_{rec}(m_k)$
$+ \sum_{k \in C_v} \sum_{l \in C_v} \sum_{m_k, n_l} C_{coh}(m_k, n_l) f_{coh}(m_k, n_l) + \sum_{k \in C_v} \sum_{m_k, n_k} C_{t3D}(m_k, n_k) f_{t3D}(m_k, n_k)$ (6.6)
where k, l ∈ C v are the indices of different camera pairs. The problem is subject to the following constraints:
• Edge capacities: we assume that each detection belongs to only one trajectory, thus the flow that goes through detection edges can only assume the values f (i) = {0, 1}. Since integer programming is NP-hard, we relax the conditions to obtain a linear program: 0 ≤ f (i) ≤ 1. In the remainder of this chapter, all conditions will be expressed in their relaxed form.
• Flow conservation at the 2D nodes: $f_{in}(i_v)$ and $f_{out}(i_v)$ indicate whether a trajectory starts or ends at node $i_v$.

$f_{det}(i_v) = f_{in}(i_v) + \sum_{j_v} f_t(j_v, i_v)$
$f_{det}(i_v) = \sum_{j_v} f_t(i_v, j_v) + f_{out}(i_v)$ (6.7)
• Activation for reconstruction edges: these 2D-3D connections have to be activated,
i.e., have a positive flow, if their 2D originating nodes are also active. More formally, this imposes the following relationship:
f rec (m k ) = f det (i v 1 )f det (j v 2 ) (6.8)
• Activation for the camera coherency edges: for 3D-3D connections we take a similar approach as for the reconstruction edges and define the flow to be dependent on the 3D nodes it connects:
f coh (m k , n l ) = f rec (m k )f rec (n l ) (6.9)
• Activation for temporal 3D edges:
f t 3D (m k , n k ) = f rec (m k )f rec (n k ) (6.10)
As we can see, the pairwise terms in Eqs. (6.8), (6.9) and (6.10) are non-linear. Let
f ab = f a f b be a pairwise term consisting of two flows f a and f b . Using the fact that the flows are binary, we can encode the pairwise term with the following linear inequations:
$f_{ab} - f_a \leq 0$, $\quad f_{ab} - f_b \leq 0$, $\quad f_a + f_b - f_{ab} \leq 1$. (6.11)
We can now express the constraints in Eqs. (6.8), (6.9) and (6.10) in linear form. These constraints define the 3D layer of the graph as a cascade of prizes. Consider two 2D nodes on different cameras which belong to different trajectories. The question will be whether it is favorable to assign the same trajectory ID to both 2D nodes. The answer depends on the prize costs this assignment activates. When both 2D nodes are assigned the same trajectory ID, the corresponding 3D reconstruction edge is activated. If two 3D
nodes from different camera pairs are activated, the camera coherency edge between them is activated, and the same will happen across time. This means that trajectories are assigned the same ID only if the reconstruction, camera coherency and temporal 3D
costs are sufficiently negative to be beneficial to minimize the overall solution.
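A quick brute-force check confirms that, over binary flows, the three inequalities of Eq. (6.11) admit exactly the product value f_ab = f_a * f_b:

```python
from itertools import product

def satisfies_eq_6_11(f_a, f_b, f_ab):
    """The three linear inequalities that linearize f_ab = f_a * f_b."""
    return (f_ab - f_a <= 0) and (f_ab - f_b <= 0) and (f_a + f_b - f_ab <= 1)

# For each binary pair (f_a, f_b), collect the feasible values of f_ab.
table = {}
for f_a, f_b in product((0, 1), repeat=2):
    table[(f_a, f_b)] = [f_ab for f_ab in (0, 1)
                         if satisfies_eq_6_11(f_a, f_b, f_ab)]
# The only feasible value is always the product f_a * f_b.
```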
Multi-commodity flow formulation
The goal of the flow constraints defined in the previous section is to activate certain prize edges when two 2D nodes are activated by the same object. This means that in one graph we can only have a total flow of 1, which corresponds to one object. To that end, we create one more condition on the number of objects per camera:
$0 \leq \sum_{i_v} f_{in}(i_v) \leq 1$, $\quad 0 \leq \sum_{i_v} f_{out}(i_v) \leq 1 \quad \forall v$ (6.12)
In order to deal with several objects, we use the multi-commodity flow formulation, well-known in traffic scheduling [109]. We create one graph for each object $n$ to be tracked in the scene. Each graph has its own source and sink nodes, and each object is a commodity to be sent through the graph. The problem now has a much larger set of variables $f = [f^1 \ldots f^{N_{obj}}]$. Obviously, with no further restrictions, computing the global optimum would result in the same solution for all instances of the graph, i.e., we would find the same trajectory for all objects. Therefore, we need to create a set of binding constraints which prevent two trajectories from going through the same edges:

$\sum_{n=1}^{N_{obj}} f^n(i) \leq 1$ (6.13)

where $f^n(i)$ is the flow of object $n$ going through the edge $i$. This set of binding constraints creates a much more complex linear program which cannot be solved with standard techniques. Nonetheless, the problem still has an interesting block-angular structure, which can be exploited. The problem consists of a set of small problems (or subproblems), one for each object, with the goal of minimizing Eq. (6.6) subject to the constraints in Eqs. (6.7)-(6.12). On the other hand, the set of complex binding constraints in Eq. (6.13) defines the master problem. This structure is fully exploited by the Dantzig-Wolfe decomposition method, which is explained in the next section, allowing the algorithm to find a solution with less computation time.
Branch-and-price for multi-commodity flow
Branch-and-price is a combinatorial optimization method for solving large scale integer linear problems. It is a hybrid method of column generation and branching.
Column generation: Dantzig-Wolfe decomposition. The principle of decomposition is to divide the constraints of an integer problem into a set of "easy constraints" and a set of "hard constraints". The idea is that removing the hard constraints results in several subproblems which can be easily solved by k-shortest paths, simplex, etc. Let us rewrite our original minimum cost flow problem:
$\min_f C^T f = \sum_{n=1}^{N_{obj}} (c^n)^T f^n$ (6.14)

subject to:

$A_1 f \leq b_1, \quad A^n_2 f^n \leq b^n_2, \quad 0 \leq f \leq 1$ (6.15)

Here the binding constraints $A_1 f \leq b_1$ are the hard constraints, while the per-object constraints $A^n_2 f^n \leq b^n_2$ define the easy subproblems. Expressing each $f^n$ as a convex combination of extreme points leads to a master problem with constraints

$\sum_n \sum_{j=1}^{J} A_1 x^n_j \lambda^n_j \leq b_1, \quad \sum_{j=1}^{J} \lambda^n_j = 1, \quad 0 \leq \lambda^n_j \leq 1$ (6.17)

where $f^n = \sum_{j=1}^{J} \lambda^n_j x^n_j$ and $\{x^n_j\}_{j=1}^{J}$ are the extreme points of a polyhedron. This problem is solved using column generation (Algorithm 9). The advantage of this formulation is that the $N_{obj}$ column generation subproblems can be solved independently and therefore in parallel. We use the parallel implementation found in [114], which is based on [109].
Algorithm 9 Column generation
while the restricted master problem yields new columns do
1. Select a subset of columns corresponding to λ_j^n; these form what is called the restricted master problem.
2. Solve the restricted problem with the chosen method (e.g., simplex).
3. Calculate the optimal dual solution μ.
4. Price the remaining columns with μ(A_1^n f^n − b_1^n).
5. Find the columns with negative reduced cost and add them to the restricted master problem. This is done by solving the N_obj column generation subproblems:

min_f (c^n)^T f^n + μ(A_1^n f^n − b_1^n) s.t. A_2^n f^n ≤ b_2^n

end while
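The column generation loop can be sketched on the same kind of toy instance as before (two commodities over a shared s → {a, b} → t network). Here columns are paths, the restricted master carries edge-capacity and convexity rows, and pricing simply enumerates the two candidate paths per commodity instead of solving a shortest-path subproblem; all names and costs are invented.

```python
import numpy as np
from scipy.optimize import linprog

edges = ["sa", "sb", "at", "bt"]
paths = {"a": ["sa", "at"], "b": ["sb", "bt"]}
cost = {0: {"sa": 1, "sb": 5, "at": 1, "bt": 5},
        1: {"sa": 2, "sb": 3, "at": 2, "bt": 3}}

def path_cost(n, p):
    return sum(cost[n][e] for e in paths[p])

# Start the restricted master with deliberately bad columns.
columns = [(0, "b"), (1, "a")]            # (commodity, path)

for _ in range(10):                        # column generation loop
    c = np.array([path_cost(n, p) for n, p in columns], dtype=float)
    # Edge capacity rows: lambdas of columns using edge e sum to <= 1.
    A_ub = np.array([[1.0 if e in paths[p] else 0.0 for n, p in columns]
                     for e in edges])
    # Convexity rows: lambdas of each commodity sum to 1.
    A_eq = np.array([[1.0 if n == m else 0.0 for n, p in columns]
                     for m in (0, 1)])
    res = linprog(c, A_ub=A_ub, b_ub=np.ones(len(edges)),
                  A_eq=A_eq, b_eq=np.ones(2),
                  bounds=(0, None), method="highs")
    mu = res.ineqlin.marginals             # duals of the edge caps (<= 0)
    sigma = res.eqlin.marginals            # duals of the convexity rows
    # Pricing: add every column with negative reduced cost.
    new_cols = [(n, p) for n in (0, 1) for p in paths
                if (n, p) not in columns
                and (path_cost(n, p)
                     - sum(mu[edges.index(e)] for e in paths[p])
                     - sigma[n]) < -1e-9]
    if not new_cols:
        break                              # optimality reached
    columns += new_cols

print(res.fun)  # optimal master objective once pricing finds nothing new
```

Starting from the expensive columns (cost 14), pricing repeatedly adds paths with negative reduced cost until the master reaches the true optimum of 8.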
Branching. Typically in multi-commodity flow problems, the solution is not guaranteed to be composed of all integers. Nonetheless, once we find the fractional solution,
we can use branching schemes to find the integer optimal solution. This mixture of column generation and branching is called branch-and-price. One important thing is that branching must be done on the original variables, not on the λ n j of the master problem. For more details we refer to [53,112].
Experimental results
In this section, we show the tracking results of the proposed method on two key problems in computer vision, namely multi-camera multiple people tracking and 3D human pose tracking. We compare our method with the following approaches for multi-view multiple object tracking:
• Greedy Tracking-Reconstruction (GTR): first tracking is performed in 2D on a frame-by-frame basis using bipartite graph matching, and then 3D trajectories are reconstructed from the information of all cameras.
• Greedy Reconstruction-Tracking (GRT): first 3D positions are reconstructed from all cameras. In a second step, 3D tracking is performed on a frame-by-frame basis using bipartite graph matching.
• Tracking-Reconstruction (TR): first tracking is performed in 2D using [27] and then 3D trajectories are recovered as in GTR.
• Reconstruction-Tracking (RT): first the 3D positions are reconstructed as in GRT and then 3D tracking is performed using [27].
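The frame-by-frame association step shared by the greedy baselines (GTR/GRT) can be sketched in a few lines with the Hungarian algorithm; the detection positions below are made up for the example.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Match detections of frame t to frame t+1 by Euclidean distance.
prev_frame = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 1.0]])  # 3 detections
next_frame = np.array([[5.2, 4.9], [0.3, 0.1], [8.8, 1.2]])

cost = np.linalg.norm(prev_frame[:, None, :] - next_frame[None, :, :], axis=2)
rows, cols = linear_sum_assignment(cost)      # Hungarian algorithm
matches = list(zip(rows.tolist(), cols.tolist()))
print(matches)  # [(0, 1), (1, 0), (2, 2)]
```

Each detection in frame t is paired with its nearest plausible successor; as discussed above, such frame-by-frame matching has no memory, which is why these baselines struggle with occlusions.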
Tests are performed on two publicly available datasets [20,115] and a comparison with existing state-of-the-art tracking approaches is made using the CLEAR metrics [101], DA (detection accuracy), TA (tracking accuracy), DP (detection precision) and TP (tracking precision).
FIGURE 6.4: Even with 40% of outliers, our method (c) can recover the trajectories almost error-free on the entire sequence. This is in contrast to (a) Tracking-Reconstruction and (b) Reconstruction-Tracking, which struggle with the ambiguities generated by the outliers.
Multi-camera multiple people tracking
In this section, we show the tracking results of our method on the publicly available PETS2009 dataset [20], a scene with several interacting targets. Detections are obtained using the Mixture of Gaussians (MOG) background subtraction. For all experiments, we set B f = 0.3, E 3D = 0.5 m, which represents the diameter of a person, V 2D max = 250 pix/s and V 3D max = 6 m/s, which is the maximum allowed speed for pedestrians. Note that for this particular dataset, we can infer the 3D position of a pedestrian with only one image since we can assume z = 0.

Since we evaluate on view 1 and the second view we use does not show all the pedestrians, it would be unfair towards the RT and GRT methods to only reconstruct pedestrians visible in both cameras. Therefore, we consider the detections of view 1 as the main detections and only use the other cameras to further improve the 3D position. We also compare our results to monocular tracking using [27] and multi-camera tracking with Probability Occupancy Maps and Linear Programming [28].

As we can see in the results with 2 camera views, our method outperforms all other methods. In general, TR and RT methods perform better than their counterparts GRT and GTR, since matching across time with Linear Programming is robust to short occlusions and false alarms. Nonetheless, it still suffers from long term occlusions. In contrast, our method is more powerful than existing approaches when dealing with missing and noisy data, with misdetection rates 8.5 to 15 percentage points lower than other methods. Notably, our method also outperforms [28] in accuracy, even though our results are computed using only 2 cameras instead of 5. When using 3 cameras, the 2D-3D inaccuracies become more apparent, since the detections of the third camera project badly onto the other two views (see Figure 6.6).
Interestingly, RT and TR methods are greatly affected by these inaccuracies, while our method is more robust and still able to further reduce the missed detections by 4.6 percentage points.
In Figure 6.5, we can see an example with 2 camera views. A pedestrian hides behind a pole and therefore goes undetected for a number of frames in view 1. In this case, the RT method is not able to reconstruct any 3D position, and so a new track is initiated when the pedestrian is visible again in view 1. The advantage of the proposed approach is that, during the occlusion, the pedestrian can be tracked in view 2 using only 2D
information. When he reappears in view 1 and therefore 3D information is available again, the method is able to correctly assign the same identity he had before. It combines the power of RT methods to correctly identify pedestrians with the power of TR methods to track using only one view.
In Figure 6.6, we show an example with three camera views where a pedestrian (red) is occluded in two of the three views for a length of 22 frames. The RT method is unable to recover any 3D position, and therefore loses track of the pedestrian. The TR method tries to track the pedestrian in one view, but the gap is too large and TR fails to finally recover the whole 3D trajectory. The proposed method overcomes the long occlusion and the noisy 2D-3D correspondences to recover the full trajectory. We obtain a better accuracy than RT(3) by 13.5 percentage points which further proves the advantages of our approach.
Human Motion
We also tested our algorithm on the problem of human pose tracking using the publicly available human motion database HumanEva [115]. The problem we consider here is the following: given a set of 2D joint locations in two cameras, the goal is to link the locations across time and across cameras at every frame to reconstruct the sequence of poses. In these experiments, we use only two cameras at a reduced frame rate of 10 fps to reconstruct the 3D poses. To obtain joint locations in the image, we project the ground truth 3D data using the known camera parameters. The parameters used are:
B f = 0.3, E 3D = 0.01 mm, V 2D max = 400 pix/s and V 3D max = 3 m/s. We study the robustness of our algorithm to missing data and outliers. Missing data often occurs due to occlusions, while outliers appear as the result of false detections.
Missing data: To simulate missing data, we increasingly removed percentages of the 2D locations ranging from 0 to 40%. As can be seen in Figure 6.7(a), our proposed method outperforms all other baselines and brings significant improvement. In Figure 6.8, we
show the trajectories of the lower body reconstructed with our method with 20% of missing data. The 3D error for our method stays below 5 mm, whereas it goes up to 10 mm for the other methods.
Outliers: We added from 0% to 40% of uniformly distributed outliers in windows of 15 × 15 pixels centered at randomly selected 2D joint locations. Again, our method shows a far superior performance as the percentage of outliers increases, see Figure 6.7(b).
Notably, our method performs equally well independently from the number of outliers.
Since outliers are uncorrelated across cameras, they produce lower prizes in the 3D layer of our graph and are therefore correctly disregarded during optimization. This clearly shows the advantage of globally exploiting temporal and 3D coherency information together. Here, the 3D error is only 2 mm for our method. Furthermore, in Figure 6.7(c),
we plot the count of the identity switches for an increasing number of outliers. Our method is the only one that is virtually unaffected by outliers, an effect that is also shown in Figure 6
Conclusions
In this chapter, we presented a formulation to jointly track multiple targets in multiple views. The proposed graph structure captures both temporal correlations between objects as well as spatial correlations enforced by the configuration of the cameras and allows us to solve the problem as one global optimization. To find the global optimum, we used the powerful tool of branch-and-price, which allows us to exploit the special block-angular structure of the program to reduce computational time. We tested the performance of the proposed approach on two key problems in computer vision:
multiple people tracking and 3D human pose tracking. We outperform state-of-the-art approaches, which proves the strength of combining 2D and 3D constraints in a single global optimization.
Chapter 7 Conclusions
In a world where video cameras are becoming an inherent part of our lives, it is becoming more important to develop methods to automatically analyze such data streams.
Many tasks such as surveillance, animation or activity recognition need to have information about where people are located and how they are moving. Hence, multiple people tracking has become a classical problem in computer vision. Though a lot of research has been done in this field, there are still major challenges to overcome, especially in crowded environments.
In this thesis, we approached the problem of multiple people tracking using the paradigm of tracking-by-detection. Recent advances in detectors make it possible to have a reasonably stable detection rate even in moderately crowded scenarios. Nonetheless, occlusions and false alarms are still a big problem that has to be faced during the tracking step. We argued that classical tracking methods fail to fully exploit two sources of context, namely social context and spatial context coming from different views. Including this context in an efficient way within a global optimization tracker has been the main scope of this thesis.
We first presented our tracking framework based on Linear Programming. Multiple people tracking is formulated as a unique optimization problem for all pedestrians in all frames, and a globally optimum solution is found for all trajectories. This already provides the perfect setup to introduce any kind of context, since the trajectories are inherently linked to each other.
The first source of context we explored was the social context. In a scenario where a pedestrian walks alone, it is obvious he or she will follow a straight path towards his or her destination.

The spatial context is the second source of context that we aimed at fully exploiting in this thesis. We proposed to create a unique graph structure capturing both temporal correlations between objects as well as spatial correlations enforced by the configuration of the cameras, allowing us to solve the problem as one global optimization. Given the large number of constraints and variables, it is intractable to solve this problem using standard Linear Programming solvers. We therefore used the powerful tool of branch-and-price to find the global optimum, which allowed us to exploit the special block-angular structure of the program to reduce computational time as well as to find a better lower bound. Performance was tested for multiple people tracking, outperforming state-of-the-art approaches and proving the strength of combining 2D and 3D
constraints in a single global optimization. The main strength was that pedestrians visible in only one view can be tracked in 2D, while pedestrians visible in several views can be tracked using 3D information as well, making the method very flexible and robust at the same time. Perhaps the most interesting contribution of our formulation is that it can be of considerable interest to model complex dependencies which arise in a wide range of computer vision problems. We also applied our method to 3D human pose tracking for which we obtained largely better results compared to classical approaches.
One weakness of the method is that it is very sensitive to noise. For the multiple people tracking sequence, there are large calibration errors which reduced the accuracy of our results significantly. On the 3D human pose tracking dataset though, calibration is extremely accurate and therefore we can see results which are perfect even when we have up to 50% of outliers present in the data. This is because the graph structure contains a high number of constraints that tightly link 2D and 3D information. If calibration is correct, this structure does not allow any tracking error and provides excellent accuracy results. Nonetheless, in practice we know there will be a certain percentage of errors in the 3D position estimation, introduced either by the camera calibration or simply by the detector which can wrongly estimate the 2D bounding box around a pedestrian. As future work, we would like to explore ways of relaxing the sensitivity of the method to noise while keeping the tight formulation.
Another direction for improvement would be to find a solver with better computational complexity. Currently, the method's complexity increases exponentially with the number of objects and cameras. In practice, it takes about one day to find the solution for one tracking sequence.
The work presented in this thesis has shown that context can be a key source of information that can significantly improve tracking results, especially if introduced in a global optimization framework which guarantees that this information will be fully exploited to improve all trajectories. Nonetheless, we believe that the tracking-by-detection framework has reached a saturation point in which results can now only be marginally improved. There is only so much that can be done to improve tracking given a certain detection set. Long occlusions are still a common unsolved problem; there are just too many assumptions that the tracker needs to make in order to correctly follow a pedestrian occluded for half of the sequence. We strongly believe that detection and tracking should not be treated as two separate tasks. Detection can benefit considerably from motion cues, while tracking can benefit from the detailed appearance cues used commonly by detectors.
Appendix A A case study: microorganism tracking and motion analysis
Throughout the thesis we have focused on tracking and motion analysis of pedestrians.
Humans are usually the center of attention for many computer vision tasks, e.g., detection [37,41], tracking [4,6], pose estimation [16,47], crowd analysis [25,26]. Nonetheless, Computer Vision can be useful in many other fields where huge amounts of data need to be automatically analyzed, for example cell tracking [116] for medical purposes.
In this Appendix, we present a case study where Computer Vision is proven to be useful for the field of marine biology and chemical physics. An automatic method is presented for the tracking and motion analysis of swimming microorganisms. This includes early work done by the author at the beginning of the PhD.
Many fields of interest in biology and other scientific research areas deal with intrinsically three-dimensional problems. The motility of swimming microorganisms such as bacteria or algae is of fundamental importance for topics like pathogen-host interactions [117], predator-prey interactions [117], biofilm-formation [118], or biofouling by marine microorganisms [119,120].
We present a complete system for the automatic analysis of digital in-line holographic data. This microscopy technique provides videos of a 3D volume (see Figure A).

For multiple microorganism tracking, we propose a geometrically motivated and globally optimal multi-level Hungarian to compensate for leaving and entering particles, recover from missing data and erase outliers to reconstruct the trajectories of the microorganisms [10]. Afterwards, we focus on the classification of four motion patterns of the green alga Ulva linza with the use of Hidden Markov Models [11]. Furthermore, our system is able to find and separate different patterns within a single sequence. Besides classification of motion patterns, a key issue is the choice of features used to classify and distinguish the involved patterns. For this reason, we perform an extensive analysis of the importance of typical motion parameters, such as velocity, curvature, orientation, etc. The system we developed is highly flexible and can easily be extended. Especially for forthcoming work on cells, microorganisms or human behavior, such automated algorithms are of pivotal importance, as they allow high-throughput analysis of individual segments in motion data.
A.1 Related work
Understanding the motility and behavioral patterns of microorganisms allows us to understand their interaction with the environment and thus to control environmental parameters to avoid unwanted consequences such as infections or biofouling. To study these effects in 3D several attempts have been made: tracking light microscopy, capable of tracking one bacterium at a time [121], stereoscopy [122] or confocal microscopy [123].
Berg built a pioneering tracking light microscope, capable of tracking one bacterium at a time in 3D. This has been used to investigate bacteria like Escherichia Coli [121]. Another way of measuring 3D trajectories is stereoscopy, which requires two synchronized cameras [122]. Confocal microscopy has also been used to study the motion of particles in colloidal systems over time, however the nature of this scanning technique limits the obtainable frame rate [123].
For any of these techniques, in order to draw statistically relevant conclusions, thousands of images have to be analyzed. Nowadays, this analysis is still heavily dependent on manual intervention. Recent work [116] presents a complete vision system for 2D cell tracking, which proves the increasing demand for efficient computer vision approaches in the field of microscopy as an emerging discipline. Research on the automatic analysis of biological images is extensive [124], but most of the work focuses on position as well as on the shape of the particle [125]. Several methods exist for multiple object detection based on methods such as Markov Chain Monte Carlo (MCMC) [73], inference in Bayesian networks [76] or the Nash Equilibrium of game theory [77]. These have been proven useful to track a fairly small number of targets but are less appropriate when the number of targets is very large, as in our case. Statistical methods like Kalman filters [116], particle filters or recursive Bayesian filters [74] are widely used for tracking but they need a dynamical model of the target, a requirement that can be challenging to fulfill depending on the microorganism under study and to which we dedicate the second part of this paper. In contrast to [74,116], we do not use the output predictions of the filters to deal with occlusions, but rather use past and future information to complete broken trajectories and detect false alarms. Therefore, we do not need an extra track linking step as in [116]. Furthermore, we deal with 3D trajectories of random and fast motions which are unsuited for a prediction-based approach. In this work we propose a global optimal matching solution and not a local one as suggested in [126].
Besides generating motion trajectories from microscopic data, a subsequent classification allows biologists to get the desired information from large image sets in a compact fashion. Indeed, the classification of motion patterns in biology is a well-studied topic [127], but identifying these patterns manually is a complicated and time consuming task. Recently, machine learning and pattern recognition techniques have been introduced to analyze such complex movements in detail. These techniques include: Principal Component Analysis (PCA) [128], a linear transformation used to analyze high dimensional data; Bayesian models [129], which use a graph model and the rules of probability theory to select among different hypotheses; Support Vector Machines (SVM) [130], which use training data to find the optimum parameters of the model representing each class. A comparison of machine learning approaches applied to biology can be found in [131]. In order to classify biological patterns, we need to use an approach able to handle time-varying signals. Hidden Markov Models [132] are statistical models especially known for their application in temporal pattern recognition. They were first used in speech recognition, and since then HMMs have been extensively applied to vision. Applications vary from handwritten word recognition [133], face recognition [134] or human action recognition [135,136].
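As a minimal illustration of how an HMM separates temporal motion patterns, the following Viterbi decoder recovers the most likely state sequence of a hypothetical two-state model ("swim" vs. "tumble") from a discretized speed observation. All probabilities are invented for illustration and are not the models trained in this work.

```python
import numpy as np

# Observations: 0 = slow, 1 = fast. Work in log space for stability.
states = ["swim", "tumble"]
start = np.log([0.5, 0.5])
trans = np.log([[0.9, 0.1],    # swim tends to stay swim
                [0.2, 0.8]])   # tumble tends to stay tumble
emit = np.log([[0.1, 0.9],     # swim mostly emits "fast"
               [0.8, 0.2]])    # tumble mostly emits "slow"

def viterbi(obs):
    """Most likely state sequence for a list of observation symbols."""
    v = start + emit[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = v[:, None] + trans        # scores[i, j]: best i -> j
        back.append(scores.argmax(axis=0))
        v = scores.max(axis=0) + emit[:, o]
    path = [int(v.argmax())]
    for bp in reversed(back):              # backtrack
        path.append(int(bp[path[-1]]))
    path.reverse()
    return [states[s] for s in path]

print(viterbi([1, 1, 1, 0, 0, 0]))
```

The sticky transition matrix keeps the decoded sequence from flickering between states on single noisy observations, which is exactly why HMMs are preferred over per-frame classification for such signals.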
A.2 Detection of 3D positions
In this section, we present the details of digital in-line holography, how this microscopy technique allows us to obtain 3D positions of microorganisms as well as the image processing methods used to robustly extract these positions from the images.
A.2.1 Digital in-line holographic microscopy (DIHM)
Digital in-line holographic microscopy provides an alternative, lensless microscopy technique which intrinsically contains three-dimensional information about the investigated volume. It does not require a feedback control which responds to motion and it uses only one CCD chip. This makes the method very straightforward, and in practice it can be implemented with a very simple setup, as shown in Figure A. There exist methods [139,140] to achieve this in case a source image is not readily available. The resulting holograms can then be reconstructed back into real-world coordinates by a Kirchhoff-Helmholtz transformation [138], shown in Equation (A.1):

K(r) = ∫_S d²ξ I(ξ) exp(i k ξ·r / |ξ|) (A.1)

where the integration extends over the 2D surface of the screen with coordinates ξ = (X, Y, L) and k = 2π/λ is the wavenumber. As we can see in Figure A.3, the idea behind the reconstruction is to obtain a series of stacked XY projections from the hologram image. These projections contain the information at different depth values. From these images, we can obtain the 3 final projections XY, XZ and YZ, as described in [141]. These projections contain the image information of the complete observation volume, i.e. from every object located in the light cone between pinhole and detector. The resolution in X and Y is δ_x,y = λ/NA, where NA stands for the numerical aperture, given by NA = D/(2L), where D is the detector's side length. The resolution in the Z direction, that is, the direction of the laser, is worse: δ_z = λ/(2·NA²).
This is because the third dimension, Z, is obtained with a mathematical reconstruction, unlike confocal microscopy, where the value of every voxel is returned.
On the other hand, confocal microscopes take a long time to return the values of all voxels in a volume, and are therefore unsuited for tracking at a high frame rate.
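The resolution formulas can be checked numerically. The wavelength and geometry below (green laser, 1 cm chip at 2 cm from the pinhole) are illustrative values, not the actual setup parameters.

```python
# Lateral and axial resolution of an in-line holographic setup,
# following delta_xy = lambda/NA, NA = D/(2L), delta_z = lambda/(2 NA^2).
wavelength = 532e-9   # m (illustrative green laser)
D = 1e-2              # detector side length, m (illustrative)
L = 2e-2              # pinhole-detector distance, m (illustrative)

NA = D / (2 * L)
delta_xy = wavelength / NA
delta_z = wavelength / (2 * NA ** 2)

print(NA, delta_xy, delta_z)  # 0.25, ~2.13e-6 m, ~4.26e-6 m
```

With NA = 0.25 the axial resolution is a factor 1/(2·NA) = 2 worse than the lateral one, matching the statement that depth is the weaker direction of the reconstruction.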
Using video sequences of holograms, it is possible to track multiple objects in 3D over time at a high frame rate, and multiple spores present in a single frame can be tracked simultaneously [119,126,142]. Using this advantage of digital in-line holographic microscopy, a number of 3D phenomena in microbiology have been investigated: Lewis et al. [143] examined the swimming speed of Alexandrium (Dinophyceae), Sheng et al. [144,145] studied the swimming behavior of predatory dinoflagellates in the presence of prey, and Sun et al. [146] used a submersible device to investigate in situ plankton in the ocean.
A.2.2 Detection of the microorganisms
As we saw in Chapter 2, we can use information such as edges or color histograms in order to detect humans, or we can build more complex models from training data in order to robustly detect humans in single images or in videos with moving cameras. In our case, we can use simpler detection methods, since the shape of our targets is much more constant than that of a human. In our sequences, we are observing the green alga Ulva linza, which has a spherical spore body and four flagella. Since the body scatters most of the light, in the projected images the particles have a circular shape. In order to preserve and enhance the particle shape (see Figure A.4(a)) but reduce noise and illumination irregularities of the image (see Figure A.4(b)), we apply the Laplacian of Gaussian filter (LoG), which, for its shape, is a blob detector [147]:

LoG(x, y) = −(1/(πσ⁴)) · (1 − (x² + y²)/(2σ²)) · e^{−(x² + y²)/(2σ²)}

Due to the divergent nature of the light cone, the particles can appear smaller or larger in the projections depending on the z-plane. Therefore, the LoG filter is applied at several scales [147] according to the magnification. Note that the whole algorithm is extremely adaptable, since we can detect differently shaped microorganisms by just adapting the filter.
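A minimal sketch of LoG-based blob detection on a synthetic projection follows; the image size, σ and blob position are invented, and a single scale is used instead of the multi-scale search described above.

```python
import numpy as np
from scipy.ndimage import convolve

def log_kernel(sigma, half):
    """Sampled Laplacian-of-Gaussian kernel on [-half, half]^2."""
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    r2 = x ** 2 + y ** 2
    return (-1.0 / (np.pi * sigma ** 4)
            * (1.0 - r2 / (2.0 * sigma ** 2))
            * np.exp(-r2 / (2.0 * sigma ** 2)))

# Synthetic projection: one bright, roughly circular spore on a clean
# background; under the matched-scale LoG its response peaks (negatively)
# at the blob centre.
img = np.zeros((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
img += np.exp(-((yy - 20) ** 2 + (xx - 30) ** 2) / (2 * 3.0 ** 2))

resp = convolve(img, log_kernel(sigma=3.0, half=9), mode="constant")
cy, cx = np.unravel_index(resp.argmin(), resp.shape)
print(cy, cx)  # 20 30
```

The kernel's central value is −1/(πσ⁴), as in the formula above; taking the response minimum (rather than maximum) reflects that sign convention for bright blobs.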
A.3 Automatic extraction of 3D trajectories
In this section we present the complete method to estimate the 3D trajectories of microorganisms over time. Our algorithm, the multi-level Hungarian, is a robust method evolved from the Hungarian (Munkres') assignment method and is capable of dealing with entering and leaving particles, missing data and outliers. The diagram of the method is presented in Figure A.6.
A.3.1 Cost function and bipartite graph matching
Let us briefly refresh some of the key concepts that we have seen in Chapter 3. Graph
Matching is one of the fundamental problems in Graph Theory and it can be defined as: given a graph G = (V, E), where E represents its set of edges and V its set of nodes or vertices, a matching M in G is a set of pairwise non-adjacent edges, which means that no edges share a common vertex. For our application, we are specially interested in the Assignment Problem, which consists in finding a maximum weight matching in a weighted bipartite graph. In a general form, the problem can be expressed as: "There are N jobs and N workers. Any worker can be assigned to any job, incurring some cost that varies depending on the job-worker assignment. All jobs must be performed by assigning exactly one worker to each job in such a way that the total cost is minimized (or maximized)". For the subsets of vertices X and Y , such that V = X∪Y and X∩Y = ∅,
we build a cost matrix in which the element C(i, j) will represent the weight or cost related to the edge connecting i in X and j in Y .
For numerical optimization, we use the Hungarian or Munkres' assignment algorithm, a combinatorial optimization algorithm [68,69] which solves the assignment problem in polynomial time. For implementation details on the Hungarian, we recommend [148]. Our initial problem configuration is: there are M particles in frame t_1 and N particles in frame t_2. The Hungarian will help us determine which particle in t_1 corresponds to which particle in t_2, allowing us to reconstruct their full trajectories in 3D space. Nonetheless, the Hungarian algorithm has some disadvantages which we should be aware of; in the context of our project, we summarize them in Table A.

The cost function C, as key input for the Hungarian algorithm, is created using the Euclidean distances between particles, that is, element C(i, j) of the matrix represents the distance between particle i of frame t_1 and particle j of frame t_2. With this matrix, we need to solve a minimum assignment problem, since we are interested in matching those particles which are close to each other.
Note that it is also possible to include other characteristics of the particle, like speed, size or gray level distribution, in the cost function. Such parameters can act as additional regularizers during trajectory estimation.
A.3.1.1 IN and OUT states
In order to include more knowledge about the environment in the Hungarian algorithm and avoid matches with very high costs, we have created a variation of the cost matrix.
In our experiments, particles can only enter and leave the scene by crossing the borders of the Field Of View (FOV) of the holographic microscope, therefore, the creation and deletion of particles depends on their distance to the borders of the FOV. Nonetheless, the method can be easily extended to situations where trajectories are created (for example by cell division) or terminated (when the predator eats the prey) away from the FOV borders.
As shown in Figure A, the cost of the added elements includes the information of the environment, computed as the distance of each particle to the nearest edge of the FOV. Note that the lower border of the z axis is not included, as it represents the surface where the microorganisms might settle and, therefore, no particles can enter or leave from there.
If the distance is small enough, the Hungarian algorithm matches the particle with an IN/OUT state.
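The augmented cost matrix with IN/OUT states can be sketched as follows. For simplicity the sketch uses a rectangular 2D FOV (the actual setup is 3D and excludes the settling surface), and the particle positions, FOV size and padding scheme are invented for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

INF = 1e6   # forbidden assignment

def border_distance(p, fov=(10.0, 10.0)):
    """Distance of a 2D point to the closest FOV border."""
    x, y = p
    return min(x, fov[0] - x, y, fov[1] - y)

prev_p = [(1.0, 1.0), (9.5, 5.0)]   # frame t: second particle near an edge
next_p = [(1.2, 1.1)]               # frame t+1: only one detection

M, N = len(prev_p), len(next_p)
A = np.full((M + N, M + N), INF)
for i, p in enumerate(prev_p):
    for j, q in enumerate(next_p):
        A[i, j] = np.hypot(p[0] - q[0], p[1] - q[1])  # real match
    A[i, N + i] = border_distance(p)                  # particle i leaves (OUT)
for j, q in enumerate(next_p):
    A[M + j, j] = border_distance(q)                  # particle j enters (IN)
A[M:, N:] = 0.0                                       # dummy-dummy, free

rows, cols = linear_sum_assignment(A)
print(dict(zip(rows.tolist(), cols.tolist())))  # {0: 0, 1: 2, 2: 1}
```

The nearby pair is matched normally, while the particle close to the border is cheaper to assign to its OUT state than to the distant detection, exactly the behavior described above.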
In Figure A.8, we consider the simple scenario in which we have 4 particles in one frame and 4 in the next frame. As we can see, there is a particle which leaves the scene from the lower edge and a particle which enters the scene in the next frame from the right upper corner. As shown in Figure
A.3.1.2 Maximum cost restriction
Due to noise and illumination irregularities of the holograms, it is common that a particle is not detected in several frames, which means a particle can virtually disappear in the middle of the scene. If a particle is no longer detected, all matches can be greatly affected. That is why we introduce a maximum cost restriction for the cost matrix which does not allow matches with costs higher than a given threshold V . This threshold is the observed maximum speed of the algae spores under study [141]. The restriction is guaranteed by using the same added elements as the ones used for the IN/OUT states, therefore if a particle is near a volume border or cannot be matched to another particle which is within a reachable distance, it will be matched to an IN/OUT state. This ensures that the resulting matches are all physically possible. Still, if we have missing data and a certain particle is matched to an IN/OUT state, we will recover two trajectories instead of the complete one. In the next section, we present a hierarchical solution to recover missing data by extending the matching to the temporal dimension.
A.3.2 Multi-level Hungarian for missing data
If we consider just the particles detected using thresholding, we see that there are many gaps within a trajectory (see Figure A.12(a)). These gaps can be a result of morphing (different object orientations yield different contrast), changes in illumination, etc. The standard Hungarian is not capable of filling in the missing data and creating full trajectories, therefore, we now introduce a method based on the standard Hungarian that allows us to deal with missing data, outliers and create full trajectories. The general routine of the algorithm, the multi-level Hungarian, is:
• Find the matchings between particles in frames [i − 2 . . . i + 2], so we know the position of each particle in each of these frames (if present). (Section A.3.2.1).
• Build a table with all these positions and fill the gaps given some strict conditions.
A.3.2.1 The levels of the multi-level Hungarian
The multi-level Hungarian takes advantage of the temporal information in 5 consecutive frames and is able to recover from occlusions and gaps in up to two consecutive frames. The standard Hungarian gives us the matching between the particles in frame t 1 and frame t 2 and we use this to find matchings of the same particle in 5 consecutive frames, [i − 2, . . . , i + 2]. In order to find these matchings, the Hungarian is applied on different levels. The first two levels, represented in Figure A.9 by red arrows, are created to find the matching of the particles in the frame of study, frame i. But it can also be the case that a particle is not present in frame i but is present in the other frames. To solve all possible combinations given this fact, we use levels 3, 4 and 5, represented in Figure A.9 by green arrows.
Below we show a detailed description and purpose of each level of the multi-level Hungarian:
• Level 1: Matches particles in frame i with frames i ± 1.
• Level 2: Matches particles in frame i with frames i ± 2.

With the first two levels, we know, for all the particles in frame i, their position in the neighboring frames (if they appear).
• Level 3: Matches particles in frame i − 1 with frame i + 1.
• Level 4: Matches particles in frame i ± 1 with frame i ∓ 2. Levels 3 and 4 solve the detection of matchings when a particle appears in frames i ± 1 and might appear in i ± 2, but is not present in frame i.

During the adding iteration, a new particle position is only added to a row of the table if it is not the first or last particle of the row. We use this strict condition to avoid the creation of false particle positions or the incorrect elongation of trajectories.
Let us look at particle 6 of the table in Figure A.10. In this case, we do not want to add any particle in frames i − 2 and i − 1, since the trajectory could be starting at frame i.
In the case of particle 4, we do not want to add a particle in frame i + 2 because the trajectory could be ending at i + 1. This process is repeated iteratively until no particles are added to the table.
After convergence, the deleting iteration starts and we erase the outliers considered as noise. A new particle position is deleted if, and only if, two conditions are met:
1. The particle is present in the frame of study i.
2. There are less than 3 particles in the same row.
We only erase particles from the frames [i − 1,i,i + 1] because it can be the case that a particle appears blurry in the first frames but is later correctly detected and has more continuity. Therefore, only particles whose complete neighborhood is known are removed. This process is repeated iteratively until no particles are deleted from the table.
The resulting particles are shown in Figure A.10.
FIGURE A.10: Table with the initial particles detected by the multi-level Hungarian (green ellipses), the ones added in the adding iteration (yellow squares) and the ones deleted in the deleting iteration (red crosses). In the blank spaces no position has been added or deleted.
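The adding and deleting iterations can be illustrated on a boolean presence table (rows: particles, columns: the five frames i − 2, …, i + 2). The following is a sketch of one plausible reading of the conditions above; the function name and the exact update order are assumptions, not the original code.

```python
import numpy as np

def refine_table(T):
    """T: boolean array (particles x 5 frames, columns i-2..i+2).
    Adding: fill a gap only if it lies strictly between two detections
    of the same row (never extend a trajectory at its ends).
    Deleting: drop rows that are present in the centre frame i but have
    fewer than 3 detections in total (treated as noise in this sketch)."""
    T = T.copy()
    changed = True
    while changed:                       # adding iteration
        changed = False
        for row in T:
            idx = np.flatnonzero(row)
            if len(idx) >= 2:
                for j in range(idx[0] + 1, idx[-1]):
                    if not row[j]:
                        row[j] = True
                        changed = True
    for row in T:                        # deleting iteration
        if row[2] and row.sum() < 3:     # column 2 is frame i
            row[:] = False
    return T
```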
A.3.2.3 Missing data interpolation
During the adding iteration, we use the information of the filtered projection in order to find the correct position of the new particle ( Figure A.6). For example, if we want to add a particle in frame i − 1, we go to the filtered projections XY, XZ, YZ in t = i − 1, take the positions of the corresponding particle in t = i or t = i − 2 and search for the maximum likelihood within a window w. If the position found on that frame is already present in the particles' table, we go back to the projection and determine the position of the second maximum value. This allows us to distinguish two particles which are close to each other.
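A simplified 2D sketch of this window search follows; the function name, the window handling and the (x, y) pixel-coordinate convention are illustrative assumptions.

```python
import numpy as np

def interpolate_position(proj, prev_xy, w, taken=()):
    """Search the filtered projection `proj` (2D float array) for the
    particle centre: the maximum inside a (2w+1)^2 window around the
    integer pixel position `prev_xy` taken from a neighbouring frame.
    If that maximum is already assigned to another particle (`taken`),
    fall back to the next maximum, separating two nearby particles."""
    x0, y0 = prev_xy
    ys = slice(max(y0 - w, 0), y0 + w + 1)
    xs = slice(max(x0 - w, 0), x0 + w + 1)
    win = proj[ys, xs].copy()
    for _ in range(win.size):
        dy, dx = np.unravel_index(np.argmax(win), win.shape)
        pos = (xs.start + dx, ys.start + dy)
        if pos not in taken:
            return pos
        win[dy, dx] = -np.inf            # masked: try the next maximum
    return None
```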
There are many studies on how to improve the particle depth-position resolution (z-position). As in [149], we use the traditional method of considering the maximum value of the particle as its center. Other more complex methods [140] have been developed which also deal with different particle sizes, but the flexibility of using morphological filtering already allows us to easily adjust our algorithm.
A.3.3 The final Hungarian
Once the final particle positions are obtained (in Figure A.6, orange box labeled "Final particles"), we perform one last step to determine the trajectories. We use the standard Hungarian to match particles in all pairs of consecutive frames.
A.4 Motion pattern classification
In this section we describe the different types of motion patterns as well as the design of the combined HMM and the features used for their classification.
A.4.1 Hidden Markov Models
Hidden Markov Models [132] are statistical models of sequential data widely used in many applications in artificial intelligence, speech and pattern recognition, and the modeling of biological processes.
In an HMM, the system is modeled as a Markov process with N unobserved (hidden) states. For a more detailed introduction to HMM theory, we refer to [132].
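For illustration, the Viterbi decoding used later for classification can be written in a few lines of log-space dynamic programming. This is a generic sketch, not the Matlab implementation used in the thesis.

```python
import numpy as np

def viterbi(obs, A, B, pi):
    """Most likely hidden-state sequence for an observation sequence.
    A: (N, N) transition matrix, B: (N, M) emission matrix,
    pi: (N,) initial state probabilities, obs: list of symbol indices."""
    N, T = len(pi), len(obs)
    logA, logB = np.log(A + 1e-300), np.log(B + 1e-300)
    delta = np.log(pi + 1e-300) + logB[:, obs[0]]   # best log-prob per state
    psi = np.zeros((T, N), dtype=int)               # best predecessors
    for t in range(1, T):
        cand = delta[:, None] + logA                # (from, to)
        psi[t] = cand.argmax(axis=0)
        delta = cand.max(axis=0) + logB[:, obs[t]]
    states = [int(delta.argmax())]                  # backtrack
    for t in range(T - 1, 0, -1):
        states.append(int(psi[t][states[-1]]))
    return states[::-1]
```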
A.4.2 Types of patterns
In our experimental setup, we are interested in four motion patterns shown by the green algae Ulva linza: Orientation (1), Wobbling (2), Gyration (3) and intensive surface probing or Spinning (4). These characteristic swimming patterns are highly similar to the patterns observed before in [150] for the brown algae Hincksia irregularis.
Orientation. Trajectory 1 in Figure
A.4.3 Features used for classification
An analysis of the features used for classification is presented in this section. Most of the features are generally used in motion analysis problems. An intrinsic characteristic of digital in-line holographic microscopy is the lower resolution of the Z position compared to the X,Y resolutions [140]. Since many of the following features depend on the depth value, we compute the average measurements within 5 frames in order to reduce the noise of such features. The four characteristic features used are:
• v, velocity: the speed of the particles is an important descriptive feature, as we can see in Figure A.1(b). We use only the magnitude of the speed vector, since the direction is described by the next two parameters. Range is [0, maxSpeed].
maxSpeed is the maximum speed of the particles as found experimentally in [141].
• α, angle between velocities: it measures the change in direction, distinguishing stable patterns from random ones. Range is [0, 180].
• β, angle to normal of the surface: it measures how the particles approach the surface or how they swim above it. Range is [0, 180].
• D, distance to surface: this can be a key feature to differentiate surface-induced movements from general movements. Range is (m z , M z ], where m z and M z are the z limits of the volume under study.
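The four features can be sketched as follows for a trajectory given as a T × 3 array of positions. This is a minimal illustration: the surface is assumed to be the plane z = z_surface with normal (0, 0, 1), and the 5-frame averaging mentioned above is omitted.

```python
import numpy as np

def trajectory_features(P, z_surface, dt=1.0):
    """Per-step features from a trajectory P (T x 3 array of x, y, z):
    speed v, turning angle alpha (deg), angle beta to the surface
    normal (deg) and distance D to the surface at z = z_surface."""
    V = np.diff(P, axis=0) / dt                     # velocity vectors
    v = np.linalg.norm(V, axis=1)
    # alpha: angle between consecutive velocity vectors
    dot = (V[:-1] * V[1:]).sum(axis=1)
    alpha = np.degrees(np.arccos(np.clip(
        dot / (v[:-1] * v[1:] + 1e-12), -1.0, 1.0)))
    # beta: angle between velocity and the surface normal (0, 0, 1)
    beta = np.degrees(np.arccos(np.clip(V[:, 2] / (v + 1e-12), -1.0, 1.0)))
    D = np.abs(P[:, 2] - z_surface)
    return v, alpha, beta, D
```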
In order to work with Hidden Markov Models, we need to represent the features for each pattern with a fixed set of symbols. The total number of symbols depends on the number of symbols used to represent each feature: N_symbols = N_v N_α N_β N_D.
In order to convert the per-feature symbols into a unique observation symbol for the HMM, we compute

J = J_1 + (J_2 − 1)N_{J_1} + (J_3 − 1)N_{J_1}N_{J_2} + (J_4 − 1)N_{J_1}N_{J_2}N_{J_3}    (A.3)

where J_k is the symbol of feature k and N_{J_k} its number of symbols.
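Equation (A.3) is a mixed-radix encoding of the four per-feature symbols into one observation symbol; a sketch with 1-based symbols (function and parameter names are illustrative):

```python
def combined_symbol(J1, J2, J3, J4, N1, N2, N3):
    """Eq. (A.3): map the four per-feature symbols (1-based, J_k in
    1..N_k) to a single 1-based observation symbol for the HMM."""
    return J1 + (J2 - 1) * N1 + (J3 - 1) * N1 * N2 + (J4 - 1) * N1 * N2 * N3
```

The mapping is a bijection onto 1..N1·N2·N3·N4, so no two feature combinations share an HMM symbol.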
In the next sections, we present how to use the resulting symbols to train the HMMs.
The symbols are the observations of the HMM, and the training process gives us the probability of emitting each symbol in each of the states and the probability of going from one state to the others.
A.4.4 Building and training the HMMs
In speech recognition, an HMM is trained for each of the phonemes of a language. Later, words are constructed by concatenating several HMMs of the phonemes that form the word. HMMs for sentences can be created by concatenating HMMs of words, etc. We take a similar hierarchical approach in this paper. We train one HMM for each of the patterns and then we combine them into a unique Markov chain with a simple yet effective design that will be able to describe any pattern or combination of patterns. This approach can be used in any problem where multiple motion patterns are present.
Individual HMM per pattern. In order to represent each pattern, we build a Markov chain with N states and we only allow the model to stay in the same state or move one state forward. Finally, from state N we can also go back to state 1. The number of states N is found empirically using training data (we use N = 4 for all experiments, see Section A.5.5). The HMM is trained using the Baum-Welch algorithm to obtain the transition and emission matrices.
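The transition structure of one pattern HMM (stay or advance one state, with the last state looping back to the first) can be sketched as follows; `p_stay` is a hypothetical initial value, since Baum-Welch re-estimates the actual probabilities from training data.

```python
import numpy as np

def cyclic_left_to_right(N, p_stay=0.6):
    """Initial transition matrix of one pattern HMM: each state may
    stay in place or advance one state; state N loops back to state 1."""
    A = np.zeros((N, N))
    for s in range(N):
        A[s, s] = p_stay
        A[s, (s + 1) % N] = 1.0 - p_stay
    return A
```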
Combined HMM. The idea behind a combined HMM that represents all patterns is that we can not only classify sequences where a single pattern is present, but also sequences where the particle makes transitions between different patterns. The START state (orange) is just created to allow the system to begin at any pattern. We define P_start = P_SwitchToModel = (1 − P_switch)/N_P, where N_P is the number of patterns. As START does not contain any information about a pattern, it does not emit any symbol.
The purpose of the new state SWITCH is to make transitions easier. Imagine a given trajectory which makes a transition from Pattern 1 to Pattern 2. While transitioning, the features create a symbol that belongs to neither Pattern 1 nor Pattern 2. The system can then go to state SWITCH to emit that symbol and continue to Pattern 2. Therefore, all SWITCH emission probabilities are 1/N_symbols. Since SWITCH is such a convenient state, we need to impose restrictive conditions so that the system does not go to or stay in SWITCH too often. This is controlled by the parameter P_switch, set to the minimum value of all the probabilities in the model minus a small ε. This way, we ensure that P_switch is the lowest transition probability in the system.
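One plausible assembly of the combined transition matrix, with START and SWITCH as extra states, is sketched below. The block layout and the handling of SWITCH persistence are assumptions; only the probabilities P_start = P_SwitchToModel = (1 − P_switch)/N_P come from the design above.

```python
import numpy as np

def combined_hmm(A_list, p_switch):
    """Assemble the combined transition matrix from per-pattern
    transition matrices A_list. State 0 is START, state 1 is SWITCH;
    pattern k occupies a contiguous block after them. Every pattern
    state may jump to SWITCH with probability p_switch; START and
    SWITCH distribute (1 - p_switch)/N_P over the patterns' first states."""
    NP = len(A_list)
    sizes = [a.shape[0] for a in A_list]
    S = 2 + sum(sizes)
    A = np.zeros((S, S))
    starts = np.cumsum([2] + sizes[:-1])      # first state of each block
    p_enter = (1.0 - p_switch) / NP           # P_start = P_SwitchToModel
    A[0, starts] = p_enter                    # START -> patterns
    A[0, 1] = p_switch                        # START -> SWITCH
    A[1, starts] = p_enter                    # SWITCH -> patterns
    A[1, 1] = p_switch                        # SWITCH may persist briefly
    for k, Ak in enumerate(A_list):
        b, n = starts[k], sizes[k]
        A[b:b + n, b:b + n] = (1.0 - p_switch) * Ak
        A[b:b + n, 1] = p_switch              # pattern -> SWITCH
    return A
```

By construction every row sums to one, so the result is a valid stochastic matrix regardless of the per-pattern models plugged in.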
Finally, the sequence of states given by the Viterbi algorithm determines the motion pattern observed. Our implementation uses the standard Matlab HMM functions.
A.5 Experimental results
In order to test our algorithm, we use 6 sequences (labeled S1 to S6) in which the swimming motion of Ulva linza spores is observed [119]. All sequences have some particle positions which have been semi-automatically reconstructed, manually labeled and inspected (our ground truth) for later comparison with our fully-automatic results.
A.5.1 Performance of the standard Hungarian
First of all, we want to show the performance of the final standard Hungarian described in Section A.3.3. For this, we use the ground truth particle positions and apply the Hungarian algorithm to determine the full trajectories of the microorganisms. Comparing the automatic matches to the ground truth, we can see that in 67% of all sequences the total number of particles is correctly detected, while in the remaining 33%, there is just a 5% deviation in the number of particles. The average accuracy of the matchings reaches 96.61%.
To further test the robustness of the Hungarian algorithm, we add random noise to each position of our particles. The added noise is in the same order as the noise intrinsically present in the reconstructed images, determined experimentally in [141]. N = 100 experiments are performed on each of the sequences and the accuracy is recorded. Results show that the average accuracy of the matching is just reduced from 96.61% to 93.51%, making the Hungarian algorithm very robust to the noise present in holographic images and therefore well suited to find the trajectories of the particles.
A.5.2 Performance of the multi-level Hungarian
To test the performance of the multi-level Hungarian, we apply the method to three sets of particles:
• Set A: particles determined by the threshold (pre multi-level Hungarian)
• Set B: particles corrected after the multi-level Hungarian

• Set C: ground truth particles, containing all the manually labeled particles

We then start by comparing the number of particles detected, as shown in Table A.2.
As shown in Table A.2, the number of particles detected in Set A is drastically reduced in Set B, after applying the multi-level Hungarian, demonstrating its abilities to compensate for missing data and merging trajectories. If we compare it to Set C, we see that the number is still too high, indicating possible tracks which were not merged and so detected as independent.
Nonetheless, as we do not know the exact number of particles present in a volume (not all particle positions have been labeled), it is of great value for us to compare the average length of the trajectories, defined as the number of frames in which the same particle is present. The results are shown in Table A.3.

Now let us consider just the trajectories useful for particle analysis, that is, trajectories with a length of more than 25 frames, which are the trajectories that will later be useful for motion pattern classification. Tracking with the standard Hungarian returns 20.7% of useful trajectories from a volume, while the multi-level Hungarian allows us to extract 30.1%. In the end, this means that we can obtain more useful information from each analyzed volume.
Ultimately, this means that fewer volumes have to be analyzed in order to have enough information to draw conclusions about the behavior of a microorganism.
A.5.3 Performance of the complete algorithm
Finally, we are interested in determining the performance of the complete algorithm, including detection and tracking. For this comparison, we are going to present two values:
• Missing: percentage of ground truth particles which are not present in the automatic determination.
• Extra: percentage of automatic particles that do not appear in the ground truth data.

In Table A.4 we show the detailed results for each sequence.
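One straightforward way to compute these two percentages is nearest-neighbour matching with a distance tolerance below which a ground-truth and an automatic particle are considered the same; the tolerance `tol` and this matching rule are assumptions of the sketch.

```python
import numpy as np

def missing_extra(gt, auto, tol):
    """Missing: % of ground-truth particles with no automatic detection
    within distance tol. Extra: % of automatic particles with no
    ground-truth particle within tol. Inputs: (K x 3) position lists."""
    gt, auto = np.asarray(gt, float), np.asarray(auto, float)
    d = np.linalg.norm(gt[:, None, :] - auto[None, :, :], axis=2)
    missing = 100.0 * (d.min(axis=1) > tol).mean()
    extra = 100.0 * (d.min(axis=0) > tol).mean()
    return missing, extra
```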
Our automatic algorithm detects between 76% and 91% of the particles present in the volume. This gives us a measure of how reliable our method is, since it is able to detect most of the verified particle positions. Combining this information with the percentage of particles detected by our algorithm but not labeled, we can see that our method extracts much more information from the volume of study. This is clear in the case of S6, where we have a volume with many crossing particles which are difficult to label manually, and where our algorithm gives us almost 75% more information.
We now consider the actual trajectories and particle position and measure the position error of our method. The error is measured as the Euclidean distance between each point of the ground truth and the automatic trajectories, both at time t. In Figure A.12(a),
we can see the 3 independent trajectories found with the standard Hungarian and the final merged trajectory, which proves the power of our algorithm to fill in the gaps (pointed by arrows). In Figure A.12(b), we can see that the automatic trajectory is much shorter (there is a length difference of 105 frames), although the common part is very similar, with an error of just 4.2 µm. Figure A.12(c), on the other hand, shows a perfectly matched trajectory with a length difference of 8 frames and an error of 6.4 µm for the whole trajectory, which is around twice the diameter of the spore body. This proves that the determination of the particle position is accurate but the merging of trajectories can be improved.
A.5.4 Comparison with a Linear Programming tracker
In order to compare the Multi-level Hungarian with the Linear Programming formulation introduced in Chapter 4, we perform several experiments with simulated and real data. For the first experiment, we simulate 15 randomly moving particles and an increasing number of missing data, from 2% to 10%. Four methods are compared:
• Standard Hungarian (SH): matching frame by frame, shown in black.
• Multi-level Hungarian (MLH): matching taking into account several frames, as presented in Section A.3.2, shown in blue.
• Linear programming, 1 level (LP1Lev): matching using Linear Programming but only allowing matching of particles which are at a maximum distance of one frame, shown in cyan.
• Linear Programming, 2 levels (LP): first, matching using Linear Programming, 1 level and second, creating another graph with the found trajectories, allowing particles to be matched when they are up to 5 frames apart, shown in pink.
In Figure A.13(a), we can see the ratio between the number of trajectories found by each algorithm and the ground truth number of trajectories. The closer this ratio is to 1, the better the algorithm performs. A similar measure is the one plotted in Figure A.13(b), where we plot the length ratio. Again, if this ratio is 1, it means that the average length of the trajectories found automatically is the same as the length of ground truth trajectories.
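The two ratios can be computed directly from the sets of trajectories, where a trajectory is any sequence of per-frame positions (the function name is illustrative):

```python
def trajectory_ratios(auto_tracks, gt_tracks):
    """Number ratio: automatic trajectory count over ground truth.
    Length ratio: mean automatic length over mean ground-truth length.
    Both ratios equal 1 for a perfect tracker."""
    mean_len = lambda ts: sum(len(t) for t in ts) / len(ts)
    n_ratio = len(auto_tracks) / len(gt_tracks)
    len_ratio = mean_len(auto_tracks) / mean_len(gt_tracks)
    return n_ratio, len_ratio
```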
Note that, as the percentage of missing data increases, the SH and LP1Lev have an increasing ratio of the number of trajectories and a decreasing ratio of length. This means that these algorithms are not capable of filling the gaps found in the data, and therefore the trajectories found are shorter as the missing data percentage increases.
The MLH performs better on both ratios, but cannot achieve the superior performance of LP 2 levels, which scores an almost perfect 1 on both ratios. This means that the LP is virtually unaffected by up to 10% of missing data.
In Figure A.13(c), we show the percentage of automatically found trajectories which contain two or more ground truth trajectories, i.e. two or more trajectories have been merged. In Figure A.13(d), on the other hand, we show the number of ground truth trajectories that are split into two or more. As can be seen, the MLH is superior to SH and LP1Lev in terms of splitting fewer trajectories, but the LP 2 levels improves the number of split trajectories by 60% to 70%. Though it also merges more trajectories than the other methods, the percentage of merged trajectories ranges from 4% to 7% on average, which means that overall the LP 2 levels is far superior to the other methods.
While the MLH performs much better than SH and LP 1 level, it is also computationally more expensive than the other methods, as can be seen in Figure A.14. Using the same experimental setup as before but with increasing number of objects, we observe that the computational cost increases exponentially with the number of objects to be tracked.
Since our datasets contain up to 25 objects per sequence, the algorithm takes only a few minutes to track each sequence.
Finally, we apply the four algorithms to the 6 sequences of real data and plot the average length of the trajectories, as well as the number of useful trajectories, i.e. those with a length of 25 frames or more. The LP 2 levels (pink) obtains a much larger number of these trajectories for each sequence, which means this method is able to extract much more useful information from each sequence than SH, MLH or LP1Lev.
The next sections are dedicated to several experimental results on the automatic classification of biological motion patterns. All trajectories used from now on are obtained automatically with the method described in Section A.3 and are classified manually by experts, which we refer to as our ground truth classification data.
A.5.5 Evaluation of the features used for classification
The experiments in this section have the purpose of determining the impact of each feature on the correct classification of each pattern. We perform leave-one-out tests on our training data which consists of 525 trajectories: 78 for wobbling, 181 for gyration, 202 for orientation and 64 for intensive surface probing. To perform these tests, all training sequences except one are used to train the HMM. The remaining sequence is then tested with the combined HMM and, using the Viterbi algorithm, the sequence of likely states is obtained. With this information, we can classify the sequence into one of the four patterns. For each test, we set one parameter to 1, which means that the corresponding feature has no effect in the classification process. For example, the first bar in blue labeled "No
Depth" is done with N_D = 1. The classification rate for each pattern (labeled from 1 to 4), as well as the mean for all the patterns (labeled Total), is recorded.
As we can see, the angles α and β (see section A.4.3) are the less relevant features, since the classification rate with and without these features is almost the same. The angle α depends on the z component, hence the lower resolution in z can result in noisy measurements. In this case, the trade-off is between having noisy angle data which can be unreliable, or an average measure which is less discriminative for classification.
The most distinguishing feature, according to Figure A.16, is the speed. Without it, the total classification rate decreases to 55.51% and down to just 11.05% for the Orientation pattern.
Based on the previous results, we could think of just using the depth and speed information for classification. The confusion matrix for these parameters is shown in Figure A.17. As we can see, patterns 3 and 4 are correctly classified. The common misclassifications occur when Orientation (1) is classified as Gyration (3), or when Wobbling (2) is classified as Spinning (4).
In the next section we discuss these misclassifications in detail.
A.5.6 Classification on other sequences
In this section, we present the performance of the algorithm when several patterns appear within one trajectory and also analyze the typical misclassifications. As test data, we use four sequences which contain 27, 40, 49 and 11 trajectories, respectively. We obtain classification rates of 100%, 85%, 89.8% and 100%, respectively. Note that for the third sequence, 60% of the misclassifications are only partial, which means that the model detects that there are several patterns but only one of them is misclassified. One of the misclassifications that can occur is that Wobbling (2) is classified as Spinning (4).

In general, the model has been proven to handle changes between patterns extremely well. In Figure A.19(a), we see the transition between Gyration (3) and Spinning (4). Trajectories which are too short to be classified are plotted in black.
A.6 Conclusions
In this chapter, we presented a fully-automatic method to analyze 4D digital in-line holographic microscopy videos of moving microorganisms by detecting the microorganisms, tracking their full trajectories and classifying the obtained trajectories into meaningful motion patterns.
The detection of the microorganisms is based on a simple blob detector and can be easily adapted for any microorganism shape. To perform multiple object tracking, we modified the standard Hungarian graph matching algorithm, so that it is able to overcome the disadvantages of the classical approach. The new multi-level Hungarian recovers from missing data, discards outliers and is able to incorporate geometrical information in order to account for entering and leaving particles. The automatically determined trajectories are compared with ground truth data, proving the method detects between 75% and 90% of the labeled particles. Nonetheless, we have seen that the proposed tracking approach does not outperform the Linear Programming formulation presented in Chapter 4.
For motion pattern classification, we presented a simple yet effective hierarchical design which combines multiple trained Hidden Markov Models (one for each of the patterns) and has proven successful to identify different patterns within one single trajectory.
The experiments performed on four full sequences result in a total classification rate between 83.5% and 100%.
As future work, we plan on including the motion pattern information into the tracking framework, in a similar fashion as we included social behaviors to improve pedestrian tracking in Chapter 5.
Tracking people is essential for many applications in the fields of security, driver assistance systems and animation, and an important basis for activity recognition. In complex environments with large crowds, occlusions and false detections occur regularly, and although considerable progress has been made in recent years, tracking remains a challenging task. Tracking is usually divided into two steps: first, detection, i.e. localizing the pedestrians in the image, and second, data association, i.e. linking the detections across all frames in order to form trajectories. Approaches to the data association problem often strive to develop new, more complex formulations in order to obtain more complete trajectories. These, in turn, shift the focus onto the optimization methods needed to solve them, while usually only basic information such as the distance between detections is used. This thesis focuses on data association. I argue that context-dependent information is available and can be incorporated efficiently into tracking algorithms, in the form of social and spatial context. As the tracking tool, I use Linear Programming as a global optimization method that finds a unique solution for all pedestrian trajectories and all frames. This is the perfect setup for integrating context-dependent information. First, I present an efficient method for capturing social and group behavior in order to improve monocular tracking. Taking this source of information into account leads to a much more accurate tracking result, especially in crowded scenarios.
Second, I present a formulation to perform 2D-3D assignments (reconstruction) and temporal assignments (tracking) in a single global optimization. I show that coupling reconstruction and tracking in a joint formulation leads to a substantial increase in accuracy.
FIGURE 1.1: Organization of the thesis

1.3 Papers of the author

In this section, the publications of the author are detailed by topic and chronological order. The core parts of the thesis are based on four main publications of the author:

[1] L. Leal-Taixé, M. Fenzi, A. Kuznetsova, Bodo Rosenhahn, Silvio Savarese. Learning an image-based motion context for multiple people tracking. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014.
[2] L. Leal-Taixé, M. Fenzi, A. Kuznetsova, Bodo Rosenhahn, Silvio Savarese. Multi-target tracking with context from interaction feature strings. IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPR). SUNw: Scene Understanding Workshop, June 2014.
[3] L. Leal-Taixé, Bodo Rosenhahn. Pedestrian interaction in tracking: the social force model and global optimization methods. Modeling, Simulation and Visual Analysis of Crowds: A Multidisciplinary Perspective. Springer, 2012.
[5] L. Leal-Taixé, G. Pons-Moll, B. Rosenhahn. Exploiting pedestrian interaction via global optimization and social behaviors. Theoretic Foundations of Computer Vision: Outdoor and Large-Scale Real-World Scene Analysis. Springer, 2012.
[6] L. Leal-Taixé, G. Pons-Moll, B. Rosenhahn. Branch-and-price global optimization for multi-view multi-object tracking. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2012.
[4] L. Leal-Taixé, G. Pons-Moll, B. Rosenhahn. Everybody needs somebody: modeling social and grouping behavior on a linear programming multiple people tracker.
This work was partially funded by the German Research Foundation, DFG projects RO 2497/7-1 and RO 2524/2-1, and the EU project AMBIO, and done in collaboration with the Institute of Functional Interfaces of the Karlsruhe Institute of Technology.

Digital in-line holography is a microscopy technique which has gotten an increasing amount of attention over the last few years in the fields of microbiology, medicine and physics, as it
[7] S. Maleschlijski, G.H. Sendra, A. Di Fino, L. Leal-Taixé, I. Thome, A. Terfort, N. Aldred, M. Grunze, A.S. Clare, B. Rosenhahn, A. Rosenhahn. Three dimensional tracking of exploratory behavior of barnacle cyprids using stereoscopy. Biointerphases. Journal for the Quantitative Biological Interface Data. Springer, 2012.
[8] S. Maleschlijski, L. Leal-Taixé, S. Weisse, A. Di Fino, N. Aldred, A.S. Clare, G.H. Sendra, B. Rosenhahn, A. Rosenhahn. A stereoscopic approach for three dimensional tracking of marine biofouling microorganisms. Microscopic Image Analysis with Applications in Biology (MIAAB), September 2011.
[9] L. Leal-Taixé, M. Heydt, A. Rosenhahn, B. Rosenhahn. Understanding what we cannot see: automatic analysis of 4D digital in-line holography data. Video Processing and Computational Video. Springer, July 2011.
[11] L. Leal-Taixé, M. Heydt, S. Weisse, A. Rosenhahn, B. Rosenhahn. Classification of swimming microorganisms motion patterns in 4D digital in-line holography data. 32nd Annual Symposium of the German Association for Pattern Recognition (DAGM), September 2010.
[10] L. Leal-Taixé, M. Heydt, A. Rosenhahn, B. Rosenhahn. Automatic tracking of swimming microorganisms in 4D digital in-line holography data. IEEE Workshop on Motion and Video Computing (WMVC), December 2009.
[12] A. Kuznetsova, L. Leal-Taixé, B. Rosenhahn. Real-time sign language recognition using a consumer depth camera. IEEE International Conference on Computer Vision Workshops (ICCV). 3rd IEEE Workshop on Consumer Depth Cameras for Computer Vision (CDC4CV), December 2013.
[13] M. Fenzi, L. Leal-Taixé, B. Rosenhahn, J. Ostermann. Class generative models based on feature regression for pose estimation of object categories. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2013.
[14] M. Fenzi, R. Dragon, L. Leal-Taixé, B. Rosenhahn, J. Ostermann. 3D object recognition and pose estimation for multiple objects using multi-prioritized RANSAC and model updating. 34th Annual Symposium of the German Association for Pattern Recognition (DAGM), August 2012.
[15] G. Pons-Moll, L. Leal-Taixé, B. Rosenhahn. Data-driven manifold for outdoor motion capture. Theoretic Foundations of Computer Vision: Outdoor and Large-Scale Real-World Scene Analysis. Springer, 2012.
[16] G. Pons-Moll, A. Baak, J. Gall, L. Leal-Taixé, M. Mueller, H.-P. Seidel, B. Rosenhahn. Outdoor human motion capture using inverse kinematics and von Mises-Fisher sampling. IEEE International Conference on Computer Vision (ICCV), November 2011.
[17] G. Pons-Moll, L. Leal-Taixé, T. Truong, B. Rosenhahn. Efficient and robust shape matching for model based human motion capture. 33rd Annual Symposium of the German Association for Pattern Recognition (DAGM), September 2011.
[18] L. Leal-Taixé, A.U. Coskun, B. Rosenhahn, D. Brooks. Automatic segmentation of arteries in multi-stain histology images. World Congress on Medical Physics and Biomedical Engineering, September 2009.

Atherosclerosis is a very common disease that affects millions of people around the world. Currently most of the studies conducted on this disease use Ultrasound Imaging (IVUS) to observe plaque formation, but these images cannot provide any detailed information of the specific morphological features of the plaque. Microscopic imaging using a variety of stains can provide much more information although, in order to obtain proper results, millions of images must be analyzed. In this work, we present an automatic way to find the Region of Interest (ROI) of these images, where the atherosclerotic plaque is formed. Once the image is well-segmented, the amount of fat and other measurements of interest can also be determined automatically.

Aside from the aforementioned publications, the author edited a post-proceedings book of the Dagstuhl Seminar organized in 2012:

[19] F. Dellaert, J.-M. Frahm, M. Pollefeys, B. Rosenhahn, L. Leal-Taixé. Theoretic Foundations of Computer Vision: Outdoor and Large-Scale Real-World Scene Analysis. Springer, April 2012.
Figures 2.1(a) and 2.1(b). We use the tracking-by-detection paradigm, which we detail in the following section.
FIGURE 2.1: Scenarios with different crowdedness levels. (a) Sparse: individuals are detected and tracked throughout the scene. (b) Semi-crowded: it is still possible to detect individuals, but occlusions and missed detections are very common, making tracking challenging.
Figure 2.3(b). Of course this method detects all kinds of moving objects, and therefore
(a) Input image. (b) Background subtraction using the pre-learned model. White pixels are classified as foreground, black pixels as background. (c) Final detected bounding boxes.
FIGURE 2.4: Overview of the feature extraction and object detection chain.
Figure 6: Our HOG detectors cue mainly on silhouette contours (especially the head, shoulders and feet). The most active blocks are centred on the image background just outside the contour. (a) The average gradient image over the training examples. (b) Each "pixel" shows the maximum positive SVM weight in the block centred on the pixel. (c) Likewise for the negative SVM weights. (d) A test image. (e) Its computed R-HOG descriptor. (f,g) The R-HOG descriptor weighted by respectively the positive and the negative SVM weights.
Figure 6 .
6Our HOG detectors cue mainly on silhouette contours (especially the head, shoulders and feet). The most active blocks are centred on the image background just outside the contour. (a) The average gradient image over the training examples. (b) Each "pixel" shows the maximum positive SVM weight in the block centred on the pixel. (c) Likewise for the negative SVM weights. (d) A test image. (e) It's computed R-HOG descriptor. (f,g) The R-HOG descriptor weighted by respectively the positive and the negative SVM weights.
Figure 6 .
6Our HOG detectors cue mainly on silhouette contours (especially the head, shoulders and feet). The most active blocks are centred on the image background just outside the contour. (a) The average gradient image over the training examples. (b) Each "pixel" shows the maximum positive SVM weight in the block centred on the pixel. (c) Likewise for the negative SVM weights. (d) A test image. (e) It's computed R-HOG descriptor. (f,g) The R-HOG descriptor weighted by respectively the positive and the negative SVM weights.
Figure 6 .
6Our HOG detectors cue mainly on silhouette contours (especially the head, shoulders and feet). The most active blocks are centred on the image background just outside the contour. (a) The average gradient image over the training examples. (b) Each "pixel" shows the maximum positive SVM weight in the block centred on the pixel. (c) Likewise for the negative SVM weights. (d) A test image. (e) It's computed R-HOG descriptor. (f,g) The R-HOG descriptor weighted by respectively the positive and the negative SVM weights.
Figure 6 .
6Our HOG detectors cue mainly on silhouette contours (especially the head, shoulders and feet). The most active blocks are centred on the image background just outside the contour. (a) The average gradient image over the training examples. (b) Each "pixel" shows the maximum positive SVM weight in the block centred on the pixel. (c) Likewise for the negative SVM weights. (d) A test image. (e) It's computed R-HOG descriptor. (f,g) The R-HOG descriptor weighted by respectively the positive and the negative SVM weights.
Figure 6 .
6Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'Our HOG detectors cue mainly on silhouette contours (especially the head, shoulders and feet). The most active blocks are centred on the image background just outside the contour. (a) The average gradient image over the training examples. (b) Each "pixel" shows the maximum positive SVM weight in the block centred on the pixel. (c) Likewise for the negative SVM weights. (d) A test image. (e) It's computed R-HOG descriptor. (f,g) The R-HOG descriptor weighted by respectively the positive and the negative SVM weights.
Figure 6 .
6Our HOG detectors cue mainly on silhouette contours (especially the head, shoulders and feet). The most active blocks are centred on the image background just outside the contour. (a) The average gradient image over the training examples. (b) Each "pixel" shows the maximum positive SVM weight in the block centred on the pixel. (c) Likewise for the negative SVM weights. (d) A test image. (e) It's computed R-HOG descriptor. (f,g) The R-HOG descriptor weighted by respectively the positive and the negative SVM weights.
FIGURE 2.5: The HOG detector is based mainly on silhouette contours. As we can see, the most active blocks are centered on the image background just outside the contour. (a) Average gradient image over the training samples. (b) Each "pixel" shows the maximum positive SVM weight in the block centered in the pixel. (c) Likewise for the negative SVM weights. (d) Test image. (e) HOG descriptor. (f) HOG descriptor weighted by positive SVM weights. (g) Likewise for negative weights. Images from [32].
FIGURE 2.6: Part-based model detector. (a) Example of a detection of a person. Green box represents the root filter detection while the yellow boxes represent the part detections. (b) Coarse template or root filter. (c) Templates of the parts. (d)
FIGURE 2.7: Example of detection results on one frame of the PETS2009 sequence. (a) Using background subtraction (Section 2.3.1). (b) Using HOG features and SVM learning (Section 2.3.2). (c) Using part-based model, HOG features and Latent SVM learning (Section 2.3.3).

In Figure 2.7(b), we show results of a HOG detector with SVM learning. Here, the major problems are double detections and the threshold of the score that determines what is a detection and what is not. As we can see in Figure 2.7(b), if the threshold is too low we can get a lot of false detections. The advantage is that we can detect half-occluded pedestrians like the orange pedestrian behind the pole. A part-based detector returns the result shown in Figure 2.7(c). As we can see, it is successful in finding partially occluded people or people who are close together. It only fails to detect the pedestrian occluded by the pole; this is mainly because one of the most distinctive parts for detection is that of the head and shoulders, forming an omega shape.
2.9. The close proximity to those objects often leads to double detections or can even lead to the complete

FIGURE 2.8: Example of detection results on the Town Center sequence. (a,c) Using HOG features and SVM learning (Section 2.3.2). (b,d) Using part-based model, HOG features and Latent SVM learning (Section 2.3.3).

FIGURE 2.9: Example of pedestrians walking with objects. This often leads to double detections or missed detections, but pedestrian-object interactions can be a useful source of information to improve tracking.
Figure 3.1(a). If there are feasible solutions to a linear program, then it is called a feasible program. A
FIGURE 3.1: Representations of Linear Programs with constraints represented by colored lines and half-spaces represented by colored regions. (a) Representation of a Linear Program with three constraints represented in blue, green and red, and its space of possible solutions (in yellow). (b) Optimal solution for the Linear Program of maximizing x_2 (yellow dot). (c) An unbounded Linear Program: when trying to maximize x_1 the solution space is infinite, as indicated by the arrows pointing towards the unbounded direction.
Figure 3.1(a).
Figure 3.1(b) we can see that the optimal solution, depicted as a yellow dot, is active for the green and red constraints.

FIGURE 3.2: (a) Half-space defined by the inequality constraint. (b) Hyperplane defined by the equality constraint.
Figure 3.2(a) as a half-space and the equality constraint in Figure 3.2(b) as a hyperplane.
FIGURE 3.3: Idea of the Simplex algorithm. Starting from a vertex of the polyhedron, we move along the edges until we reach the optimum solution.

Two distinct vertices x_1 and x_2 of P = {x ∈ R^n : Ax ≤ b} are adjacent if there exist n − 1 linearly independent inequalities of Ax ≤ b active at both x_1 and x_2.
Theorem 3.2. x_1 ≠ x_2 ∈ P are adjacent iff there exists c ∈ R^n such that the set of optimal solutions of max{c^T x : x ∈ P} is the line segment spanned by x_1 and x_2.
Theorem 3.3. If B is an optimal basis, then x* = A_B^{-1} b_B is an optimal solution of the LP.
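Theorem 3.3 suggests a brute-force way to solve tiny LPs: enumerate candidate bases, compute the basic solution A_B^{-1} b_B for each, and keep the best feasible one. The following Python sketch does this for a small two-dimensional LP of our own (not one of the LPs from the figures), using exact rational arithmetic:

```python
from fractions import Fraction as F
from itertools import combinations

# LP in the form max c.x  s.t.  A x <= b  (a toy 2-D example of our own)
A = [[1, 0], [0, 1], [1, 1], [-1, 0], [0, -1]]
b = [2, 2, 3, 0, 0]
c = [1, 1]

def solve_2x2(rows, rhs):
    """Solve the 2x2 system A_B x = b_B exactly; return None if singular."""
    (a11, a12), (a21, a22) = rows
    det = a11 * a22 - a12 * a21
    if det == 0:
        return None
    x = F(rhs[0] * a22 - a12 * rhs[1], det)
    y = F(a11 * rhs[1] - rhs[0] * a21, det)
    return (x, y)

best, best_val = None, None
for B in combinations(range(len(A)), 2):          # candidate bases
    pt = solve_2x2([A[i] for i in B], [b[i] for i in B])
    if pt is None:
        continue
    # keep the basic solution only if it is feasible (i.e. a vertex of P)
    if all(A[i][0] * pt[0] + A[i][1] * pt[1] <= b[i] for i in range(len(A))):
        val = c[0] * pt[0] + c[1] * pt[1]
        if best_val is None or val > best_val:
            best, best_val = pt, val

print(best, best_val)
```

This is of course exponential in the number of constraints; Simplex avoids the full enumeration by walking from basis to adjacent basis.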
FIGURE 3.4: LP with 4 constraints identified by their coefficients {a_1, a_2, a_3, a_4} and depicted by green lines.
FIGURE 3.5: (a) Move in the direction d as shown in the proof of Theorem 3.4. (b) Reaching constraint k by moving ε_k in the direction d.

Now we want to move from x* in the direction given by d, as depicted in Figure 3.5(a).
FIGURE 3.6: The motivation to find the dual of a problem is to find an upper bound on the objective function of the primal.
Lemma 3.9 (Farkas' Lemma). A system of inequalities Ax ≤ b is infeasible if and only if there exists a vector λ with elements λ_i ≥ 0 such that λ^T A = 0 and λ^T b = −1.
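A Farkas certificate can be checked mechanically. The sketch below uses a toy infeasible system of our own (x ≤ 1 together with x ≥ 2, written as −x ≤ −2) and verifies that λ = (1, 1) certifies infeasibility:

```python
# Infeasible system (our own toy example): x <= 1 and -x <= -2 (i.e. x >= 2).
A = [[1], [-1]]
b = [1, -2]
lam = [1, 1]  # candidate Farkas certificate, all lambda_i >= 0

# lambda^T A = 0 and lambda^T b = -1 together prove A x <= b has no solution:
# any feasible x would give 0 = (lambda^T A) x <= lambda^T b = -1.
lamA = [sum(lam[i] * A[i][j] for i in range(2)) for j in range(1)]
lamb = sum(lam[i] * b[i] for i in range(2))
print(lamA, lamb)
```

The certificate makes infeasibility easy to verify even when it is hard to see directly from the constraints.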
variables x_B = (x_{B_1}, . . . , x_{B_m}) are called basic variables, while x_I = (x_{I_1}, . . . , x_{I_n}) are called non-basic variables. There is a solution associated with each dictionary, which is obtained by setting all non-basic variables to zero and reading out the values of the basic variables from the equations in the dictionary. If all variables of the solution have values which respect the non-negativity constraints, the dictionary is said to be feasible. The indices of the basic variables form the basis B of our solution, a concept we presented in previous sections. As we can see, there is already an advantage of representing LPs with dictionaries: by reading out the resulting values of basic variables, we directly obtain candidate solutions to the problem.
FIGURE 3.7: Overview of the Simplex method with dictionaries.
and the corresponding objective value z = 0. Let us consider the possible entering variables. Remember that an entering variable should increase the objective value; therefore, we are looking for non-basic variables with positive coefficients c_j > 0. The variable with the highest coefficient in our example is x_1. Ideally, we want to increase this variable as much as possible so as to increase the value of z as much as possible. In the current dictionary, x_1 has value 0. Let us set x_1 = 10. What happens to the basic variables? If we look at the equations of the dictionary, we see that if x_1 = 10, x_2 = 0, x_3 = 0, then x_4 = −15, which violates the non-negativity
Algorithm 5 Phase II: optimization phase
Input: a feasible dictionary D
while There exists an entering variable do
  Select the entering variable with the largest c_j ≥ 0
  Select a corresponding leaving variable, the one with the lowest ratio b_i / (−a_ij) that limits the value of the entering variable, x_Ij ≤ b_i / (−a_ij)
  if There exists no leaving variable then
    The problem is unbounded.
  else
    Perform pivoting to obtain D'.
  end if
end while
Return dictionary D' as final.
FIGURE 3.8: Feasible region of the LP depicted in orange. The solution associated with the initial dictionary, (0,0), is not feasible. During initialization, we move towards a feasible initial solution, (2,2) in this example.

a moment and focus on the new constraints created by adding the variable x_0. They would look like:
Now that we have a clear definition of Linear Programs and their important properties, and we know how to solve a Linear Program with Simplex, we move towards the graphical model of a polyhedron. Going from LP representations to graphical model representations and vice versa is certainly useful since, for example, certain LPs will be solved faster by using network flow solvers like k-shortest paths.

An undirected graph G = (V, E) consists of a finite set V of nodes or vertices and a set E of edges, where each edge e ∈ E is a two-element subset of vertices e = (u, v), with u ≠ v ∈ V. An example of such a graph is shown in Figure 3.9(a), where V = {1,
FIGURE 3.9: Basic concepts of graph theory.
FIGURE 3.10: Conversion from polyhedron to graph. (a) Polyhedron P. (b) Graph representation of P, vertices in orange, edges in blue.

The diameter of G_P is the diameter of P. If a version of the Simplex algorithm requires only a polynomial number of iterations (in both n and m), then the diameter of each polyhedral graph is polynomial.
FIGURE 3.11: Path between u and v that satisfies the inequality active at both vertices.
Figure 3.12.

FIGURE 3.12: Identifying vertices with their feasible bases.
FIGURE 3.14: Bipartite graph representation of the job assignment problem.

(number inside the red rectangle) so that the value of the edge that connects two nodes is smaller than or equal to the sum of the node values. Using the concept of w-vertex cover we can prove the optimality of a matching M.

Lemma 3.10. Let G = (V, E) be a graph and let w ∈ R_{≥0} be edge weights. If M is a matching of G and if y is a w-vertex cover of G, then w(M) ≤ Σ_{v∈V} y_v.

This lemma is equivalent to the weak duality of Linear Programs presented in Theorem 3.7. We will know that the matching M and the w-vertex cover y are both optimal if their values are equal, as is the case of Figure 3.14, where w(M) = 15 = Σ_{v∈V} y_v.
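Lemma 3.10 is easy to verify by brute force on a small instance. The graph, weights and cover below are our own toy example (not the graph of Figure 3.14): we enumerate every matching and check that no matching weight exceeds the value of the cover:

```python
from itertools import combinations

# Toy graph of our own: edges with weights, and a candidate w-vertex cover y.
edges = {(0, 1): 4, (1, 2): 3, (2, 3): 5, (0, 3): 2}
y = [2, 2, 3, 2]  # y_u + y_v >= w(u,v) must hold for every edge

assert all(y[u] + y[v] >= w for (u, v), w in edges.items())  # y is a w-vertex cover

def is_matching(es):
    used = [v for e in es for v in e]
    return len(used) == len(set(used))  # no vertex is covered twice

best = 0
for k in range(len(edges) + 1):
    for es in combinations(edges, k):
        if is_matching(es):
            best = max(best, sum(edges[e] for e in es))

print(best, sum(y))  # weak duality: best <= sum(y)
```

In this toy instance the two values actually coincide (both are 9), so the matching and the cover certify each other's optimality, exactly as in the 15 = 15 example of Figure 3.14.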
Figure 3.15(a) we see an example graph; in this case, the numbers identify the edges and not the nodes. We show two possible matchings, one marked by red edges and the other by green edges. The vectors corresponding to these matchings are: … x_M, for example, would not be a possible matching vector, since the edges 1 and 4 share a node, and so do edges 4 and 7. One of the characteristics of a vector that represents a matching is that one node can only be connected once by an edge. We describe this property formally now.

FIGURE 3.15: Example graphs.

For v ∈ V we denote the set of edges incident to v by δ(v) = {e ∈ E : v ∈ e}. The set {x_M : M is a matching of G} is a set of feasible solutions that satisfies

∀v ∈ V : Σ_{e∈δ(v)} x_e ≤ 1
∀e ∈ E : x_e ∈ {0, 1},    (3.25)

where x_e is an indicator that tells us if an edge is used in the matching (1) or not (0).

Let us look at the simple graph of Figure 3.15(b), where nodes are denoted by black numbers and edges by red ones. The conditions defined before in Equation (3.
FIGURE 3.16: Representation of an integer program. The conditions define the red polyhedron; solutions inside it will be feasible. The green arrow points in the direction of maximization of our optimization problem, while the orange dot marks the optimal integer-valued solution.
FIGURE 3.17: Representation of computational complexity classes. Some examples of common problems belonging to each of the classes are given.

Returning to the max-weight matching problem, we formulate it as an Integer Program with the constraints shown in Equation (3.26). The variables x_e are the indicators of whether an edge belongs to a matching or not, as we said before. Recall that the goal of the max-weight matching problem was to maximize the sum of the w_e, which are the weights of the edges of the matching solution. Note that the condition x_e ∈ {0, 1} is now expressed as x_e ≥ 0 because, together with the other condition, we only allow the variables to be bounded between 0 and 1, and if they can only take integer values, then they can only take the value 0 or 1.

max Σ_{e∈E} w_e x_e
s.t. ∀v ∈ V : Σ_{e∈δ(v)} x_e ≤ 1
∀e ∈ E : x_e ≥ 0, x ∈ Z^{|E|}    (3.27)

max Σ_{e∈E} w_e x_e
s.t. ∀v ∈ V : Σ_{e∈δ(v)} x_e ≤ 1
∀e ∈ E : x_e ≥ 0, x ∈ R^{|E|}    (3.28)

We can convert the problem into a Linear Program by changing the condition marked in red in Equation (3.27), the integrality condition, and simply considering a larger set of feasible solutions. By doing so, we obtain the formulation of Equation (3.28). Since we are considering a larger set of solutions (Z ⊂ R), it follows that the solution of the relaxed problem will always be an upper bound of the integer program.

Let us look at a quick example illustrating the difference between the integer program solution and the linear relaxation solution. We take the graph of Figure 3.15(b), where all weights are 1 and the maximum number of active edges connected to a node is also 1. In this setting, the maximum cardinality of a matching is 1, since the use of any edge invalidates the use of all other edges. If we consider the Linear Program relaxation though, we can find a solution like x_1 = 1/2, x_2 = 1/2, x_3 = 1/2. This would make the objective value equal to 3/2, which is indeed larger than 1.
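The triangle example can be checked by enumeration. The sketch below uses our own labelling of the triangle of Figure 3.15(b): it finds the integer optimum by trying all {0,1} assignments, and then verifies that the fractional point (1/2, 1/2, 1/2) is feasible for the relaxation with a strictly larger objective:

```python
from itertools import product

# Triangle graph in the spirit of Figure 3.15(b): three edges, all weights 1;
# every pair of edges shares a vertex, so a matching uses at most one edge.
edges = [(0, 1), (1, 2), (0, 2)]

def feasible(x):
    # at most one unit of "edge mass" per vertex: sum over incident edges <= 1
    return all(sum(x[i] for i, e in enumerate(edges) if v in e) <= 1
               for v in range(3))

# integer program: x_e in {0, 1}
ip_opt = max(sum(x) for x in product([0, 1], repeat=3) if feasible(x))

# the fractional point (1/2, 1/2, 1/2) is feasible for the LP relaxation
x_frac = (0.5, 0.5, 0.5)
lp_val = sum(x_frac)

print(ip_opt, lp_val)
```

The gap between 1 and 3/2 is exactly the integrality gap discussed above; on bipartite graphs, total unimodularity closes this gap.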
Figure 3.18.

Theorem 3.11. The maximum weight of a matching is at most the minimum value of a w-vertex cover.

FIGURE 3.18: Relationship between the maximum weighted matching problem, its integer and LP relaxation versions, and the minimum w-vertex cover, also with its integer and LP relaxation versions.
FIGURE 3.19: Node-edge incidence matrix A_G of the graph represented on the left. The green numbers identify the nodes, while the red ones identify the edges.
If we develop the determinant along that column, then all coefficients are 0 except for one, and we can derive the following expression: det(B) = ±1 · det(B'), where B' is a (k−1)×(k−1) submatrix obtained by deleting the column and row marked by an orange line.
partite graph consists of two sets of vertices (for example, F and M), and that an edge can only connect a vertex from set F to a vertex of the other set M. If we put all the rows of the first set on top and all the rows of the second set at the bottom, we can see that for each column we will have exactly one 1 above the orange line and another below it. If we then add up all the rows above the line, we will obtain an all-1's vector. We will obtain the same if we add up all rows below the line. This means that these rows are not linearly independent, making det(B) = 0.
Theorem 3.13. If A ∈ Z^{m×n} is totally unimodular and b ∈ Z^m, then every vertex of the polyhedron P = {x ∈ R^n : Ax ≤ b} is integral.
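For a small matrix, total unimodularity can be checked by brute force: enumerate every square submatrix and test that its determinant lies in {−1, 0, 1}. The sketch below does this for the node-edge incidence matrix of the bipartite graph K_{2,2}, an example of our own choosing:

```python
from itertools import combinations
from fractions import Fraction

# Node-edge incidence matrix of the bipartite graph K_{2,2} (our own example):
# rows = nodes {u1, u2, v1, v2}, columns = edges (u1,v1), (u1,v2), (u2,v1), (u2,v2)
A = [[1, 1, 0, 0],
     [0, 0, 1, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1]]

def det(M):
    """Exact determinant via Gaussian elimination with rational arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return Fraction(0)           # singular submatrix
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            sign = -sign
        d *= M[col][col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
    return sign * d

def is_totally_unimodular(A):
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                sub = [[A[r][c] for c in cols] for r in rows]
                if det(sub) not in (-1, 0, 1):
                    return False
    return True

print(is_totally_unimodular(A))
```

By contrast, the incidence matrix of an odd cycle (e.g. a triangle) has a 3×3 submatrix with determinant ±2, which is exactly why the bipartiteness assumption matters in the argument above.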
Theorem 3.15. Let G = (V, E) be a bipartite graph and let w ∈ N_0 be edge weights. The maximum weight of a matching is equal to the minimum value of a w-vertex cover.
Theorem 3.16. Let G = (V, E) be a bipartite graph. The maximum cardinality of a matching of G is equal to the minimum cardinality of a vertex cover of G.
So far we have talked about undirected graphs G = (V, E), where V = {1, . . . , n} is the set of vertices or nodes and M ∈ {0, 1}^{|V|×|V|} represents their adjacency matrix. Every pair of nodes connected by an edge has a 1 entry in the matrix, i.e.:
Figure 3.20(a); note that the adjacency matrix is symmetric since the edges are undirected.

A directed graph, on the other hand, is a tuple D = (V, A), where V is a finite set of vertices or nodes and A is the set of arcs or directed edges of D. We denote a directed edge by its defining tuple (u, v) ∈ A. The nodes u and v are called tail and head of (u, v), respectively. In the example of Figure 3.20(b), the edge (3, 4) would have 3 as tail and 4 as head. The adjacency matrix of directed graphs is composed by:
FIGURE 3.20: Undirected vs. directed graphs.

For our multiple people tracking problem, we use weighted directed graphs, where each edge has a weight related to it. Let D = (V, A) be a directed graph without cycles, where c : A → R are the lengths or costs of the arcs. The length of a walk W = v_0, . . . , v_k is the sum of the lengths or costs of its arcs:
FIGURE 3.21: Weighted directed graph. Shortest path from s to t with length 4 marked in green.

In Figure 3.21 we can see an example of a weighted directed graph. We can see, for example, that the cost of the walk s, a, b, c is 3 + 1 − 2 = 2. The shortest path from s to t is marked in green and has cost s, a, b, d, t = 3 + 1 + 3 − 3 = 4.

The shortest path problem can be formally defined as: given a directed graph with edge costs and a designated node s, compute d(s, v) for each v ∈ V. This is an NP-hard problem in general but solvable in polynomial time if there are no negative cycles. A cycle is defined as a walk v_0, v_1, . . . , v_k with v_0 = v_k.

There are many solvers to find the shortest path in a graph, e.g. Dijkstra's algorithm or the Bellman-Ford algorithm. Like Dijkstra's algorithm, Bellman-Ford is based on the principle of relaxation, in which an approximation to the correct distance is gradually replaced by more accurate values until eventually reaching the optimum solution. In both algorithms, the approximate distance to each vertex is always an overestimate of the true distance, and is replaced by the minimum of its old value and the cost of a newly found path.
depict the procedure to compute the values d_{k+1}(t) assuming that the d_k(t) are precomputed. The idea of the algorithm is to iteratively set d_{k+1}(t) to the smallest value possible. Since both d_k(t) and d_k(u) + c(u, t) are upper bounds of d_{k+1}(t), we make sure that it is set to its smallest possible value at each iteration.

Algorithm 6 Bellman-Ford algorithm
initialize ∀t ∈ V \ {s}: d_0(t) = ∞, d_0(s) = 0
for k = 0 to n − 2 do
  for each t ∈ V do
    d_{k+1}(t) := d_k(t)
  end for
  for each (u, t) ∈ A do
    if d_k(u) + c(u, t) < d_{k+1}(t) then
      d_{k+1}(t) := d_k(u) + c(u, t)
    end if
  end for
end for
if ∃t ∈ V with d_n(t) < d_{n−1}(t) then
  D has a negative cycle
end if

Let us consider the following example as depicted in Figure 3.22. We start the computation of distances as explained in Algorithm 6, by first initializing d_0 to 0 for the node s
bottom. We keep computing distances for k = 2, 3 until we reach k = 4, where the algorithm converges and all values of d_5 are equal to d_4.
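The relaxation loop of Algorithm 6 can be written down compactly. The Python sketch below uses only the arcs of Figure 3.21 that are recoverable from the text above, so it approximates the original figure rather than reproducing it exactly:

```python
# A minimal Bellman-Ford in the spirit of Algorithm 6, run on the arcs of
# Figure 3.21 mentioned in the text (the original figure may contain more).
INF = float("inf")

def bellman_ford(nodes, arcs, s):
    d = {v: INF for v in nodes}
    d[s] = 0
    for _ in range(len(nodes) - 1):          # n - 1 rounds of relaxation
        for (u, t), cost in arcs.items():
            if d[u] + cost < d[t]:
                d[t] = d[u] + cost
    # one extra round: any improvement means a negative cycle is reachable
    negative_cycle = any(d[u] + c < d[t] for (u, t), c in arcs.items())
    return d, negative_cycle

nodes = ["s", "a", "b", "c", "d", "t"]
arcs = {("s", "a"): 3, ("a", "b"): 1, ("b", "c"): -2, ("b", "d"): 3, ("d", "t"): -3}
d, neg = bellman_ford(nodes, arcs, "s")
print(d["t"], d["c"], neg)
```

On these arcs the routine recovers the distances stated above: d(s, c) = 2 and d(s, t) = 4, with no negative cycle detected.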
Theorem 3.17. Given D = (V, A), s ∈ V: d_n = d_{n−1} for n = |V| iff D does not have a cycle of negative length that is reachable from s.

Theorem 3.18. Given D = (V, A), s ∈ V, and suppose no negative cycle is reachable from s.
of incremental flow about some given feasible solution, which would be the equivalent of an intermediate solution. For this, we can define a new additional network called the residual network [63]. The advantage is that the formulations of a problem in the original network and in the residual one are actually equivalent.
FIGURE 3.23: How to construct a residual network.

An interesting property of residual networks is that a flow f is feasible in the network G if and only if its corresponding flow f', defined by f'(i, j) − f'(j, i) = f(i, j) − f_0(i, j) and f'(i, j) · f'(j, i) = 0, is feasible in the residual network G(f_0). Furthermore, c f = c f' + c f_0.
for i = 1 to k do
1. Compute the shortest path from node s to node t, using the Bellman-Ford algorithm of Section 3.8.1.
2. Update the node potentials π := π − d.
3. δ := min[ e(s), −e(t), min{r(i, j) : (i, j) ∈ P} ].
Figure 3.24(a), where we find the first shortest path s − b − t. With this we compute the node potentials shown in Figure 3.24(b) and create the residual graph from sending flow through the path s − b − t. We then go to step 5 of Algorithm 7 and compute the new reduced costs as shown in Figure 3.24(c). Now we can start the cycle again by computing a new shortest path s − a − b − t, new potentials and residual graph (Figure 3.24(d)), new reduced costs and the final shortest path s − a − t as shown in Figure 3.24(e). The three shortest paths found are shown in red, green and black in Figure 3.24(f).
Figure 3.24. In order to convert it to a node-disjoint successive shortest path algorithm, one can divide each node into two nodes and insert an extra edge in the middle with capacity equal to 1. This procedure is done anyway for multiple people tracking as explained in Chapter 4, so we can directly compute the k-shortest paths as explained in Algorithm 7.

FIGURE 3.24: Example of how the k-shortest path algorithm works. (a) Graph with zero potentials; first shortest path found. (b) Compute new node potentials and create the residual graph. (c) Reduce costs according to new potentials and find new shortest path. (d) Compute new node potentials and create the residual graph. (e) Reduce costs according to new potentials and find new shortest path. (f) Three shortest paths found on this graph.

For multiple people tracking, we build a graph with the detections and add a source node s and a sink node t, where all flows start and end. The algorithm iterates k times over the following two steps: (i) find the shortest path in the network; (ii) create a residual network and augment the flow along the path. Each time, a flow of 1 is pushed through the network (δ = 1), which can be interpreted as one trajectory.
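The two-step iteration just described can be sketched as a successive-shortest-paths routine on a unit-capacity graph. The example graph below is a toy of our own, not the graph of Figure 3.24, and for brevity we omit the node potentials of Algorithm 7 and simply rerun Bellman-Ford on the residual graph at every iteration:

```python
# A compact successive-shortest-paths sketch on a toy unit-capacity graph of
# our own: each augmentation pushes delta = 1 unit of flow, i.e. extracts one
# s-t path (one trajectory in the tracking interpretation).
INF = float("inf")

def k_shortest_flows(nodes, edges, s, t, k):
    # residual capacities and costs; reverse arcs carry negated cost
    cap, cost = {}, {}
    for (u, v), c in edges.items():
        cap[(u, v)], cap[(v, u)] = 1, 0
        cost[(u, v)], cost[(v, u)] = c, -c
    total = 0
    for _ in range(k):
        # Bellman-Ford on the residual graph
        d = {v: INF for v in nodes}
        prev = {}
        d[s] = 0
        for _ in range(len(nodes) - 1):
            for (u, v) in cap:
                if cap[(u, v)] > 0 and d[u] + cost[(u, v)] < d[v]:
                    d[v] = d[u] + cost[(u, v)]
                    prev[v] = u
        if d[t] == INF:
            break                        # fewer than k s-t paths exist
        total += d[t]
        v = t                            # augment one unit along the path
        while v != s:
            u = prev[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
    return total

nodes = ["s", "a", "b", "t"]
edges = {("s", "a"): 1, ("s", "b"): 2, ("a", "b"): 1, ("a", "t"): 2, ("b", "t"): 1}
print(k_shortest_flows(nodes, edges, "s", "t", 2))
```

With potentials (step 2 of Algorithm 7), the residual costs become non-negative and Dijkstra can replace Bellman-Ford, which is the main efficiency gain of the full algorithm.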
An important characteristic of the problem is whether it is a maximization or a minimization, which we can change with the function glp_set_obj_dir.

Now we are ready to define the actual LP. An important property of the GLPK library is that the index 0 is not used; therefore, we always start inputting information from index 1.

The information of the objective function is included in the columns or structural variables of the problem object. The structural variables contain both the coefficients c of the LP as well as the limits of the variables x, for example, the non-negativity constraints. In our example:

glp_add_cols(lp, 3); /* Create columns: structural variables */

The values b of the constraints are set using the rows or auxiliary variables of the problem object. In our example, we have to represent one equality and one inequality.

Finally, we only need to define the matrix A, which represents the coefficients of the variables in the constraints. This information is introduced using three variables: ia and ja, which contain the indices i and j of each matrix element, and the corresponding ar, which contains the actual value of the coefficient a_ij.

double *ar;
int *ia, *ja;
ar = new double[...];
...
ia[3] = 1; ja[3] = 3; ar[3] = 1;  /* a[1,3] =  1 */
ia[4] = 2; ja[4] = 1; ar[4] = 2;  /* a[2,1] =  2 */
ia[5] = 2; ja[5] = 2; ar[5] = -1; /* a[2,2] = -1 */

Once we have defined the whole LP, we are ready to proceed to the solver. We can obtain the objective value of the optimal solution as well as the values of x, and finally clear the problem from memory:

glp_delete_prob(lp); /* clear problem from memory */

There is also a MEX package to use the GLPK library in MatLab, which can be downloaded at http://glpkmex.sourceforge.net. It can be used in a similar way as the native MatLab LP solver, which we explain next.
ub = [inf, inf, inf];
[x, fval] = linprog(c, A, b, Aeq, beq, lb, ub); % Solve!

The values of the variables at the optimum can be found in x and the final objective value is fval, which are both outputs of the function linprog.
and the goal of multiple object tracking is to find the set of trajectories T* = {T_k} that best explains the detections. This is equivalent to finding the T that maximizes the a-posteriori probability given the set of detections O, which is known as the maximum a-posteriori or MAP problem.
followed by o_{k_m}^{t_m} in the trajectory.
(4.5) is the objective function and Eq. (4.6) represents the constraints. c_1, c_2, . . . , c_n denote the known cost coefficients and f_1, f_2, . . . , f_n are the decision variables to be determined. To convert our problem into a linear program, we linearize the objective function by defining a set of flow flags f = {f_in(i), f_out(i), f_t(i, j), f_det(i)} which are limited to the values {0, 1}. Flow flag f_t(i, j) is defined as: … Links are only allowed if ∆f ≤ F_max, where ∆f is the frame number difference between observations o_j^{t_j} and o_i^{t_i} and F_max is the maximum allowed frame gap. Flow flag f_det(i) is defined as: … Flow flag f_out(i) is: … f_in(i) (or f_out(i)
of a graph with the special source s and sink t nodes, and 6 detections which are represented by two nodes each: the beginning b_i and the end e_i.

The graph contains two special nodes, the source s and the sink t; all flow that goes through the graph starts at the s node and ends at the t node. Thereby, each unit of flow represents a trajectory T_k, and the path that it follows indicates which observations belong to each T_k. Each observation o_i is represented with two nodes, the beginning node b_i ∈ V and the end node e_i ∈ V (see Figure 4.1). A detection edge connects b_i and e_i. Below we detail the three types of edges present in the graphical model and the cost for each type:

Link edges. The edges (e_i, b_j) connect the end nodes e_i with the beginning nodes b_j
lyzed in Section 5.6.2.3.

Detection edges. The edges (b_i, e_i) connect the beginning node b_i and end node e_i, with cost C_det(i). If all costs of the edges are positive, the solution to the minimum-cost

FIGURE 4.2: Blue = normalized histogram of speeds learned from training data. Red = probability distribution if cost depends linearly on the velocity. Green = probability distribution if the relation of cost and velocities is expressed by Equation (4.16). A V_max = 7 m/s is used in the experiments.
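The structure of this graphical model can be sketched in a few lines. In the snippet below the detections, frame numbers and F_max are invented, and all edge costs are omitted; it only illustrates how every detection becomes a (b_i, e_i) pair wired to the source, the sink, and temporally compatible successors:

```python
# Structural sketch of a tracking graph in the spirit of Figure 4.1:
# every detection o_i becomes a pair (b_i, e_i); the source s links to every
# b_i, every e_i links to the sink t, and link edges (e_i, b_j) are created
# only within the allowed frame gap F_max. Detections and F_max are made up.
detections = [("o1", 1), ("o2", 1), ("o3", 2), ("o4", 2), ("o5", 3)]  # (id, frame)
F_max = 1

nodes = ["s", "t"]
edges = []
for det_id, _ in detections:
    nodes += [f"b_{det_id}", f"e_{det_id}"]
    edges.append((f"b_{det_id}", f"e_{det_id}"))    # detection edge
    edges.append(("s", f"b_{det_id}"))              # entry edge
    edges.append((f"e_{det_id}", "t"))              # exit edge

for id_i, t_i in detections:
    for id_j, t_j in detections:
        if 0 < t_j - t_i <= F_max:                  # forward in time, small gap
            edges.append((f"e_{id_i}", f"b_{id_j}"))  # link edge

print(len(nodes), len(edges))
```

Running the k-shortest-paths solver of Chapter 3 on such a graph, with the costs defined in this section attached to each edge type, yields the k trajectories.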
FIGURE 5.1: The three terms of the social force model that are included in the tracking framework.
FIGURE 5.2: Diagram of the dependencies for each observation o_k^t.
velocity assumption is used to estimate the positions of all pedestrians at time t + ∆t. From these estimated positions, the repulsion acceleration they exert on each other can be computed as shown in Eq. (5.4). For a pedestrian i, only non-members of his group (g_m ≠ g_i) who are less than 1 m away, that is p
Figure 5.3 we plot the probability distributions computed using different terms. Note, this is just for visualization purposes, since we do not compute the probability for each point on the scene, but only for the positions where the detector has fired. There are 4 pedestrians in the scene, the purple one and 3 green ones walking in a group. As shown in Figure 5.3(b), if we only use the estimated positions (yellow heads) given the previous speeds, there is a collision between the purple pedestrian and the green one marked with a 1. The avoidance term shifts the probability mode to a more plausible position.
P_i(m, n), then g_m = g_n. Therefore, for every observation o_i^t, we will have a group label g_i which indicates to which group the observation belongs, if any. If several pedestrians form a group, they tend to keep a similar speed; therefore, if o_i^t belongs to a group, we can use the mean speed of all the other members of the group to estimate the next position for o_i^t:

FIGURE 5.3: Three green pedestrians walk in a group; the estimated positions in the next frame are marked by yellow heads. The purple pedestrian's linearly estimated position (yellow head) clearly interferes with the trajectory of the group. Representation of the probability map (blue is 0, red is 1) for the purple pedestrian's next position using: (a) only distances, (b) only SFM (constant velocity assumption and avoidance term), (c) only GR (considering the purple pedestrian belongs to the group), (d) distances+SFM and (e) distances+SFM+GR.
Figure 5.3(d) we show the combined probability of the distance and SFM information, which narrows the space of probable positions. Finally, Figure 5.3(e) represents the combined probability of DIST, SFM and GR. As we can see, the space of possible locations for the purple pedestrian
ans' velocities, which can only be obtained if we already have the trajectories. We solve this in an expectation-maximization (EM) fashion, where the parameters to estimate are the flow flags f_i and the latent variables are the velocities and group flags. The proposed solver is presented in Algorithm 8; in the first iteration, trajectories are estimated only with the information defined in Section 4.3, while for the rest of the iterations, the SFM and GR are also used. The algorithm stops when the trajectories do not change or when a maximum number of iterations M_i is reached.

Algorithm 8 Iterative optimization
while T_i ≠ T_{i−1} and i ≤ M_i do
  if i == 1 then
    1.1. Create the graph using only DIST information
  else
    1.2. Create the graph using DIST, SFM and GR information
  end if
  2. Solve the graph to find T_i
  3. Compute velocities and groups given T_i
end while
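The control flow of Algorithm 8 can be mimicked with a stubbed solver; in the sketch below, solve_graph is a placeholder that merely stands in for building and solving the network, not the actual optimization:

```python
# Toy skeleton of the iterative solver in Algorithm 8: first pass uses
# DIST only, later passes use DIST+SFM+GR, and we stop on convergence or
# after M_i iterations. solve_graph is a stand-in, not the real solver.
M_i = 6

def solve_graph(use_social_model, prev):
    # placeholder: here the solution simply stabilizes once the social
    # model is switched on, to exercise the convergence test
    return "T_dist" if not use_social_model else "T_social"

prev, history = None, []
for i in range(1, M_i + 1):
    curr = solve_graph(use_social_model=(i > 1), prev=prev)
    history.append(curr)
    if curr == prev:          # trajectories unchanged -> converged
        break
    prev = curr

print(history)
```

The point of the skeleton is only the EM-style alternation: solve for trajectories given the latent velocities and groups, then recompute those latents from the new trajectories.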
plained in Algorithm 8. Looking at the results on the PETS 2009 dataset in Figure 5.4(b),
FIGURE 5.4: Tracking accuracy (black) and precision (magenta) obtained for the Town Center dataset (left column) and the PETS 2009 dataset (right column) given varying parameter values.

In Figures 5.4(c) and 5.4(d) we see a clear trend in which the results are very bad when we underestimate the pedestrians' maximum speed, since we are artificially splitting trajectories. The results converge when the maximum
FIGURE 5.5: Four frames of the PETS2009 sequence (separation of 9 frames), showing several occlusions, both created by the obstacle in the scene and among pedestrians. All occlusions can be overcome with the proposed method.

With this setting, an extra 7% of false positive groups are detected. All experiments are performed with 6 iterations, a batch of 100 frames, V_max = 7 m/s, F_max = 10, alpha = 0.5 and B_j = 0.3. Using the ground truth (GT) pedestrian positions as the baseline for our experiments, we perform three types of tests, missing data, outliers and noise, and compare the results obtained with:
• DIST: proposed network model with distances
• SFM: adding the Social Force Model (Section 5.4)
• SFM+GR: adding SFM and grouping behavior (Section 5.4)

Missing data. This experiment shows the robustness of our approach given missed detections. This is evaluated by randomly erasing a certain percentage of detections from the GT set. The percentages evaluated are [0, 4, 8, 12, 16, 20] of the total number of detections over the whole sequence. As we can see in Figure 5.7, both SFM and SFM+GR increase the tracking accuracy when compared to DIST.

FIGURE 5.6: (a) Wrong match with DIST, corrected with SFM. (b) Missing detections cause the matches to shift due to the global optimization; correct result with SFM. (c) Missed detection for subject 3 on two consecutive frames. With SFM, subject 2 in the first frame (yellow arrow) is matched to subject 3 in the last frame (yellow arrow), creating an identity switch; correct result with grouping information. Top row: tracking results with only DIST. Bottom row: tracking results with SFM+GR. Green = correct trajectories, Blue = observation missing from the set, Red = wrong match.

FIGURE 5.7: Experiments are repeated 50 times with random generation of outliers, missing data and noise; the average, maximum and minimum results are plotted. Blue star = results with DIST, Green diamond = results with SFM, Red square = results with SFM+GR. From top to bottom: experiments with simulated missing data, with outliers, and with random noise.
Noise. The variances of the noise tested are [0, 0.002, 0.004, 0.006, 0.008, 0.01] of the size of the observed scene. As expected, group information is the most robust against noise: if the position of pedestrian A is not correctly estimated, other pedestrians in the group will contribute to the estimation of the true trajectory of A.
In Figure 5.6(b) we can see how missing data affects the matching results. The matches are shifted; this chain reaction is caused by the global optimization. In both cases, the use of SFM allows the tracker to extrapolate the necessary detections and find the correct trajectories. Finally, in Figure 5.6(c) we plot the wrong result caused by track 3 having two consecutive missing detections. Even with SFM, track 2 is switched with 3, since the switch does not create extreme changes in velocity. In this case, the grouping information is key to obtaining good tracking results. More results are shown in Figure 5.10.
FIGURE 5.8: Predictive approaches [29, 97] (first row) vs. the proposed method (second row).

We compare against the following trackers:
• [29]: tracker based on a Kalman Filter which includes social behavior.
• [97]: tracker based on a Kalman Filter which includes social and grouping behavior.
• [4]: globally optimal tracking based on network flow linear programming, including social and grouping behavior.
FIGURE 5.9: Results of the proposed method on the PETS2009 dataset, view 1. DA = detection accuracy, DP = detection precision, TA = tracking accuracy, TP = tracking precision.
FIGURE 5.10: Visual results on the BIWI dataset (Section 5.6.3). The scene is heavily crowded; social and grouping behavior are key to obtaining good tracking results.
FIGURE 5.11: Visual results on the PETS2009 dataset (Section 5.6.4.2).

FIGURE 5.12: Visual results on the Town Center dataset (Section 5.6.4.1).
In Figure 6.2 an example of the proposed graph with three cameras and two frames is shown. The first layer, the 2D layer, depicted in Figure 6.2(a), contains the 2D detections (circular nodes) and the flow constraints, and is where trajectories are matched across time. The second layer, the 3D layer, depicted in Figure 6.2(b), contains the putative 3D locations (square nodes) obtained from the 2D detections of each pair of cameras. It is designed as a cascade of prizes and favors consistent matching decisions across camera views.
FIGURE 6.2: An example of the proposed multi-layer graph structure with three cameras and two frames. Let u and v represent a 2D detection j, and let P be a 3D reconstructed point.
Let (A_1, b_1) represent the set of hard constraints, Eq. (6.13), and (A_2, b_2) the set of easy constraints, Eqs. (6.8)-(6.12), which are defined independently for each object n = 1 ... N_obj. The idea behind Dantzig-Wolfe decomposition is that the set T* = {f in T : f integer}, with T bounded, is represented by a finite set of points, i.e., a bounded convex polyhedron is represented as a linear combination of its extreme points. The master problem is then defined as:
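In generic notation the master problem takes the standard Dantzig-Wolfe form (a sketch: the symbol P_n for the set of extreme points of object n's easy-constraint polyhedron, and the inequality form of the coupling constraint, are assumptions; the thesis' own equation numbering applies):

```latex
\min_{\lambda \ge 0} \;
  \sum_{n=1}^{N_{obj}} \sum_{p \in P_n} \left( c^\top f_p \right) \lambda_p
\quad \text{s.t.} \quad
  \sum_{n=1}^{N_{obj}} \sum_{p \in P_n} \left( A_1 f_p \right) \lambda_p \le b_1,
\qquad
  \sum_{p \in P_n} \lambda_p = 1 \quad \forall n,
```

where each f_p is an extreme point of the easy polyhedron {f : A_2 f <= b_2} of the corresponding object, so the hard (coupling) constraints remain in the master while the easy constraints are handled implicitly by the pricing subproblems.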
FIGURE 6.5: Results on the PETS sequence, tracking with 2 camera views. An identity switch appears when using Reconstruction-Tracking, while the proposed method is able to correctly track the pedestrian even behind the pole.
FIGURE 6.6: Results on the PETS sequence, tracking with 3 camera views. Although there are clear 2D-3D inaccuracies, the proposed method is able to track the red pedestrian, who is occluded in 2 cameras during 22 frames.
FIGURE 6.7: Robustness evaluation: simulation of an increasing rate of missing data 6.7(a) and an increasing rate of outliers 6.7(b), 6.7(c).
This last result is particularly important for pose tracking, as ID switches result in totally erroneous pose reconstructions.

FIGURE 6.8: Proposed method with 20% of missing data. Note that the trajectories are assigned the same ID in both views.
his or her destination. Nonetheless, in a real-life scenario this path is affected by all kinds of obstacles, static or moving. This effect becomes increasingly apparent in crowded scenarios; therefore, we argued that it is far more natural to include the environment and other moving targets in the multiple people tracking task. Most pedestrian movements and reactions to the environment are captured by what is called the Social Force Model. We presented a method to efficiently introduce social and grouping behavior into the Linear Programming tracker. The observation that pedestrian interaction is persistent rather than transient made it clear that the probabilistic formulation fully exploits the power of behavioral models, as opposed to standard predictive and recursive approaches such as Kalman filtering. Experiments on several public datasets revealed the importance of using social interaction models for tracking under difficult conditions, such as crowded scenes with missed detections, false alarms and noise. Social information proved to be especially useful in keeping the correct identity of a pedestrian, which is in the end the main goal of tracking.

Even though the inclusion of social awareness in tracking improved trajectories significantly, there is only so much a tracker can do given a certain set of detections. Pedestrian pose, illumination or, most commonly, occlusions can make it hard to detect pedestrians under certain conditions. Using input from multiple cameras is the most common way to increase the chances of detecting all pedestrians, especially in surveillance scenarios where the same space is filmed from several angles. Nonetheless, information coming from multiple cameras is typically combined in an ad-hoc fashion. Furthermore, object locations in the images are temporally correlated by the system dynamics and geometrically constrained by the spatial configuration of the cameras.
These two sources of structure have typically been exploited separately, but splitting the problem into two phases has several obvious disadvantages, because the available evidence is not fully exploited. For example, if one object is temporarily occluded in one camera, both data association for reconstruction and tracking become ambiguous and underconstrained when considered separately. If, on the other hand, the evidence is considered jointly, temporal correlation can potentially resolve reconstruction ambiguities, and vice versa.
to study complex movements of microorganisms. The huge amount of information that we can extract from holographic images makes it necessary to have an automatic method to analyze this complex 4D data. Our system performs the detection of 3D positions, the tracking of complete trajectories and the classification of motion patterns.

FIGURE A.1: (a) The input data: the projections obtained with digital in-line holography (inverted colors for better visualization); sample trajectory in red. (b) The output data we want to obtain from each volume: the classification into four motion patterns, colored according to speed: orientation (1), wobbling (2), gyration (3) and intensive surface probing (4).
The hologram is reconstructed with the Kirchhoff-Helmholtz transform, K(r) = ∫_S d²ξ I(ξ) exp(ik r·ξ/|ξ|), where the integration extends over the 2D surface of the screen with coordinates ξ = (X, Y, L), L is the distance from the source (pinhole) to the center of the detector (CCD chip), I(ξ) is the contrast image (hologram) on the screen obtained by subtracting the images with and without the object present, and k is the wave number, k = 2π/λ.

FIGURE A.3: Illustration of the reconstruction process. From the hologram, a stack of XY projections is obtained at several depths and, from those, the final 3 projections (XY, XZ and YZ) are obtained.
FIGURE A.4: (a) Enhancement of the shape of the microorganisms. (b) Reduction of the noise.
changing the filter. After this, we use thresholding on each projection to obtain the positions of candidate particles in the image. The final 3D positions (Figure A.6, green box labeled "Candidate particles") are determined by thresholding each projection XY, XZ and YZ to find the particles in each image and crossing the information of the three projections. Once we have computed the 3D positions of all microorganisms in all frames, we are interested in linking these 3D positions in order to find their complete 3D trajectories over time (see Figure A.5).
Given the 3D positions obtained at each time frame, we use the method in Section A.3 to obtain the full trajectory of each microorganism.
Table A.1 summarizes some of the advantages and disadvantages of the Hungarian algorithm. In the following sections, we present how to overcome the three disadvantages: (a) is solved with the multi-level Hungarian method explained in Section A.3.2, (b) is solved with the IN/OUT states of Section A.3.1.1, and finally a solution for (c) is presented in Section A.3.1.2 as a maximum cost restriction.
ADVANTAGES
• Finds a global solution for all vertices
• The cost matrix is versatile
• Easy to solve: bipartite matching is the simplest of all graph problems

DISADVANTAGES
• Cannot handle missing vertices (a)
• Cannot handle entering or leaving particles (b)
• No discrimination of matches, even if the cost is very high (c)
As shown in Figure A.7, we introduce the IN/OUT states in the cost matrix by adding extra rows and columns. If we are matching the particles in frame f to particles in frame f + 1, we add as many columns as particles in frame f and as many rows as particles in frame f + 1. This way, all the particles have the possibility to enter/leave the scene. Additionally, this allows us to obtain a square matrix, needed for the matching algorithm, even if the number of particles is not the same in consecutive frames.

FIGURE A.7: Change in the cost matrix to include the IN/OUT states. Each particle is represented by a different color. The value of each extra element added is the distance between the particle position and the closest volume boundary.
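The augmentation of the cost matrix can be sketched as follows (plain Python; the constant BIG stands in for forbidden entries, the brute-force solver replaces the Hungarian algorithm only for these toy sizes, and in the real system the per-particle entry/exit costs are, as stated above, distances to the closest volume boundary):

```python
from itertools import permutations

BIG = 1e6  # large cost standing in for a forbidden assignment

def augmented_cost(dist, out_cost, in_cost):
    """Square cost matrix with IN/OUT states (cf. Figure A.7).

    Row i, column n_next+i:  particle i of frame f leaves  (cost out_cost[i]).
    Row n_prev+j, column j:  particle j of frame f+1 enters (cost in_cost[j]).
    Bottom-right block:      dummy-to-dummy matches, free.
    """
    n_prev, n_next = len(dist), len(dist[0]) if dist else 0
    size = n_prev + n_next
    C = [[BIG] * size for _ in range(size)]
    for i in range(n_prev):
        C[i][:n_next] = list(dist[i])
        C[i][n_next + i] = out_cost[i]
    for j in range(n_next):
        C[n_prev + j][j] = in_cost[j]
        for i in range(n_prev):
            C[n_prev + j][n_next + i] = 0.0
    return C

def best_assignment(C):
    """Brute-force minimum-cost perfect matching (stand-in for Hungarian)."""
    n = len(C)
    return min(permutations(range(n)),
               key=lambda cols: sum(C[i][cols[i]] for i in range(n)))

# One particle far from every detection: it is cheaper to let it leave
# (OUT) and let the detection enter (IN) than to force the distant match.
C = augmented_cost([[100.0]], out_cost=[5.0], in_cost=[5.0])
print(best_assignment(C))  # -> (1, 0): row 0 exits, dummy row 1 takes column 0
```

Because every real particle always has a finite-cost exit column and every detection a finite-cost entry row, the augmented matrix is square and a perfect matching always exists, which is exactly what the Hungarian algorithm requires.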
In Figure A.8(a), the Hungarian algorithm finds a wrong matching, since the result is completely altered by the entering/leaving particles. With the introduction of the IN/OUT state feature, the particles are now correctly matched (see Figure A.8(b)) and the ones which enter/leave the scene are identified as such.

FIGURE A.8: Representation of the particles in frame t_1 (left) and t_2 (right). The lines represent the matchings. (a) Wrongly matched. (b) Correctly matched as a result of the IN/OUT state feature.
S_1, S_2, ..., S_N, with the condition that the system can only be in one of the states at any given time. The only observable variables are the sequence of symbols O = o_1, o_2, ..., o_M produced by a set of stochastic processes. Every HMM can be defined by the triple λ = (Π, A, B). Π = {π_i} is the vector of initial state probabilities. Each transition from S_i to S_j can occur with a probability a_ij, where Σ_j a_ij = 1; A = {a_ij} is the state transition matrix. In addition, each state S_i generates an output o_k with probability b_ik = P(o_k|S_i); B = {b_ik} is the emission matrix. There are three main problems related to HMMs:

1. The evaluation problem: for a sequence of observations O, compute the probability P(O|λ) that an HMM λ generated O. This is solved using the Forward-Backward algorithm.

2. The estimation problem: given O and an HMM λ, recover the most likely state sequence S_1, S_2, ..., S_N that generated O. This is solved by the Viterbi algorithm, a dynamic programming algorithm that computes the most likely sequence of hidden states in O(N²T) time.

3. The optimization problem: find the parameters of the HMM λ which maximize P(O|λ) for some output sequence O. A local maximum likelihood can be derived efficiently using the Baum-Welch algorithm.
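The estimation problem (problem 2) admits a compact implementation. Below is a log-space Viterbi sketch in plain Python, written against the definitions above (λ = (Π, A, B)); it is illustrative and independent of the thesis code:

```python
import math

def viterbi(pi, A, B, obs):
    """Most likely hidden-state sequence for observations `obs` under
    the HMM lambda = (pi, A, B), computed in log space in O(N^2 T)."""
    N = len(pi)
    delta = [math.log(pi[s]) + math.log(B[s][obs[0]]) for s in range(N)]
    backptr = []                              # backpointers per time step
    for o in obs[1:]:
        step, new_delta = [], []
        for s in range(N):
            best = max(range(N), key=lambda r: delta[r] + math.log(A[r][s]))
            step.append(best)
            new_delta.append(delta[best] + math.log(A[best][s]) + math.log(B[s][o]))
        backptr.append(step)
        delta = new_delta
    state = max(range(N), key=lambda s: delta[s])
    path = [state]
    for step in reversed(backptr):            # follow backpointers
        state = step[state]
        path.append(state)
    return path[::-1]

# Two sticky states, each preferring its own symbol:
pi = [0.9, 0.1]
A  = [[0.9, 0.1], [0.1, 0.9]]
B  = [[0.9, 0.1], [0.1, 0.9]]
print(viterbi(pi, A, B, [0, 0, 1, 1]))  # -> [0, 0, 1, 1]
```

Working in log space avoids the numerical underflow that the product of many small probabilities would otherwise cause on long observation sequences such as full trajectories.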
Orientation. Trajectory 1 in Figure A.1(b) is an example of the Orientation pattern. This pattern typically occurs in solution and far away from surfaces. The most important characteristics of the pattern are the high swimming speed (a mean of approximately 150 µm/s) and a straight swimming motion with moderate turning angles. Wobbling. Trajectory 2 shows the Wobbling pattern, and its main characteristic is a much slower mean velocity of around 50 µm/s. The spores assigned to this pattern often change their direction of movement and only swim in straight lines for very short distances, which leads to zig-zag trajectories. Gyration. Trajectory 3 is an example of the Gyration pattern. This pattern is extremely important for the exploration of surfaces, as occasional surface contacts are observable. The behavior in solution is similar to the Orientation pattern. Since in this pattern spores often switch between swimming towards and away from the surfaces, it can be interpreted as a pre-stage to surface probing. Intensive surface probing and Spinning. Trajectory 4 is an example of the Spinning pattern, which involves swimming in circles close to the surface within a very limited region. After a certain exploration time, the spores can either permanently attach to the surface or start swimming in circular patterns again, looking for a better position. This motion is characterized by decreased mean velocities of about 30 µm/s in combination with a higher tendency to change direction (see Figure A.1(b), case 4).
To combine the four features into a single emission symbol, we use Equation (A.3), where J is the final symbol we are looking for, J_{1..4} are the symbols for each of the features, each ranging in [1..N_{J_{1..4}}], and N_{J_{1..4}} are the number of symbols per feature.
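One plausible form of such a combination is a mixed-radix encoding, so that every combination of feature symbols maps to a distinct emission symbol (a sketch only; the exact Eq. (A.3) in the thesis may differ):

```python
def combine_symbols(symbols, sizes):
    """Fold per-feature symbols J_1..J_4 (1-based, J_k in [1..N_Jk])
    into a single 1-based emission symbol J via mixed-radix encoding,
    so every feature combination maps to a distinct J in [1..prod(N_Jk)]."""
    J, radix = 1, 1
    for s, n in zip(symbols, sizes):
        assert 1 <= s <= n, "symbol out of range for its feature"
        J += (s - 1) * radix
        radix *= n
    return J

sizes = [4, 1, 3, 3]  # e.g. N_v, N_alpha, N_beta, N_D as used later in the text
print(combine_symbols([1, 1, 1, 1], sizes))  # -> 1
print(combine_symbols([4, 1, 3, 3], sizes))  # -> 36 (= 4*1*3*3, the largest symbol)
```

With this encoding the HMM emission alphabet has exactly the product of the per-feature symbol counts, which is why reducing, say, N_alpha from 3 to 1 shrinks the model considerably.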
FIGURE A.11: (a) Combined HMM created to include changes between patterns within one trajectory. (b) Transition matrix of the combined HMM.

In Figure A.11(a) we can see a representation of the combined model, while the design of the transition matrix is depicted in Figure A.11(b). The four individual HMMs for each of the patterns are placed in parallel (blue). In order to deal with transitions, we create two special states: the START and the SWITCH state.
FIGURE A.12: (a) Three separate trajectories are detected with the standard Hungarian (blue dashed line); merged trajectory detected with our method (with a smoothing term, red line); missing data spots marked by arrows. (b), (c) Ground truth trajectories (blue dashed line); trajectories automatically detected with our method (red line).

TABLE A.2: Comparison of the number of particles detected by thresholding, by the multi-level Hungarian and the ground truth for the 6 examined sequences (S1-S6).
FIGURE A.13: Comparative experiments with a simulation of 15 randomly moving particles and an increasing amount of missing data, from 2% to 10% (N = 100 repetitions of the experiment are performed and average results shown). Compared methods: standard Hungarian (black), multi-level Hungarian (blue), Linear Programming with maximum matching distance of 1 frame (cyan) and Linear Programming 2 levels (pink).
In Figure A.15(a), we see that the MLH and LP 2 levels obtain much longer trajectories. Obtaining long trajectories is especially important for conducting accurate motion analysis, as will be presented in the experiments in the next sections. To this end, we plot in Figure A.15(b) the number of useful trajectories obtained by each method. A useful trajectory for motion analysis is defined as having a certain minimum length.

FIGURE A.14: Computational time vs. number of objects to be tracked. Compared methods: standard Hungarian (black), multi-level Hungarian (blue), Linear Programming with maximum matching distance of 1 frame (cyan) and Linear Programming 2 levels (pink).

FIGURE A.15: Experiments on real data comparing 4 methods: standard Hungarian (black), multi-level Hungarian (blue), Linear Programming with maximum matching distance of 1 frame (cyan) and Linear Programming 2 levels (pink).
FIGURE A.16: Classification rate for parameters N = 4, N_v = 4, N_α = 3, N_β = 3 and N_D = 3. In each experiment, one of the features is not used; in the last experiment all features are used.

The first experiment that we conduct (see Figure A.16) determines the effect of each parameter on the recognition of the patterns. The number of symbols and states can only be determined empirically, since they depend heavily on the amount of training data. In our experiments, we found the best set of parameters to be N = 4, N_v = 4, N_α = 3, N_β = 3 and N_D = 3, for which we obtain a classification rate of 83.86%.
The angles α and β appear to carry redundant information for classification. But if N_α = N_β = 1, the rate goes down to 79.69%; that means that we need one of the two measures for correct classification. The final set of parameters used for all experiments is N = 4, N_v = 4, N_α = 1, N_β = 3 and N_D = 3, for which we obtain a classification rate of 83.5%. This rate is very close to the result with N_α = 3, with the advantage that we now use fewer symbols to represent the same information. Several tests lead us to choose N = 4 as the number of states of the HMM.

FIGURE A.17: Confusion matrix; parameters N = 4, N_v = 4, N_α = 1, N_β = 3 and N_D = 3.
FIGURE A.18: (a) Wobbling (pattern 2) misclassified as Spinning (4). (b) Gyration (3) misclassified as Orientation (1). Color coded according to speed as in Figure A.1(b).
A common confusion is between Wobbling (2) and Spinning (4). Both motion patterns have similar speed values, and the only truly differentiating characteristics are the depth and the angle α. Since we use 3 symbols for depth, the fact that the microorganism touches the surface or swims near the surface leads to the same classification. That is the case in Figure A.18(a), in which the model chooses the Spinning pattern (4) because the speed is very low (dark blue), and sometimes the speed in the Wobbling pattern can be a little higher (light blue). As commented in Section A.4.2, Gyration (3) and Orientation (1) are two linked patterns: the behavior of Gyration in solution is similar to the Orientation pattern, which is why the misclassification shown in Figure A.18(b) can happen. In this case, since the microorganism does not interact with the surface and the speed of the pattern is high (red color), the model detects it as an Orientation pattern. We note that this pattern is difficult to classify, even for a trained expert, since the transition from Orientation into Gyration usually occurs gradually as spores swim towards the surface and interrupt the swimming pattern (which is very similar to the Orientation pattern) with short surface contacts.
In Figure A.19(b), color coded according to classification, we can see how the HMM classifies the trajectory segments.

FIGURE A.20: Complete volume with patterns: Orientation (1, red), Wobbling (2, green), Gyration (3, yellow). The Spinning (4) pattern is not present in this sequence.
The tracking-by-detection paradigm is depicted in Figure 2.2. There are two main components: the detector and the tracker. State-of-the-art detectors are discussed in Section 2.3, while the tracker is the main focus of the thesis. As we will see later on in this chapter, detectors are not perfect and often return false alarms or miss pedestrians.
FIGURE 2.2: Tracking-by-detection paradigm. Firstly, an independent detector is applied to all image frames to obtain likely pedestrian detections. Secondly, a tracker is run on the set of detections to perform data association, i.e., link the detections to obtain full trajectories.
FIGURE 2.4: Overview of the HOG detection chain: compute gradients on the input image, weighted vote into spatial and orientation cells, contrast normalization over blocks, collection of HOGs over detection windows, and a linear SVM acting as person/non-person classifier.
[Residue of reproduced paper pages: CVPR 2005 proceedings footer and the first page of "A Discriminatively Trained, Multiscale, Deformable Part Model" by Felzenszwalb, McAllester and Ramanan.]
since all rows of A with indices outside of B will not contribute to the dot product. Since A_B is invertible, we can then write λ_B = c_B A_B^{-1}. From this, two theorems emerge.
TABLE 3.1: Possible combinations of properties of the primal and dual problems.
Now we have to formally define the conditions under which the Bellman-Ford algorithm terminates (converges).

TABLE 3.2: Distances computed by the Bellman-Ford algorithm.
As we can see in Table 6.1, our algorithm [6] obtains the best results.

Method (cameras)                      DA    TA    DP    TP    miss
Zhang et al. [27] (1)                 68.9  65.8  60.6  60.0  28.1
Greedy tracking-reconstruction (2)    51.9  49.4  56.1  54.4  31.6
Greedy reconstruction-tracking (2)    64.6  57.9  57.8  56.8  26.8
Tracking-reconstruction (2)           66.7  62.7  59.5  57.9  24.0
Reconstruction-tracking (2)           69.7  65.7  61.2  60.2  25.1
Leal-Taixé et al. [6] (2)             78.0  76    62.6  60    16.5
Tracking-reconstruction (3)           48.5  46.5  51.1  50.3  20
Reconstruction-tracking (3)           56.6  51.3  54.5  52.8  23.5
Leal-Taixé et al. [6] (3)             73.1  71.4  55.0  53.4  12.9
Berclaz et al. [28] (5)               76    75    62    62    −

TABLE 6.1: PETS2009 L1 sequence. Comparison of several methods tracking on a variable number of cameras (indicated in parentheses).
The holographic microscope requires only a divergent wavefront, which is produced by diffraction of laser light from a pinhole; a CCD chip finally captures the hologram. The holographic microscope setup follows directly Gabor's initial idea [137] and has been implemented for laser radiation by Xu et al. [138]. A hologram recorded without the presence of particles, called the source, is subtracted from each hologram. This is used to reduce the constant illumination background and other artifacts; there are filtering techniques to further reduce this noise.

FIGURE A.2: Schematic setup for a digital in-line holographic experiment, consisting of the laser, a spatial filter to create the divergent light cone, the objects of interest (e.g. microorganisms) and a detector which records the hologram.
FIGURE A.6: Diagram of the algorithm described in Section A.3.2. From the projections filtered with LoG and thresholded, candidate particle positions are found; adding and deleting iterations alternate until no more changes occur, yielding the final particles, which the multi-level Hungarian links into trajectories.
TABLE A.1: Summary of the advantages and disadvantages of the Hungarian algorithm.
• Given some conditions, add the missing particles; let the algorithm converge until no particles are added (Section A.3.2.2).
• On the same table and given some conditions, erase the outliers; let the algorithm converge until no particles are deleted (Section A.3.2.2).
FIGURE A.9: Represented frames: [i-2, i-1, i, i+1, i+2]. Levels of the multi-level Hungarian.

• Level 5: matches particles in frame i ± 1 with frame i ± 2.

A.3.2.2 Conditions to add/delete particles

Once all the levels are applied hierarchically, a table with the matching information is created. The table has a column for each of the 5 frames [i-2 ... i+2] and a row for each detected trajectory, as shown in Figure A.10. This table will be used to interpolate missing detections and delete false alarms. To change the table information, we use two iterations: the adding iteration and the deleting iteration, which appear in Figure A.6 as blue boxes. During the adding iteration, we look for empty cells in the table where there is likely to be a particle. A new particle position is added if, and only if, two conditions are met:

1. The trajectory (row) consists of at least 3 particles. Trajectories have continuity while noise points do not.
as shown in Table A.3, where we can clearly see that the average length of a trajectory is greatly improved with the multi-level Hungarian, which is crucial since long trajectories give us more information on the behavior of the particles.

        S1  S2  S3  S4   S5   S6
Set A    3   5   5   4    6    7
Set B   19  31  27  23   38   23
Set C   58  54  54  70  126  105

TABLE A.3: Comparison of the trajectories' average length.
              S1    S2    S3    S4    S5    S6
Missing (%)  8.9  20.7  19.1  23.6  11.5  12.9
Extra (%)   54.9  34.1  46.5  13.3  25.8  74.6

TABLE A.4: Missing labeled and extra automatic particles.
Finally, we can obtain the probability of each transition (e.g., from Orientation to Spinning) for a given dataset under study. This is extremely useful for experts to understand the behavior of a certain microorganism under varying conditions.
[1] L. Leal-Taixé, M. Fenzi, A. Kuznetsova, B. Rosenhahn, and S. Savarese. Learning an image-based motion context for multiple people tracking. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.

[2] L. Leal-Taixé, M. Fenzi, A. Kuznetsova, B. Rosenhahn, and S. Savarese. Multi-target tracking with context from interaction feature strings. IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPR), SUNw: Scene Understanding Workshop, 2014.

[3] L. Leal-Taixé and B. Rosenhahn. Modeling, Simulation and Visual Analysis of Crowds: A Multidisciplinary Perspective, chapter Pedestrian interaction in tracking: the social force model and global optimization methods. The International Series in Video Computing. Springer Berlin Heidelberg, 2013.

[4] L. Leal-Taixé, G. Pons-Moll, and B. Rosenhahn. Everybody needs somebody: Modeling social and grouping behavior on a linear programming multiple people tracker. IEEE International Conference on Computer Vision (ICCV) Workshops, 1st Workshop on Modeling, Simulation and Visual Analysis of Large Crowds, pages 120-127, 2011.

[5] L. Leal-Taixé, G. Pons-Moll, and B. Rosenhahn. Outdoor and Large-Scale Real-World Scene Analysis, volume 7474 of Lecture Notes in Computer Science, chapter Exploiting pedestrian interaction via global optimization and social behaviors. Springer Berlin Heidelberg, 2012.

[6] L. Leal-Taixé, G. Pons-Moll, and B. Rosenhahn. Branch-and-price global optimization for multi-view multi-target tracking. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1987-1994, 2012.

[7] S. Maleschlijski, G. H. Sendra, A. Di Fino, L. Leal-Taixé, I. Thome, A. Terfort, N. Aldred, M. Grunze, A. S. Clare, B. Rosenhahn, and A. Rosenhahn. Three dimensional tracking of exploratory behavior of barnacle cyprids using stereoscopy. Biointerphases, 7(50), August 2012.

[8] S. Maleschlijski, L. Leal-Taixé, S. Weisse, A. Di Fino, N. Aldred, A. S. Clare, G. H. Sendra, B. Rosenhahn, and A. Rosenhahn. A stereoscopic approach for three dimensional tracking of marine biofouling microorganisms. Microscopic Image Analysis with Applications in Biology (MIAAB), 2011.

[9] L. Leal-Taixé, M. Heydt, A. Rosenhahn, and B. Rosenhahn. Video Processing and Computational Video, chapter Understanding what we cannot see: automatic analysis of 4D digital in-line holography data, pages 52-76. Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2011.

[10] L. Leal-Taixé, M. Heydt, A. Rosenhahn, and B. Rosenhahn. Automatic tracking of swimming microorganisms in 4D digital in-line holography data. IEEE Workshops on Motion and Video Computing (WMVC), pages 1-8, 2009.

[11] L. Leal-Taixé, M. Heydt, S. Weisse, A. Rosenhahn, and B. Rosenhahn. Classification of swimming microorganisms motion patterns in 4D digital in-line holography data. German Conference on Pattern Recognition (GCPR), pages 283-292, 2010.

[12] A. Kuznetsova, L. Leal-Taixé, and B. Rosenhahn. Real-time sign language recognition using a consumer depth camera. IEEE International Conference on Computer Vision (ICCV) Workshops, 3rd Workshop on Consumer Depth Cameras for Computer Vision (CDC4CV), 2013.

[13] M. Fenzi, L. Leal-Taixé, B. Rosenhahn, and J. Ostermann. Class generative models based on feature regression for pose estimation of object categories. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 755-762, 2013.

[14] M. Fenzi, R. Dragon, L. Leal-Taixé, B. Rosenhahn, and J. Ostermann. 3D object recognition and pose estimation for multiple objects using multi-prioritized RANSAC and model updating. German Conference on Pattern Recognition (GCPR), pages 123-133, 2012.

[15] G. Pons-Moll, L. Leal-Taixé, J. Gall, and B. Rosenhahn. Outdoor and Large-Scale Real-World Scene Analysis, volume 7474 of Lecture Notes in Computer Science, chapter Data-driven manifold for outdoor motion capture, pages 305-328. Springer Berlin Heidelberg, 2012.
Outdoor human motion capture using inverse kinematics and von misesfisher sampling. G Pons-Moll, A Baak, J Gall, L Leal-Taixé, M Mueller, H.-P Seidel, B Rosenhahn, IEEE International Conference on Computer Vision (ICCV). G. Pons-Moll, A. Baak, J. Gall, L. Leal-Taixé, M. Mueller, H.-P.Seidel, and B. Rosen- hahn. Outdoor human motion capture using inverse kinematics and von mises- fisher sampling. IEEE International Conference on Computer Vision (ICCV), pages 1243-1250, 2011.
Efficient and robust shape matching for model based human motion capture. G Pons-Moll, L Leal-Taixé, T Truong, B Rosenhahn, German Conference on Pattern Recognition (GCPR). G. Pons-Moll, L. Leal-Taixé, T. Truong, and B. Rosenhahn. Efficient and robust shape matching for model based human motion capture. German Conference on Pattern Recognition (GCPR), pages 416-425, 2011.
Automatic segmentation of arteries in multi-stain histology images. L Leal-Taixé, A U Coskun, B Rosenhahn, D Brooks, World Congress on Medical Physics and Biomedical Engineering. 254L. Leal-Taixé, A. U. Coskun, B. Rosenhahn, and D. Brooks. Automatic segmenta- tion of arteries in multi-stain histology images. World Congress on Medical Physics and Biomedical Engineering, 25(4):2000-2003, 2009.
Outdoor and Large-Scale Real-World Scene Analysis. F Dellaert, J.-M Frahm, M Pollefeys, B Rosenhahn, L Leal-Taixé, Lecture Notes in Computer Science. SpringerF. Dellaert, J.-M. Frahm, M. Pollefeys, B. Rosenhahn, and L. Leal-Taixé, editors. Outdoor and Large-Scale Real-World Scene Analysis. Lecture Notes in Computer Sci- ence. Springer Berlin Heidelberg, April 2012.
Pets 2009 dataset: Performance and evaluation of tracking and surveillance. J M Ferryman, J.M. Ferryman. Pets 2009 dataset: Performance and evaluation of tracking and surveillance, 2009. URL http://www.cvg.rdg.ac.uk/PETS2009/.
Stable multi-target tracking in real-time surveillance video. B Benfold, I Reid, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). B. Benfold and I. Reid. Stable multi-target tracking in real-time surveillance video. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3457- 3464, 2011.
Density-aware person detection and tracking in crowds. M Rodriguez, I Laptev, J Sivic, J Y Audibert, IEEE International Conference on Computer Vision (ICCV). M. Rodriguez, I. Laptev, J. Sivic, and J. Y. Audibert. Density-aware person de- tection and tracking in crowds. IEEE International Conference on Computer Vision (ICCV), pages 2423-2430, 2011.
Tracking in unstructured crowded scenes. M Rodriguez, S Ali, T Kanade, IEEE International Conference on Computer Vision (ICCV). M. Rodriguez, S. Ali, and T. Kanade. Tracking in unstructured crowded scenes. IEEE International Conference on Computer Vision (ICCV), pages 1389-1396, 2009.
Unsupervised bayesian detection of independent motion in crowds. G J Brostow, R Cipolla, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). G. J. Brostow and R. Cipolla. Unsupervised bayesian detection of independent motion in crowds. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 594-601, 2006.
Floor fields for tracking in high density crowd scenes. S Ali, M Shah, European Conference on Computer Vision (ECCV). S. Ali and M. Shah. Floor fields for tracking in high density crowd scenes. Euro- pean Conference on Computer Vision (ECCV), pages 1-14, 2008.
Data-driven crowd analysis in videos. M Rodriguez, J Sivic, I Laptev, J Y Audibert, IEEE International Conference on Computer Vision (ICCV). M. Rodriguez, J. Sivic, I. Laptev, and J.Y. Audibert. Data-driven crowd analysis in videos. IEEE International Conference on Computer Vision (ICCV), pages 1235-1242, 2011.
Global data association for multi-object tracking using network flows. L Zhang, Y Li, R Nevatia, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). L. Zhang, Y. Li, and R. Nevatia. Global data association for multi-object tracking using network flows. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1-8, 2008.
Multiple object tracking using kshortest paths optimization. J Berclaz, F Fleuret, E Türetken, P Fua, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). 33J. Berclaz, F. Fleuret, E. Türetken, and P. Fua. Multiple object tracking using k- shortest paths optimization. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 33(9):1806-1819, 2011.
You'll never walk alone: modeling social behavior for multi-target tracking. S Pellegrini, A Ess, K Schindler, L Van Gool, IEEE International Conference on Computer Vision (ICCV). S. Pellegrini, A. Ess, K. Schindler, and L. van Gool. You'll never walk alone: mod- eling social behavior for multi-target tracking. IEEE International Conference on Computer Vision (ICCV), pages 261-268, 2009.
A new approach to linear filtering and prediction problems. R E Kalman, Transactions of the ASME -Journal of Basic Engineering. 82Series DR. E. Kalman. A new approach to linear filtering and prediction problems. Trans- actions of the ASME -Journal of Basic Engineering, 82(Series D):35-45, 1960.
Background modeling using mixture of gaussians for foreground detection -a survey. T Bouwmans, F El Baf, B Vachon, Recent Patents on Computer Science. 3T. Bouwmans, F. El Baf, and B. Vachon. Background modeling using mixture of gaussians for foreground detection -a survey. Recent Patents on Computer Science, 3:219-237, 2008.
Histograms of oriented gradients for human detection. N Dalal, B Triggs, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 886-893, 2005.
A discriminatively trained, multiscale, deformable part model. P Felzenszwalb, D Mcallester, D Ramanan, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). P. Felzenszwalb, D. McAllester, and D. Ramanan. A discriminatively trained, multiscale, deformable part model. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1-8, 2008.
Cascade object detection with deformable part models. P Felzenszwalb, R Girshick, D Mcallester, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). P. Felzenszwalb, R. Girshick, and D. McAllester. Cascade object detection with de- formable part models. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2241-2248, 2010.
Part-based multipleperson tracking with partial occlusion handling. G Shu, A Dehghan, O Oreifej, E Hand, M Shah, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). G. Shu, A. Dehghan, O. Oreifej, E. Hand, and M. Shah. Part-based multiple- person tracking with partial occlusion handling. IEEE Conference on Computer Vi- sion and Pattern Recognition (CVPR), pages 1815-1821, 2012.
Monocular 3d scene understanding with explicit occlusion reasoning. C Wojek, S Walk, S Roth, B Schiele, IEEE Conference on Computer Vision and Pattern Recognition. C. Wojek, S. Walk, S. Roth, and B. Schiele. Monocular 3d scene understanding with explicit occlusion reasoning. IEEE Conference on Computer Vision and Pattern Recognition, pages 1993-2000, 2011.
Hough forests for object detection, tracking, and action recognition. J Gall, A Yao, N Razavi, L Van Gool, V Lempitsky, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). 3311J. Gall, A. Yao, N. Razavi, L. van Gool, and V. Lempitsky. Hough forests for object detection, tracking, and action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 33(11):2188-2202, 2011.
Robust object detection with interleaved categorization and segmentation. B Leibe, A Leonardis, B Schiele, International Journal of Computer Vision (IJCV). 771-3B. Leibe, A. Leonardis, and B. Schiele. Robust object detection with interleaved categorization and segmentation. International Journal of Computer Vision (IJCV), 77(1-3):259-289, 2008.
Improving an object detector and extracting regions using superpixels. G Shu, A Dehghan, M Shah, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). G. Shu, A. Dehghan, and M. Shah. Improving an object detector and extracting regions using superpixels. IEEE Conference on Computer Vision and Pattern Recog- nition (CVPR), pages 3721-3727, 2013.
Monocular pedestrian detection: survey and experiments. M Enzweiler, D M Gavrila, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). 31M. Enzweiler and D. M. Gavrila. Monocular pedestrian detection: survey and ex- periments. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 31(12):2179-2195, 2008.
Object detection with discriminatively trained part based models. P Felzenszwalb, R Girshick, D Mcallester, D Ramanan, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). 32P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part based models. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 32(9):1627-1645, 2010.
An improved adaptive background mixture model for real-time tracking with shadow detection. P Kadewtrakupong, R Bowden, 2nd European Workshop on Advanced Video-Based Surveillance Systems (AVBS). 2P. KadewTraKuPong and R. Bowden. An improved adaptive background mixture model for real-time tracking with shadow detection. 2nd European Workshop on Advanced Video-Based Surveillance Systems (AVBS), 2:135-144, 2001.
Support-vector networks. C Cortes, V N Vapnik, Machine Learning. 20C. Cortes and V. N. Vapnik. Support-vector networks. Machine Learning, 20(3): 273-297, 1995.
C M Bishop, Pattern Recognition and Machine Learning. New York, NY, USASpringer-Verlag New York, IncC.M. Bishop. Pattern Recognition and Machine Learning. Springer-Verlag New York, Inc., New York, NY, USA, October 2007.
Pictorial structures for object recognition. P Felzenszwalb, D Huttenlocher, International Journal of Computer Vision (IJCV). 611P. Felzenszwalb and D. Huttenlocher. Pictorial structures for object recognition. International Journal of Computer Vision (IJCV), 61(1):55-79, 2005.
Articulated pose estimation with flexible mixtures of parts. Y Yang, D Ramanan, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Y. Yang and D. Ramanan. Articulated pose estimation with flexible mixtures of parts. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1385-1392, 2011.
Taking mobile multi-object tracking to the next level: people, unknown objects, and carried items. D Mitzel, B Leibe, European Conference on Computer Vision (ECCV). D. Mitzel and B. Leibe. Taking mobile multi-object tracking to the next level: peo- ple, unknown objects, and carried items. European Conference on Computer Vision (ECCV), pages 566-579, 2012.
Tracking people and their objects. T Baumgartner, D Mitzel, B Leibe, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). T. Baumgartner, D. Mitzel, and B. Leibe. Tracking people and their objects. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3658-3665, 2013.
Monocular visual scene understanding: Understanding multi-object traffic scenes. C Wojek, S Walk, S Roth, K Schindler, B Schiele, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). 354C. Wojek, S. Walk, S. Roth, K. Schindler, and B. Schiele. Monocular visual scene understanding: Understanding multi-object traffic scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 35(4):882-897, 2013.
Introduction to Linear Optimization. D Bertsimas, J N Tsitsiklis, Athena Scientific. D. Bertsimas and J. N. Tsitsiklis. Introduction to Linear Optimization. Athena Scien- tific, Boston, MA, USA, 1997.
Understanding and Using Linear Programming. J Matousek, B Gaertner, J. Matousek and B. Gaertner. Understanding and Using Linear Programming.
Network flows: Theory, algorithms and applications. R K Ahuja, T L Magnanti, J B Orlin, Prentice HallUpper Saddle River, NJ, USAR.K. Ahuja, T.L. Magnanti, and J.B. Orlin. Network flows: Theory, algorithms and applications. Prentice Hall, Upper Saddle River, NJ, USA, 1993.
New finite pivoting rules for the simplex method. R G Bland, Mathematics of Operations Research. 22R. G. Bland. New finite pivoting rules for the simplex method. Mathematics of Operations Research, 2(2):103-107, 1977.
Theory of Linear and Integer Programming. A Schrijver, Wiley Series in Discrete Mathematics and Optimization. WileyA. Schrijver. Theory of Linear and Integer Programming. Wiley Series in Discrete Mathematics and Optimization. Wiley, Hoboken, NJ, USA, June 1998.
How good is the simplex algorithm? Inequalities. V Klee, G J Minty, 3V. Klee and G. J. Minty. How good is the simplex algorithm? Inequalities, 3:159- 175, 1972.
A quasi-polynomial bound for the diameter of graphs of polyhedra. G Kalai, D J Kleitman, Proceedings of the American Mathematical Society. 26G. Kalai and D. J. Kleitman. A quasi-polynomial bound for the diameter of graphs of polyhedra. Proceedings of the American Mathematical Society, 26:315-315, 1992.
Matrixok kombinatorius tulajdonságairól. J Egerváry, on combinatorial properties of matricesJ. Egerváry. Matrixok kombinatorius tulajdonságairól [on combinatorial proper- ties of matrices].
Gráfok és mátrixok. D König, graphs and matricesD. König. Gráfok és mátrixok [graphs and matrices].
Globally-optimal greedy algorithms for tracking a variable number of objects. H Pirsiavash, D Ramanan, C C Fowlkes, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). H. Pirsiavash, D. Ramanan, and C.C. Fowlkes. Globally-optimal greedy algo- rithms for tracking a variable number of objects. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1201-1208, 2011.
Finding the k-shortest loopless paths in a network. J Y Yen, Management Science. 1711J. Y. Yen. Finding the k-shortest loopless paths in a network. Management Science, 17(11):712-716, 1971.
Disjoint paths in a network. J W Suurballe, Networks. 42J.W. Suurballe. Disjoint paths in a network. Networks, 4(2):125-145, 1974.
Maximal flow through a network. L R Ford, D R Fulkerson, Canadian Journal of Mathematics. 8L.R. Ford and D.R. Fulkerson. Maximal flow through a network. Canadian Journal of Mathematics, 8:399-404, 1956.
Linear programming and extensions. G B Dantzig, Princenton. Princeton University PressG.B. Dantzig. Linear programming and extensions. Princeton University Press, Prin- centon, NJ, USA, 1963.
Nonlinear bayesian estimation using gaussian sum approximations. D Alspach, H Sorenson, IEEE Transactions on Automatic Control. 174D. Alspach and H. Sorenson. Nonlinear bayesian estimation using gaussian sum approximations. IEEE Transactions on Automatic Control, 17(4):439-448, 1972.
A new extension of the kalman filter to nonlinear systems. S Julier, J Uhlmann, International Symposium on Aerospace/Defence Sensing, Simulation and Controls. S. Julier and J. Uhlmann. A new extension of the kalman filter to nonlinear sys- tems. International Symposium on Aerospace/Defence Sensing, Simulation and Con- trols, pages 182-193, 1997.
A tutorial on particle filters for online nonlinear/non-gaussian bayesian tracking. M S Arulampalam, S Maskell, N Gordon, T Clapp, IEEE Transactions on Signal Processing. 502M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp. A tutorial on particle filters for online nonlinear/non-gaussian bayesian tracking. IEEE Transactions on Signal Processing, 50(2):174-188, 2002.
The hungarian method for the assignment problem. Hw, Kuhn, Naval Research Logistics. 21-2HW. Kuhn. The hungarian method for the assignment problem. Naval Research Logistics, 2(1-2):83-97, 1955.
Algorithms for the assignment and transportation problems. J Munkres, Journal of the Society of Industrial and Applied Mathematics. 51J. Munkres. Algorithms for the assignment and transportation problems. Journal of the Society of Industrial and Applied Mathematics, 5(1):32-38, 1957.
Robust tracking-by-detection using a detector confidence particle filter. M D Breitenstein, F Reichlin, B Leibe, E Koller-Meier, L Van Gool, IEEE International Conference on Computer Vision (ICCV). M.D. Breitenstein, F. Reichlin, B. Leibe, E. Koller-Meier, and L. van Gool. Robust tracking-by-detection using a detector confidence particle filter. IEEE International Conference on Computer Vision (ICCV), pages 1515-1522, 2009.
Probabilistic data association methods for tracking complex visual objects. C Rasmussen, G D Hager, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). 23C. Rasmussen and G. D. Hager. Probabilistic data association methods for track- ing complex visual objects. IEEE Transactions on Pattern Analysis and Machine In- telligence (TPAMI), 23(6):560-576, 2001.
Advances in Intelligent Signal Processing and Data Mining, chapter A Sequential Monte Carlo Method for Multi-Target Tracking with the Intensity Filter. M Schikora, W Koch, R L Streit, D Cremers, SpringerBerlin HeidelbergM. Schikora, W. Koch, R. L. Streit, and D. Cremers. Advances in Intelligent Signal Processing and Data Mining, chapter A Sequential Monte Carlo Method for Multi- Target Tracking with the Intensity Filter. Springer Berlin Heidelberg, 2012.
Mcmc-based particle filtering for tracking a variable number of interacting targets. Z Khan, T Balch, F Dellaert, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). 27Z. Khan, T. Balch, and F. Dellaert. Mcmc-based particle filtering for tracking a variable number of interacting targets. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 27(11):1805-1819, 2005.
M Betke, D E Hirsh, A Bagchi, N I Hristov, N C Makris, T H Kunz, Tracking large variable number of objects in clutter. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). M. Betke, D.E. Hirsh, A. Bagchi, N.I. Hristov, N.C. Makris, and T.H. Kunz. Track- ing large variable number of objects in clutter. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1-8, 2007.
A mobile vision system for robust multi-person tracking. A Ess, B Leibe, K Schindler, L Van Gool, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). A. Ess, B. Leibe, K. Schindler, and L. van Gool. A mobile vision system for robust multi-person tracking. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1-8, 2008.
Multi-target tracking -linking identities using bayesian network inference. P Nillius, J Sullivan, S Carlsson, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). P. Nillius, J. Sullivan, and S. Carlsson. Multi-target tracking -linking identities using bayesian network inference. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2187-2194, 2006.
Game-theoretic multiple target tracking. M Yang, T Yu, Y Wu, IEEE International Conference on Computer Vision (ICCV). M. Yang, T. Yu, and Y. Wu. Game-theoretic multiple target tracking. IEEE Interna- tional Conference on Computer Vision (ICCV), pages 1-8, 2007.
Robust people tracking with global trajectory optimization. J Berclaz, F Fleuret, P Fua, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). J. Berclaz, F. Fleuret, and P. Fua. Robust people tracking with global trajectory optimization. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 744-750, 2006.
Large-scale integer linear programming for orientation-preserving 3d shape matching. T Windheuser, U Schlickewei, F R Schmidt, D Cremers, Eurographics Proceedings Symposium Geometry Processing. T. Windheuser, U. Schlickewei, F. R. Schmidt, and D. Cremers. Large-scale integer linear programming for orientation-preserving 3d shape matching. Eurographics Proceedings Symposium Geometry Processing, pages 1471-1480, 2011.
Geometrically consistent elastic matching of 3d shapes: A linear programming solution. T Windheuser, U Schlickewei, Frank R Schmidt, D Cremers, IEEE International Conference on Computer Vision (ICCV). T. Windheuser, U. Schlickewei, Frank R. Schmidt, and D. Cremers. Geometrically consistent elastic matching of 3d shapes: A linear programming solution. IEEE International Conference on Computer Vision (ICCV), pages 2134-2141, 2011.
Curvature regularity for region-based image segmentation and inpainting: A linear programming relaxation. T Schoenemann, F Kahl, D Cremers, IEEE International Conference on Computer Vision (ICCV). T. Schoenemann, F. Kahl, and D. Cremers. Curvature regularity for region-based image segmentation and inpainting: A linear programming relaxation. IEEE In- ternational Conference on Computer Vision (ICCV), pages 17-23, 2009.
Model based pose estimator using linearprogramming. M Ben-Ezra, S Peleg, M Werman, European Conference on Computer Vision (ECCV). M. Ben-Ezra, S. Peleg, and M. Werman. Model based pose estimator using linear- programming. European Conference on Computer Vision (ECCV), pages 267-281, 2000.
A linear programming approach for multiple object tracking. H Jiang, S Fels, J J Little, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). H. Jiang, S. Fels, and J.J. Little. A linear programming approach for multiple object tracking. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1-8, 2007.
Globally optimal multi-target tracking on an hexagonal lattice. A Andriyenko, K Schindler, European Conference on Computer Vision (ECCV). A. Andriyenko and K. Schindler. Globally optimal multi-target tracking on an hexagonal lattice. European Conference on Computer Vision (ECCV), pages 466-479, 2010.
Multi-target tracking by continuous energy minimization. A Andriyenko, K Schindler, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). A. Andriyenko and K. Schindler. Multi-target tracking by continuous energy min- imization. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1265-1272, 2011.
Discrete-continuous optimization for multi-target tracking. A Andriyenko, K Schindler, S Roth, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). A. Andriyenko, K. Schindler, and S. Roth. Discrete-continuous optimization for multi-target tracking. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1926-1933, 2012.
Efficient track linking methods for track graphs using network-flow and set-cover techniques. Z Wu, T H Kunz, M Betke, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Z. Wu, T.H. Kunz, and M. Betke. Efficient track linking methods for track graphs using network-flow and set-cover techniques. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1185-1192, 2011.
Social force model for pedestrian dynamics. D Helbing, P Molnár, Physical Review E. 51D. Helbing and P. Molnár. Social force model for pedestrian dynamics. Physical Review E, 51:4282-4286, 1995.
Abnormal crowd behavior detection using social force model. R Mehran, A Oyama, M Shah, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). R. Mehran, A. Oyama, and M. Shah. Abnormal crowd behavior detection using social force model. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 935-942, 2009.
Controlling individual agents in highdensity crowd simulation. N Pelechano, J M Allbeck, N I Badler, Eurographics/ACM SIGGRAPH Symposium on Computer Animation. N. Pelechano, J.M. Allbeck, and N.I. Badler. Controlling individual agents in high- density crowd simulation. Eurographics/ACM SIGGRAPH Symposium on Computer Animation, pages 99-108, 2007.
Destination flow for crowd simulation. S Pellegrini, J Gall, L Sigal, L Van Gool, European Conference on Computer Vision (ECCV) Workshops. 3rd Workshop on Analysis and Retrieval of Tracked Events and Motion in Imagery Streams. S. Pellegrini, J. Gall, L. Sigal, and L. van Gool. Destination flow for crowd simu- lation. European Conference on Computer Vision (ECCV) Workshops. 3rd Workshop on Analysis and Retrieval of Tracked Events and Motion in Imagery Streams, pages 162- 171, 2012.
Learning pedestrian dynamics from the real world. P Scovanner, M F Tappen, IEEE International Conference on Computer Vision (ICCV). P. Scovanner and M.F. Tappen. Learning pedestrian dynamics from the real world. IEEE International Conference on Computer Vision (ICCV), pages 381-388, 2009.
People tracking with human motion predictions from social forces. M Luber, J A Stork, G D Tipaldi, K Arras, IEEE International Conference on Robotics and Automation (ICRA). M. Luber, J.A. Stork, G.D. Tipaldi, and K.O Arras. People tracking with human motion predictions from social forces. IEEE International Conference on Robotics and Automation (ICRA), pages 464-469, 2010.
Automatically detecting the small group structure of a crowd. W Ge, R T Collins, B Ruback, IEEE Workshop on Applications of Computer Vision (WACV). W. Ge, R.T. Collins, and B. Ruback. Automatically detecting the small group struc- ture of a crowd. IEEE Workshop on Applications of Computer Vision (WACV), pages 1-8, 2009.
Multiple target tracking in world coordinate with single, minimally calibrated camera. W Choi, S Savarese, European Conference on Computer Vision (ECCV). W. Choi and S. Savarese. Multiple target tracking in world coordinate with sin- gle, minimally calibrated camera. European Conference on Computer Vision (ECCV), pages 553-567, 2010.
Improving data association by joint modeling of pedestrian trajectories and groupings. S Pellegrini, A Ess, L Van Gool, European Conference on Computer Vision (ECCV). S. Pellegrini, A. Ess, and L. van Gool. Improving data association by joint mod- eling of pedestrian trajectories and groupings. European Conference on Computer Vision (ECCV), pages 452-465, 2010.
K Yamaguchi, A C Berg, L Ortiz, T L Berg, Who are you with and where are you going? IEEE Conference on Computer Vision and Pattern Recognition (CVPR). K. Yamaguchi, A.C. Berg, L.E Ortiz, and T.L. Berg. Who are you with and where are you going? IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1345-1352, 2011.
Nonlinear programming. D Bertsekas, Athena Scientific. 2nd editionD. Bertsekas. Nonlinear programming. Athena Scientific, Boston, MA, USA, 2nd edition, 2004.
Multi-target tracking by lagrangian relaxation to min-cost network flow. A A Butt, R T Collins, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). A. A. Butt and R. T. Collins. Multi-target tracking by lagrangian relaxation to min-cost network flow. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1846-1853, 2013.
Tracking sports players with contextconditioned motion models. J Liu, P Carr, R T Collins, Y Liu, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). J. Liu, P. Carr, R. T. Collins, and Y. Liu. Tracking sports players with context- conditioned motion models. IEEE Conference on Computer Vision and Pattern Recog- nition (CVPR), pages 1830-1837, 2013.
Framework for performance evaluation for face, text and vehicle detection and tracking in video: data, metrics, and protocol. R Kasturi, D Goldgof, P Soundararajan, V Manohar, J Garofolo, M Boonstra, V Korzhova, J Zhang, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). 312R. Kasturi, D. Goldgof, P. Soundararajan, V. Manohar, J. Garofolo, M. Boonstra, V. Korzhova, and J. Zhang. Framework for performance evaluation for face, text and vehicle detection and tracking in video: data, metrics, and protocol. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 31(2):319-336, 2009.
|
[] |
[
"Condensation and Evaporation of Boson Stars",
"Condensation and Evaporation of Boson Stars"
] |
[
"James Hung-Hsu Chan [email protected] \nDepartment of Astrophysics\nCentral Park West and 79th Street\nAmerican Museum of Natural History\n10024-5192NYUSA\n\nDepartment of Physics and Astronomy\nLehman College of the CUNY\n10468BronxNYUSA\n\nInstitute of Physics\nLaboratory of Astrophysique\nÉcole Polytechnique Fédérale de Lausanne (EPFL)\nObservatoire de Sauverny\n1290VersoixSwitzerland\n",
"Sergey Sibiryakov [email protected] \nDepartment of Physics & Astronomy\nMcMaster University\nL8S 4M1HamiltonOntarioCanada\n\nPerimeter Institute for Theoretical Physics\nN2L 2Y5WaterlooOntarioCanada\n",
"Wei Xue [email protected] \nDepartment of Physics\nUniversity of Florida\n32611GainesvilleFLUSA\n"
] |
[
"Department of Astrophysics\nCentral Park West and 79th Street\nAmerican Museum of Natural History\n10024-5192NYUSA",
"Department of Physics and Astronomy\nLehman College of the CUNY\n10468BronxNYUSA",
"Institute of Physics\nLaboratory of Astrophysique\nÉcole Polytechnique Fédérale de Lausanne (EPFL)\nObservatoire de Sauverny\n1290VersoixSwitzerland",
"Department of Physics & Astronomy\nMcMaster University\nL8S 4M1HamiltonOntarioCanada",
"Perimeter Institute for Theoretical Physics\nN2L 2Y5WaterlooOntarioCanada",
"Department of Physics\nUniversity of Florida\n32611GainesvilleFLUSA"
] |
[] |
Axion-like particles, including the QCD axion, are well-motivated dark matter candidates. Numerical simulations have revealed coherent soliton configurations, also known as boson stars, in the centers of axion halos. We study evolution of axion solitons immersed into a gas of axion waves with Maxwellian velocity distribution. Combining analytical approach with controlled numerical simulations we find that heavy solitons grow by condensation of axions from the gas, while light solitons evaporate. We deduce the parametric dependence of the soliton growth/evaporation rate and show that it is proportional to the rate of the kinetic relaxation in the gas. The proportionality coefficient is controlled by the product of the soliton radius and the typical gas momentum or, equivalently, the ratio of the gas and soliton virial temperatures. We discuss the asymptotics of the rate when this parameter is large or small.
| null |
[
"https://arxiv.org/pdf/2207.04057v1.pdf"
] | 250,426,463 |
2207.04057
|
73ca4ae50071f5eed4db03a7c5dc63d29c797f21
|
Condensation and Evaporation of Boson Stars
8 Jul 2022
James Hung-Hsu Chan [email protected]
Department of Astrophysics
Central Park West and 79th Street
American Museum of Natural History
10024-5192NYUSA
Department of Physics and Astronomy
Lehman College of the CUNY
10468BronxNYUSA
Institute of Physics
Laboratory of Astrophysique
École Polytechnique Fédérale de Lausanne (EPFL)
Observatoire de Sauverny
1290VersoixSwitzerland
Sergey Sibiryakov [email protected]
Department of Physics & Astronomy
McMaster University
L8S 4M1HamiltonOntarioCanada
Perimeter Institute for Theoretical Physics
N2L 2Y5WaterlooOntarioCanada
Wei Xue [email protected]
Department of Physics
University of Florida
32611GainesvilleFLUSA
Condensation and Evaporation of Boson Stars
8 Jul 2022. Prepared for submission to JCAP
Introduction
The QCD axion [1][2][3][4][5][6][7][8][9] and axion-like particles [10][11][12][13] are widely discussed in the literature as well-motivated dark matter (DM) candidates. The QCD axion, originally suggested as a solution to the strong CP problem [14,15], was soon realized [7] to be produced in the early universe and to behave as cold dark matter after the QCD phase transition, which endows it with a mass. The requirement that the QCD axion account for all of DM leads to a preferred mass window¹ m_aQCD ∼ 10^{−6} ÷ 10^{−4} eV.
Axion-like particles with a broad range of masses and very weak coupling to the Standard Model naturally arise in many beyond-Standard-Model scenarios and in string theory [10,16]. For brevity, we will refer to DM made of such particles as axion DM. Particularly interesting is the case of ultralight (also called "fuzzy") DM with mass m_a ∼ 10^{−22} ÷ 10^{−19} eV [17]. The de Broglie wavelength of such an ultralight particle, corresponding to the virial velocity in a galactic halo,²

λ_a = 2π/(m_a v_a) ∼ 1.2 kpc × (10^{−22} eV / m_a) (100 km/s / v_a) ,   (1.1)

is comparable to typical cosmological and astrophysical distances. Due to this property, ultralight dark matter exhibits rich phenomenology affecting various cosmological observables and galactic dynamics [11][12][13]. The analysis of the Lyman-α forest [18][19][20], galactic rotation curves [21,22], halo profiles of dwarf galaxies [23,24] and the subhalo population in the Milky Way [25] strongly disfavors DM lighter than 10^{−21} eV. Dynamical heating of stars by ultralight DM in ultrafaint dwarf galaxies has been used to infer tighter constraints, m_a ≳ 10^{−19} eV [26,27]. A distinctive feature of axion DM is its huge occupation numbers (phase-space density), which are allowed because axions are bosons:

f_k ≫ 1 .   (1.2)

This implies that, rather than behaving as a collection of individual particles, axion DM is best described by a coherent classical scalar field, with the scattering rate of axions increased due to Bose enhancement. Typically, in the study of structure formation all axion interactions besides gravity can be neglected, resulting in a universal wave dynamics described by the Schrödinger-Poisson equations [13]. The dependence of these equations on the axion mass can be taken into account by a simple rescaling, and thus they apply to any axion DM as long as f_k ≫ 1.

¹ The mass can be smaller in scenarios where the Peccei-Quinn symmetry is never restored after inflation.
² Throughout the paper we use the system of units ℏ = c = 1.
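As a sanity check of the wavelength estimate above, one can restore ℏ and evaluate the de Broglie wavelength in SI units (the choice v_a = 100 km/s is our assumption, consistent with the quoted 1.2 kpc normalization at m_a = 10⁻²² eV):

```python
import math

hbar = 1.054571817e-34          # J s
eV   = 1.602176634e-19          # J
c    = 2.99792458e8             # m/s
kpc  = 3.0856775814913673e19    # m

m_a = 1e-22 * eV / c**2         # axion mass in kg
v_a = 1e5                       # assumed virial velocity, 100 km/s

lam = 2 * math.pi * hbar / (m_a * v_a)   # de Broglie wavelength
print(lam / kpc)                # ~1.2 kpc
```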
The Schrödinger-Poisson system admits a spherically symmetric localized solution known as axion soliton or boson star 3 [28]. All axions comprising the soliton are in the same state which is the ground state of the gravitational potential and hence the soliton can be viewed as inhomogeneous Bose-Einstein condensate sustained by its own gravity [29]. Numerical simulations of axion DM have revealed formation of boson stars in the centers of virialized axion halos (also known as miniclusters [30,31] in the case of QCD axion). This phenomenon was observed in the cosmological setting [32][33][34][35], in numerical experiments with halos created by collisions of several seed solitons [36][37][38], and in the kinetic relaxation regime [39]. It was also found that if the soliton is artificially removed from the halo, evolution readily reinstates it back [40].
Thus, presence of a solitonic core appears to be a generic feature of an axion halo. The rest of the halo represents a cloud of randomly moving wavepackets with the velocities roughly following the Maxwellian distribution and the average density fitted by the NFW profile [41], similarly to the usual cold DM. It is natural to ask how the soliton interacts with this environment. Refs. [42][43][44][45] showed that interference between the soliton and wavepackets leads to oscillations of its density and to a random walk of the soliton center around the halo center of mass. Further, an interesting correlation between the soliton mass and the mass of its host halo has been established in cosmological numerical simulations [32,36] and confirmed in [33,42]. This relation can be rephrased as equality between the virial temperatures of the soliton and the host halo. While this relation may appear intuitive, the physical mechanism behind it remains unclear. It is not reproduced by simulations starting from non-cosmological initial conditions [37,38,46], whereas more recent cosmological simulations [35,46,47] indicate that it is subject to a large scatter, perhaps due to different merger histories of different halos. The results of Ref. [39] disfavor a potential interpretation of the soliton-host halo relation as a condition for kinetic equilibrium. Indeed, it was observed that, once formed, the solitons continue to grow by condensation of axions from the surrounding gas. On the other hand, Refs. [42,48] argue that this growth slows down when the soliton becomes heavy enough to heat up the inner part of the halo and, given the finite time of the simulations, this can explain the observed correlation. The mass of the soliton can be also significantly affected by baryonic matter, typically leading to its increase [49,50].
Boson stars give rise to important signatures opening up various opportunities for future discovery or constraints on axion DM. In the case of fuzzy DM, they are expected to play a prominent role in galactic dynamics modifying the rotation curves [21,22] and heating the stars in the central regions through oscillations and random walk [26,51,52]. When axion self-interaction is included, they become unstable if their mass exceeds a certain threshold and collapse producing bursts of relativistic axions [53]. Further allowing for possible axion coupling to photons, they can be sources of radio emission [54][55][56]. Presence or absence of boson stars in axion miniclusters can have important implications for lensing searches [57]. Very dense boson stars made of inflaton field get produced in inflationary models with delayed reheating opening a potentially rich phenomenology, such as seeding primordial black holes or contributing into stochastic high-frequency gravitational wave background [58].
The dynamical range achievable in axion DM simulations is severely limited by computational costs (see the discussion in [35]). This calls for a better theoretical understanding of the physical laws governing the evolution of boson stars in various environments, which would allow their extrapolation outside the parameter regions explored in simulations. In the present paper we make a step in this direction by studying the evolution of a boson star immersed in a box filled with homogeneous axion gas. Focusing on this setup allows us to get rid of the uncertainties related to the dynamics of the halo and to keep the gas density and its velocity distribution under control. The latter is chosen to be Maxwellian at the initial moment of time. A similar setup was employed in Ref. [39] to study the formation of the soliton in the process of kinetic relaxation of the gas. By contrast, we do not assume the soliton to be formed from the gas and simply add it in the initial conditions of our simulations. In this way we are able to explore a wide range of soliton masses corresponding to different ratios between the soliton virial temperature T_s and the temperature of the gas T_g.
The key quantity that we are interested in is the rate of change of the soliton mass,
Γ_s = (1/M_s) dM_s/dt .   (1.3)
We study the dependence of this quantity on the parameters characterizing the gas and the soliton by a combination of analytical and numerical methods. We find that solitons with T_s/T_g ≳ 0.1 grow by absorbing particles from the gas. For fixed gas parameters, the growth rate is essentially constant in the range 0.1 ≲ T_s/T_g ≲ 1, whereas at T_s/T_g ≳ 1 it decreases as (T_s/T_g)^{−n/2} with n = 2 ÷ 4.
Interestingly, we find that if T_s/T_g ≲ 0.08, the soliton evaporates, the time scale of this process being parametrically shorter than the relaxation time. This does not contradict previous results on soliton formation from the gas by kinetic relaxation [39]. Indeed, by running the simulations longer than the evaporation of the initial soliton, we observe after a while the birth of a new soliton with T_s/T_g ≳ 0.1, in agreement with [39]. It is worth stressing the difference between soliton evaporation and tidal disruption by large-scale gradients of the halo gravitational field [59]. This is clear already from the fact that there is no halo in our setup. Moreover, the qualitative direction of the process, evaporation vs. condensation, is entirely determined by the soliton and gas temperatures and does not depend on the density contrast between them. The paper is organized as follows. In section 2 we introduce our framework and review the relevant properties of the soliton solution to the Schrödinger-Poisson equations. In section 3 we address the computation of the soliton growth/evaporation rate, formulating it as a quantum-mechanical scattering problem. We consider separately the cases of light (cold, T_s/T_g ≪ 1) and heavy (hot, T_s/T_g ≫ 1) solitons and employ various approximations to estimate the rate analytically. In section 4 we describe our numerical simulations, extract the soliton growth rate from them and compare it to the analytic predictions. In section 5 we discuss the implications of our results and compare to other works. Three appendices contain auxiliary material. In appendix A we provide an alternative derivation of the soliton growth rate using only classical equations of motion. In appendix B we describe a suite of simulations reproducing the setup of Ref. [39], where the soliton forms from the gas spontaneously due to kinetic relaxation. Appendix C contains additional details about our numerical procedure.
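Since Γ_s in (1.3) is the logarithmic derivative of the soliton mass, it can be read off as the slope of ln M_s(t). A minimal illustrative sketch on synthetic data (the exponential mass history and the least-squares fit below are our own toy construction, not the paper's analysis pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic mass history: exponential growth at rate Gamma_true plus 1% noise
Gamma_true = 2e-3                      # in inverse code-time units (illustrative)
t = np.linspace(0.0, 1000.0, 201)
M = np.exp(Gamma_true * t) * (1 + 0.01 * rng.standard_normal(t.size))

# Gamma_s = d ln M_s / dt  ->  slope of a linear fit to ln M_s(t)
slope, intercept = np.polyfit(t, np.log(M), 1)
print(slope)                           # recovers ~2e-3
```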
Soliton Wavefunction and Axion Gas
Non-relativistic axions with mass m are described by a complex scalar field ψ obeying the Schrödinger-Poisson equations,
i ∂_t ψ + Δψ/(2m) − m Φ ψ = 0 ,   (2.1a)
ΔΦ = 4πGm |ψ|² ,   (2.1b)
where G is the gravitational coupling, Φ is the Newton potential and ∆ denotes the Laplacian.
The square of the field gives the particle number density, |ψ(t,x)|² = n(t,x). Equations (2.1) are invariant under scaling transformations,
ψ → ψ̃(t,x) = Λ₃ ψ(Λ₁t, Λ₂x) ,   Φ → Φ̃(t,x) = (Λ₁²/Λ₂²) Φ(Λ₁t, Λ₂x) ,   (2.2a)
m → m̃ = (Λ₂²/Λ₁) m ,   G → G̃ = Λ₁³/(Λ₂²Λ₃²) G ,   (2.2b)
where Λ₁, Λ₂, Λ₃ are arbitrary parameters. A one-parameter family of these transformations that leaves m and G invariant connects different solutions for a given axion; the transformations that change the mass, but not G, allow one to map between solutions for axions with different masses; finally, the rescaling of G provides a freedom in the choice of units which is handy in numerical simulations. The system (2.1) admits periodic spherically symmetric solutions of the form,
ψ_s(t,x) = χ(|x|) e^{−iE_s t} .   (2.3)
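The transformations (2.2) form an operator identity: for any pair of fields ψ, Φ (not only solutions), the residuals of the rescaled equations equal rescaled residuals of the original ones. A sketch checking this with sympy in one spatial dimension on an arbitrary test pair (the specific test functions and the numeric spot-check point are arbitrary choices):

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)
m, G, L1, L2, L3 = sp.symbols('m G Lambda1 Lambda2 Lambda3', positive=True)

# arbitrary smooth test fields: the identity holds for any choice
psi = sp.exp(sp.I*t - x**2) * (1 + sp.sin(x))
Phi = sp.cos(x) * sp.exp(-t**2)

def schrodinger(p, P, mm):
    return sp.I*sp.diff(p, t) + sp.diff(p, x, 2)/(2*mm) - mm*P*p

def poisson(p, P, GG, mm):
    return sp.diff(P, x, 2) - 4*sp.pi*GG*mm*p*sp.conjugate(p)

# rescaled fields and couplings, eq. (2.2)
scale = {t: L1*t, x: L2*x}
psi_t = L3 * psi.subs(scale)
Phi_t = L1**2/L2**2 * Phi.subs(scale)
m_t = L2**2/L1 * m
G_t = L1**3/(L2**2*L3**2) * G

# residuals of the transformed equations minus rescaled original residuals
r1 = schrodinger(psi_t, Phi_t, m_t) - L1*L3*schrodinger(psi, Phi, m).subs(scale)
r2 = poisson(psi_t, Phi_t, G_t, m_t) - L1**2*poisson(psi, Phi, G, m).subs(scale)

# spot-check the identities at an arbitrary numeric point
vals = {t: 0.3, x: 0.7, m: 1.1, G: 0.9, L1: 1.7, L2: 0.6, L3: 2.2}
print(abs(complex(r1.subs(vals))), abs(complex(r2.subs(vals))))   # both ~ 0
```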
The corresponding density ρ_s(x) = m|χ(|x|)|² is time-independent and localized in space, hence these solutions are called solitons. E_s represents the binding energy (chemical potential) of axions in the soliton and is negative. There is a continuous family of solitons differing by their mass M_s and related by the subgroup of the scaling transformations (2.2) that leave m and G fixed. Using this symmetry, the soliton wavefunction can be written as

χ(x) = (k_s²/√(4πGm³)) χ₀(k_s x) ,   (2.4)

where k_s is the scaling parameter characterizing the soliton width. By the uncertainty relation, it sets the typical momentum of particles comprising the soliton. The dimensionless function χ₀(ξ) describes the "standard soliton" normalized by the condition

χ₀(0) = 1 .   (2.5a)
It solves the eigenvalue problem following from the Schrödinger-Poisson system,
χ₀'' + (2/ξ) χ₀' = 2(Φ₀ − ε₀) χ₀ ,   (2.5b)
Φ₀'' + (2/ξ) Φ₀' = χ₀² ,   (2.5c)
where Φ₀(ξ) is the standard soliton gravitational potential and ε₀ is its binding energy. Fig. 1 shows the function χ₀(ξ) obtained by numerically solving eqs. (2.5). It is well approximated by an analytic fit,
χ₀,fit(ξ) = (1 + c₀ ξ²)^{−4} ,   c₀ = 0.0539 ,   (2.6)
also shown in the figure. The fit differs from the exact solution only at the tail where the exact solution falls off exponentially, whereas the fit behaves as a power-law. The standard soliton is characterized by the following dimensionless quantities:
ε₀ = −0.692   (binding energy) ,   (2.7a)
μ₀ = 4π ∫₀^∞ dξ ξ² χ₀²(ξ) = 25.9   (total mass) ,   (2.7b)
ξ₀ = 1.299   (half-density radius, |χ₀(ξ₀)|² = 1/2) .   (2.7c)
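The constants in (2.7) can be reproduced by a standard shooting method applied to eqs. (2.5); the sketch below (our own implementation, not the authors' code) uses the shift symmetry Φ → Φ + c, ε → ε + c to set ε = 0, bisects on Φ(0), and recovers ε₀ = −Φ(∞) at the end:

```python
import numpy as np

def rhs(xi, y):
    chi, dchi, phi, dphi = y
    return np.array([dchi, 2.0*phi*chi - 2.0*dchi/xi,
                     dphi, chi**2 - 2.0*dphi/xi])

def shoot(phi0, xi_max=10.0, h=5e-3):
    xi = 1e-4
    # regular series around the origin: chi = 1 + phi0*xi^2/3, Phi = phi0 + xi^2/6
    y = np.array([1.0 + phi0*xi**2/3.0, 2.0*phi0*xi/3.0,
                  phi0 + xi**2/6.0, xi/3.0])
    rows = [(xi, *y)]
    while xi < xi_max:
        k1 = rhs(xi, y); k2 = rhs(xi + h/2, y + h/2*k1)
        k3 = rhs(xi + h/2, y + h/2*k2); k4 = rhs(xi + h, y + h*k3)
        y = y + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)
        xi += h
        rows.append((xi, *y))
        if y[0] <= 0.0 or y[0] >= 2.0:   # chi dived through zero / regrew
            break
    return np.array(rows)

lo, hi = -5.0, -0.5      # lo: chi crosses zero, hi: chi regrows
for _ in range(60):
    mid = 0.5*(lo + hi)
    if shoot(mid)[-1, 1] <= 0.0:
        lo = mid
    else:
        hi = mid

tr = shoot(0.5*(lo + hi))
xi, chi, phi, dphi = tr[:, 0], tr[:, 1], tr[:, 3], tr[:, 4]
i6 = np.argmin(np.abs(xi - 6.0))
eps0 = -(phi[i6] + xi[i6]*dphi[i6])       # tail: Phi ~ Phi_inf - M/xi
g = (chi**2*xi**2)[xi <= 8.0]
mu0 = 4*np.pi*np.sum(0.5*(g[1:] + g[:-1])*np.diff(xi[xi <= 8.0]))
xi0 = xi[np.argmin(np.abs(chi**2 - 0.5))]
print(eps0, mu0, xi0)    # expect about -0.692, 25.9, 1.30
```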
The corresponding values for a general soliton are obtained by rescaling,
E_s = ε₀ k_s²/m ,   M_s = μ₀ k_s/(4πGm²) ,   r_s = ξ₀/k_s ,   (2.8)
and its density profile can be approximated as

ρ_s(x) ≈ ρ_{s,peak} / [1 + c_s(|x|/r_s)²]⁸ , ρ_{s,peak} = k_s⁴/(4πGm²) , c_s = 0.091 . (2.9)
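The one-parameter family (2.8)-(2.9) is easy to package; a small helper in the code units m = 4πG = 1 used in section 4 (the function name is ours):

```python
eps0, mu0, xi0 = -0.692, 25.9, 1.299    # standard-soliton constants, eq. (2.7)

def soliton(k_s):
    """Soliton family (2.8)-(2.9) in code units m = 4*pi*G = 1 (cf. section 4)."""
    E_s = eps0 * k_s**2          # binding energy
    M_s = mu0 * k_s              # total mass
    r_s = xi0 / k_s              # half-density radius
    rho_peak = k_s**4            # peak density
    return E_s, M_s, r_s, rho_peak

# The product M_s * r_s = mu0 * xi0 ~ 33.6 is the same for every member of the
# family: heavier solitons are smaller and denser (rho_peak grows as M_s^4).
```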
Note that the width of the soliton is inversely proportional to its mass. Accordingly, the peak density is proportional to the fourth power of the mass. The total energy ℰ_s of the soliton consists of kinetic and potential parts,

ℰ_s = E_{s,kin} + E_{s,pot} = ∫ d³x [ |∇ψ_s|²/(2m) + mΦ_s|ψ_s|²/2 ] . (2.10)
Using the Schrödinger-Poisson equations one can show that they obey the virial theorem, ℰ_s = −E_{s,kin} = E_{s,pot}/2, and

ℰ_s = M_s E_s / (3m) . (2.11)
It is instructive to introduce the soliton virial temperature,

T_s = 2m E_{s,kin} / (3M_s) = −(2/9) E_s . (2.12)
Using eqs. (2.8) one obtains alternative expressions,
T_s = 0.154 k_s²/m = 0.259/(m r_s²) . (2.13)
We are interested in studying how the mass of the soliton varies due to its interaction with a gas of axion waves. We assume the gas to fill a box of size

L ≫ r_s . (2.14)
Far away from the soliton, it is described by a collection of plane waves,

ψ_g(t, x) = L^{−3/2} Σ_k a_k e^{−i(k²/2m)t + ikx} , |x| ≫ r_s . (2.15)
We choose the occupation numbers to follow the Maxwell distribution, consistent with the velocity distribution in a DM halo,
f_k ≡ |a_k|² = f_g e^{−k²/k_g²} , (2.16)
where k g sets the characteristic momentum of particles in the gas. The normalization f g is related to the gas density as
f_g = (4π)^{3/2} ρ_g / (m k_g³) . (2.17)
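The normalization (2.17) follows from ρ_g = m ∫[dk] f_k with the Maxwell distribution (2.16); a quick numerical check, assuming units with m = 1:

```python
import numpy as np

# Integrate the occupation numbers (2.16) over momenta and invert for f_g.
m, k_g, f_g = 1.0, 1.0, 0.01
k = np.linspace(0.0, 20.0, 400001)
f_k = f_g * np.exp(-k**2 / k_g**2)
integrand = k**2 * f_k / (2.0 * np.pi**2)    # [dk] = d^3k/(2*pi)^3 = k^2 dk/(2*pi^2)
rho_g = m * np.sum((integrand[1:] + integrand[:-1]) / 2.0 * np.diff(k))
f_g_back = (4.0 * np.pi)**1.5 * rho_g / (m * k_g**3)
# f_g_back reproduces the input f_g, confirming the (4*pi)^{3/2} prefactor
```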
Validity of the classical description requires f_k ≫ 1. The phases of the amplitudes a_k are assumed to be random.
Using k g we can define an effective gas temperature,
T_g = k_g² / (2m) . (2.18)
To avoid confusion, we stress that this is not a true thermodynamic temperature since eq. (2.16) is not an equilibrium distribution of the boson gas, which should follow the Bose-Einstein formula. However, the latter cannot be reached within the classical field theory. Rather, as demonstrated in Ref. [39], a homogeneous axion gas with initial distribution (2.16) will evolve towards the Rayleigh-Jeans occupation numbers diverging at low k. This relaxation proceeds on the time scale

τ_rel = √2 b k_g⁶ / (12π³ G² m³ ρ_g² ln(k_g L)) , b ≈ 0.9 , (2.19)

and culminates in the spontaneous formation of a soliton. We neglect the change of the gas distribution in our theoretical considerations and discuss the validity of this simplification later on. Numerically, we observe that the Maxwell distribution appears to get reinstated in the gas once the soliton is formed. Moreover, in the simulations where the soliton is present for the whole duration, the distribution remains close to Maxwellian at all moments of time.
Being a self-gravitating system, the homogeneous axion gas is unstable with respect to gravitational collapse leading to a halo formation. The corresponding Jeans length is

l_J = (k_g/m) √(π/(2Gρ_g)) , (2.20)
where we have used that the sound speed in a non-relativistic Maxwellian gas is k_g/(√2 m). We avoid this instability by considering the box size smaller than the Jeans length,

L < l_J . (2.21)
Note that this condition is compatible with eq. (2.14) since l J can be made arbitrarily large by decreasing the gas density. In practice, however, eq. (2.21) imposes strong limitations on the numerical simulations, see section 4. The total axion field describing a soliton immersed in the gas is given by the sum
ψ(t, x) = ψ s (t, x) + ψ g (t, x) . (2.22)
For this decomposition to be well-defined, the number of particles in the soliton must be much larger than in any other state in the gas,
M_s/m ≫ f_k . (2.23)
To compare the soliton size with the characteristic wavelength of axion waves, we introduce
ν ≡ k_g/k_s = 0.773 r_s k_g = 0.555 √(T_g/T_s) . (2.24)
Recalling that the mass of the soliton is inversely proportional to its size, we split solitons into three groups: light solitons (ν ≫ 1), heavy solitons (ν ≪ 1), and median solitons (ν ∼ 1). Note that light solitons are also cold, heavy solitons are hot, whereas median solitons have the same virial temperature as the gas. We are going to see that the evolution of solitons from the different groups is dramatically different.
Particle Exchange between Soliton and Gas
Soliton growth rate from wave scattering
The soliton is composed of a Bose-Einstein condensate occupying the ground state in its own gravitational potential. Several processes affect the soliton in the axion gas. One of them is the interference of gas waves with the soliton field, which leads to fluctuations of its peak density. Another one is elastic scattering of waves on the soliton, which endows it with momentum and leads to its Brownian motion. These processes, however, do not change the number of particles in the ground state and are not of interest to us. We focus on the processes that lead to particle exchange between the gas and the soliton and thereby affect the amplitude of the Bose-Einstein condensate. In this section we develop their description using scattering theory. We adopt the language of quantum field theory as the most convenient tool for this task. However, it is important to emphasize that quantum physics is not essential for the soliton-gas interaction. In appendix A we show how the same results can be obtained within a purely classical approach.
We start by observing that the Schrödinger-Poisson equations can be derived from the action
S = ∫ dt d³x [ iψ*∂_tψ + ψ*Δψ/(2m) + ΦΔΦ/(8πG) − mΦ|ψ|² ] . (3.1)
We decompose the total axion field into the soliton and gas components as in eq. (2.22). At this point we should be more precise about how we perform the split. The spectrum of particle states in the soliton background contains unbound states with wavefunctions becoming plane waves far away from the soliton, as well as bound states in the soliton gravitational potential.
In the literature, the latter are usually interpreted as excitations of the soliton. While this is a valid interpretation, it is more convenient for our purposes to include them into the gas. The physical reason is that no matter whether the state is bound or not, a transfer of particles to it from the ground state will deplete the coherence of the soliton, whereas the inverse process clearly has an opposite effect. Thus, we adopt the following convention: the soliton component refers to coherent particles strictly in the ground state described by the wavefunction (2.3), whereas the gas includes all the rest of particles. Decomposing also the Newton potential into the gravitational potential of the soliton and perturbations, Φ = Φ s +φ, substituting it into eq. (3.1) and keeping only terms containing perturbations, we obtain the gas action,
S_g = ∫ dt d³x [ iψ_g*∂_tψ_g + ψ_g*Δψ_g/(2m) − mΦ_s|ψ_g|² + φΔφ/(8πG) − mψ_s*φψ_g − mψ_sφψ_g* − mφ|ψ_g|² ] . (3.2)
In deriving this expression we have used that the soliton fields ψ s , Φ s satisfy the Schrödinger-Poisson equations. Following the rules of quantum field theory, we promote ψ g and φ to second-quantized fields, whereas ψ s , Φ s are treated as c-valued background. The terms linear in ψ g break the phase-rotation symmetry of the axion gas, ψ g → ψ g e iα , and therefore lead to non-conservation of gas particles. Of course, the total number of non-relativistic axions is conserved, meaning that the particles from the gas go into the soliton and vice versa. The last term in eq. (3.2) preserves the gas particle number and describes interactions of axions in the absence of soliton. It is responsible for the kinetic relaxation in a homogeneous gas [39]. Due to energy conservation, a particle can be absorbed or emitted by the soliton only if it exchanges energy with another particle from the gas. This leads us to consider the process g + g → g + s when two gas particles scatter on each other and one of them merges into the soliton, as well as the inverse process s + g → g + g when a particle hits the soliton and kicks out another particle. The Feynman diagrams for these processes are shown in fig. 2. Solid straight lines represent the gas particles, whereas dashed line corresponds to the soliton. Wavy line stands for the "propagator" of the Newton potential which is proportional to the inverse of Laplacian. In the approximation of infinite box size it reads,
[Figure 2: diagrams (a)-(d), with external lines labeled by the energies E₁, E₂, E₃, E_s and momentum transfer k.]

⟨φ(t, x) φ(t', x')⟩ = −i 4πG δ(t − t') ∫ [dk]/k² e^{ik(x−x')} , (3.3)
where we have introduced a shorthand notation for the integration measure
[dk] ≡ d³k/(2π)³ . (3.4)
Combining it with the vertices implied by the action (3.2), we obtain the amplitude for the diagram (a) in fig. 2,
A_{1s,23} = (2π)δ(E₁ + E₂ − E₃ − E_s) (4πGm²) ∫ [dk]/k² V_{1s}(k) V_{23}(−k) , (3.5)
with the vertex form factors
V_{1s}(k) = ∫ d³x ψ₁(x) χ(|x|) e^{ikx} , V_{23}(k) = ∫ d³x ψ₂(x) ψ₃*(x) e^{ikx} , (3.6)
where ψ_i(x), i = 1, 2, 3, are the wavefunctions of the states with energies E_i. The diagram (b) is obtained simply by interchanging the particles 1 and 2, so the total absorption amplitude is A_{1s,23} + A_{2s,13}. The emission process, diagrams (c, d) in fig. 2, is described by the complex conjugate amplitude A*_{1s,23} + A*_{2s,13}. The probability per unit time that two particles 1 and 2 scatter such that one of them merges into the soliton is given by the usual formula,
dp_{12→3s}/dt = (2π)δ(E₁ + E₂ − E₃ − E_s) |A'_{1s,23} + A'_{2s,13}|² , (3.7)

where the prime denotes the amplitudes stripped of the energy δ-function,

A'_{1s,23} = (4πGm²) ∫ [dk]/k² V_{1s}(k) V_{23}(−k) , (3.8)

and similarly for A'_{2s,13}. To obtain the change in the soliton mass, we have to subtract the rate of the inverse process and sum over all states in the gas, weighting them with the occupation numbers f_i. The weighting takes into account the effect of Bose enhancement due to the non-zero occupation numbers of the initial and final states. This yields,

Γ_s = (m/M_s) × (1/2) Σ_{states 1,2,3} (2π)δ(E₁+E₂−E₃−E_s) [ f₁f₂(1+f₃) − (1+f₁)(1+f₂)f₃ ] |A'_{1s,23} + A'_{2s,13}|²
≈ (m/2M_s) Σ_{states 1,2,3} (2π)δ(E₁+E₂−E₃−E_s) [ f₁f₂ − f₁f₃ − f₂f₃ ] |A'_{1s,23} + A'_{2s,13}|² , (3.9)
where the factor 1/2 has been inserted to avoid double-counting the pairs of states related by the interchange of particles 1 and 2. In going to the second line we used that the occupation numbers are large and kept only the leading terms quadratic in f i . Equation (3.9) represents the key result of this subsection. It describes the evolution of the soliton mass for arbitrary distribution of the gas particles.
To proceed, we assume that the gas distribution far away from the soliton is controlled by a single characteristic momentum k g as, for example, in the case of the Maxwellian gas (2.16). For the bound states localized near the soliton, the occupation numbers can, in principle, also depend on the soliton properties. These, as discusses in section 2, are determined by a single parameter k s . Thus, we write an Ansatz,
f_i = (ρ_g/(m k_g³)) u(mE_i/k_g², k_g/k_s) , (3.10)
where ρ g is the density of the gas far away from the soliton, and u is a dimensionless function. Next, it is convenient to rescale the coordinates, momenta, energies and wavefunctions to units associated with the soliton,
x = ξ/k_s , k = k_s κ , E_i = ε_i k_s²/m , ψ_i(x) = k_s^{3/2} φ_i(k_s x) . (3.11)
Substituting these rescaled variables into eqs. (3.6), (3.8), (3.9) we obtain,
Γ_s = (4πG)² m³ ρ_g² / k_g⁶ · γ_s(ν) , (3.12)
where ν = k g /k s is the parameter introduced in eq. (2.24). The dimensionless function γ s (ν) is computed by summing over the states in the background of the standard soliton of section 2,
γ_s(ν) = (π/μ₀) Σ_{states 1,2,3} δ(ε₁+ε₂−ε₃−ε₀) [ u₁u₂ − u₁u₃ − u₂u₃ ] |A_{1s,23} + A_{2s,13}|² , (3.13)
where ε 0 , µ 0 are numerical coefficients quoted in eq. (2.7) and u i ≡ u(ε i /ν 2 , ν) are rescaled occupation numbers. For the rescaled amplitudes we have
A_{1s,23} = ∫ [dκ]/κ² V_{1s}(κ) V_{23}(−κ) , (3.14)
V_{1s}(κ) = ∫ d³ξ φ₁(ξ) χ₀(ξ) e^{iκξ} , V_{23}(κ) = ∫ d³ξ φ₂(ξ) φ₃*(ξ) e^{iκξ} , (3.15)
where χ₀(ξ) is the standard soliton profile. In section 4 we extract the function γ_s(ν) from numerical simulations, whereas in the rest of this section we estimate it analytically for the cases of light and heavy solitons in Maxwellian gas. Before moving on, let us comment on the structure of the eigenfunctions in the soliton background which enter the calculation of the soliton growth rate through the form factors (3.6) or (3.15) (the details will be presented in a forthcoming publication [60]). First, it is clear from the third term in the action (3.2) that the wavefunctions will be affected by the soliton gravitational potential Φ_s. While this effect is small for highly excited unbound states with energies E_i ≫ |E_s|, it becomes important for the states with E_i ≲ |E_s| and gives rise to a discrete spectrum of bound states. Second, an additional modification of the eigenfunctions comes from the term −mψ_s*φψ_g and its complex conjugate in eq. (3.2). These terms bring qualitatively new features by mixing positive and negative frequencies in the eigenvalue equation [60,61]. As a result, the eigenmodes contain both positive and negative frequency components, which can be interpreted as a consequence of the Bogoliubov transformation required to diagonalize the Hamiltonian in the presence of the condensate [62]. The negative-frequency part is significant for low lying modes and cannot be discarded. In particular, it is crucial for the existence of zero-energy excitations required by the spontaneously broken translation symmetry. On the other hand, for the modes of the continuous spectrum the negative-frequency component is essentially negligible.
Light soliton
Calculation of γ_s(ν) is challenging in general. The task simplifies for the case ν ≫ 1, which corresponds to a light soliton as defined in section 2. The typical momentum of particles in the gas in this case is much larger than the momentum of particles in the soliton. In other words, the soliton is colder than the gas.
Let us understand which kinematical region gives the dominant contribution to the sum in eq. (3.13). To this aim, consider the amplitude (3.14) and take the particles 2 and 3 to be typical particles in the gas. Since their energies are much higher than the soliton binding energy, their wavefunctions are well described by plane waves with momenta κ₂, κ₃, which are of order ν. Substituting these into the vertex V₂₃ we obtain,
V 23 (−κ) = (2π) 3 δ(κ 2 − κ 3 − κ) ,(3.16)
and hence the amplitude
A_{1s,23} = V_{1s}(κ)/κ² , κ = κ₂ − κ₃ . (3.17)
The denominator enhances the amplitude for soft momentum exchange. However, the exchange cannot be arbitrarily small since the matrix element V_{1s}(κ) vanishes at κ = 0 due to orthogonality of the wavefunctions φ₁ and χ₀. It can be further shown [60] that a linear in κ contribution also vanishes as a consequence of (spontaneously broken) translation invariance. Thus,

V_{1s}(κ) ∼ κ² (3.18)

and the pole in the amplitude cancels out. We conclude that the amplitude is maximal at κ ∼ 1, where it is of order 1. The corresponding wavefunction φ₁ must be one of the low-lying states with characteristic energy and momentum |ε₁|, κ₁ ∼ 1. Notice that the amplitude obtained by the interchange of particles 1 and 2 for the same kinematics is suppressed,
A_{2s,13} = V_{2s}(κ₁ − κ₃)/|κ₁ − κ₃|² ∼ 1/κ₃² ∼ 1/ν² . (3.19)
We now return to the expression (3.13) and rewrite it in the following form,
γ_s(ν) = (π/μ₀) Σ_{states 1,2,3} δ(ε₁ + ε₂ − ε₃ − ε₀) { 2u₁(u₂ − u₃)|A_{1s,23}|² − 2u₂u₃|A_{1s,23}|² + (u₁u₂ − u₁u₃ − u₂u₃)(A_{1s,23} A*_{2s,13} + h.c.) } . (3.20)
For the preferred kinematics, the first term in brackets is small. Indeed, using the Maxwell distribution for the unbounded states we obtain,
u₂ − u₃ = u₂ [1 − e^{−2(ε₃−ε₂)/ν²}] = u₂ [1 − e^{−2(ε₁−ε₀)/ν²}] ≈ u₂ · 2(ε₁ − ε₀)/ν² = O(ν⁻²) , (3.21)
where in the second equality we used the energy conservation. The terms in the second line in eq. (3.20) are also suppressed due to eq. (3.19). Thus, up to corrections of order O(ν −2 ), we have
γ_s(ν) = −(2π/μ₀) Σ_{state 1} ∫ [dκ₂][dκ₃] δ(ε₁ − ε₀ + κ₂²/2 − κ₃²/2) (4π)³ e^{−(κ₂²+κ₃²)/ν²} |V_{1s}(κ₂ − κ₃)|² / |κ₂ − κ₃|⁴ . (3.22)
Two comments are in order. First, we observe that γ s (ν) is negative. Recalling that it multiplies the rate of the soliton mass change, eq. (3.12), we conclude that the mass of a light soliton decreases -it evaporates. Second, the expression (3.22) does not depend on the occupation number of the low-lying state 1. This is a nice property. Particles from the low-lying energy levels are further upscattered by the gas and eventually become unbound. Calculation of the occupation numbers of these levels presents a nontrivial task. Fortunately, we don't need to know them to determine the soliton evaporation rate in the leading order. The next steps include changing the integration variables to κ = κ 2 − κ 3 and κ + = (κ 2 + κ 3 )/2 and performing the integration over κ + . Discarding suppressed terms, we obtain that γ s is proportional to ν 2 with a numerical coefficient equal to a certain weighted sum over states in the standard soliton background,
γ_s(ν) = −C_ls ν² , C_ls = (8π²/μ₀) Σ_{state 1} ∫ [dκ]/κ⁵ |V_{1s}(κ)|² . (3.23)
Despite an apparent pole of the integrand at κ → 0, the coefficient C_ls is finite due to the property (3.18). Numerical evaluation gives [60],

C_ls ≈ 3.5 . (3.24)
To summarize, the light solitons evaporate. The change of the soliton mass is dominated by the process of g + s → g + g, with gas particles kicking off axions from the soliton. By considering the soft momentum exchange, we have obtained the leading term in the function γ s (ν) in the evaporation rate, which is proportional to ν 2 with an order-one coefficient.
It is instructive to compare the time scale of evaporation |Γ s | −1 with the relaxation time in the gas (2.19). We see that evaporation is faster than relaxation if ν exceeds the critical values
ν_c = [3π ln(k_g L) / (4√2 b C_ls)]^{1/2} ≈ 1.6 , (3.25)
where we have used ln(k g L) ∼ 5. This is close to the threshold for soliton evaporation found in numerical simulations, see section 4. For ν > ν c the relaxation in the gas can be neglected and our assumption of the stability of the Maxwell distribution is well justified.
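The estimate (3.25) follows by equating |Γ_s| from eqs. (3.12), (3.23) with 1/τ_rel from eq. (2.19); the common factor G²m³ρ_g²/k_g⁶ drops out. A two-line numerical check:

```python
import math

# Critical nu above which evaporation outpaces the gas relaxation.
b, C_ls, ln_kgL = 0.9, 3.5, 5.0     # eq. (2.19) constant, eq. (3.24), ln(k_g L) ~ 5
nu_c = math.sqrt(3.0 * math.pi * ln_kgL / (4.0 * math.sqrt(2.0) * b * C_ls))
# nu_c ~ 1.6, the value quoted in eq. (3.25)
```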
Heavy soliton
In this section we consider the opposite limit ν ≪ 1 corresponding to a heavy, or hot, soliton. The analysis in this case is more complicated, so we content ourselves with a semi-qualitative discussion focusing on the overall scaling of the growth rate function γ_s with ν. A more detailed study is left for the future.
For heavy soliton, the typical energy of gas particles is much smaller than the soliton binding energy which in our dimensionless units is of order one. Then the process with kicking-off particles from the soliton shown on the right of fig. 2 is strongly suppressed since it requires from particle 3 to have order-one energy. We are left with the absorption process given by the diagrams (a, b) on fig. 2 and corresponding to the term proportional to u 1 u 2 in eq. (3.13). This already allows us to conclude that the heavy soliton grows at a strictly positive rate, thereby excluding the possibility of a kinetic equilibrium between the soliton and the gas.
Particles 1 and 2 that participate in the absorption process can belong either to unbound or to bound states. A problem arises because the occupation numbers of the bound states are unknown. In a complete treatment, they must be determined self-consistently from the solution of the Boltzmann equation in the gas. Such analysis is beyond the scope of this paper. Below we focus on the contribution into γ s (ν) coming from the processes when both states 1 and 2 are unbound, assuming that it correctly captures the scaling of the full result with ν. We stress that this assumption must be verified by a detailed study which we postpone to future. We further assume that the occupation numbers of the unbound states are Maxwellian.
Even for unbound states, the wavefunctions are significantly modified by the long-range Newtonian potential of the soliton, which in the dimensionless units has the form,
U(ξ) = −μ₀/(4πξ) ≡ −β/ξ . (3.26)
We can capture its effect by approximating the exact eigenfunctions with the Coulomb wavefunctions,
φ_κ(ξ) = e^{i(β/κ)(ln(β/κ) − 1) + iπ/4} Γ(1 − iβ/κ) e^{πβ/(2κ)} e^{iκξ} ₁F₁(iβ/κ; 1; i(κξ − κ·ξ)) , (3.27)
where Γ stands for the gamma-function and 1 F 1 is the confluent hypergeometric (Kummer) function. This solution describes a scattered wave with initial momentum κ. Note that, compared to the standard definition, we have added a phase in eq. (3.27) for later convenience.
For modes with small asymptotic momenta the eigenfunctions simplify,

φ_κ(ξ) → √(2πβ/κ) J₀(2√(β(ξ − n·ξ))) ≡ (1/√κ) φ̂_n(ξ) , κ ≪ 1 , (3.28)
where n = κ/κ is the unit vector in the direction of momentum. We observe that the dependence on the absolute value of momentum factorizes. Note that the eigenfunctions get enhanced at κ → 0 which reflects the focusing effect of the Coulomb field. Note also that, despite the small momentum at infinity, the eigenfunctions oscillate with order-one period at ξ ∼ 1, consistent with the fact that particles accelerate to an order-one momentum in the vicinity of the soliton. We now use eq. (3.28) for the gas particles 1 and 2 (but not for the particle 3 which has κ 3 ∼ 1). This yields for the amplitude,
V_{1s}(κ) = (1/√κ₁) ∫ d³ξ φ̂_{n₁}(ξ) χ₀(ξ) e^{iκξ} ≡ (1/√κ₁) V̂_{1s}(κ) , (3.29a)
V_{23}(κ) = (1/√κ₂) ∫ d³ξ φ̂_{n₂}(ξ) φ*_{κ₃}(ξ) e^{iκξ} ≡ (1/√κ₂) V̂_{23}(κ) , (3.29b)
A_{1s,23} = (1/√(κ₁κ₂)) ∫ [dκ]/κ² V̂_{1s}(κ) V̂_{23}(−κ) ≡ (1/√(κ₁κ₂)) Â_{1s,23} , (3.29c)
where the hatted quantities do not depend on the absolute values of the momenta κ 1 , κ 2 . We substitute this into the expression for γ s and, upon neglecting ε 1 , ε 2 in the energy δ-function, perform the integration over κ 1 , κ 2 . In this way we obtain,
γ_s^{(u)}(ν) = (ν⁴/((2π)²μ₀)) ∫ dn₁ dn₂ [dκ₃] δ(κ₃²/2 + ε₀) |Â_{1s,23} + Â_{2s,13}|² , (3.30)
where the superscript (u) is to remind that we consider only the contribution from unbound states. All quantities inside the integral are ν-independent. Thus we conclude that γ (u) s scales as the fourth power of ν. Assuming that this also holds for the full contribution we write,
γ s (ν) = C hs ν 4 , C hs > 0, at ν → 0 . (3.31)
This implies that the soliton growth slows down with the increase of the soliton mass. We do not attempt to estimate the numerical coefficient C_hs. As already mentioned, this would require inclusion of the bound state contribution, which is beyond our present scope. Another caveat comes from the fact that the time scale of the heavy soliton growth Γ_s⁻¹ happens to be parametrically longer than the gas relaxation time (2.19). On these time scales the gas distribution may evolve away from Maxwellian, which we assumed in our derivation. Thus, the formula (3.31) should be taken with a grain of salt. Its comparison with the results of simulations is discussed in the next section.
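The small-κ limit (3.28) can be checked directly from (3.27) by summing the hypergeometric series; the sketch below implements ₁F₁ and J₀ from scratch (helper names and the sample point are ours) and compares the two expressions, using the identity |Γ(1 − iy)| e^{πy/2} = √(2πy/(1 − e^{−2πy})) to avoid overflow at large y = β/κ:

```python
import math

def hyp1f1(a, z, max_terms=300):
    """1F1(a; 1; z) summed term by term: (a)_n z^n / (n!)^2."""
    s = term = 1.0 + 0.0j
    for n in range(1, max_terms):
        term *= (a + n - 1) * z / (n * n)
        s += term
        if abs(term) < 1e-16 * abs(s):
            break
    return s

def bessel_j0(x, max_terms=100):
    """J0(x) from its power series."""
    s = term = 1.0
    for n in range(1, max_terms):
        term *= -(x * x / 4.0) / (n * n)
        s += term
    return s

beta = 25.9 / (4.0 * math.pi)        # standard-soliton value of beta, eq. (3.26)
kappa, xi, costh = 0.02, 1.0, 0.5    # small asymptotic momentum; kappa*xi*(1-costh) is the 1F1 argument
y = beta / kappa
pref = math.sqrt(2.0 * math.pi * y / (1.0 - math.exp(-2.0 * math.pi * y)))
full = pref * abs(hyp1f1(1j * y, 1j * kappa * xi * (1.0 - costh)))
limit = math.sqrt(2.0 * math.pi * beta / kappa) \
        * abs(bessel_j0(2.0 * math.sqrt(beta * xi * (1.0 - costh))))
# full and limit agree to well below a percent at this kappa
```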
Wave Simulations
In this section we present our numerical simulations. We first describe the setup. Then we provide three typical examples of simulation runs for heavy, intermediate and light solitons and introduce the procedure which we use to measure the soliton growth rate. Finally, we assemble 195 individual simulation runs to extract the soliton growth/evaporation rates and compare them to the theoretical predictions of the previous section. We focus here on the main suit of simulations where in each run we assign a single soliton surrounded by Maxwellian axion gas as the initial conditions. In appendix B we also report the simulations without the initial soliton where it forms dynamically from the axion gas, as in Ref. [39].
Setup

Evolution
We use the scaling transformation (2.2) to convert the Schrödinger-Poisson equations into the following dimensionless form,
i∂_t ψ̃ + (1/2)Δψ̃ − Φ̃ψ̃ = 0 , (4.1a)
ΔΦ̃ = |ψ̃|² , (4.1b)
which is equivalent to the choice of units m = 4πG = 1. This system is solved on a cubic lattice of size N with periodic boundary conditions onψ andΦ. We use the residual scaling symmetry to fix the lattice spacing to one, dx = 1. The size of the lattice thus sets the length of the box side and remains a free parameter. We run simulations for three different values N = 128, 256, 512. In what follows we omit tildes over dimensionless quantities. The wavefunction is advanced by the leapfrog integration algorithm (drift-kick-drift) [49,63],
ψ(t + dt, x) = e i ∆ dt/4 · e −i Φ(t+dt/2,x) dt · e i ∆ dt/4 ψ(t, x) . (4.2)
We transform ψ to the momentum space to evolve with e i ∆ dt/4 and ∆ is converted to −k 2 , while the evolution with the gravitational potential, e −i Φ dt , is performed in the real space. Fourier components of the gravitational potential with k = 0 are found from eq. (4.1b),
Φ_k = −(|ψ|²)_k / k² , (4.3)
whereas the zero mode is set to vanish, Φ_{k=0} = 0. We use a uniform time step dt = 2/π, which is determined by the requirement that the phase difference of a high-momentum mode with k = π between consecutive time slices does not exceed π. To assess the accuracy of the simulations, we monitor the total energy of the axion field in the box,
E = (1/2) Σ_k k² |ψ_k|² + (1/2) Σ_x Φ(x) |ψ(x)|² . (4.4)
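The evolution scheme (4.2)-(4.3) can be sketched in a few lines of numpy; this is an illustrative reimplementation, not the production code, and the potential used in the kick is computed after the first half-drift as a stand-in for Φ(t + dt/2):

```python
import numpy as np

def step(psi, dt):
    """One drift-kick-drift update, eq. (4.2), with the spectral Poisson solve (4.3)."""
    N = psi.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(N)            # lattice momenta for dx = 1
    k2 = k[:, None, None]**2 + k[None, :, None]**2 + k[None, None, :]**2
    drift = np.exp(-1j * k2 * dt / 4.0)            # e^{i*Laplacian*dt/4} in k-space
    psi = np.fft.ifftn(drift * np.fft.fftn(psi))   # first half-drift
    rho_k = np.fft.fftn(np.abs(psi)**2)
    with np.errstate(divide='ignore', invalid='ignore'):
        Phi_k = np.where(k2 > 0.0, -rho_k / k2, 0.0)   # eq. (4.3), zero mode set to 0
    Phi = np.fft.ifftn(Phi_k).real
    psi = np.exp(-1j * Phi * dt) * psi             # kick in real space
    return np.fft.ifftn(drift * np.fft.fftn(psi))  # second half-drift

# Each factor is a pure phase, so the particle number sum(|psi|^2) is conserved
# exactly; only the energy (4.4) drifts when the resolution is insufficient.
```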
We have observed that the energy conservation quickly deteriorates for heavy solitons with sizes comparable to the lattice spacing, r s ∼ 1 (see appendix C.1 for details). In our analysis we only use runs where the energy is conserved with the precision 0.1%.
Initial conditions for axion gas
The gas wavefunction is set up in the initial conditions through its Fourier decomposition,
ψ_g(t = 0, x) = N^{−3/2} Σ_k a_k e^{ik·x} , (4.5)
where the absolute values of the amplitudes a k are taken to follow the Maxwell distribution (2.16). To ensure that the gas modes are well resolved on the lattice, we restrict to k g ≤ 1.
The phases of a k are assigned to random numbers uniformly distributed in the range (0, 2π).
We have repeated simulations for several random initial phase realizations and have found that the choice of realization does not affect our results. The mean gas density ρ g and its total mass M g can be deduced as
ρ_g = (1/N³) ∫ d³x |ψ(x)|² = f_g k_g³/(4π)^{3/2} , M_g = ρ_g N³ = f_g k_g³ N³/(4π)^{3/2} . (4.6)
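The gas initial conditions (4.5) with Maxwellian amplitudes (2.16) can be sketched as follows (the function name and seed handling are ours); the mean density of the realization reproduces eq. (4.6):

```python
import numpy as np

def gas_initial_conditions(N, k_g, f_g, seed=0):
    """Random-phase realization of the Maxwellian gas, eqs. (2.16) and (4.5)."""
    rng = np.random.default_rng(seed)
    k = 2.0 * np.pi * np.fft.fftfreq(N)
    k2 = k[:, None, None]**2 + k[None, :, None]**2 + k[None, None, :]**2
    a_k = np.sqrt(f_g * np.exp(-k2 / k_g**2)) * np.exp(2j * np.pi * rng.random((N, N, N)))
    # psi_g(x) = N^{-3/2} sum_k a_k e^{ikx}; numpy's ifftn carries 1/N^3, hence N^{3/2}
    return np.fft.ifftn(a_k) * N**1.5

psi = gas_initial_conditions(64, 0.5, 0.01)
rho_mean = np.mean(np.abs(psi)**2)   # ~ f_g k_g^3/(4*pi)^{3/2} of eq. (4.6)
```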
The gas density is limited from above by the condition to avoid the Jeans instability that triggers a halo formation and thereby complicates the interpretation of simulation results. Thus, we require the size of the simulation box to be smaller than the Jeans length (2.20), which yields the condition:
N < l_J ⟺ f_g k_g < 0.054 (N/128)^{−2} . (4.7)
This puts a stringent restriction on the parameter space of the simulations.
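In the code units m = 4πG = 1, combining eqs. (2.20) and (4.6) fixes the numerical constant in eq. (4.7); a one-line check (the function name is ours):

```python
import math

def jeans_bound(N):
    """Upper bound on f_g*k_g from N < l_J, eqs. (2.20) and (4.6), units m = 4*pi*G = 1."""
    # l_J = k_g * pi * sqrt(2/rho_g) and rho_g = f_g*k_g^3/(4*pi)^{3/2} give
    # f_g * k_g < 2*pi^2*(4*pi)^{3/2}/N^2.
    return 2.0 * math.pi**2 * (4.0 * math.pi)**1.5 / N**2

# jeans_bound(128) ~ 0.054, reproducing eq. (4.7); the bound scales as 1/N^2
```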
Initial conditions for soliton
We superimpose the soliton wavefunction on top of the gas wavefunction at the beginning of the simulations. The input soliton density profile uses the analytic fit (2.9) characterized by a single parameter, the half-peak radius r_s^init. The peak density of the fit is taken to be [36],
ρ_{s,peak}^{init} = 2.794 / (r_s^{init})⁴ , (4.8)
which is slightly lower (by less than 2%) than the exact value implied by the formulas of section 2. This discrepancy is negligible given other uncertainties of the simulations. The initial phase of the soliton wave function is set to be zero. This choice does not change our average result since the phases of the axion gas are random. We notice that the initial soliton gets slightly deformed after superposing on the wavefunction of axion gas, but this deformation has little effect on the late time evolution. We take r init s ≥ 1.5 for the soliton to be resolved on the lattice. Periodic boundary conditions give rise to image solitons at distance N from the central one. We have observed that these images can distort the central soliton wavefunction. To avoid this distortion, we require the soliton size to be much smaller than the box, r init s < 0.1 N .
Measurement of the soliton mass
During the simulations the radius of the soliton evolves together with its mass. We estimate r_s, M_s at a given time using their relation to the soliton peak density provided by the fit to the soliton density profile,

r_s = 1.293 ρ_{s,peak}^{−1/4} , M_s = 32.37/r_s . (4.9)

Since the soliton moves through the box during simulations, the position of its peak is unknown. We choose the maximal density in the whole box as a proxy for the soliton peak density, assuming that the soliton is prominent within the axion gas. Note that due to interference between the soliton and the gas, the peak density of the axion field does not, in general, coincide with the soliton peak. Choosing the maximal density in the box can bias our estimate of the soliton peak density, and hence of its mass, upwards. Detailed investigation of this bias is performed in appendix C.2. It shows that the bias is at most 20% when the maximal density is higher than the mean gas density by a factor of 30, and quickly decreases for higher density contrasts. To obtain the soliton growth rate we analyze only the parts of the simulations with ρ_{s,peak} > 30 ρ_g.
On the other hand, we require the mass of the soliton to be significantly smaller than the total mass of the gas in order to avoid any effects on the soliton evolution that can arise due to a shortage of particles in the gas. We implement this by the condition M s < 0.5 M g .
Parameter space
Our simulations have four input parameters: N , k g , f g , and r init s , which describe the box size, the momentum distribution of axion gas, and the size of soliton. In this work, we use three box sizes, N = 128, 256, and 512. For the regime of light soliton, most of the simulations are conducted with N = 128, while for heavy solitons we use large boxes N = 512 in order to reach low (k g r s ) ∼ 0.1. The remaining three parameters are sampled in the ranges k g ∈ (0.1 , 1) , f g ∈ (10 −4 , 0.12) , r init s ∈ (1.5 , 12) . (4.10)
Their choice is dictated by the goal to efficiently capture the soliton growth/evaporation within realistic simulation time, while resolving the axion gas and the soliton on the lattice. In addition, they are subject to constraints discussed above which we summarize here for clarity:
a) f_g k_g < 0.054 (N/128)^{−2}: the box is smaller than the Jeans length, eq. (4.7);
b) 1.5 ≤ r_s^init < 0.1 N: the soliton is resolved on the lattice and not distorted by its periodic images;
c) ρ_{s,peak} > 30 ρ_g: the soliton is prominent within the axion gas;
d) M_s < 0.5 M_g: the soliton mass stays well below the total gas mass.
Growing and evaporating solitons
In this section we present a case study of several simulations that illustrate possible evolution of the soliton-gas system. We use these examples to introduce our procedure for extraction of the soliton growth rate. We also provide evidence that the gas distribution remains close to Maxwellian during the simulations. We consider three simulation runs with the same initial gas configuration characterized by (N = 128, k g = 1, f g = 0.01) and different initial soliton sizes r init s : 1.51 (heavy soliton), 2.71 (median soliton), and 3.62 (light soliton). Figures 4-6 show the evolution of the soliton characteristics in the three runs. These include the soliton peak density ρ s, peak (t) (which we identify with the maximal density in the box), the soliton mass M s (t) and the soliton radius r s (t). The peak density is normalized to the mean density of the gas, whereas the mass and radius are determined using the relations (4.9). Clearly, the heavy soliton grows and the light soliton evaporates which is consistent with the analysis of section 3. The median soliton remains approximately unchanged indicating that the transition from growth to evaporation occurs at (k g r s ) ∼ 2.7. We also plot in figs. 4-6 the change in the total energy of the axion field in the box. For the median and light solitons the energy is conserved with high precision |E(t)/E(0) − 1| 10 −5 throughout the whole duration of the simulations. For the heavy soliton, the energy exhibits a slow drift and the error exceeds 0.1% by the end of the simulations. We associate this with the loss of spatial and temporal resolution for heavy solitons which have small sizes r s ∼ 1 and high oscillation frequencies |E s | ∼ 1 (see appendix C.1 for a detailed discussion). In our analysis we use only the portion of the simulation where |E(t)/E(0) − 1| < 10 −3 .
We now describe our algorithm to extract the soliton growth rate Γ_s. The task is complicated by strong oscillations of the soliton peak density, which are clearly visible in the plots and translate into oscillations of the estimated soliton mass and radius. Such oscillations have been observed in previous works [33,42] and correspond to the normal modes of the soliton [61,64], with the frequency of the lowest mode ω ∼ 0.5 r_s⁻². To mitigate their effect, we construct running averages of the soliton parameters by smoothing them with a top-hat function. We take the width of the top-hat as a function of the initial soliton size, t_width = 70 (r_s^init)², which covers about five periods of the oscillations. The resulting smoothed dependences are shown in figs. 4-6 by thick curves.
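The running average can be sketched with a discrete top-hat convolution (the width-to-samples conversion and the illustrative numbers below are ours):

```python
import numpy as np

def tophat_smooth(y, width_samples):
    """Running average with a top-hat window of the given width (in samples)."""
    w = max(1, int(width_samples))
    return np.convolve(y, np.ones(w) / w, mode='same')

# A mass record with normal-mode ripple on top of a constant (illustrative):
t = np.linspace(0.0, 1000.0, 2001)                 # 0.5 time units between samples
signal = 10.0 + np.sin(2.0 * np.pi * t / 30.0)     # ripple of period 30
width = 70 * 1.5**2 / 0.5                          # t_width = 70 (r_s^init)^2 in samples
smooth = tophat_smooth(signal, width)
# the window covers ~5 ripple periods, so the oscillation is strongly suppressed
```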
While smoothing suppresses most of the chaotic oscillations, it still leaves some irregularities in the time dependence of the soliton mass that introduce significant noise when calculating its time derivative. To further suppress this noise, we fit the smoothed mass with an analytic function of time. We have found that a quadratic fit is sufficient in all cases. Thus, we write
M_s^fit(t) = a_0 + a_1 t + a_2 t^2 , (4.11)
where a_0, a_1 and a_2 are fitting parameters. The fitting time range is determined by the following criteria:
• Inside the range, the soliton peak density, mass and radius satisfy the conditions (c,d) from section 4.1;
• The total energy in the simulation box is conserved within precision |E(t)/E(0) − 1| < 0.1%;
• The time duration is smaller than half of the relaxation time (2.19) to avoid possible changes in the gas distribution due to kinetic relaxation [39]. 11
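With the fitting window selected by these criteria, the quadratic fit of eq. (4.11) is a standard least-squares problem; a minimal numpy sketch on synthetic data (the coefficients below are hypothetical, not the actual values of table 1):

```python
import numpy as np

# Synthetic smoothed soliton mass history; the coefficients are hypothetical,
# chosen only to illustrate the fit of eq. (4.11).
t = np.linspace(0.0, 4.0e4, 500)
a_true = (12.0, 2.0e-4, 3.0e-9)                       # (a_0, a_1, a_2)
M_smooth = a_true[0] + a_true[1] * t + a_true[2] * t**2
M_smooth = M_smooth + 0.02 * np.sin(t / 300.0)        # residual irregularities

# Quadratic fit M_s^fit(t) = a_0 + a_1 t + a_2 t^2, eq. (4.11)
a2, a1, a0 = np.polyfit(t, M_smooth, deg=2)           # highest power first
M_fit = a0 + a1 * t + a2 * t**2
```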
The best-fit values of a_0, a_1, a_2 for the three sample runs are given in table 1. The corresponding fitted curves are shown in figs. 4-6 with yellow dots. We also define the "fitted" soliton radius by converting it from the soliton mass in accordance with eqs. (4.9),

r_s^fit(t) ≡ 32.37 / M_s^fit(t) = 32.37 / (a_0 + a_1 t + a_2 t^2) . (4.12)
The result matches very well the smoothed dependence r_s(t), see figs. 4-6. We have verified that an independent fit of the smoothed r_s(t) with a quadratic polynomial produces essentially identical curves, which provides a consistency check of our procedure. We can now estimate the soliton growth rate by substituting the fitted time dependence of the soliton mass into the defining formula (1.3), which yields

Γ_s^fit(t) = (a_1 + 2 a_2 t) / (a_0 + a_1 t + a_2 t^2) . (4.13)
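Given the fitted coefficients, eqs. (4.12) and (4.13) are direct evaluations; a small sketch (the coefficient values are hypothetical placeholders, and the helper names are ours):

```python
import numpy as np

def soliton_rate_from_fit(a0, a1, a2, t):
    """Gamma_s^fit(t) = (a_1 + 2 a_2 t) / (a_0 + a_1 t + a_2 t^2), eq. (4.13)."""
    t = np.asarray(t, dtype=float)
    return (a1 + 2.0 * a2 * t) / (a0 + a1 * t + a2 * t**2)

def soliton_radius_from_fit(a0, a1, a2, t):
    """r_s^fit(t) = 32.37 / M_s^fit(t), eq. (4.12), in code units."""
    t = np.asarray(t, dtype=float)
    return 32.37 / (a0 + a1 * t + a2 * t**2)

# Hypothetical fit coefficients (placeholders, not the table 1 values)
a0, a1, a2 = 16.0, 6.0e-5, 2.0e-10
ts = np.linspace(0.0, 3.0e5, 20)     # 20 evenly spaced output points per run
rates = soliton_rate_from_fit(a0, a1, a2, ts)
radii = soliton_radius_from_fit(a0, a1, a2, ts)
```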
We are interested in the dependence of the growth rate on the soliton radius r_s. Both these quantities depend on time, so a single run provides a continuous set of data points (r_s^fit(t), Γ_s^fit(t)) sampled at different moments of time. In view of the uncertainties of our smoothing and fitting procedure, we reduce this set to 20 data points (r_s^fit(t_i), Γ_s^fit(t_i)), i = 1, . . . , 20, evenly distributed in time within the range of the fitting function M_s^fit(t). These 20 data points represent the output of a single run. In the next subsection we combine the outputs of 195 runs to build the cumulative dependence of the growth rate on the soliton and gas parameters.

Table 1: Parameters of the soliton mass fit for the three simulations shown in figs. 4-6. The initial size of the soliton is r_s^init. The parameters of the axion gas are N = 128, k_g = 1, f_g = 0.01.
The soliton growth rate depends on the gas distribution, which can, in principle, change during the simulations. This could lead to incompatibility of the results read out at different moments from the start of the runs. To verify that this is not the case, we compare runs that differ by the initial soliton mass but have overlapping soliton mass ranges spanned during the evolution. The top panel of fig. 7 shows the evolution of the soliton mass in five simulations of heavy solitons with k_g r_s^init varying from 0.75 to 2.26. The gas parameters are chosen the same in all five runs (N = 128, k_g = 0.5, f_g = 0.06). The curves have been shifted in time until they overlap. We observe that the curves are well aligned with each other. In the lower panel of fig. 7 we repeat the same exercise for five light soliton simulations with k_g r_s^init from 3.32 to 4.52 and the gas parameters (N = 128, k_g = 1, f_g = 0.01). The stacked curves are again well aligned. We conclude that the soliton growth rate depends only on the initial gas parameters and the instantaneous soliton mass (or radius), and is insensitive to the previous evolution of the soliton-gas system. This justifies combining the results extracted from different runs at different stages of the simulations.
The above results suggest that the gas distribution remains close to Maxwellian during the simulations with solitons. We have measured the distribution directly at different moments of time and have seen that it is compatible with the Maxwellian one, though the measurement is rather noisy, see fig. 15 in appendix B. This is in stark contrast with simulations [39] without an initial soliton, where the gas distribution exhibits a distinct evolution on the time scale τ_rel (eq. (2.19)) towards populating low-momentum modes, which culminates in the soliton formation. However, as discussed in appendix B, the distribution appears to return to Maxwellian after the soliton is formed. We also find that the growth of the soliton mass, though faster than in the Maxwellian gas right after the formation, approaches the Maxwellian rate within a time of order τ_rel, see fig. 14. This gives another piece of evidence that the presence of the soliton "Maxwellizes" the gas.
The analytic derivation of section 3 implies that at fixed k_g r_s the soliton growth/evaporation rate is proportional to ρ_g^2/k_g^6 ∝ f_g^2. To verify whether this scaling holds in the simulations, we perform several runs with the same N, k_g and r_s^init, but different f_g. We measure the time dependence of the soliton mass and scale the time axis by f_g^2. The results are shown in fig. 8. We see a satisfactory agreement between the different curves. A slightly faster growth of the curve with the highest value of f_g at late times can be due to the fact that the gas in this case is closer to the Jeans instability, leading to the development of an overdensity (proto-halo) around the soliton. We have clearly seen this overdensity in the runs with parameters near the Jeans instability limit (4.7) and observed that it is correlated with an increase of the ratio Γ_s/f_g^2. The associated bias is comparable to the other uncertainties in the measurement of Γ_s and is included in the error bars for our final results in the next section.
Results
In this section, we construct the cumulative dependence of Γ_s on the soliton and gas parameters. As explained above, each simulation run produces 20 data points (r_s, Γ_s). We collect the data points from 195 runs and bin them in logarithmic scale in k_g r_s. In each bin we compute the average value and variance of
Γ_s × (4π)^3 / f_g^2 = Γ_s × k_g^6 / ρ_g^2 . (4.14)
The results of this procedure are shown in fig. 9. Note that we restore the dimensionful constants in the scale of Γ_s in the figure.
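The binning step can be sketched as follows (our own helper; the number of bins and the synthetic test data are arbitrary illustrative choices):

```python
import numpy as np

def bin_rates(kgrs, rates, nbins=12):
    """Bin (k_g r_s, Gamma_s) points logarithmically in k_g r_s and return
    geometric bin centers, the mean rate and its standard error per bin."""
    kgrs = np.asarray(kgrs, dtype=float)
    rates = np.asarray(rates, dtype=float)
    edges = np.logspace(np.log10(kgrs.min()), np.log10(kgrs.max()), nbins + 1)
    idx = np.clip(np.digitize(kgrs, edges) - 1, 0, nbins - 1)
    centers, means, errs = [], [], []
    for b in range(nbins):
        sel = idx == b
        if not np.any(sel):
            continue
        centers.append(np.sqrt(edges[b] * edges[b + 1]))   # geometric bin center
        means.append(rates[sel].mean())
        n = sel.sum()
        errs.append(rates[sel].std(ddof=1) / np.sqrt(n) if n > 1 else 0.0)
    return np.array(centers), np.array(means), np.array(errs)
```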
Consistently with the analysis of section 3, the growth rate is positive at small k_g r_s (heavy solitons), corresponding to soliton growth, and is negative at large k_g r_s (light solitons), corresponding to evaporation. Moreover, the data points with the largest values of k_g r_s match the asymptotic dependence (3.23), including the numerical coefficient (3.24), 12
Γ_s ≃ −2.1 × (4πG)^2 m^3 (ρ_g^2 / k_g^6) (k_g r_s)^2 . (4.15)
This dependence is shown by the blue line. Thus, we conclude that the asymptotics (3.23) are reached already at k_g r_s ≳ 5. The transition from evaporation to growth happens at k_g r_s ∼ 2.5, which is in reasonable agreement with the naive estimate (3.25). In terms of the gas and soliton virial temperatures, it corresponds to T_g/T_s ≈ 12.
For lower k_g r_s the soliton grows. The growth rate stays almost constant in the range 0.7 < k_g r_s < 2, where it is comparable to the inverse of the gas relaxation time τ_rel^{-1}, see eq. (2.19). The lower end of the plateau corresponds to the equality of the gas and soliton virial temperatures, T_g/T_s = 1, which is marked by the dashed vertical line in fig. 9.
At k_g r_s < 0.7 (equivalently T_g/T_s < 1) the growth rate quickly decreases. We find that this decrease is consistent with a power law

Γ_s ∝ (k_g r_s)^n (4.16)

with n ≈ 3, indicated by the dotted line in the plot. The points with the smallest values of k_g r_s hint at a steepening dependence with n = 4 at k_g r_s → 0, in agreement with the analytic estimate (3.31). There are, however, several caveats that prevent us from claiming that we have reached the heavy soliton asymptotics. First, as pointed out in section 3.3, the expression (3.31) has been obtained under the assumption that the contribution of the bound states to the soliton growth scales with k_g r_s in the same way as the contribution of states from the continuum. This assumption must be verified by analyzing the kinetic cascade in the soliton-gas system, which is beyond the scope of the present paper. Second, the low-k_g r_s bins in our simulations are at the extreme of the numerical resolution and close to the threshold for halo formation. Therefore they can be affected by systematics. Without the three lowest-k_g r_s bins the numerical data are compatible with a shallower slope n = 2. All in all, the heavy soliton limit is challenging both for numerical and analytical methods. Taking into account the uncertainties, we conservatively conclude that the power n in eq. (4.16) for heavy solitons lies in the range 2 ≤ n ≤ 4. More work is needed to pin down the precise asymptotic value of n at k_g r_s → 0.
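Measuring the power n in eq. (4.16) amounts to a straight-line fit in log-log coordinates; a sketch with hypothetical data points generated from an exact n = 3 power law:

```python
import numpy as np

# Hypothetical binned points following an exact n = 3 power law
kgrs = np.array([0.25, 0.35, 0.5, 0.7, 1.0, 1.4, 2.0])
gamma = 1.3e-3 * kgrs**3

# Straight-line fit in log-log space: log Gamma = n * log(k_g r_s) + const
n_fit, log_amp = np.polyfit(np.log(kgrs), np.log(gamma), deg=1)
```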
Discussion and outlook
Absence of kinetic equilibrium. We have found that a soliton (boson star) immersed into a homogeneous Maxwellian axion gas evaporates if its virial temperature is about 12 times lower than the virial temperature of the gas, and grows otherwise. This rules out the possibility of a stable kinetic equilibrium between the gas and the soliton.
Evaporation of light solitons. Though evaporation of cold solitons may at first sight appear surprising, the mechanism behind it is quite intuitive. Being a self-gravitating system, the soliton possesses negative heat capacity. Thus, a transfer of energy from the hot gas to the cold soliton makes the latter even colder. This leads to a run-away of the soliton temperature, and hence its mass, towards zero.
The parametric dependence of the evaporation rate can be estimated using the following simple considerations. 13 Wave interference in the axion gas produces density inhomogeneities with the characteristic size of half the de Broglie wavelength, λ_a/2 = π/k_g. These inhomogeneities can be thought of as quasi-particles with the mass M_qp ∼ ρ_g (λ_a/2)^3 [12]. A single quasi-particle colliding with the soliton transfers to it a recoil momentum
δp ∼ (G M_s M_qp / r_s^2) · (r_s / v_qp) , (5.1)

where v_qp ∼ k_g/m is the quasi-particle velocity, and r_s appears as the typical impact parameter. This implies the soliton recoil energy

δE_s ∼ δp^2 / (2 M_s) ∼ G^2 M_qp^2 M_s / (2 r_s^2 v_qp^2) . (5.2)
Since the size of the quasi-particle is smaller than r_s for the light soliton, the recoil energy is distributed non-uniformly throughout the soliton volume. This leads to excitation of its normal modes. The number of axions that get excited from the ground state, and hence get lost by the soliton, is of order δN_s ∼ −δE_s/|E_s|. Combining everything together, we obtain the mass loss of the soliton in a single quasi-particle collision,

δM_s / M_s ∼ − G^2 M_qp^2 m^2 / (2 v_qp^2) , (5.3)
where we have used that |E_s| r_s^2 ∼ 1/m. To obtain the evaporation rate, we have to multiply this result by the number of quasi-particles bombarding the soliton per unit time, J_qp ∼ 4π r_s^2 (λ_a/2)^{-3} v_qp. In this way we arrive at

Γ_s ∼ − 2π^4 G^2 m^3 (ρ_g^2 / k_g^6) (k_g r_s)^2 , (5.4)
which agrees with the exact expression (4.15) obtained from the kinetic theory within a factor of two. We have seen that the threshold for evaporation is set by the equality of the evaporation rate and the relaxation rate in the gas, a competing process leading to the soliton formation [39]. This explains why the solitons that are formed in the gas always have virial temperature comparable to that of the gas: they are just hot (and heavy) enough to survive.
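The quoted "factor of two" agreement is a one-line arithmetic check between the kinetic-theory coefficient of eq. (4.15) and the quasi-particle estimate of eq. (5.4):

```python
import numpy as np

c_kinetic = 2.1 * (4.0 * np.pi)**2   # coefficient in eq. (4.15), ~ 331.6
c_quasipart = 2.0 * np.pi**4         # coefficient in eq. (5.4),  ~ 194.8
ratio = c_kinetic / c_quasipart      # ~ 1.7, i.e. agreement within a factor of two
```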
In what physical situation can the soliton evaporation be relevant? For fuzzy dark matter, this is the case when a small subhalo with low velocity dispersion and a light solitonic core falls into a bigger halo with higher velocity dispersion. Evaporation then adds a new destruction mechanism for the subhalo soliton, on top of the tidal stripping [59]. The time scale of evaporation is given by the inverse of |Γ_s|,

… yr , (5.5)

13 We thank Neal Dalal and Junwu Huang for the discussion on this topic.
where ρ_g and v_g should be taken as the density and velocity dispersion of the bigger halo at the orbit of the soliton. The evaporation time is very sensitive to the halo parameters and can be longer or shorter than the age of the universe, depending on their precise values. The evaporation should also be taken into account in the evolution of boson stars in merging QCD axion miniclusters. Though here the particle mass is much higher, the evaporation time can still be much shorter than the age of the universe due to the very small velocity dispersion v_g ∼ 10^{-5} km/s in the miniclusters and their extremely high density ρ_g ≳ 10^6 GeV/cm^3 [65].
Growth of heavy solitons. For solitons with virial temperature above the evaporation threshold (T_s ≳ 0.1 T_g) we have found that the growth rate quickly decreases once the soliton temperature exceeds that of the gas. This result is in qualitative agreement with other works [39,48]. The growth rate of heavy solitons measured from our numerical simulations is consistent with the power law (4.16) with n between 2 and 4. We have presented analytic arguments favoring n = 4 in the limit k_g r_s → 0, which is compatible with the numerical data in the lowest k_g r_s bins. These bins, however, suffer from large uncertainties, and it remains unclear if the range k_g r_s ≳ 0.2 probed in the simulations is sufficient to reach into the asymptotic heavy soliton regime.
The power-law dependence of the rate (4.16) translates into power-law growth of the soliton mass, 14

M_s ∝ t^α , α = 1/n . (5.6)
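The link between the rate scaling and the growth law can be checked by integrating the toy equation dM/dt = C M^{1−n} (our normalization constant C and initial mass are arbitrary) and fitting the late-time log-log slope:

```python
import numpy as np

# Toy integration of (1/M) dM/dt = C / M^n (i.e. Gamma_s ∝ M_s^(-n), since
# k_g r_s ∝ 1/M_s); at late times the mass should follow M ∝ t^(1/n).
n, C = 3, 1.0
t = np.linspace(0.0, 1.0e6, 200001)
dt = t[1] - t[0]
M = np.empty_like(t)
M[0] = 10.0
for i in range(1, len(t)):
    M[i] = M[i - 1] + dt * C * M[i - 1]**(1 - n)   # explicit Euler step

# Local log-log slope over the second half of the evolution
half = len(t) // 2
alpha = np.polyfit(np.log(t[half:]), np.log(M[half:]), deg=1)[0]
```

The equation also has the closed-form solution M^n = M(0)^n + n C t, which makes the asymptotic slope α = 1/n explicit.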
Ref. [39] established that α = 1/2 provides a good fit to the soliton growth right after formation, whereas Ref. [48] found a dramatic flattening of the soliton mass curve at late times, corresponding to α = 1/8. The results of Ref. [39] are consistent with ours, though our central value for the power, n = 3, predicts a somewhat shallower dependence with α = 1/3. The steep growth observed in [39] might be due to the short duration of the simulations. Indeed, by carrying out numerical experiments with the same setup as in [39] (see appendix B) and fitting the soliton mass with the formula (5.6), we have observed a correlation of the best-fit index α with the soliton lifetime: α is about 1/2 for newly formed solitons and decreases down to 1/4 for grown-up solitons long after the relaxation time (see fig. 13). This trend is in agreement with our main simulations, where we see indications of increasing n, and hence decreasing α, for heavier solitons. However, at this point the numerical data are rather inconclusive as to the robustness of this trend and the asymptotic value of α at t → ∞.
On the other hand, we do not see any evidence for the low α = 1/8 found in [48]. Moreover, our analytic considerations suggest that the asymptotic value of α is at least as high as 1/4. The discrepancy may be due to the difference in the setups. We study a soliton in a homogeneous gas, whereas Ref. [48] considers a soliton in the center of an axion halo. It is conceivable that suppression of the soliton growth in the latter case stems from its back reaction on the halo. It will be interesting to explore this possibility in more detail in future.
Soliton-host halo relation. One can ask whether our results have any implications for the soliton-host halo relation. The answer is: not directly, because in the cosmological setting the solitons were found to form during the initial halo collapse, when axions are not yet in the kinetic regime. Still, with some degree of extrapolation, one can argue that our results make the formation of a light soliton unlikely, since it would be evaporated by the fast axions from the halo. This sets a lower bound on the soliton mass which is just a factor of a few lower than M_s^SSH, the mass corresponding to the soliton-host halo relation. 15 Heavier solitons can, in principle, form with arbitrary masses and will continue growing upon the halo virialization. The time scale for this growth can, however, be very long and exceed the age of the universe when the soliton mass exceeds M_s^SSH. Moreover, it is natural to speculate that the solitons are more likely to form as light as they can, which singles out M_s^SSH as the sweet spot. This reasoning still does not tell us how far the soliton-host halo relation can be extrapolated in the parameter space. In particular, we do not know whether the solitons form in any halo and for any value of the axion mass, or whether for some parameters their formation becomes improbable. More work is needed to answer these questions.
Persistence of Maxwell distribution. It is known that without a soliton the velocity distribution of the axion gas relaxes towards a thermal form with high population of low-momentum modes [39]. We have found evidence that the presence of a soliton changes the picture. In this case the Maxwell distribution appears to persist on timescales significantly longer than the kinetic relaxation time. Moreover, in the simulations with soliton formation we observed restoration of the Maxwell distribution after a transient period with enhanced population of low-momentum modes preceding the birth of the soliton. This "Maxwellization" manifests itself indirectly in the universality of the soliton mass evolution in simulations with different histories (figs. 7, 14), as well as in the directly measured momentum distribution at different moments of time (fig. 15). The latter, however, is subject to large temporal fluctuations which presently do not allow us to move beyond qualitative statements. It will be interesting to study this phenomenon more quantitatively in future by developing methods of measuring the momentum distribution with reduced noise. A complementary approach would be to track the distribution of axions in energy, instead of momentum, as suggested in Ref. [39].

Combining the Schrödinger and Poisson equations, we rewrite the system as a single equation with non-local interaction,
i∂_t ψ + ∆ψ/(2m) − 4πGm^2 ψ (1/∆)|ψ|^2 = 0 , (A.1)
where 1/∆ denotes the Green's function of the Laplacian. Clearly, this equation conserves the total mass of axions in a box, M_tot = m ∫ d^3x |ψ|^2. Now, we make the split (2.22) into the soliton and gas and, using the fact that the soliton is a solution of eq. (A.1), obtain the equation for the gas component,
i∂_t ψ_g + ∆ψ_g/(2m) − 4πGm^2 [ ψ_g (1/∆)|ψ_s|^2 + ψ_s (1/∆)(ψ_s^* ψ_g) + ψ_s (1/∆)(ψ_s ψ_g^*) ]
− 4πGm^2 [ ψ_g (1/∆)(ψ_s^* ψ_g) + ψ_g (1/∆)(ψ_s ψ_g^*) + ψ_s (1/∆)|ψ_g|^2 + ψ_g (1/∆)|ψ_g|^2 ] = 0 . (A.2)
In the first line we have grouped the terms that affect the gas field at linear order, whereas the second line contains interactions. Note that, despite the presence of the small factor 4πGm^2, all terms in the first line are of the same order because ψ_s is proportional to (4πGm^2)^{-1/2}, see eq. (2.4). Correspondingly, the leading interaction terms are of order (4πGm^2)^{1/2}. The mass of the gas is not constant. From eq. (A.2) we have,
dM_g/dt = m (d/dt) ∫ d^3x |ψ_g|^2 = −(8πGm^3) Im ∫ d^3x [ ψ_s^* ψ_g (1/∆)(ψ_s^* ψ_g) + ψ_s^* ψ_g (1/∆)|ψ_g|^2 ] , (A.3)
where we have canceled the boundary terms assuming periodic boundary conditions. Since the total mass is conserved, this must be compensated by the change in the soliton mass. Thus, we obtain for the soliton growth rate,
Γ_s = (8πGm^3 / M_s) Im ∫ d^3x [ ψ_s^* ψ_g (1/∆)(ψ_s^* ψ_g) + ψ_s^* ψ_g (1/∆)|ψ_g|^2 ] . (A.4)
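The 1/∆ operators above act diagonally in Fourier space on a periodic box, which is also how they are evaluated numerically; a minimal sketch (our own helper, with the k = 0 mode set to zero, as appropriate for a periodic box):

```python
import numpy as np

def inv_laplacian(rho, L):
    """Apply the periodic Green's function 1/Delta spectrally:
    (1/Delta) rho = ifft( -fft(rho) / k^2 ), with the k = 0 mode set to zero
    (only the mean-subtracted source contributes in a periodic box)."""
    N = rho.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
    k2 = sum(ki**2 for ki in np.meshgrid(k, k, k, indexing="ij"))
    rho_k = np.fft.fftn(rho)
    out_k = np.zeros_like(rho_k)
    mask = k2 > 0
    out_k[mask] = -rho_k[mask] / k2[mask]
    return np.real(np.fft.ifftn(out_k))
```

For a single Fourier mode cos(kx) the operator returns −cos(kx)/k², which provides an easy correctness check.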
If we neglect the interaction terms in eq. (A.2), it admits a set of periodic-in-time solutions. We decompose the gas field in these eigenmodes, 16
ψ_g(t, x) = Σ_i a_i(t) e^{−iE_i t} ψ_i(x) , (A.5)
where the amplitudes a_i(t) slowly vary due to the interactions. Substituting into eq. (A.4) we obtain,
Γ_s = −(2m/M_s) Im [ Σ_{i,j} a_i a_j e^{−i(E_i + E_j − 2E_s)t} A_{is,js} + Σ_{i,j,k} a_i a_j a_k^* e^{−i(E_i + E_j − E_k − E_s)t} A_{is,jk} ] , (A.6)
where the scattering amplitude A_{is,jk} is defined in eq. (3.8), and A_{is,js} is defined similarly with the kth wavefunction replaced by the soliton. All terms in the first sum quickly oscillate, since the gas states are separated from the ground state by an energy gap of order |E_s|. Thus, they disappear once we average the growth rate over time scales of order |E_s|^{-1}, and we omit them in what follows.
The second sum does not vanish upon time averaging because the combination of energies in the exponent can be small. However, to obtain the physical growth rate we also have to average over the random initial phases of the gas amplitudes. In the absence of interactions the amplitudes a_i in eq. (A.6) coincide with the initial amplitudes a_i^(0), and thus averaging over their phases will give Γ_s = 0. To obtain a non-zero result, we have to take into account gas interactions.
The first correction to the free gas field is due to terms of order (4πGm^2)^{1/2} in eq. (A.2). We can write it schematically as

ψ_g^(1) = (4πGm^2) G_ret ∗ [ ψ_g^(0) (1/∆)(ψ_s^* ψ_g^(0)) + ψ_g^(0) (1/∆)(ψ_s ψ_g^(0)*) + ψ_s (1/∆)|ψ_g^(0)|^2 ] , (A.7)
where ψ_g^(0) is the free gas field and G_ret is the retarded Green's function of the operator in the first line of (A.2). Using the complete set of eigenmodes, it can be written as, 17

G_ret(t − t', x, x') = Σ_i ∫ (dE/2π) [ ψ_i(x) ψ_i^*(x') / (E − E_i + iε) ] e^{−iE(t−t')} . (A.8)
Substituting this expression into (A.7) and expanding ψ_g^(1) and ψ_g^(0) into eigenmodes, we obtain the first-order correction to the amplitudes,

a_i^(1) = − Σ_{j,k} [ a_j^(0) a_k^(0) ( e^{−i(E_j + E_k − E_i − E_s)t} / (E_j + E_k − E_i − E_s + iε) ) A_{ks,ji}
+ a_j^(0) (a_k^(0))^* ( e^{−i(E_j − E_k − E_i + E_s)t} / (E_j − E_k − E_i + E_s + iε) ) (A_{ks,ij}^* + A_{is,kj}^*) ] . (A.9)
Next, we insert this expression into the first-order contribution to the soliton growth rate,
Γ_s^(1) = −(2m/M_s) Σ_{i,j,k} [ a_i^(1) a_j^(0) (a_k^(0))^* + a_i^(0) a_j^(1) (a_k^(0))^* + a_i^(0) a_j^(0) (a_k^(1))^* ] e^{−i(E_i + E_j − E_k − E_s)t} A_{is,jk} , (A.10)
and average over the phases of a_i^(0) using

⟨ a_i^(0) a_j^(0) (a_{i'}^(0))^* (a_{j'}^(0))^* ⟩ = f_i f_j ( δ_{ii'} δ_{jj'} + δ_{ij'} δ_{ji'} ) . (A.11)
Upon a somewhat lengthy, but straightforward calculation, we arrive at
Γ_s = (m/M_s) Im Σ_{i,j,k} [ (f_j f_k + f_i f_k) |A_{is,jk} + A_{js,ik}|^2 / (E_k − E_j − E_i + E_s + iε)
+ ( f_j f_k (A_{is,jj} + A_{js,ij})(A_{is,kk}^* + A_{ks,ik}^*) / (−E_i + E_s + iε) + h.c. )
+ f_i f_j |A_{is,jk} + A_{js,ik}|^2 / (E_i + E_j − E_k − E_s − iε) ] . (A.12)
In the final step we use the formula
Im [ 1/(z + iε) ] = −π δ(z) . (A.13)
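This distributional identity can be checked numerically: Im 1/(z + iε) = −ε/(z² + ε²), integrated against a smooth test function, approaches −π f(0) as ε → 0 (the grid and test function below are arbitrary illustrative choices):

```python
import numpy as np

# Check that Im 1/(z + i*eps) = -eps/(z^2 + eps^2) acts as -pi*delta(z):
# its integral against a smooth test function f tends to -pi * f(0).
z = np.linspace(-50.0, 50.0, 400001)
dz = z[1] - z[0]
f = np.exp(-z**2)                                  # test function with f(0) = 1
errs = []
for eps in (0.1, 0.01):
    integral = np.sum(np.imag(1.0 / (z + 1j * eps)) * f) * dz
    errs.append(abs(integral + np.pi))             # deviation from -pi * f(0)
```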
Then the second term vanishes because E_i ≠ E_s, whereas the rest of the terms reproduce eq. (3.9). Thus, we have shown that the classical derivation leads to the same soliton growth rate as the quantum mechanical one, upon averaging over the ensemble of gas realizations with different initial phases. The above derivation also allows us to estimate the r.m.s. fluctuations of Γ_s in individual realizations. To this aim, let us return to eq. (A.6) and smooth it with a Gaussian filter over time scales τ satisfying Γ_s^{-1} > τ ≫ |E_s|^{-1}. We obtain,
Γ_s = −(2m/M_s) Im Σ_{i,j,k} a_i a_j a_k^* e^{−i(E_i + E_j − E_k − E_s)t} A_{is,jk} e^{−τ^2 (E_i + E_j − E_k − E_s)^2 / 2} . (A.14)
To get the r.m.s. fluctuations, we subtract Γ_s, square the result and average over the gas phases. In the latter step we can replace a_i with a_i^(0) to obtain the leading contribution. Retaining only the unsuppressed terms, we obtain,
⟨δΓ_s^2⟩ ≃ (m/M_s)^2 Σ_{i,j,k} f_i f_j f_k |A_{is,jk} + A_{js,ik}|^2 e^{−τ^2 (E_i + E_j − E_k − E_s)^2}
≃ (√π / τ) (m/M_s)^2 Σ_{i,j,k} f_i f_j f_k |A_{is,jk} + A_{js,ik}|^2 δ(E_i + E_j − E_k − E_s) . (A.15)
Comparing this with the expression (3.9) for the rate, we get an estimate

⟨δΓ_s^2⟩ ∼ (1/τ) (m/M_s) f_g Γ_s . (A.16)
The fluctuations are much smaller than the average if Γ_s τ ≫ m f_g / M_s, which can always be achieved by an appropriate choice of the smoothing scale, as long as the number of particles in the soliton is much larger than the occupation numbers of individual modes in the gas, M_s/m ≫ f_g.
B Formation of axion soliton from the gas
In this appendix we report the results of simulations with formation of the soliton from the gas. We use the same numerical scheme and initial conditions for the gas as described in section 4.1, but we do not put in the initial soliton. Instead, we wait for the soliton to emerge spontaneously. The purpose of these simulations is twofold. First, we cross-check our numerical approach by comparing with the simulations carried out in [39]. 18 Second, we investigate to what extent the evolution of spontaneously formed solitons is similar to the evolution of the solitons inserted into the gas from the start. We perform 118 independent simulations with the parameters summarized in fig. 10; the number of runs on different lattices is indicated there in parentheses. The parameter space is restricted by the requirement of absence of the Jeans instability, so that the gas does not form a halo and remains homogeneous. Figure 11 shows the results of a typical simulation run. The maximal axion density within the simulation box remains small for times less than the relaxation time (2.19), marked with the red dotted line. Then it starts growing, which signals the formation of a soliton. As in section 4, we determine the soliton mass from its peak density using eq. (4.9). We also construct smoothed peak density and soliton mass using a top-hat filter with the width t_width = 70/k_g^2. The smoothed dependences are shown in the figure with thick blue lines. To pin down the moment of soliton formation, we use the method proposed in [33]. We identify the density maximum within the simulation box and compute the kinetic (E_K) and potential (E_U) energy in a spherical region around it. The radius of the sphere is chosen as the radius at which the shell-averaged density drops to half of its peak value. To calculate the kinetic energy, we evaluate the field gradient, subtract the center-of-mass velocity contribution, square the result and integrate over the volume of the sphere.
The potential energy is approximated by the potential energy of a uniform ball with the mass enclosed inside the sphere. For a random peak in the gas the ratio E_U/E_K is close to zero, whereas for the soliton it obeys the virial condition 19 E_U/E_K ≈ −2.8. In fig. 11 we see that this ratio changes abruptly from 0 to −2.8 around t ∼ τ_rel. We identify the soliton formation time τ_form as the moment when the smoothed curve E_U/E_K crosses half of its virial value,
E_U/E_K |_{t = τ_form} = −1.4 . (B.1)
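A simplified numpy sketch of this diagnostic (our own helper; for brevity it omits the center-of-mass velocity subtraction and uses the uniform-ball formula E_U = −3GM²/(5R), in G = m = 1 units):

```python
import numpy as np

def virial_ratio(psi, L, G=1.0, m=1.0):
    """Sketch of the E_U/E_K formation diagnostic around the density peak.

    The sphere radius R is where the shell-averaged density drops to half of
    its peak value; E_U uses the uniform-ball formula -3GM^2/(5R). The
    center-of-mass velocity subtraction of the kinetic term is omitted."""
    N = psi.shape[0]
    dx = L / N
    rho = m * np.abs(psi)**2
    peak = np.unravel_index(np.argmax(rho), rho.shape)
    # periodic grid distances from the peak, in units of dx
    d = [np.minimum(np.abs(np.arange(N) - p), N - np.abs(np.arange(N) - p)) for p in peak]
    r = dx * np.sqrt(d[0][:, None, None]**2 + d[1][None, :, None]**2 + d[2][None, None, :]**2)
    # shell-averaged density profile and half-density radius
    shell = (r / dx).astype(int).ravel()
    prof = np.bincount(shell, weights=rho.ravel()) / np.bincount(shell)
    R = dx * np.argmax(prof < 0.5 * prof[0])
    inside = r <= R
    M = rho[inside].sum() * dx**3
    kin = sum(np.abs(g)**2 for g in np.gradient(psi, dx)) / (2.0 * m)
    E_K = kin[inside].sum() * dx**3
    E_U = -3.0 * G * M**2 / (5.0 * R)
    return E_U / E_K
```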
This time is marked with the green dotted line in the plot. We see that it agrees well with the relaxation time τ_rel.

18 We thank Dmitry Levkov and Alexander Panin for sharing with us their results for a detailed comparison.
19 The ratio is different from −2 because we consider only the inner part of the whole soliton.

Ref. [39] suggested that upon formation the growth of the soliton is described by the power law
M_s(t) = M_0 (t/τ_0 − 1)^α (B.2)
with α = 1/2, τ_0 = τ_rel and M_0 ≃ 12π k_g. To verify whether this law is obeyed in our simulations, we fit the smoothed soliton mass at t > τ_form with the formula (B.2), allowing α, τ_0, M_0 to vary as free fitting parameters. The fitting time range is restricted by the condition that the energy error |E(t)/E(0) − 1| does not exceed 0.1%. The result of the fit is shown by the yellow dotted line in fig. 11. The best-fit parameters for this run are α = 0.22, τ_0 = 8.2 × 10^5, M_0 = 17.03. Note that the value of α is significantly lower than 1/2. We will discuss shortly how this result can be reconciled with those of Ref. [39]. We repeat the above analysis for each of the 118 runs and construct the histograms of τ_form, α, τ_0, M_0 measured in different runs. These histograms are shown in fig. 12 together with their means and standard deviations. The mean values of τ_form, τ_0 and M_0 are in good agreement with the findings of [39]. On the other hand, for the exponent we obtain a lower mean, α = 0.33 ± 0.02. It is important to notice, however, that the distribution of α is quite broad, extending from 0.2 to 0.5. 20 From the analysis in the main text we know that the soliton growth rate decreases when the soliton gets heavier. This suggests that the spread in α can arise due to different soliton masses achieved in different simulations. In this picture, the runs with larger duration should yield lower values of α, since the solitons in them have more time to grow.

To check this expectation, we plot in fig. 13 the best-fit value of α as a function of the duration of the simulation 21 in units of the relaxation time. Apart from a few outliers, the bulk of the data exhibit a pronounced anti-correlation between α and t_end/τ_rel. The exponent varies from α ≈ 0.5 for newly-born solitons down to α ≈ 0.25 for long-lived solitons. Thus, the value α = 1/2 found in [39] can be explained by the short duration of the simulations used in the analysis, whereas the longer simulations carried out in the present work uncover a trend for the decrease of α with time. This trend is consistent, both qualitatively and quantitatively, with the results on heavy soliton growth from the main text. Indeed, the scaling (4.16) of the soliton growth rate implies

(1/M_s) dM_s/dt ∝ 1/M_s^n =⇒ M_s ∝ (t/τ_0 − 1)^{1/n} , (B.3)

which leads to the identification α = 1/n. Thus, the slow-down of the soliton growth with α decreasing from 1/2 to 1/4 as time goes on matches the steepening of the Γ_s dependence on k_g r_s, with n increasing from 2 to 4 at smaller k_g r_s (see section 4.3).

The above match is non-trivial. The simulations of section 4 are performed with Maxwellian gas and the growth rate is extracted from time ranges shorter than half of the relaxation time, to avoid any significant change in the gas distribution. On the other hand, the simulations in this appendix, by construction, span more than the relaxation time.

20 There are three outlier runs with very high (α ≈ 0.8) and very low (α ≈ 0.1) exponents. The origin of these large fluctuations is unknown.
21 More precisely, we take t_end to be the end of the time range used in the fit (B.2).

Figure 14: Same as fig. 7, with the addition of the soliton mass evolution from a run with soliton formation (in grey). The spontaneously formed soliton approaches the same growth rate as the solitons embedded in the gas from the start.
Figure 15: Zoom-in on the low-k part of the spectrum, where we divide the distribution by k^2 to make the difference between curves more pronounced. The distribution in a simulation with spontaneous formation of the soliton from the gas (N = 128, k_g = 0.5, f_g = 0.06) is shown by solid lines with circles. It is sampled at three moments of time: at the beginning of the simulation (black), at the time before soliton formation (red) and after the soliton has formed (blue). Just before the soliton forms, the distribution features a pronounced bump at low momenta which disappears afterwards. For comparison, we show with dashed lines the distribution in a simulation with the soliton inserted in the initial conditions (k_g r_s^init = 1.51), sampled at the same time intervals. The Maxwell distribution corresponding to the input gas parameters is shown with a thick green line. The momentum wavefunction of the soliton with the mass achieved at the latest sampling point is plotted by a thick yellow line.
Moreover, it is known [39] that the soliton formation is preceded by a dramatic change in the gas distribution, with enhanced population of low-momentum modes. Thus, the solitons in the two simulation suites are embedded in environments with very different histories, and their growth rates need not be the same. Nevertheless, it turns out that the soliton growth exhibits a remarkable universality. In fig. 14 we superimpose the time-dependent mass of a soliton born in the gas on top of the soliton masses from our main simulation suite with solitons incorporated in the initial conditions. We see that after a brief transient period of faster growth, the formed soliton approaches the same time dependence as the solitons with the same mass that are present in the gas from the start.
This suggests that the gas distribution restores its Maxwellian form after the soliton formation. We check this conjecture by measuring the amplitudes of the axion modes |ψ_k|^2 in the simulation from fig. 14 at several moments of time: at the beginning of the simulation (t = 0), before the soliton formation (t = 0.89 τ_rel), and after the soliton has formed (t = 1.78 τ_rel). The amplitudes are averaged over spherical shells with fixed values of k = |k|. The results are shown in fig. 15 (solid lines with circles). We see that right before the soliton formation the distribution develops a pronounced bump in the low-k part of the spectrum, consistently with the results of [39]. This bump, however, disappears after the soliton is formed, and at late times the distribution qualitatively resembles the Maxwellian one (shown by the thick green line). We also superimpose in the same figure the distribution for the run with the soliton initially present in the gas, sampled at the same intervals from the start of the simulation (dashed lines). The parameters of this run are (N = 128, k_g = 0.5, f_g = 0.06, k_g r_s^init = 1.51) and correspond to the blue curve in fig. 14. In this case we see that the distribution preserves the Maxwellian shape at all times, without any excess in the low-k modes. We conclude that the presence of the soliton affects the axion gas in a curious way: it stabilizes the Maxwell distribution of axion momenta.
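The shell averaging of |ψ_k|² can be sketched as follows (our own helper; the normalization of the mode amplitudes and the number of bins are arbitrary choices):

```python
import numpy as np

def shell_spectrum(psi, L, nbins=32):
    """Shell-average |psi_k|^2 of a periodic 3D field: FFT amplitudes are
    binned in k = |k| and averaged within spherical shells. Returns bin
    centers, the average amplitude per shell and the mode counts."""
    N = psi.shape[0]
    k1 = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
    kmag = np.sqrt(sum(ki**2 for ki in np.meshgrid(k1, k1, k1, indexing="ij")))
    amp2 = np.abs(np.fft.fftn(psi))**2 / N**6      # normalized so that sum = <|psi|^2>
    edges = np.linspace(0.0, kmag.max(), nbins + 1)
    idx = np.clip(np.digitize(kmag, edges) - 1, 0, nbins - 1)
    counts = np.bincount(idx.ravel(), minlength=nbins)
    sums = np.bincount(idx.ravel(), weights=amp2.ravel(), minlength=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, sums / np.maximum(counts, 1), counts
```

With this normalization, the amplitudes summed over all shells reproduce the mean of |ψ|² in the box (Parseval's identity), which is a convenient sanity check.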
It is worth stressing that we are talking about the distribution in the gas and not in the soliton itself. Though our numerical procedure does not allow us to separate the two, we can compare the total distribution to the wavefunction of the soliton in momentum space. This is shown by the thick yellow line in fig. 15. We take the soliton mass to be M_s = 20, corresponding to the latest sampling time. We see that the contamination of the distribution by the soliton is negligible.
We do not attempt to explore this "Maxwellization" phenomenon further in this work. The axion momentum distribution is subject to significant temporal fluctuations, which form an obstruction to moving beyond qualitative statements. For a quantitative study, one needs to devise less noisy probes. We leave this task for future work.
C Details of the numerical simulations
C.1 Convergence tests
In this work, we adopt the second-order drift-kick-drift operator (4.2) to evolve the wavefunction over each time step dt. The gravitational potential Φ and the kinetic energy operator ∆ are calculated with the CUDA Fast Fourier Transform (cuFFT) 22 . We noticed that the single precision of cuFFT causes ≈ 10% mass loss over 10^6 time steps. We therefore conduct the simulations in this work in double precision. This makes the mass loss negligible (less than 10^{-6}).
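A second-order drift-kick-drift step of this kind can be sketched in a few lines. The following is an illustrative NumPy version (not the paper's cuFFT implementation); the normalization of the Poisson source, the coupling constant `G4pi`, and the unit lattice spacing are assumptions of the sketch:

```python
import numpy as np

def dkd_step(psi, dt, dx=1.0, G4pi=1.0):
    """One drift-kick-drift step for i dpsi/dt = -(1/2) Lap psi + Phi psi,
    with Lap Phi = G4pi * (|psi|^2 - mean). Illustrative normalization."""
    n = psi.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2

    # half drift: exact free evolution in Fourier space
    psi = np.fft.ifftn(np.exp(-0.5j * k2 * dt / 2) * np.fft.fftn(psi))

    # kick: solve Poisson for Phi sourced by the density contrast, apply phase
    rho = np.abs(psi) ** 2
    rho_k = np.fft.fftn(rho - rho.mean())
    with np.errstate(divide="ignore", invalid="ignore"):
        phi_k = np.where(k2 > 0, -G4pi * rho_k / k2, 0.0)
    phi = np.real(np.fft.ifftn(phi_k))
    psi = np.exp(-1j * phi * dt) * psi

    # second half drift
    psi = np.fft.ifftn(np.exp(-0.5j * k2 * dt / 2) * np.fft.fftn(psi))
    return psi
```

Since both substeps are pure phase multiplications (in Fourier and real space, respectively), the total mass Σ|ψ|^2 is conserved to machine precision at every step, mirroring the double-precision mass check described above.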
The requirement that the gas and the soliton must be resolved by the spatial lattice puts an upper bound on the gas momentum k_g and a lower bound on the initial soliton size r_s^init accessible in the simulations. To determine the domain of validity of our code, we perform several convergence tests. First, we evolve the gas without the soliton using three different time steps: dt = 2/π ≈ 0.64 (our fiducial value), dt = 1/π ≈ 0.32 and dt = 1/(2π) ≈ 0.16. The gas parameters in all three runs are (N = 128, k_g = 0.5, f_g = 0.04). The maximal density within the box and the total energy measured in these runs are shown in the left panel of fig. 16. We observe that the density curves essentially coincide, while the energy error is proportional to (dt)^2, as it should be. For our fiducial value of dt = 2/π, the error stays well below 10^{-7}. We conclude that the gas with k_g = 0.5 is comfortably resolved in our simulations.
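The (dt)^2 statement amounts to the error dropping fourfold each time the step is halved. A minimal helper for extracting the order from such a series of runs (illustrative Python, not part of the paper's code):

```python
import math

def convergence_order(errors, step_ratio=2.0):
    """Estimate the convergence order p from errors measured at successively
    reduced time steps: err(dt) ~ C * dt^p implies
    log(err_i / err_{i+1}) / log(step_ratio) -> p."""
    orders = [math.log(a / b, step_ratio) for a, b in zip(errors, errors[1:])]
    return sum(orders) / len(orders)
```

For a second-order scheme such as drift-kick-drift, feeding in the energy errors from the three runs (fiducial, halved, quartered step) should return a value close to 2.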
Next, we repeat the same convergence test with an isolated soliton of radius r_s^init = 1.5. The results are shown in the right panel of fig. 16. Since the analytical template (2.9) used in the simulations to set the initial conditions slightly deviates from the exact soliton profile, the soliton starts in an excited state, which leads to oscillations of the peak density. The oscillations obtained with the three different time steps match almost identically. The energy error also exhibits the proper scaling, |E(t)/E(0) − 1| ∝ (dt)^2. However, now it is significantly larger, reaching up to 10^{-3} for the fiducial dt. This is likely due to the high frequency of the soliton phase rotation, |E_s| ≈ 0.52, which is less well resolved with the large time step. Therefore, to correctly capture the evolution of the soliton wavefunction, we restrict our simulations to r_s^init ≥ 1.5. For a third test, we superimpose the soliton and the gas and again run three simulations with decreasing time step. We take the soliton with r_s^init = 1.5 and push the gas momentum up to k_g = 1. The evolution of the soliton mass and the total energy in these runs is shown in the left panel of fig. 17.

Figure 16: Convergence tests in the simulations with pure gas (left) and an isolated soliton (right). In each case we perform three runs: one with the fiducial time step dt = 0.64, and two with time steps reduced by a factor of 2 and 4. The gas momentum is k_g = 0.5, whereas the soliton radius is r_s^init = 1.5. The lattice size is N = 128 in both cases.

Figure 17: Temporal (left) and spatial (right) convergence tests for the extreme values of the gas momentum and soliton radius, k_g = 1, r_s^init = 1.5. The temporal test contains three simulations with the time step dt decreased by a factor of 2 or 4 relative to the fiducial value, whereas the spatial test consists of two simulations with the box size N differing by a factor of 2. The simulations for the spatial test follow the scaling relation (2.2).
The soliton mass growth in the three cases is broadly the same, though detailed features differ slightly. The energy error is low in the initial time range t ≲ 10^3, where it also obeys the (dt)^2 scaling. However, from t ∼ 10^3 it starts to grow steadily and its scaling with (dt)^2 gets violated. Still, the error remains small until very late times. For the fiducial time step it reaches 10^{-3} when the soliton mass exceeds M_s ≈ 27 and hence its radius drops below r_s ≈ 1.2. This suggests that the soliton-gas system with r_s ∼ 1.2 and k_g ∼ 1 is at the limit of our numerical resolution. Since we are interested in the averaged properties of the soliton evolution, rather than fine details, we accept k_g = 1 as the upper boundary for admissible gas momenta. To ensure the absence of any excessive loss of precision, we monitor energy conservation throughout our simulations and only use data where the energy is conserved to an accuracy better than 10^{-3}.
Finally, we perform a spatial convergence test. Instead of varying dx, which is fixed to 1 in our code, we make use of the scaling symmetry (2.2). It implies that decreasing dx is equivalent to increasing N accompanied by an appropriate rescaling of the other parameters. Thus we consider two simulation runs with (N = 128, k_g = 1, f_g = 0.04, r_s^init = 1.5) and (N = 256, k_g = 0.5, f_g = 0.02, r_s^init = 3.0). Note that we do not rescale the time step dt = 2/π, which is tied to the lattice spacing in order to avoid aliasing [35]. The results of these two runs are compared in the right panel of fig. 17. While energy conservation is much better satisfied on the bigger lattice, the broad features of the mass evolution in the two runs agree. This further supports the validity of our numerical results up to the extreme values k_g = 1, r_s^init = 1.5.
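The parameter rescaling between the two runs can be packaged as a small bookkeeping helper. The scaling exponents below are read off from the pair of runs quoted above (lengths scale with the box, while the momentum and the gas amplitude scale inversely) and should be checked against eq. (2.2); this is an illustrative sketch, not the paper's code:

```python
def rescale_run(params, factor=2):
    """Rescale simulation parameters when the box size N grows by `factor`,
    keeping the physics fixed: lengths scale up, momenta and gas amplitude
    scale down (exponents inferred from the two quoted runs)."""
    return {
        "N": params["N"] * factor,
        "k_g": params["k_g"] / factor,
        "f_g": params["f_g"] / factor,
        "r_init_s": params["r_init_s"] * factor,
    }
```

Applied to the first run (N = 128, k_g = 1, f_g = 0.04, r_s^init = 1.5), this reproduces the second one (N = 256, k_g = 0.5, f_g = 0.02, r_s^init = 3.0).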
C.2 Conversion of peak density into soliton mass
As discussed in section 4.1, we estimate the mass of the soliton and its radius from the maximal density in the box ρ_max, assuming that it corresponds to the soliton peak density ρ_s,peak. However, the interference of the soliton wavefunction with the gas waves can increase the maximal density above that of the soliton. The increase is proportional to the product of the soliton and gas wavefunctions, hence to the geometric mean of their densities. In more detail, we can estimate the bias as

ρ_max / ρ_s,peak − 1 ∼ 2 √(ρ_g / ρ_s,peak) ,   (C.1)

which can be significant even for large density contrasts. For example, the density bias is about 40% for ρ_s,peak / ρ_g = 30. The situation is further complicated by large fluctuations in the local gas density that can increase the bias further. In particular, when the soliton is too light, its peak becomes completely obscured by the gas.
To pin down the lowest density contrast between the soliton and the gas for which the bias is unimportant, we conduct a series of auxiliary numerical experiments. We generate a gas field with a given mean density ρ_g and superimpose on it a soliton of mass M_s, without any evolution. Then we evaluate the estimator of the soliton mass using our formula

M_s,est = 25.04 ρ_max^{1/4} ,   (C.2)

where ρ_max is the maximal density of the axion field in the box. The estimator is compared to the true soliton mass in fig. 18. We observe that when the soliton is prominent enough, say ρ_s,peak ≈ ρ_max > 100 ρ_g, the estimator is unbiased. On the other hand, for ρ_max ≲ 20 ρ_g, we are essentially unable to distinguish the soliton peak against the gas density fluctuations. We adopt the threshold ρ_max > 30 ρ_g when measuring the soliton mass in our simulations, which introduces an error of at most 20% in the mass estimate.

Figure 18: Ratio of the soliton mass estimator to the true soliton mass as a function of the density contrast in the axion field generated by superposition of the soliton and gas wavefunctions. We adopt the threshold ρ_max > 30 ρ_g when measuring the soliton mass from the simulations.
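The estimator (C.2) together with the contrast cut can be sketched as follows. This is an illustrative Python version; using the box-mean density as a proxy for ρ_g and returning `None` for an undetected soliton are conventions of the sketch, not of the paper:

```python
import numpy as np

def soliton_mass_estimate(rho, contrast_min=30.0):
    """Estimate the soliton mass from the peak density, eq. (C.2):
    M_s,est = 25.04 * rho_max^(1/4). Returns None when the peak is not
    prominent enough against the gas (rho_max <= contrast_min * rho_g),
    where interference with gas waves biases the estimate."""
    rho_max = rho.max()
    rho_gas = rho.mean()  # mean density as a proxy for the gas background
    if rho_max <= contrast_min * rho_gas:
        return None  # soliton peak indistinguishable from gas fluctuations
    return 25.04 * rho_max ** 0.25
```

For the threshold contrast ρ_s,peak / ρ_g = 30, eq. (C.1) gives a bias of order 2/√30 ≈ 0.37, consistent with the ∼40% figure quoted above; the cut thus trades a mild, controlled mass bias for a clean detection criterion.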
Figure 1: The standard soliton profile in linear (left) and in log (right) scale. The solid lines show the exact solution of the Schrödinger-Poisson equations, while the dotted lines correspond to the fitting function (2.6).
Figure 2: Feynman diagrams describing absorption (a, b) and emission (c, d) of a particle by the soliton interacting with axion gas. Solid lines correspond to gas particles, the dashed line corresponds to the soliton, and the wavy line to the Newtonian interaction. The time direction is from left to right. The labels on the external legs represent the energies of the scattered states, whereas k is the momentum exchange.
Figure 3: Parameters of 195 individual simulations used in this work. The four-dimensional parameter space is projected on the directions corresponding to the box size N, the soliton half-peak radius r_s^init, and the parameters of the Maxwell distribution of axion gas k_g, f_g. The horizontal axis is common to all panels and shows the product k_g r_s^init. Green circles correspond to simulations leading to soliton growth, while red circles show the cases of soliton evaporation. Darker circles indicate multiple realizations of axion gas obtained by changing the phases in the wavefunction.
0.054 (N/128)^{-2}: the axion gas does not form a halo due to Jeans instability;
b) r_s^init < 0.1 N: the effect of periodic images on the soliton is suppressed;
c) ρ_s,peak > 30 ρ_g: the soliton is prominent enough to suppress bias in its mass measurement;
d) M_s < 0.5 M_g: the soliton does not overwhelm the axion waves.
Note that the conditions (a, b) are imposed on the initial configuration, whereas the conditions (c, d) are monitored throughout the whole duration of the simulations. In total we have run 195 simulations with independent realizations of random gas phases. Their parameters are shown in fig. 3 against the product k_g r_s^init, which controls the physics of the soliton-gas interaction.
Figure 4: Evolution of the soliton peak density, mass and radius for the case of a heavy soliton (r_s^init = 1.51). The mass and radius are estimated from the peak density. Thin blue curves show the instantaneous values, whereas the thick curves are obtained by smoothing with a top-hat filter. Yellow dots show the result of fitting the soliton mass with a quadratic polynomial. We also show the time dependence of the total energy in the simulation box, used to control the precision of the numerical calculations. The gas parameters are (N = 128, k_g = 1, f_g = 0.01).
Figure 5: Same as fig. 4 for the case of the median soliton (r_s^init = 2.71).
Figure 6: Same as fig. 4 for the case of a light soliton (
Figure 7: Soliton mass evolution in simulations with k_g r_s^init from 0.75 to 2.26 (top) and from 3.32 to 4.52 (bottom). By shifting the curves along the time axis we have observed that they can be stacked on top of each other.
Figure 8: Growth of the soliton mass in the simulations with the same values of (N = 128, k_g = 0.5, k_g r_s^init = 1.51) and varying f_g. The time axis in different runs has been scaled by f_g^2 and normalized to the case f_g = 0.06. The time span of the curves is restricted to half of the relaxation time (2.19) and covers the portion of the data used in the measurement of the soliton growth rate.
Figure 9: The soliton growth/evaporation rate as a function of k_g r_s, the product of the gas momentum and the soliton half-density radius. The cumulative dependence is constructed using 3900 data points extracted from 195 independent simulations with different gas and soliton parameters. The data are binned on a logarithmic scale in k_g r_s. Each dot gives the average value of the growth rate in the bin, while the vertical error bars correspond to the standard deviation within the bin. The blue solid line shows the asymptotic dependence predicted by eq. (3.23). At small k_g r_s the dotted lines indicate possible power-law dependences. The dashed vertical line marks the value of k_g r_s corresponding to the equality of the gas and soliton virial temperatures, T_g/T_s = 1.
Figure 10: Gas parameters for the simulations with soliton formation. Solid lines bound the regions without Jeans instability for different simulation box sizes (see eq. (4
Figure 11: Example of spontaneous soliton formation in axion gas with parameters (N = 128, k_g = 0.5, f_g = 0.02). From top to bottom: maximal density in the simulation box, soliton mass estimated from the peak density, virial ratio E_U/E_K, total energy in the box. Thick blue lines show the smoothed dependences. The yellow dotted line is the fit (B.2). Vertical red and green dotted lines mark the relaxation time (2.19) and the measured soliton formation time, respectively.
Figure 12: Results of the measurements in the simulations with soliton formation. The histograms show the distributions of the soliton formation time τ_form and the parameters in the power-law fit (B.2) of the soliton mass growth: α, τ_0, M_0. The relaxation time τ_rel is given by eq. (2.19) and k_g is the gas momentum.
Figure 13: The exponent in the power-law fit (B.2) for the soliton mass against the final simulation time t_end measured in units of the relaxation time (2.19). Longer simulations produce more massive solitons, which have slower growth rates and hence lower values of α. Three outlier simulations with α ≈ 0.8 and α ≈ 0.1 represent large fluctuations of unknown origin.
Figure 14: Same as upper panel in
Figure 15: Left panel: Evolution of momentum distribution of axions in the simulation box. The mode amplitudes are spherically averaged over shells with fixed k = |k|. Right panel:
We will use the two names interchangeably.
Though the quantitative characteristic, the evaporation rate, does depend on the gas density, Γ_s ∝ ρ_g^2 (see eq. (3.12)).
At |x| rs the wavefunctions are modified by the gravitational field of the soliton, see below.
As discussed below, numerical simulations suggest that Maxwell distribution may still be a good approximation, but this question requires further study.
This corresponds to an arbitrary choice of the zero-point energy in the Schrödinger equation (4.1a).
Dynamical soliton formation from the gas is discussed in appendix B.
The expression for the soliton mass (4.9) is 3% lower for a given peak density than the value obtained from the exact wavefunction, see section 2. This error is insignificant for our analysis. Note that its effect is opposite to the bias introduced by the interference with the axion gas discussed below.
Note that we smooth ρ_s,peak(t), M_s(t) and r_s(t) separately.
In principle, this requirement might be too stringent since we observe that in the presence of a soliton the gas distribution remains close to Maxwellian even on time scales longer than the relaxation time, as will be discussed shortly.
Recall the proportionality between ν and k_g r_s, eq. (2.24).
Recall that r_s ∝ M_s^{-1}, whereupon the evolution equation for the mass is easily integrated.
Note that by the soliton-host halo relation we understand here the correlation between the soliton mass and the virial temperature of the halo, while in the literature the soliton-host halo relation is commonly formulated in terms of the halo mass. We believe that the former formulation better reflects the underlying physical mechanisms behind the relation.
Due to the last term in the first line of eq. (A.2) that mixes ψ_g and ψ_g^*, the eigenmodes contain both positive and negative frequencies [60]. To avoid cumbersome expressions, we neglect this subtlety in the following discussion. It does not affect the final result for the soliton growth rate.
For simplicity, we again neglect the subtleties associated with the negative-frequency components of the eigenmodes [60].
https://developer.nvidia.com/cufft
Acknowledgments

We are grateful to Asimina

A Classical derivation of the soliton growth rate

In this appendix we derive the expression (3.9) for the soliton growth rate as a consequence of the classical equations of motion. It is convenient to integrate out the gravitational potential
S. Weinberg, A New Light Boson?, Phys. Rev. Lett. 40 (1978) 223.
F. Wilczek, Problem of Strong P and T Invariance in the Presence of Instantons, Phys. Rev. Lett. 40 (1978) 279.
Can Confinement Ensure Natural CP Invariance of Strong Interactions?. M A Shifman, A I Vainshtein, V I Zakharov, 10.1016/0550-3213(80)90209-6Nucl. Phys. 166493M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, Can Confinement Ensure Natural CP Invariance of Strong Interactions?, Nucl. Phys. B166 (1980) 493.
Weak Interaction Singlet and Strong CP Invariance. J E Kim, 10.1103/PhysRevLett.43.103Phys. Rev. Lett. 43103J. E. Kim, Weak Interaction Singlet and Strong CP Invariance, Phys. Rev. Lett. 43 (1979) 103.
On Possible Suppression of the Axion Hadron Interactions. A R Zhitnitsky, Sov. J. Nucl. Phys. 31260Yad. Fiz.A. R. Zhitnitsky, On Possible Suppression of the Axion Hadron Interactions. (In Russian), Sov. J. Nucl. Phys. 31 (1980) 260 [Yad. Fiz.31,497(1980)].
A Simple Solution to the Strong CP Problem with a Harmless Axion. M Dine, W Fischler, M Srednicki, 10.1016/0370-2693(81)90590-6Phys. Lett. 104199M. Dine, W. Fischler and M. Srednicki, A Simple Solution to the Strong CP Problem with a Harmless Axion, Phys. Lett. 104B (1981) 199.
. J Preskill, M B Wise, F Wilczek, 10.1016/0370-2693(83)90637-8Cosmology of the Invisible Axion. 120127Phys. Lett.J. Preskill, M. B. Wise and F. Wilczek, Cosmology of the Invisible Axion, Phys. Lett. 120B (1983) 127.
A Cosmological Bound on the Invisible Axion. L F Abbott, P Sikivie, 10.1016/0370-2693(83)90638-XPhys. Lett. 120133L. F. Abbott and P. Sikivie, A Cosmological Bound on the Invisible Axion, Phys. Lett. 120B (1983) 133.
The Not So Harmless Axion. M Dine, W Fischler, 10.1016/0370-2693(83)90639-1Phys. Lett. 120137M. Dine and W. Fischler, The Not So Harmless Axion, Phys. Lett. 120B (1983) 137.
A. Arvanitaki, S. Dimopoulos, S. Dubovsky, N. Kaloper and J. March-Russell, String Axiverse, Phys. Rev. D81 (2010) 123530 [0905.4720].
D. J. E. Marsh, Axion Cosmology, Phys. Rept. 643 (2016) 1 [1510.07633].
L. Hui, J. P. Ostriker, S. Tremaine and E. Witten, Ultralight scalars as cosmological dark matter, Phys. Rev. D95 (2017) 043541 [1610.08297].
Wave Dark Matter. L Hui, 10.1146/annurev-astro-120920-010024Ann. Rev. Astron. Astrophys. 592472101.11735L. Hui, Wave Dark Matter, Ann. Rev. Astron. Astrophys. 59 (2021) 247 [2101.11735].
CP Conservation in the Presence of Instantons. R D Peccei, H R Quinn, 10.1103/PhysRevLett.38.1440Phys. Rev. Lett. 381440R. D. Peccei and H. R. Quinn, CP Conservation in the Presence of Instantons, Phys. Rev. Lett. 38 (1977) 1440.
Constraints Imposed by CP Conservation in the Presence of Instantons. R D Peccei, H R Quinn, 10.1103/PhysRevD.16.1791Phys. Rev. 161791R. D. Peccei and H. R. Quinn, Constraints Imposed by CP Conservation in the Presence of Instantons, Phys. Rev. D16 (1977) 1791.
P Svrcek, E Witten, 10.1088/1126-6708/2006/06/051hep-th/0605206Axions In String Theory. 51P. Svrcek and E. Witten, Axions In String Theory, JHEP 06 (2006) 051 [hep-th/0605206].
Cold and fuzzy dark matter. W Hu, R Barkana, A Gruzinov, 10.1103/PhysRevLett.85.1158astro-ph/0003365Phys. Rev. Lett. 851158W. Hu, R. Barkana and A. Gruzinov, Cold and fuzzy dark matter, Phys. Rev. Lett. 85 (2000) 1158 [astro-ph/0003365].
Constraining the mass of light bosonic dark matter using SDSS Lyman-α forest. E Armengaud, N Palanque-Delabrouille, C Yèche, D J E Marsh, J Baur, 10.1093/mnras/stx1870Mon. Not. Roy. Astron. Soc. 47146061703.09126E. Armengaud, N. Palanque-Delabrouille, C. Yèche, D. J. E. Marsh and J. Baur, Constraining the mass of light bosonic dark matter using SDSS Lyman-α forest, Mon. Not. Roy. Astron. Soc. 471 (2017) 4606 [1703.09126].
Lyman-α constraints on ultralight scalar dark matter: Implications for the early and late universe. T Kobayashi, R Murgia, A Simone, V Iršič, M Viel, 10.1103/PhysRevD.96.1235141708.00015Phys. Rev. D. 96123514T. Kobayashi, R. Murgia, A. De Simone, V. Iršič and M. Viel, Lyman-α constraints on ultralight scalar dark matter: Implications for the early and late universe, Phys. Rev. D 96 (2017) 123514 [1708.00015].
Strong Bound on Canonical Ultralight Axion Dark Matter from the Lyman-Alpha Forest. K K Rogers, H V Peiris, 10.1103/PhysRevLett.126.071302Phys. Rev. Lett. 12671302K. K. Rogers and H. V. Peiris, Strong Bound on Canonical Ultralight Axion Dark Matter from the Lyman-Alpha Forest, Phys. Rev. Lett. 126 (2021) 071302 [2007.12705].
Galactic rotation curves versus ultralight dark matter: Implications of the soliton-host halo relation. N Bar, D Blas, K Blum, S Sibiryakov, 10.1103/PhysRevD.98.0830271805.00122Phys. Rev. D. 9883027N. Bar, D. Blas, K. Blum and S. Sibiryakov, Galactic rotation curves versus ultralight dark matter: Implications of the soliton-host halo relation, Phys. Rev. D 98 (2018) 083027 [1805.00122].
Galactic rotation curves versus ultralight dark matter: A systematic comparison with SPARC data. N Bar, K Blum, C Sun, 10.1103/PhysRevD.105.0830152111.03070Phys. Rev. D. 10583015N. Bar, K. Blum and C. Sun, Galactic rotation curves versus ultralight dark matter: A systematic comparison with SPARC data, Phys. Rev. D 105 (2022) 083015 [2111.03070].
Ultra-light Dark Matter is Incompatible with the Milky Way's Dwarf Satellites. M Safarzadeh, D N Spergel, M. Safarzadeh and D. N. Spergel, Ultra-light Dark Matter is Incompatible with the Milky Way's Dwarf Satellites, 1906.11848.
The MUSE-Faint survey -II. The dark-matter density profile of the ultra-faint dwarf galaxy Eridanus 2. S L Zoutendijk, J Brinchmann, N F Bouché, M D Brok, D Krajnović, K Kuijken, 10.1051/0004-6361/2020402392101.00253Astron. Astrophys. 65180S. L. Zoutendijk, J. Brinchmann, N. F. Bouché, M. d. Brok, D. Krajnović, K. Kuijken et al., The MUSE-Faint survey -II. The dark-matter density profile of the ultra-faint dwarf galaxy Eridanus 2, Astron. Astrophys. 651 (2021) A80 [2101.00253].
Milky Way Satellite Census. III. Constraints on Dark Matter Properties from Observations of Milky Way Satellite Galaxies. E O Nadler, DES collaboration10.1103/PhysRevLett.126.091101Phys. Rev. Lett. 126911012008.00022DES collaboration, E. O. Nadler et al., Milky Way Satellite Census. III. Constraints on Dark Matter Properties from Observations of Milky Way Satellite Galaxies, Phys. Rev. Lett. 126 (2021) 091101 [2008.00022].
Strong Constraints on Fuzzy Dark Matter from Ultrafaint Dwarf Galaxy Eridanus II. D J E Marsh, J C Niemeyer, 10.1103/PhysRevLett.123.0511031810.08543Phys. Rev. Lett. 12351103D. J. E. Marsh and J. C. Niemeyer, Strong Constraints on Fuzzy Dark Matter from Ultrafaint Dwarf Galaxy Eridanus II, Phys. Rev. Lett. 123 (2019) 051103 [1810.08543].
Not so fuzzy: excluding FDM with sizes and stellar kinematics of ultra-faint dwarf galaxies. N Dalal, A Kravtsov, 2203.05750N. Dalal and A. Kravtsov, Not so fuzzy: excluding FDM with sizes and stellar kinematics of ultra-faint dwarf galaxies, 2203.05750.
Systems of selfgravitating particles in general relativity and the concept of an equation of state. R Ruffini, S Bonazzola, 10.1103/PhysRev.187.1767Phys. Rev. 1871767R. Ruffini and S. Bonazzola, Systems of selfgravitating particles in general relativity and the concept of an equation of state, Phys. Rev. 187 (1969) 1767.
Gravitational cooling of self-gravitating Bose-Condensates. F S Guzman, L A Urena-Lopez, 10.1086/504508astro-ph/0603613Astrophys. J. 645814F. S. Guzman and L. A. Urena-Lopez, Gravitational cooling of self-gravitating Bose-Condensates, Astrophys. J. 645 (2006) 814 [astro-ph/0603613].
Axion miniclusters. C J Hogan, M J Rees, 10.1016/0370-2693(88)91655-3Phys. Lett. 205228C. J. Hogan and M. J. Rees, Axion miniclusters, Phys. Lett. B205 (1988) 228.
Axion miniclusters and Bose stars. E W Kolb, I I Tkachev, 10.1103/PhysRevLett.71.3051hep-ph/9303313Phys. Rev. Lett. 713051E. W. Kolb and I. I. Tkachev, Axion miniclusters and Bose stars, Phys. Rev. Lett. 71 (1993) 3051 [hep-ph/9303313].
Cosmic Structure as the Quantum Interference of a Coherent Dark Wave. H.-Y Schive, T Chiueh, T Broadhurst, 10.1038/nphys2996Nature Phys. 104961406.6586H.-Y. Schive, T. Chiueh and T. Broadhurst, Cosmic Structure as the Quantum Interference of a Coherent Dark Wave, Nature Phys. 10 (2014) 496 [1406.6586].
Formation and structure of ultralight bosonic dark matter halos. J Veltmaat, J C Niemeyer, B Schwabe, 10.1103/PhysRevD.98.0435091804.09647Phys. Rev. D. 9843509J. Veltmaat, J. C. Niemeyer and B. Schwabe, Formation and structure of ultralight bosonic dark matter halos, Phys. Rev. D 98 (2018) 043509 [1804.09647].
Solitons in the dark: non-linear structure formation with fuzzy dark matter. M Mina, D F Mota, H A Winther, 4119M. Mina, D. F. Mota and H. A. Winther, Solitons in the dark: non-linear structure formation with fuzzy dark matter, 2007.04119.
Structure formation in large-volume cosmological simulations of fuzzy dark matter: impact of the non-linear dynamics. S May, V Springel, 10.1093/mnras/stab1764Mon. Not. Roy. Astron. Soc. 50626032101.01828S. May and V. Springel, Structure formation in large-volume cosmological simulations of fuzzy dark matter: impact of the non-linear dynamics, Mon. Not. Roy. Astron. Soc. 506 (2021) 2603 [2101.01828].
Understanding the Core-Halo Relation of Quantum Wave Dark Matter from 3D Simulations. H.-Y Schive, M.-H Liao, T.-P Woo, S.-K Wong, T Chiueh, T Broadhurst, 10.1103/PhysRevLett.113.2613021407.7762Phys. Rev. Lett. 113261302H.-Y. Schive, M.-H. Liao, T.-P. Woo, S.-K. Wong, T. Chiueh, T. Broadhurst et al., Understanding the Core-Halo Relation of Quantum Wave Dark Matter from 3D Simulations, Phys. Rev. Lett. 113 (2014) 261302 [1407.7762].
Simulations of solitonic core mergers in ultralight axion dark matter cosmologies. B Schwabe, J C Niemeyer, J F Engels, 10.1103/PhysRevD.94.0435131606.05151Phys. Rev. D. 9443513B. Schwabe, J. C. Niemeyer and J. F. Engels, Simulations of solitonic core mergers in ultralight axion dark matter cosmologies, Phys. Rev. D 94 (2016) 043513 [1606.05151].
Galaxy formation with BECDM -I. Turbulence and relaxation of idealized haloes. P Mocz, M Vogelsberger, V H Robles, J Zavala, M Boylan-Kolchin, A Fialkov, 10.1093/mnras/stx18871705.05845Mon. Not. Roy. Astron. Soc. 4714559P. Mocz, M. Vogelsberger, V. H. Robles, J. Zavala, M. Boylan-Kolchin, A. Fialkov et al., Galaxy formation with BECDM -I. Turbulence and relaxation of idealized haloes, Mon. Not. Roy. Astron. Soc. 471 (2017) 4559 [1705.05845].
Panin and I. I. Tkachev, Gravitational Bose-Einstein condensation in the kinetic regime. D G Levkov, 10.1103/PhysRevLett.121.1513011804.05857Phys. Rev. Lett. 121151301D. G. Levkov, A. G. Panin and I. I. Tkachev, Gravitational Bose-Einstein condensation in the kinetic regime, Phys. Rev. Lett. 121 (2018) 151301 [1804.05857].
Construction of wave dark matter halos: Numerical algorithm and analytical constraints. T D Yavetz, X Li, L Hui, 10.1103/PhysRevD.105.0235122109.06125Phys. Rev. D. 10523512T. D. Yavetz, X. Li and L. Hui, Construction of wave dark matter halos: Numerical algorithm and analytical constraints, Phys. Rev. D 105 (2022) 023512 [2109.06125].
A Universal density profile from hierarchical clustering. J F Navarro, C S Frenk, S D M White, 10.1086/304888astro-ph/9611107Astrophys. J. 490493J. F. Navarro, C. S. Frenk and S. D. M. White, A Universal density profile from hierarchical clustering, Astrophys. J. 490 (1997) 493 [astro-ph/9611107].
Formation and mass growth of axion stars in axion miniclusters. B Eggemeier, J C Niemeyer, 10.1103/PhysRevD.100.0635281906.01348Phys. Rev. D. 10063528B. Eggemeier and J. C. Niemeyer, Formation and mass growth of axion stars in axion miniclusters, Phys. Rev. D 100 (2019) 063528 [1906.01348].
Soliton Random Walk and the Cluster-Stripping Problem in Ultralight Dark Matter. H.-Y Schive, T Chiueh, T Broadhurst, 10.1103/PhysRevLett.124.2013011912.09483Phys. Rev. Lett. 124201301H.-Y. Schive, T. Chiueh and T. Broadhurst, Soliton Random Walk and the Cluster-Stripping Problem in Ultralight Dark Matter, Phys. Rev. Lett. 124 (2020) 201301 [1912.09483].
Oscillations and Random Walk of the Soliton Core in a Fuzzy Dark Matter Halo. X Li, L Hui, T D Yavetz, 10.1103/PhysRevD.103.023508Phys. Rev. D. 103235082011.11416X. Li, L. Hui and T. D. Yavetz, Oscillations and Random Walk of the Soliton Core in a Fuzzy Dark Matter Halo, Phys. Rev. D 103 (2021) 023508 [2011.11416].
J Luna Zagorac, I Sands, N Padmanabhan, R Easther, arXiv:2109.01920[2109.01920Schrödinger-Poisson Solitons: Perturbation Theory, arXiv e-prints. J. Luna Zagorac, I. Sands, N. Padmanabhan and R. Easther, Schrödinger-Poisson Solitons: Perturbation Theory, arXiv e-prints (2021) arXiv:2109.01920 [2109.01920].
The diversity of core-halo structure in the fuzzy dark matter model. H Y J Chan, E G M Ferreira, S May, K Hayashi, M Chiba, 10.1093/mnras/stac063Mon. Not. Roy. Astron. Soc. 5119432110.11882H. Y. J. Chan, E. G. M. Ferreira, S. May, K. Hayashi and M. Chiba, The diversity of core-halo structure in the fuzzy dark matter model, Mon. Not. Roy. Astron. Soc. 511 (2022) 943 [2110.11882].
Scaling relations of fuzzy dark matter haloes -I. Individual systems in their cosmological environment. M Nori, M Baldi, 10.1093/mnras/staa3772Mon. Not. Roy. Astron. Soc. 5011539M. Nori and M. Baldi, Scaling relations of fuzzy dark matter haloes -I. Individual systems in their cosmological environment, Mon. Not. Roy. Astron. Soc. 501 (2021) 1539 [2007.01316].
DG STRUCTURES ON ODD CATEGORIFIED QUANTUM sl(2)

Ilknur Egilmez
Aaron D. Lauda

14 Aug 2018
arXiv:1808.04924
doi:10.4171/qt/135
https://arxiv.org/pdf/1808.04924v1.pdf

We equip Ellis and Brundan's version of the odd categorified quantum group for sl(2) with a differential giving it the structure of a graded dg-2-supercategory. The presence of the super grading gives rise to two possible decategorifications of the associated dg-2-category. One version gives rise to a categorification of quantum sl(2) at a fourth root of unity, while the other version produces a subalgebra of quantum gl(1|1) defined over the integers. Both of these algebras appear in connection with quantum algebraic approaches to the Alexander polynomial.
1. Introduction

1.1. Motivations from link homology theory. Khovanov homology, categorifying a certain normalization of the Jones polynomial [Kho00,Kho02], is the simplest of a family of link homology theories associated to quantum groups and their representations. Surrounding Khovanov homology is an intricate system of related combinatorial and geometric ideas. Everything from extended 2-dimensional TQFTs [Kho02,LP09,CMW09], planar algebras [BN02,BN05], category O [Str09,Str05,BS11b,BFK99], coherent sheaves on quiver varieties [CK08], matrix factorizations [KR08a,KR08b], homological mirror symmetry [SS06], arc algebras [Kho02,CK14,Str09,BS11a], Springer varieties [Kho04,Str09,SW12], stable homotopy theory [LS14a,LS14c,LS14b], and 5-dimensional gauge theories [GSV05,Wit12a,Wit12b] appears in descriptions of Khovanov homology, among many other constructions.
Given that Khovanov homology provides a nexus bridging the sophisticated structures described above, it is surprising to discover that there exists a distinct categorification of the Jones polynomial. Ozsváth, Rasmussen, Szabó found an odd analogue of Khovanov homology [ORS13] that agrees with the original Khovanov homology when coefficients are taken modulo 2. Both of these theories categorify the Jones polynomial, and results of Shumakovitch [Shu11] show that these categorified link invariants are not equivalent.
The discovery of odd Khovanov homology was motivated by the existence of a spectral sequence from ordinary Khovanov homology to the Heegaard-Floer homology of the branched double cover [OS05] with ℤ/2 coefficients. Odd Khovanov homology was defined in an attempt to extend this spectral sequence to ℤ coefficients, rather than ℤ/2. Indeed, in [ORS13] they conjecture that for a link K in S³, there is a spectral sequence whose E₂ term is the reduced odd Khovanov homology Khr(K) of K and whose E∞ term is the Heegaard-Floer homology HF(−Σ(K)) of the branched double cover Σ(K) with the orientation reversed (with coefficients in ℤ).
[diagram: conjectural spectral sequences from the reduced Khovanov homology Khr(K) and the reduced odd Khovanov homology OKhr(K) to HF(−Σ(K)); the two theories agree after reducing coefficients mod 2]

A related version of this conjecture was proven in the context of instanton homology in [Sca15].
There are now a number of spectral sequences connecting variants of Khovanov homology to variants of Floer homology [Ras05,Sza15,Blo11,KM11,Rob13,Hen12,Bei12,Bal11,BLS17]. For even Khovanov homology there are many interesting connections with knot Floer homology HFK(K). This is a bigraded homology for knots and links

HFK(K) = ⊕_{i,a∈ℤ} HFK_i(K, a)

where i is called the Maslov (or homological) grading and a is the Alexander grading. The graded Euler characteristic of HFK(K) is the Alexander polynomial

∑_{i,a∈ℤ} (−1)^i t^a · rank HFK_i(K, a) = ∆_K(t).
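As a concrete illustration of this evaluation (our own example, not from the paper): the Alexander polynomial of the trefoil is ∆(t) = t − 1 + t⁻¹, and the specialization t = −1 recovers the knot determinant det(trefoil) = 3.

```python
from fractions import Fraction

def evaluate(poly, t):
    """Evaluate a Laurent polynomial, given as {exponent: coefficient}, at t."""
    return sum(c * t**e for e, c in poly.items())

# Alexander polynomial of the trefoil: Delta(t) = t - 1 + t^{-1}
trefoil = {1: 1, 0: -1, -1: 1}

delta = evaluate(trefoil, Fraction(-1))  # Delta(-1) = -3
determinant = abs(delta)                 # knot determinant |Delta(-1)| = 3
```

Here Fraction is used only so that the negative exponent stays exact instead of producing a float.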
Many of the spectral sequences listed above arise via a collapse of the bigraded homology groups to a single δ-grading. For Khovanov homology the δ-grading is given by δ = h − (1/2)q, where q denotes the quantum grading and h the homological one. On HFK the δ-grading is δ = a − m. Rasmussen conjectured a spectral sequence between the singly δ-graded Khovanov homology Kh^δ(K) and the δ-graded knot Floer homology HFK^δ(K) [Ras05]. Under the collapse of gradings the graded Euler characteristic becomes an integer rather than a polynomial. It is interesting to note that if we set q = √−1 in the Euler characteristic formula

∑_{i,j} (−1)^i q^j rk(Kh^{i,j}) |_{q=√−1} = ∑_{i,j} (−1)^{i−j/2} rk(Kh^{i,j})

we recover the Euler characteristic of the δ-graded Khovanov homology theory. Similarly, in HFK, where δ = a − m and the parameters are related by q² = t, we see that q = √−1 corresponds to t = −1, so the Euler characteristic specializes to

∑_{i,a∈ℤ} (−1)^{i+a} · rank HFK_i(K, a) = ∆_K(−1).
The t = −1 evaluation of the Alexander polynomial is equal to the knot determinant det(K). This invariant has another categorification via the Heegaard-Floer 3-manifold homology of the branched double cover of K,

χ(HF(Σ(K))) = |H²(Σ(K), ℤ)| = det(K) = |∆_K(−1)|,

see [OS05, Section 3]. This variant of Heegaard-Floer homology is the target of the conjectured spectral sequence from odd Khovanov homology discussed above.
1.2. Quantum algebra and a zoo of quantum invariants. These connections between variants of Heegaard-Floer homology and even/odd Khovanov homology are somewhat striking given that these invariants are defined in very different ways. However, quantum algebra sheds some light on why such a connection is less surprising. It is well known that the Jones polynomial can be interpreted as a quantum invariant associated to the quantum group for sl(2) and its two-dimensional representation. Varying the semisimple Lie algebra g and the irreducible representations coloring the strands of a link, one arrives at a whole family of quantum invariants.
The Alexander-Conway function ∇_L(t₁, ..., t_k) for a k-component link L is a rational function in variables t₁, ..., t_k. Similarly, the Alexander polynomial ∆_L(t₁, ..., t_k) is a Laurent polynomial in the variables t₁^{1/2}, ..., t_k^{1/2}. They are related by ∇_L(t₁, ..., t_k) = ∆_L(t₁², ..., t_k²) if k > 1, and ∇_L(t) = ∆_L(t²)/(t − t⁻¹). The Alexander-Conway polynomial can be formulated as a (non-semisimple) quantum invariant in several ways. One formulation realizes ∇_L using the quantum group associated to the super Lie algebra gl(1|1) [RS93]. Murakami gave a construction using quantum sl(2) with the quantum parameter specialized to a fourth root of unity [Mur92,Mur93]. Kauffman and Saleur give a construction based on quantum sl(1|1).
A comparison and review of the U_{√−1}(sl(2)) and U_q(gl(1|1)) Reshetikhin-Turaev functors is given in [Vir02]. In this work, Viro shows that there is a 'q-less subalgebra' U₁ of U_q(gl(1|1)) that is responsible for producing the Reshetikhin-Turaev functor closely related to the one coming from U_{√−1}(sl(2)). Similarly, an algebra that can be defined over ℤ also appears in the Kauffman-Saleur U_q(sl(1|1)) construction of the Alexander-Conway polynomial ∇_K via a specialization (λ = 1 in their notation, see [KS91, Equation (2.1)]), which corresponds in our notation to working with the subalgebra U̇(sl(1|1))1₁ of U̇(sl(1|1)); see Section 8.5. The quantum parameter is not needed in the definition of this algebra; it only arises in the coalgebra structure when one acts on tensor product representations.
Connections between the Alexander invariant and the Jones polynomial then arise via an observation of Kauffman and Saleur that the R-matrices for braiding the fundamental representations of sl(2) and sl(1|1) agree when evaluated at q = √−1. This implies an identification of quantum invariants

J_K(q)|_{q=√−1} = ∇_K(t)|_{t=√−1} = ∆_K(t)|_{t=−1}.    (1.1)
Our aim in this article is to lay the groundwork for a higher representation theoretic categorification of the knot determinant |∆_K(−1)| by categorifying the quantum algebras used to define it. Our approach provides a new perspective on connections between these different approaches via the theory of covering Kac-Moody algebras.
1.3. The oddification program. The so-called 'oddification' program [LR14] in higher representation theory grew out of an attempt to provide a representation theoretic explanation for a number of phenomena observed in connection with odd Khovanov homology. The idea is that Khovanov homology has many connections throughout mathematics and theoretical physics, suggesting that many of the other fundamental structures connected with Khovanov homology may also have odd analogs. The oddification program looks for odd analogs of structures that are typically non-commutative, having the same graded ranks as traditional objects and becoming isomorphic when coefficients are reduced modulo two. Often the odd world provides the same combinatorial relationships in a non-commutative setting.
The nilHecke algebra plays a central role in the theory of categorified quantum groups, giving rise to an integral categorification of the negative half of U_q(sl(2)) [Lau08,KL10,Rou08]. An oddification of this algebra was defined in [EKL14], which can be viewed as an algebra of operators on a skew polynomial ring. The invariants under this action define an odd version of the ring of symmetric functions [EK12,EKL14]. The odd nilHecke algebra also gives rise to an "odd" noncommutative analog of the cohomology of Grassmannians and Springer varieties [LR14,EKL14]. It also fits into a 2-categorical structure [EL16,BE17b] giving an odd analog of the categorification of the entire quantum group U_q(sl(2)). In each of these cases, the structures possess combinatorics quite similar to those of their even counterparts. When coefficients are reduced modulo two the theories become identical, but the odd analogues possess an inherent non-commutativity making them distinct from the classical theory.
The odd nilHecke algebra appears to be connected to a number of important objects in traditional representation theory. It was independently introduced by Kang, Kashiwara and Tsuchioka [KKT16], starting from the different perspective of trying to develop super analogues of KLR algebras. Their quiver Hecke superalgebras become isomorphic to affine Hecke-Clifford superalgebras or affine Sergeev superalgebras after a suitable completion, and the sl(2) case of their construction is isomorphic to the odd nilHecke algebra. Cyclotomic quotients of quiver Hecke superalgebras supercategorify certain irreducible representations of Kac-Moody algebras [KKO13,KKO14]. A closely related spin Hecke algebra associated to the affine Hecke-Clifford superalgebra appeared in earlier work of Wang [Wan09], and many of the essential features of the odd nilHecke algebra, including skew polynomials, appear much earlier in this and related works on spin symmetric groups [KW08a,KW08b,KW09].
1.4. Covering Kac-Moody algebras. Clark, Hill, and Wang showed that the odd nilHecke algebra and its generalizations fit into a framework they called covering Kac-Moody algebras [HW15,CW13,CHW13,CHW14]. Their idea was to decategorify the supergrading on the odd nilHecke algebra by introducing a parameter π with π² = 1. The covering Kac-Moody algebra is then defined over ℚ(q)[π]/(π² − 1) for certain very specific families of Kac-Moody Lie algebras. The specialization to π = 1 gives the quantum enveloping algebra of a Kac-Moody algebra and the specialization to π = −1 gives a quantum enveloping algebra of a Kac-Moody superalgebra. This idea led to a novel bar involution q̄ = πq⁻¹ allowing the first construction of canonical bases for Lie superalgebras [CHW14,CW13]. In the simplest case, the covering algebra U̇_{q,π} can be seen as a simultaneous generalization of the modified quantum group U̇(sl(2)) and the modified quantum Lie superalgebra U̇(osp(1|2)). This relationship is illustrated below.

[diagram: U̇_{q,π} specializes at π → 1 to U̇(sl(2)) and at π → −1 to U̇(osp(1|2))]
Covering Kac-Moody algebras are not an sl(n) phenomenon. In finite type, the covering Kac-Moody algebras U_{q,π}(g) can be defined connecting the superalgebra of the anisotropic Lie superalgebra g = osp(1|2n) with the quantum Kac-Moody algebra g = so(2n+1) obtained by forgetting the parity in the root datum [CHW13,HW15]. In particular, the only finite type family of covering Kac-Moody algebras U_{q,π}(g) has a π = 1 specialization equal to the quantum enveloping algebra U_q(so(2n+1)) and a π = −1 specialization equal to the quantum superalgebra U_q(osp(1|2n)). The connection to sl(2) only arises because of the Lie algebra coincidence sl(2) ≅ so(3).
The algebra/superalgebra pairs connected by covering theory are closely related by the theory of twistors developed by Clark, Fan, Li, and Wang [FL15,CFLW14]. Denote by t a square root of −1, and let U̇[t] denote the algebra U̇_{q,π} with scalars extended by t. Then the twistor associated to a covering algebra U̇_{q,π}(g) gives an isomorphism

Ψ : U̇[t]|_{π=−1} → U̇[t]|_{π=1}    (1.2)

sending π → −π and thereby switching between a quantum group and its super analog. This map sends q → t⁻¹q. Hence, U̇[t]|_{π=1} and U̇[t]|_{π=−1} can be regarded as two different rational forms of a common algebra U̇[t]. These two rational forms each admit their own distinct integral forms. The twistor isomorphism (1.2) has implications for the corresponding quantum link invariants. Blumen showed that the osp(1|2n) and so(2n+1) invariants colored by the standard (2n+1)-dimensional representations agree up to a substitution of variable [Blu10]. To a knot or link K, Clark greatly extended this observation by defining covering colored knot invariants J^λ_K(q, t) associated to U_{q,π}(g) and a dominant integral weight λ ∈ X⁺. These knot invariants take values in a larger field ℚ(q, t)^τ with τ² = π. They have the property of simultaneously generalizing the colored so(2n+1) quantum invariant and the osp(1|2n) super quantum invariant. If we define ^{so}J^λ_K(q) := J^λ_K(q, 1) and ^{osp}J^λ_K(q) := J^λ_K(q, t), then Clark shows [Cla17, Theorem 4.24] that the twistor isomorphism (1.2) gives rise to an identification of quantum knot invariants

^{osp}J^λ_K(q) = α(λ, K) · ^{so}J^λ_K(t⁻¹q)    (1.3)

for some scalar α(λ, K) depending on the dominant weight λ and the link K. In the case n = 1 this gives the surprising observation that the colored Jones polynomial can be obtained from the super representation theory of osp(1|2) with appropriate scalars.
Here we show that the covering algebra U̇_{q,π} for n = 1 specializes at (q, π) = (√−1, 1) to the small quantum group for sl(2) (at a fourth root of unity) and at parameters (q, π) = (−1, −1) to a "q-less subalgebra" of modified sl(1|1); see Sections 8.4 and 8.5. The quantum knot invariant twistor isomorphism (1.3) at n = 1 specializes at q = −1 to a connection between the osp(1|2) invariant at parameter q = −1 and the sl(2)-invariant at q = t⁻¹(−1) = t, which is a fourth root of unity. Hence, the connection between a q-less subalgebra of quantum sl(1|1) and sl(2) at a fourth root of unity may be a special case of a twistor arising from the covering Kac-Moody theory.
1.5. Categorification. The existence of a canonical basis for the covering algebra U̇_{q,π} led Clark and Wang to conjecture the existence of a categorification of this algebra [CW13]. The conjecture was proven in [EL16], where a ℤ×ℤ₂-graded categorification U_{q,π} of U̇_{q,π} was defined. Later, Brundan and Ellis gave a simplified treatment [BE17b] using the theory of monoidal supercategories [BE17a]. This work provided a drastic simplification that makes the present work possible.
Thus far, the odd categorification U_{q,π} of quantum sl(2) has yet to be applied to give a higher representation theoretic interpretation of odd Khovanov homology. However, it is interesting to note the strong agreement between the existence of covering Kac-Moody algebras for so(2n+1) and the existence of an "odd link homology" for the same algebras predicted by the string theoretic approach to link homology constructed by Mikhaylov and Witten using D3-branes with boundary on a fivebrane [MW15].
Given the expected connections to odd link homology, the conjectural spectral sequences connecting odd Khovanov homology and knot Floer homology motivate the investigation of 2-categorical differentials on the odd categorified quantum group. In particular, we categorify both specializations of the covering algebra, at (q, π) = (√−1, 1) and (−1, −1), corresponding to sl(2) at a fourth root of unity and to a subalgebra of quantum sl(1|1); see Corollary 9.10. This is not as straightforward as one might hope. In both algebras there are relations of the form E² = F² = 0, and such relations are known to be nontrivial to categorify.
If the identity morphism of a generator E in a category is represented diagrammatically by a vertical arrow, then two vertical strands represent the object EE. Khovanov was the first to identify the representation theoretic importance of dg-structures with a diagrammatic relation defining the differential of a crossing to be two vertical strands. Such structures appeared in the work of Lipshitz, Ozsváth, and Thurston [LOT11] providing a combinatorial construction of Heegaard-Floer homology. Khovanov showed that such a relation could be used to produce the nilpotent relation E² = 0 needed for a categorification of the positive part of gl(1|1) [Kho14]. This led to a categorification of the positive part of gl(m|1) [KS17].
Since Khovanov's initial observations, there have been various proposals for categorifications connected with gl(1|1) appearing in the literature. In [EPV15] the tangle Floer dg-algebra is identified with a tensor product of U_q(gl(1|1)) representations, and dg-bimodules were defined giving the action of the quantum group generators E and F. Further, Ozsváth and Szabó's new bordered Heegaard-Floer homology [OS18,OS17] can be seen as a categorification of gl(1|1) representations via the work of Manion [Man16]. Motivated by contact geometry, Tian defined a categorification of U_q(sl(1|1)) using triangulated categories arising from the contact category of the disc with points on the boundary [Tia16,Tia14a,Tia14b]. An approach to categorifying tensor powers of the vector representation of U_q(gl(1|1)) based on super Schur-Weyl duality is given in [Sar16], which is related to the bordered theory in [Man17].
Here we extend Khovanov's observation in order to categorify the specializations of the covering algebra at q² = −π. To do this we define new dg-structures on the 2-category U_{q,π}.
1.6. Differential graded structures on the categorified quantum group. Derivations on the even categorification U(sl(2)) were studied by Elias and Qi [EQ16a]. They were interested in categorifying the small quantum group for sl(2) at a (prime) root of unity. Their approach made use of the theory of Hopfological algebra initiated by Khovanov [Kho16] and developed by Qi [Qi14]. The main idea in Hopfological algebra is to equip a given categorification with the structure of a p-dg algebra. This is like a dg-algebra, except that d^p = 0 rather than d² = 0.
Within the framework of Hopfological algebra, there have been a number of investigations into categorifications at a prime root of unity. A p-dg analog of the nilHecke algebra was studied in [KQ15]. In [EQ16a] Elias and Qi categorify the small quantum group for sl(2) at a (prime) root of unity by equipping the 2-category U with a p-differential giving it the structure of a p-dg-2-category. Using the thick calculus from [KLMS12], Elias and Qi categorify an idempotented form of quantum sl(2) and some of its simple representations at a prime root of unity [EQ16b]. This involves equipping the Karoubi envelope U̇ of the 2-category U with a p-dg structure. Related categorifications were studied in [QS17]. All of these approaches require p to be prime and the base field to have characteristic p.
Much less is known about honest dg-structures, or categorification at a root of unity working over an arbitrary field (see [LQ18] for the current state of the art). In particular, it was shown in [EQ16a] that there are no nontrivial differentials in characteristic zero on the original categorification U(sl(2)). The only clue we have is the work of Ellis and Qi that equips the odd nilHecke algebra with an honest dg-algebra structure [EQ16c]. Their work gives a categorification of the positive part of U_q(sl(2)) with q specialized to a fourth root of unity. There are a couple of points here worth highlighting. First, they work with the odd nilHecke algebra defined over an arbitrary field (no need to work in characteristic p). Second, the fourth root of unity doesn't come from considering a funny version of chain complexes with d⁴ = 0; they use ordinary dg-algebras. However, the differential they define on the odd nilHecke algebra is not of bidegree zero. Rather, it has ℤ×ℤ₂-degree (2,1), leading to so-called mixed complexes, or 'half graded' chain complexes of vector spaces.
The effect of having mixed complexes is a collapse of the ℤ×ℤ₂-bigrading, analogous to the δ-grading from link homology theory. At the level of the Grothendieck ring of the derived category of dg-modules, this has the effect of imposing the relation 1 + q²π = 0 in the ground ring ℤ[q, q⁻¹, π]/(π² − 1). When π = 1, this gives the Grothendieck ring the structure of a ℤ[√−1]-algebra. So the fourth root of unity comes from the bidegree of the differential, not from the theory of p-dg algebras. This is discussed in greater detail in Section 3.4.
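The arithmetic of these relations can be checked directly (our own sanity check, not from the paper): in ℤ[q, q⁻¹, π]/(π² − 1, 1 + q²π), setting π = 1 forces q² = −1, so q is a primitive fourth root of unity, while setting π = −1 forces q² = 1.

```python
# pi = 1 forces q^2 = -1: the ground ring becomes the Gaussian integers Z[i].
q, pi = 1j, 1
assert pi**2 == 1 and 1 + q**2 * pi == 0

# pi = -1 forces q^2 = 1: the quantum parameter is eliminated up to sign.
q, pi = 1, -1
assert pi**2 == 1 and 1 + q**2 * pi == 0
```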
Ellis and Qi suggested that their work on the differential graded odd nilHecke algebra should extend to the odd categorified quantum group U(sl(2)) to provide a characteristic zero lift of the differentials defined on the original categorification U(sl(2)) that were studied in finite characteristic in [EQ16a].
Here we prove this conjecture by defining a family of differentials on the odd 2-supercategory U; see Proposition 7.1.

1.7. Main Results. In Proposition 7.1 we classify 2-categorical differentials on the odd 2-category U_{q,π}. Our classification depends on the so-called nondegeneracy conjecture stating that certain spanning sets form a basis for the 2-homs in U_{q,π}. However, the differentials obtained via this assumption suffice to achieve our aim. Following similar arguments from [EQ16a], we show that the odd 2-category U_{q,π} is dg-Morita equivalent to a positively graded dg-algebra, enabling us to compute the Grothendieck ring of the dg-2-supercategory (U_{q,π}, ∂) using the theory of fantastic filtrations developed by Elias and Qi [EQ16b]. As explained in Section 3.4, we have freedom in how we treat the ℤ₂-grading in the Grothendieck group. In particular, the Grothendieck group is naturally a ℤ[q, q⁻¹, π]/(π² − 1, 1 + q²π)-module with [MΠ] = π[M]. We show in Corollary 9.10 that taking the π = 1 specialization results in a categorification of U̇(sl(2)) at a fourth root of unity, while taking the π = −1 specialization eliminates q entirely and we are left with a ℤ-module closely related to gl(1|1). In particular, we have relations E² = F² = 0 and a super commutator relation for E and F. In this way, U_{√−1}(sl(2)) together with a q-less version of gl(1|1) appear naturally via different decategorifications of the same 2-category U_{q,π}.
Acknowledgements. The authors are very grateful to You Qi for patiently explaining the details of his previous work and to Andy Manion for explaining his perspective on quantum algebraic aspects of Heegaard-Floer homology. We would also like to thank Jon Brundan and Joshua Sussan for comments on an earlier draft of this article. Both authors were partially supported by the NSF grants DMS-1255334 and DMS-1664240.
2. Super dg theory

Here we consider ℤ×ℤ₂-graded dg-categories. This is a modest generalization of the standard theory of dg-categories, since a ℤ-graded dg-category induces a ℤ₂-graded one by collapsing the grading modulo 2. However, we note that the ℤ₂-grading on 2-morphisms in the 2-category U defined in Section 5 is not the mod 2 reduction of the quantum ℤ-grading. It is easy to see this from the bigrading on caps and cups. We consider differentials with respect to the ℤ₂ (or super) grading. If the differential also has a nontrivial ℤ-grading (as is the case with the differential on U), this can produce interesting effects on the Grothendieck ring. In particular, if the differential has bidegree (2,1) we are led to the notion of 'half graded' complexes whose Grothendieck ring corresponds to the Gaussian integers; see Section 3.4.
The natural context for discussing ℤ₂-graded dg-categories is the supercategory formalism developed by Ellis and Brundan [BE17a,BE17b] that we review in Section 2.1.
2.1. Super 2-categories. Let k be a field of characteristic not equal to 2. A superspace is a ℤ₂-graded vector space V = V_0 ⊕ V_1. For a homogeneous element v ∈ V, write |v| for the parity of v.

Let SVect denote the category of superspaces and all linear maps. Note that Hom_SVect(V, W) has the structure of a superspace, since any linear map f : V → W between superspaces decomposes uniquely into an even and an odd map. The usual tensor product of k-vector spaces is again a superspace, with

(V ⊗ W)_0 = V_0 ⊗ W_0 ⊕ V_1 ⊗ W_1 and (V ⊗ W)_1 = V_0 ⊗ W_1 ⊕ V_1 ⊗ W_0.
Likewise, the tensor product f ⊗ g of two linear maps between superspaces is defined by
(f ⊗ g)(v ⊗ w) := (−1)^{|g||v|} f(v) ⊗ g(w).    (2.1)
Note that this tensor product does not define a tensor product on SVect, as the usual interchange law between tensor product and composition acquires a sign in the presence of odd maps:

(f ⊗ g) ∘ (h ⊗ k) = (−1)^{|g||h|} (f ∘ h) ⊗ (g ∘ k).    (2.2)
This failure of the interchange law depending on parity is the primary structure differentiating super monoidal categories from their non-super analogs.
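The sign bookkeeping in (2.1) and (2.2) can be machine-checked on a toy model (our own sketch, with hypothetical names): take every superspace to be one-dimensional, so that a homogeneous map is just a scalar together with the parities of its source and target basis vectors.

```python
from itertools import product

class SMap:
    """A homogeneous map between 1-dimensional superspaces."""
    def __init__(self, scalar, dom, cod):
        self.scalar, self.dom, self.cod = scalar, dom % 2, cod % 2

    @property
    def parity(self):
        return (self.cod - self.dom) % 2

def tensor(f, g):
    # Koszul sign from (2.1): g slides past the basis vector of f's domain.
    sign = (-1) ** (g.parity * f.dom)
    return SMap(sign * f.scalar * g.scalar,
                (f.dom + g.dom) % 2, (f.cod + g.cod) % 2)

def compose(f, h):  # f after h
    assert h.cod == f.dom
    return SMap(f.scalar * h.scalar, h.dom, f.cod)

# Verify the super interchange law (2.2) over all parity choices:
#   (f (x) g) o (h (x) k) = (-1)^{|g||h|} (f o h) (x) (g o k)
for a, b, c, a2, b2, c2 in product((0, 1), repeat=6):
    h, f = SMap(2, a, b), SMap(3, b, c)
    k, g = SMap(5, a2, b2), SMap(7, b2, c2)
    lhs = compose(tensor(f, g), tensor(h, k))
    rhs = tensor(compose(f, h), compose(g, k))
    assert lhs.scalar == (-1) ** (g.parity * h.parity) * rhs.scalar
```

The loop exhausts all 64 parity configurations, so the sign (−1)^{|g||h|} in (2.2) is exactly the one forced by the convention (2.1).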
If we set SVect to be the subcategory consisting of only the even maps, then the tensor product equips SVect with a monoidal structure. The map u ⊗ v → (−1)^{|u||v|} v ⊗ u makes SVect into a symmetric monoidal category. We now define supercategories, superfunctors, and supernatural transformations by enriching categories over the symmetric monoidal category SVect. See [Kel82] for a review of enriched category theory.
Definition 2.1. A supercategory A is a category enriched in SVect. A superfunctor F : A → B between supercategories is an SVect-enriched functor.
Unpacking this definition, the hom spaces in a supercategory are superspaces
HOM_A(X, Y) = Hom_A^0(X, Y) ⊕ Hom_A^1(X, Y)
and composition is given by an even linear map. Let SCat denote the category of all (small) supercategories, with morphisms given by superfunctors. This category admits a monoidal structure making it a symmetric monoidal category [BE17b, Definition 1.2].
Definition 2.2. A 2-supercategory is a category enriched in SCat. These means that for each pair of objects we have a supercategory of morphisms, with composition given by a superfunctor.
For our purpose, it suffices to consider a 2-supercategory to be an extension of the definition of a 2-category to a context where the interchange law relating horizontal and vertical composition is replaced by the super interchange law
[string diagrams omitted: the three diagrams differ only in the relative heights of f and g, and exchanging those heights introduces the sign (−1)^{|f||g|}]
Effectively this means that when exchanging heights of morphisms we must take into account their parity.
2.2. (Q, Π)-envelopes.

Definition 2.3 ([BE17b] Definition 1.6). Given a graded 2-supercategory U, its (Q, Π)-envelope U_{q,π} is the graded 2-supercategory with the same objects as U, with 1-morphisms defined by

Hom_{U_{q,π}}(λ, µ) := { Q^m Π^a F | F ∈ Hom_U(λ, µ), m ∈ ℤ, a ∈ ℤ/2 }

with horizontal composition law (Q^n Π^b G)(Q^m Π^a F) := Q^{m+n} Π^{a+b} (GF), and with 2-morphisms defined by

Hom_{U_{q,π}}(Q^m Π^a F, Q^n Π^b G) := { x^{n,b}_{m,a} | x ∈ Hom_U(F, G) }.

2.3. Super dg-algebras. In this section we collect some facts about differential graded algebras in the super setting. Following [EQ16c] we grade our dg-algebras by ℤ/2ℤ. Traditional dg-algebras inherit a ℤ₂-grading by collapsing the ℤ-grading mod 2. However, in our setting we will have both a ℤ-grading and a ℤ₂-grading that is not the mod 2 reduction of the ℤ-grading.
A super dg-algebra (A, ∂ A ) is a superalgebra A = A0 ⊕ A1 and an odd parity1 k-linear map ∂ = ∂ A : A → A satisfying ∂ 2 and for any homogeneous a, b ∈ A
∂(ab) = ∂ A (ab) = ∂ A (a)b + (−1) |a| a∂ A (b). A left super dg-module (M, ∂ M ) is a supermodule M = M0 ⊕ M1 equipped with an odd parity k-linear map ∂ M : M → M such that for any homogeneous elements a ∈ A, m ∈ M we have ∂ M (ma) = ∂ A (a)m + (−1) |a| a∂ M (m).
If A and B are super dg-algebras, then a super dg (A, B)-bimodule is a superspace equipped with a differential and commuting left super dg A-module and right super dg B-module structure.
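As a toy illustration of the super Leibniz rule (an aside, not from the original text), one can take the exterior algebra on three odd generators θ_i with the odd Koszul differential ∂(θ_i) = 1, and check both ∂² = 0 and the sign rule numerically. The dictionary encoding of monomials below is an implementation choice.

```python
from itertools import combinations

N = 3  # odd generators theta_0, ..., theta_{N-1}

def mul(a, b):
    """Product in the exterior algebra: theta_i theta_j = -theta_j theta_i, theta_i^2 = 0.
    Elements are dicts {sorted tuple of indices: coefficient}."""
    out = {}
    for s, ca in a.items():
        for t, cb in b.items():
            if set(s) & set(t):
                continue
            m = list(s + t)
            sign = 1
            for i in range(len(m)):          # count inversions to sort the indices
                for j in range(i + 1, len(m)):
                    if m[i] > m[j]:
                        sign = -sign
            key = tuple(sorted(m))
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: c for k, c in out.items() if c}

def d(a):
    """Odd derivation with d(theta_i) = 1: d(theta_S) = sum_i (-1)^i theta_{S minus i}."""
    out = {}
    for s, c in a.items():
        for pos in range(len(s)):
            key = s[:pos] + s[pos + 1:]
            out[key] = out.get(key, 0) + (-1) ** pos * c
    return {k: c for k, c in out.items() if c}

def add(a, b):
    out = dict(a)
    for k, c in b.items():
        out[k] = out.get(k, 0) + c
    return {k: c for k, c in out.items() if c}

# d^2 = 0 on every basis monomial
for r in range(N + 1):
    for s in combinations(range(N), r):
        assert d(d({s: 1})) == {}

# super Leibniz rule d(ab) = d(a)b + (-1)^{|a|} a d(b) on sample basis pairs
for s in [(0,), (0, 1), (1, 2), (0, 1, 2)]:
    for t in [(1,), (2,), (0, 2)]:
        a, b = {s: 1}, {t: 1}
        sign = (-1) ** (len(s) % 2)
        rhs = add(mul(d(a), b), {k: sign * c for k, c in mul(a, d(b)).items()})
        assert d(mul(a, b)) == rhs
print("d^2 = 0 and the super Leibniz rule hold")
```

Note that the pairs with overlapping generators (where the product vanishes) genuinely need the sign (−1)^{|a|} for the two terms on the right-hand side to cancel.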
Denote by C(A) the homotopy category of super dg-modules, given by quotienting maps of dg-modules by null-homotopies. Likewise, we denote by D(A) the derived category of dg-modules. Both C(A) and D(A) are triangulated categories. In the super setting that we are working in, the translation functor [1] acts by the parity shift:
(M[1])^k := M^{k+1},  ∂_{M[1]} := −∂_M.

2.4. Super dg-categories.
For standard results on dg-categories see [Kel06].
Definition 2.4. A supercategory A is called a super dg-category if the morphism spaces between any two objects X, Y ∈ A are equipped with a degree 1 differential ∂

∂ : Hom^x_A(X, Y) −→ Hom^{x+1}_A(X, Y),
which acts via the Leibniz rule on composition

HOM_A(Y, Z) × HOM_A(X, Y) −→ HOM_A(X, Z), (g, f) ↦ g • f,

∂(g • f) = ∂(g) • f + (−1)^{|g|} g • ∂(f).
Given a dg algebra A, consider the dg-enhanced module category A ∂ −dmod by defining the HOMcomplex between two dg modules M and N to be
HOM_A(M, N) = Hom^0_A(M, N) ⊕ Hom^1_A(M, N).
The differential ∂ acts on a homogeneous map f ∈ HOM_A(M, N) as

∂(f) := ∂_N • f − (−1)^{|f|} f • ∂_M.
If we take A = k with the trivial differential, then k ∂ −dmod is just the dg-category of chain complexes of super vector spaces.
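The formula for ∂ on the HOM complex can be checked directly on matrices: for any square-zero differentials ∂_M, ∂_N, the operator f ↦ ∂_N f − (−1)^{|f|} f ∂_M squares to zero. A small numerical sketch (illustrative only; the block-matrix model of a complex is an assumption made here, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def square_zero():
    """A square-zero differential in block form [[0, B], [0, 0]], so d @ d = 0."""
    d = np.zeros((n, n), dtype=int)
    d[:2, 2:] = rng.integers(-3, 4, (2, 2))
    return d

dM, dN = square_zero(), square_zero()
assert not (dM @ dM).any() and not (dN @ dN).any()

def D(f, p):
    """The HOM-complex differential: D(f) = dN f - (-1)^p f dM (p = parity of f)."""
    return dN @ f - (-1) ** p * f @ dM

for p in (0, 1):                        # test homogeneous maps of both parities
    f = rng.integers(-3, 4, (n, n))
    assert not D(D(f, p), p + 1).any()  # D^2 = 0, since dM^2 = dN^2 = 0
print("HOM-complex differential squares to zero")
```

Expanding D(D(f, p), p+1) by hand, the cross terms ∂_N f ∂_M cancel in pairs, which is exactly what the assertion confirms.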
Definition 2.5. A left (respectively right) super dg-module M over a super dg-category A is a superfunctor
M : A → k ∂ −dmod, resp. M : A op → k ∂ −dmod, (2.3)
that commutes with the ∂-actions on A and k ∂ −dmod.
2.5. Super dg-2-categories.
Definition 2.6. A (strict) super dg-2-category (U, ∂) consists of a 2-category U together with a differential on 2-morphisms satisfying the super Leibniz rule for both horizontal and vertical composition.
More explicitly, a super dg-2-category consists of the following data.
(1) A set of objects I = {λ, µ, . . .}, and for any λ, µ ∈ I a super dg-category µU_λ := Hom_U(λ, µ). In particular, vertical composition of 2-morphisms obeys the super dg-category Leibniz rule for morphisms.
(2) For any pair of 1-morphisms µE_λ, µE′_λ in the same Hom space, the space of 2-morphisms HOM_{µUλ}(µE_λ, µE′_λ) is a chain complex of super vector spaces.
(3) The horizontal composition of 2-morphisms satisfies the Leibniz rule. That is, for any triple of objects λ, µ, ν ∈ I, the composition

HOM_{νUµ}(νF_µ, νF′_µ) × HOM_{µUλ}(µE_λ, µE′_λ) −→ HOM_{νUλ}(νFE_λ, νF′E′_λ), (h, f) ↦ hf,

satisfies ∂(hf) = ∂(h)f + (−1)^{|h|} h∂(f).
3. Hopfological algebra
One of the primary reasons that triangulated categories are prevalent in categorification is the need to accommodate minus signs in the Grothendieck ring. For positive algebraic structures, additive categories typically suffice, with basis elements corresponding to indecomposable objects in the categorification. Quantum groups with their canonical bases are an excellent example of this phenomenon. However, as we expand categorification to include non-positive structures like the Jones polynomial, minus signs are lifted via the shift functor [1] of a triangulated category, with [1] inducing multiplication by −1 at the level of the Grothendieck group.
In his proposal for categorification at roots of unity, Khovanov showed that the traditional world of dg-categories, together with their homotopy and derived categories of modules, fits into a framework of Hopfological algebra. For our purposes, Hopfological algebra will provide a valuable perspective on the possible decategorifications of graded dg-2-supercategories. We quickly review the relevant details of Hopfological algebra needed for these purposes. For a more detailed review see [Kho16,Qi14].
3.1. Basic setup. Let H be a finite-dimensional Hopf algebra. Then H is a Frobenius algebra, and every injective H-module is automatically projective. Define the stable category H−mod as the quotient of the category H−mod by the ideal of morphisms that factor through a projective (equivalently injective) module. The stable category H−mod is triangulated; see for example [Hap88].
The shift functor for the triangulated structure on H−mod is defined by the cokernel of an inclusion of M as a submodule into an injective (projective) module I. We can fix this inclusion by noting that for any H-module M , the tensor product H ⊗ M with a free module is a free module, and the tensor product P ⊗ M with a projective module is always projective [Kho16, Proposition 2]. A left integral Λ for a Hopf algebra H is an element Λ ∈ H satisfying hΛ = ε(h)Λ.
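For concreteness (an illustrative aside, not from the original text), left integrals are easy to exhibit in small examples: Λ = D in H = k[D]/D², and Λ = 1 + g + g² in the group algebra k[ℤ/3]. A quick numerical check of the defining property hΛ = ε(h)Λ:

```python
def conv(a, b):
    """Multiplication in the group algebra k[Z/3], with basis 1, g, g^2."""
    out = [0, 0, 0]
    for i in range(3):
        for j in range(3):
            out[(i + j) % 3] += a[i] * b[j]
    return out

Lam = [1, 1, 1]                      # Lambda = 1 + g + g^2
for i in range(3):
    h = [0, 0, 0]; h[i] = 1          # h = g^i, with counit eps(g^i) = 1
    assert conv(h, Lam) == Lam       # h * Lambda = eps(h) * Lambda

# For the super Hopf algebra H = k[D]/D^2 (with eps(D) = 0), Lambda = D works:
# 1 * D = D = eps(1) D, and D * D = 0 = eps(D) D.
print("left integrals verified")
```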
Using the left integral, any H-module M admits a canonical embedding into a projective module via M → H ⊗ M sending m → Λ ⊗ m. Since HΛ = kΛ, HΛ is a one-dimensional submodule of the free module H, hence it is projective. This allows us to define a shift functor on the category of stable H-modules via
T : H−mod −→ H−mod, M ↦ (H/(HΛ)) ⊗ M. (3.1)
We now define the basic objects of interest in the theory of Hopfological algebra that generalize dg-algebras and their modules. The reader may find Figure 3.2 helpful for tracking the analogy. An H-module algebra B is an algebra equipped with an action of H by algebra automorphisms. A left H-comodule algebra is an associative k-algebra A equipped with a map
∆ A : A → H ⊗ A making A an H-comodule and such that ∆ A is a map of algebras.
There is a natural construction to form a left H-comodule algebra from a right H-module algebra by forming the smash product algebra A := H#B. As a k-vector space A is just H ⊗B, with multiplication given by
(h ⊗ b)(ℓ ⊗ c) = hℓ (1) ⊗ (b · ℓ (2) )c,
where we use Sweedler notation for the coproduct ∆(ℓ) = Σ_{(ℓ)} ℓ_{(1)} ⊗ ℓ_{(2)} ∈ H ⊗ H. The left H-comodule structure on A = H#B is given by ∆_A(h ⊗ b) = ∆(h) ⊗ b. Let A−mod denote the category of left A-modules. The basic example of interest is the super Hopf algebra H = k[D]/D², with

∆(1) = 1 ⊗ 1, ε(1) = 1, ∆(D) = 1 ⊗ D + D ⊗ 1, ε(D) = 0. (3.3)

The Grothendieck group of the compact derived category D^c(B, H) is defined by imposing the relation [Y] = [X] + [Z] whenever there is a distinguished triangle inside D^c(B, H) of the form X −→ Y −→ Z −→ T(X).
Both the Grothendieck rings of the categories C(B, H) and D(B, H) are left modules over the Grothendieck ring K₀(H−mod) (see [Kho16, Corollary 1 and 2]). Hence, the ground ring for decategorification provided by the theory of Hopfological algebra associated to the Hopf algebra H is determined by K₀(H−mod). Note this group has a ring structure because H−mod has an exact tensor product. When H is quasi-triangular, K₀(H−mod) is commutative, so that we do not need to distinguish between left and right modules [Qi14, Remark 7.17].
3.3.1. Ground ring for Grothendieck group from the Hopfological perspective. In the special case when A = k, the Grothendieck group for D(k, H) is the same as H−mod since H acts trivially on k [Qi14, Corollary 9.11]. Since K 0 (A H −mod) is a module over K 0 (H−mod) ∼ = K 0 (D(k, H)), the Grothendieck ring of D(k, H) determines the ground ring for the Grothendieck group of A H −mod. In the language of dg-algebras, this just says that K 0 of the derived category of chain complexes of vector spaces determines the ground ring for K 0 of the category of dg-modules.
Consider the category of complexes of k-vector spaces. Taking the homological degree modulo two gives rise to a ℤ₂-grading for the dg homotopy category D(k) of (ungraded) chain complexes of vector spaces, where the differential has degree deg(d) = 1. Assuming k = ℤ or a field, it follows that any complex in D(k) is isomorphic to a direct sum of indecomposable chain complexes of the following form:
• a single copy of k in either parity;
• a copy of S = 0 → k −d→ kΠ → 0, where we include the parity shift Π on the right hand side to accommodate the degree of the differential.
Then the Grothendieck group is generated as a ℤ[π]/(π² − 1)-module by the symbol [k], with [kΠ] = π[k]. If the differential d in the complex S is given by multiplication by a unit in k, then S is contractible and its class is therefore zero in K₀(D(k)). The contractibility of S imposes the additional relation
(1 + π)[k] = 0.
(3.5)
The classification of objects in D(k) implies that this is the only relation, and it forces the symbol of S to be zero even when d is not multiplication by an invertible element. Hence, π = −1 and

K₀(D(k)) ≅ ℤ[π]/(1 + π) = ℤ.
(3.6)
The homological shift k[1] is given by the cokernel of the inclusion of k into H ⊗ k via 1 ↦ Λ ⊗ 1 = D ⊗ 1. The injective envelope H ⊗ k is two dimensional as a vector space, spanned by the identity and D. We can represent H ⊗ k by the complex k −D→ kΠ, where k includes into the rightmost term via the map D ⊗ 1. Hence, the cokernel of this inclusion gives k[1] = kΠ. So we have recovered from the Hopfological perspective the fact that the shift [1] is just the parity shift Π, and at the level of the Grothendieck group we have

[k[1]] = [kΠ] = π[k] = −[k].
We carefully reviewed the usual dg-case to set the stage for our treatment in the 'mixed complex' setting.
3.4. Gaussian integers. The following section is an extension of the discussion in [EQ16c, Section 2.2.4] that was explained to us by You Qi. Consider the category of ℤ × ℤ₂-graded modules. We denote by ⟨1⟩ a shift of the quantum (or ℤ-) grading, and by Π the parity shift functor. Define a differential between such modules to be a map of bidegree (2, 1) that squares to 0. The main difference between this case and the previous one is that our Hopf algebra input into Hopfological algebra is now the super Hopf algebra H = k[D]/D², where D has mixed degree (2, 1). A chain complex is a k-module equipped with such a differential. Following [EQ16c] we call such complexes half-graded complexes, for reasons that will become clear. Denote the corresponding homotopy category by C(k) and the derived category by D(k).
Any category of ℤ × ℤ₂-graded dg-modules with differentials of bidegree (2, 1) will have a Grothendieck ring that is a module over K₀(D(k)), so this Grothendieck ring controls the ground ring that appears in categorification via half-graded complexes. Assuming k = ℤ or a field, it follows that any complex in D(k) is isomorphic to a direct sum of indecomposable chain complexes of the following form:
• a single copy of k in any bidegree;
• a copy of S = 0 → Π^b k⟨a⟩ −d→ Π^{b+1} k⟨a + 2⟩ → 0, with the first term in any bidegree (a, b) and the rightmost copy in bidegree (a + 2, b + 1). Then the Grothendieck group is generated as a ℤ[q, q⁻¹, π]/(π² − 1)-module by the symbol [k], with [k⟨1⟩] = q[k] and [kΠ] = π[k]. If the differential d in the complex S is given by multiplication by a unit in k, then S is contractible and therefore isomorphic to 0 in D(k). For simplicity take a = b = 0 in S; the contractibility of S imposes the additional relation
(1 + q 2 π)[k] = 0.
(3.7)
The classification of objects in D(k) implies that this is the only relation, and it forces the symbol of S to be zero even when d is not multiplication by an invertible element. Hence, K₀(D(k)) ≅ ℤ[q, q⁻¹, π]/(π² − 1, 1 + q²π). If we specialize π = −1, then the equation imposed by the contractible complex implies that q² = 1, so the ground ring reduces to ℤ. If we specialize π = 1, then we have the relation q² = −1, and q must be a fourth root of unity. Hence, we have the following result.
Proposition 3.2. Let A be a ℤ × ℤ₂-graded algebra equipped with a differential d of bidegree (2, 1). Then the Grothendieck group associated with the category of ℤ × ℤ₂-graded dg-modules over A is a module over the ring ℤ[q, q⁻¹, π]/(π² − 1, 1 + q²π). At π = −1 this ring is just ℤ, and at π = 1 it is ℤ[√−1].
Corollary 3.3. The Grothendieck ring of the (Q, Π)-envelope of a graded 2-supercategory equipped with a differential of bidegree (2, 1) is a module over the ring ℤ[q, q⁻¹, π]/(π² − 1, 1 + q²π). At π = −1 this ring is just ℤ, and at π = 1 it is ℤ[√−1].
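The two specializations can be sanity-checked numerically (an illustrative aside, not from the original text): the point π = 1, q = √−1 satisfies both defining relations of ℤ[q, q⁻¹, π]/(π² − 1, 1 + q²π) and forces q⁴ = 1, while π = −1 forces q² = 1.

```python
# Relations of Z[q, pi]/(pi^2 - 1, 1 + q^2 pi) under the two specializations.
pi_, q = 1, 1j                    # pi = 1 forces q^2 = -1, i.e. q = sqrt(-1)
assert pi_ ** 2 - 1 == 0
assert 1 + q ** 2 * pi_ == 0
assert q ** 4 == 1                # so q is a fourth root of unity

pi_, q = -1, 1                    # pi = -1 forces q^2 = 1
assert pi_ ** 2 - 1 == 0
assert 1 + q ** 2 * pi_ == 0
print("both specializations satisfy the defining relations")
```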
4. Results on Grothendieck groups of super dg-algebras

4.1. Grothendieck group of a super dg-algebra. Despite our protracted discussion of Hopfological algebra, the decategorification of categories of super dg-modules is not so unlike the decategorification of ordinary dg-modules. We detoured through Hopfological algebra to highlight the fact that the Grothendieck ring will have the structure of a module over the Gaussian integers ℤ[√−1]. The corresponding Grothendieck rings are defined via direct sums of the hom dg-categories
K₀(U, ∂) := ⊕_{λ,µ∈Ob(U)} K₀(D^c(µU_λ)). (4.2)
If U_{q,π} is a dg (Q, Π)-2-category (see Section 2.2), then K₀(U_{q,π}, ∂) is a ℤ[q, q⁻¹, π]/(π² − 1)-module with [Q^m X] = q^m [X] and [XΠ^a] = π^a [X] for X ∈ Hom_U(λ, µ).
4.3. Positively graded dg-algebras. A ℤ-graded dg-algebra is called a positive dg-algebra (see [Sch11]) if it satisfies the following:
(1) the algebra A = ⊕_{i∈ℤ} A_i is non-negatively graded,
(2) the degree zero part A_0 is semisimple, and
(3) the differential acts trivially on A_0.
The calculation of the Grothendieck ring of a positively graded dg-algebra is greatly simplified.
4.4. Fantastic filtrations. In this section, we give a review of the fantastic filtration and recall the related theorems from [EQ16a]. Fantastic filtrations are an essential tool in this work for determining the Grothendieck ring of the odd dg 2-category U_{q,π}. The key issue is that if A is a dg-algebra, the direct sum decomposition of A-modules does not necessarily commute with the differential. However, if there exists a fantastic filtration F• on an A-module Ae, where e is an idempotent, then the direct sum decomposition of Ae as an A-module becomes a direct sum decomposition of dg-modules. We collect several important results on fantastic filtrations from [EQ16a, Section 5] that are easily adapted to the super dg-setting.
Lemma 4.3. Let R be a ring and the elements u i , v i ∈ R, where i ∈ I is a finite set, satisfy the following conditions:
u i v i u i = u i (4.3) v i u i v i = v i (4.4) v i u j = δ i,j (4.5)
then e = i u i v i is an idempotent and we have a direct sum decomposition Re ∼ = ⊕ i Rv i u i .
Note that u i v i is an idempotent for each i ∈ I, as u i v i u i v i = u i v i , and moreover {u i v i } i∈I is a set of orthogonal idempotents, as for any i = j, u i v i u j v j = u j v j u i v i = 0. It follows that e is an idempotent and Re ∼ = ⊕ i Rv i u i .
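A toy model of the data in Lemma 4.3 (an illustrative aside, not from the original text; here the u_i and v_i are realized as coordinate inclusions and projections between free summands rather than elements of a single ring): v_iu_j = δ_{i,j}, each u_iv_i is an idempotent, and e = Σ_i u_iv_i is idempotent.

```python
import numpy as np

n = 3
u = [np.eye(n, dtype=int)[:, [i]] for i in range(n)]   # u_i: inclusion of the i-th line
v = [np.eye(n, dtype=int)[[i], :] for i in range(n)]   # v_i: projection onto it

for i in range(n):
    for j in range(n):
        assert (v[i] @ u[j] == (1 if i == j else 0)).all()   # v_i u_j = delta_{i,j}
    assert (u[i] @ v[i] @ u[i] == u[i]).all()
    assert (v[i] @ u[i] @ v[i] == v[i]).all()

e = sum(ui @ vi for ui, vi in zip(u, v))   # e = sum_i u_i v_i
assert (e @ e == e).all()                  # e is idempotent (here e is the identity)
print("idempotent decomposition data verified")
```

In this model the decomposition Re ≅ ⊕_i R(u_iv_i) is just the splitting of k³ into its coordinate lines.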
For a dg-algebra A and any idempotent e ∈ A, the A-module Ae is an A ∂ −dmod summand if for any a ∈ A, we have ∂(abe) ∈ Ae for any be ∈ Ae. By the Leibniz rule, ∂(abe) = ∂(a)be + (−1) |a| a∂(b)e + (−1) |a|+|b| ab∂(e) = ∂(ab)e + (−1) |a|+|b| ab∂(e) so that ∂(abe) ∈ Ae if ∂(e) = 0. The computation of the differential of an idempotent e is important for determining if Ae is compact in the derived category D(A), since ∂(e) = 0 implies that Ae is cofibrant and has a compact image in D(A).
The following is a straightforward adaptation of Lemma 5.3 in [EQ16a].

(1) F• is a filtration by super dg-modules, so that Av_iu_i is a super dg-module and the subquotient isomorphism is an isomorphism of super dg-modules.
(2) The following equations are satisfied for all i ∈ I:
v_i ∂(u_i) ∈ F_{<i}, (4.6)
u_i ∂(v_i) ∈ F_{<i}. (4.7)
Definition 4.5. If the filtration F • in Proposition 4.4 satisfies ∂(e) = 0 and ∂(v i u i ) = 0 for all i ∈ I, then it is called a fantastic filtration on the dg-module Ae.
The main advantage of the fantastic filtration is that it gives a direct sum decomposition of the images of idempotents as dg-modules. By a straightforward extension of [EQ16a, Corollary 5.8] the following theorem holds.

5. The odd 2-category for sl(2)

5.1. The odd nilHecke ring. The odd nilHecke ring ONH_a is the graded unital associative ring generated by elements x_1, . . . , x_a of degree 2 and elements ϕ_1, . . . , ϕ_{a−1} of degree −2, subject to the relations
ϕ_i² = 0,   ϕ_i ϕ_{i+1} ϕ_i = ϕ_{i+1} ϕ_i ϕ_{i+1}, (5.1)
x_i ϕ_i + ϕ_i x_{i+1} = 1,   ϕ_i x_i + x_{i+1} ϕ_i = 1, (5.2)
x_i x_j + x_j x_i = 0 (i ≠ j),   ϕ_i ϕ_j + ϕ_j ϕ_i = 0 (|i − j| > 1), (5.3)
x_i ϕ_j + ϕ_j x_i = 0 (i ≠ j, j + 1).
(5.4)

5.2. The odd categorified quantum group. In [BE17b] Brundan and Ellis give a minimal presentation of the 2-category U_{q,π} that requires the invertibility of certain maps. Here we give a more traditional presentation by including the additional relations on 2-morphisms that are equivalent to the invertibility of these maps. Brundan and Ellis also first define a graded 2-supercategory U and then pass to its (Q, Π)-envelope U_{q,π} in the sense of Section 2.2. Here we define the (Q, Π)-envelope directly, adopting the convention that a 1-morphism of the form Q^m Π^a F is written as Π^a F ⟨m⟩; that is, we use the grading shift notation ⟨m⟩ rather than Q^m.
Definition 5.1. The odd 2-supercategory U q,π = U q,π (sl 2 ) is the 2-supercategory consisting of
• objects λ for λ ∈ ℤ,
• for a signed sequence ε = (ε 1 , ε 2 , . . . , ε m ), with ε 1 , . . . , ε m ∈ {+, −}, define
E ε := E ε1 E ε2 . . . E εm
where E_+ := E and E_− := F. A 1-morphism from λ to λ′ is a formal finite direct sum of strings Π^a E_ε 1_λ ⟨t⟩ = Π^a 1_{λ′} E_ε 1_λ ⟨t⟩ for any a, t ∈ ℤ and signed sequence ε such that λ′ = λ + 2 Σ_{j=1}^m ε_j.
• 2-morphisms are k-modules spanned by (vertical and horizontal) composites of identity 2-morphisms and the following tangle-like diagrams
y y • λ λ+2 : Π a E1 λ t → Π a+1 E1 λ t + 2 y y y y λ : Π a EE1 λ t → Π a+1 EE1 λ t − 2 # # t t λ : Π a 1 λ t → Π a F E1 λ t + 1 + λ × × λ : Π a 1 λ t → Π a+λ+1 EF 1 λ t + 1 − λ λ : Π a F E1 λ t → Π a+λ+1 1 λ t + 1 + λ q q λ : Π a EF 1 λ t → Π a 1 λ t + 1 − λ (5.5)
for every a, t, λ ∈ ℤ. The (ℤ × ℤ₂)-degree of a 2-morphism is the difference between the degrees of the target and the source. Note in particular that the ℤ₂-degree of the right pointing cap and cup are not the mod 2 reductions of the ℤ-degree.
Diagrams are read from right to left and bottom to top. The rightmost region in our diagrams is usually colored by λ. The identity 2-morphism of the 1-morphism E1 λ is represented by an upward oriented line (likewise, the identity 2-morphism of F 1 λ is represented by a downward oriented line). The fact that we are defining a 2-supercategory means that diagrams with odd parity skew commute.
The 2-morphisms satisfy the following relations (see [BE17b] for more details).
(1) (Odd nilHecke) The E's carry an action of the odd nilHecke algebra. Using the adjoint structure this induces an action of the odd nilHecke algebra on the F 's. We use the following notation for the dotted bubbles:
• * +m λ := λ • m+λ−1 , • * +m λ := λ • m−λ−1 , so that deg • * +m λ = deg • * +m λ = 2m.
The degree 2 bubbles are given a special notation as follows:
λ := • * +1 λ = λ • λ , λ ≥ 0, • * +1 λ 1 = λ • −λ , λ ≤ 0. (5.12)
By the superinterchange law this bubble squares to zero
λ 2 = 0 (5.13)
We call a clockwise (resp. counterclockwise) bubble fake if m + λ − 1 < 0 (resp. m − λ − 1 < 0). The fake bubbles are defined recursively by the homogeneous terms of the equation
r,s≥0 r+s=t • * +2r • * +2s λ = δ t,0 . (5.14) • * +2n+1 λ = λ • * +2n , • * +2n+1 λ = λ • * +2n(−1) f2 y y • f1 • * +f2 • f3 λ , y y λ = − y y λ + f 1 +f 2 +f 3 =−λ−1 (−1) f2 y y • f1 • * +f2 • f3
λ .
(5.21)
Remark 5.2. There are no 1-morphisms that change the weight λ by an odd number. This implies that the 2-category splits U q,π ∼ = U even q,π ⊕ U odd q,π (5.22)
where U even q,π only has even weights and U odd q,π only has odd weights.
We denote by U the underlying graded super 2-category of U_{q,π}. That is,

Hom_U(x, y) := ⊕_{a,t∈ℤ} Hom_{U_{q,π}}(x, Π^a y ⟨t⟩).
5.3. Additional properties of U_{q,π}. For later convenience we record several relations that follow from those in the previous section; see [BE17b] for more details.
(1) (Dot Slide Relations)
y y • n λ = (−1) ⌊ n 2 ⌋ y y • n λ • n λ = (−1) ⌊ n 2 ⌋ • n λ (5.23) (−1) ⌊ n 2 ⌋ y y • n λ = y y • n λ if n is even (−1) λ y y • n λ + 2 y y • n−1 λ if n is odd (5.24) (−1) ⌊ n 2 ⌋ • n λ = •n λ if n is even (−1) λ •n λ + 2 • n−1 λ if n is oddy y λ = λ r=0 (−1) (λ+r+1) x x • r λ • * +(λ−r) (5.32) λ = −λ r=0 (−1) (λ+r) • (λ−r) λ • * +r (5.33) follow.
5.4. The nondegeneracy conjecture. A spanning set for the space Hom_{U_{q,π}}(x, y) between arbitrary 1-morphisms x, y was defined in [EL16, Section 3.4] and simplified in [BE17b, Section 8]. In both instances it was conjectured that this spanning set is a basis. For our classification of differentials we need bases for certain hom spaces, which amounts to a special case of the full nondegeneracy conjecture.
Weak nondegeneracy conjecture The following Hom spaces are spanned over k by the elements predicted by the non-degeneracy conjecture:
Hom 2 Uq,π (½ λ , Π1 λ ) = λ Hom 4 Uq,π (E½ λ , EΠ1 λ ) = y y • λ 2 , y y • λ , y y λ • * +2 Hom 2 Uq,π (EE½ λ , EE½ λ ) = y y y y • λ , y y • y y λ , y y y y λ , y y y y •2 λ , y y y y • • λ , y y • y y 2 λ , y y y y • λ , y y • y y λ , • * +2 y y y y λ (5.34)
The results of [EL16,Theorem 7.1] and [BE17b] coupled with the results from [KKO13,KKO14] imply that the 2-category U q,π admits a 2-representation on categories of modules over cyclotomic odd nilHecke algebras. It should be possible to show the spanning sets above are a basis using this action. However, it is difficult to extract formulas for the bubbles under this 2-representation so the weak form of the nondegeneracy conjecture remains open. Note that from these assumptions and the adjunction axioms it is possible to deduce bases for hom spaces involving caps and cups.
6. Derivations on the odd 2-category
In this section we give a classification of derivations on the odd 2-category U q,π assuming the weak nondegeneracy conjecture from Section 5.4. Assuming these spanning sets form a basis we are able to reduce degrees of freedom by comparing coefficients of basis elements. Even without the weak nondegeneracy conjecture, we arrive at well defined derivations that suit our purposes for categorification.
Here we look for derivations that are compatible with a natural dg-structure on odd (skew) polynomials, which was shown by Ellis and Qi to extend to the odd nilHecke algebra. To that end, we restrict our attention to differentials of bidegree (2, 1). Recall that a derivation on a 2-category is just a derivation on the space of 2-morphisms which satisfies the Leibniz rule for both horizontal and vertical composition of 2-morphisms.
6.1. General form of derivations. The most general form of a bidegree (2, 1) derivation on the generating 2-morphisms is given by (6.1) ∂ y y y y λ := β 1,λ y y y y λ + β 2,λ y y y y • λ + β 3,λ y y • y y λ + β 4,λ y y y y λ (6.2)
∂ λ := a λ−2 • λ + b λ−2 λ ∂ λ :=ā λ • λ +b λ x x λ (6.3) ∂ λ := c λ • λ + d λ λ ∂ x x λ :=c λ−2 x x • λ +d λ−2 x x λ (6.4)
for some coefficients in k. The image of every identity 2-morphism is zero. This definition is extended to arbitrary composites using the Leibniz rule. By Remark 5.2 the derivations can be defined independently on U^even_{q,π} and on U^odd_{q,π}. In order for this assignment to define a derivation on U_{q,π}, it must respect the defining relations of the 2-category U_{q,π}. For example, let us consider the right adjunction axiom (5.8). The left-hand side is a vertical composite of two 2-morphisms, call them x and y. The image of the right-hand side of (5.8) under ∂ is zero; hence, using the linear independence of the 2-morphisms in (6.6), we obtain a relationship between the coefficients: (a_λ + ā_λ) = 0, (b_λ + b̄_λ) = 0.
Lemma 6.1. For the map ∂ : U q,π → U q,π defined by (6.1)-(6.4) to preserve the odd nilHecke relations, the right adjunction axioms, and the parity left adjoint relations, the coefficients must take the form ∂ y y
• λ := α 1,λ y y • λ 2 + α 2 y y • λ (6.7) ∂ y y y y λ := β 1 , λ y y y y λ + (β 1,λ − α 1,λ ) y y y y • λ + (α 1,λ − β 1,λ ) y y • y y λ + α 2 y y y y λ (6.8) ∂ λ := a λ−2 • λ + b λ−2 λ ∂ λ := −a λ • λ − b λ x x λ (6.9) ∂ λ := c λ • λ + d λ λ ∂ x x λ := (−1) λ c λ−2 x x • λ − d λ−2 x x λ (6.10)
where 2β 1,λ = α 1,λ+2 + α 1,λ .
Proof. This is a direct computation using the Leibniz rule. The right adjunction axiom implies ā_λ = −a_λ and b̄_λ = −b_λ. Similarly, the parity left adjoint equation implies c̄_λ = (−1)^λ c_λ and d̄_λ = −d_λ. The first nilHecke relation in (5.6) implies
0 = ∂ y y y y λ = β 1,λ y y y y λ + β 2,λ y y y y • λ + β 3,λ y y • y y λ + β 4,λ y y y y λ − β 1 , λ y y y y λ − β 2,λ • y y y y λ − β 3,λ • y y y y λ − β 4,λ y y y y λ = (−β 2,λ − β 3,λ )
y y y y λ which implies β 3,λ = −β 2,λ . Making these substitutions the odd nilHecke relation (5.7) involves the terms
∂ y y y y • λ = α 1,λ+2 y y y y • 2 λ + α 2,λ+2 y y y y • λ + α 3,λ+2 y y y y • * +2 λ − β 1,λ y y y y • λ − β 2,λ y y y y • 2 λ + β 2,λ y y y y • • λ − β 3,λ y y y y • λ = (α 1,λ+2 − β 2,λ ) y y y y • 2 λ − (α 2,λ+2 + β 3,λ ) y y y y • λ + α 3,λ+2 • * +2 y y y y λ + 3α 3,λ+2 y y y y • 2 λ − β 1,λ y y y y • λ + β 2,λ y y y y • • λ
where the last equality follows from bubble slide relation (2). Similarly,
∂ y y y y • λ = −(β 2,λ + α 1,λ ) y y y y • 2 λ + (β 3,λ + α 2,λ ) y y y y • λ − α 3,λ • * +2 y y y y λ + (β 1,λ − β 2,λ − α 1,λ ) y y • y y λ + (β 2,λ + α 1,λ ) y y y y • λ − β 2,λ y y y y • • λ − (β 3,λ + α 2,λ )
y y y y λ Therefore, (5.7) implies ∂ y y y y • λ + ∂ y y y y • λ = 0 so assuming the weak non-degeneracy conjecture we get the following set of equations:
α 1,λ+2 − α 1,λ − 2β 2,λ = 0 α 2,λ+2 − α 2,λ = 0 α 3,λ+2 − α 3,λ = 0 α 3,λ+2 = 0 β 3,λ + α 2,λ = 0 β 1,λ − β 2,λ − α 1,λ = 0 (6.11)
From this we can deduce that α_{2,λ} does not depend on the weight λ, either in U^even_{q,π} or in U^odd_{q,π}, so we set α_2 := α_{2,λ} = α_{2,λ+2}, and α_{3,λ} = 0 for all λ. If we combine the first and the last equations we get

2β_{1,λ} = α_{1,λ+2} + α_{1,λ}. (6.12)

Equation (6.12) is redundantly implied by preserving the second nilHecke relation of (5.6).
Lemma 6.2. For n ≥ 0, the map ∂ in Lemma 6.1 satisfies
∂ y y • λ+2 λ n = α 1,λ δ n,odd y y • n+1 λ+2 λ + (−1) n+1 nα 2 y y •n λ+2 λ (6.13) ∂ • n λ−2 λ = (−2a λ − α 1,λ )δ n,odd • λ−2 λ n+1 + (−1) n+1 nα 2
• n λ−2 λ (6.14)
Proof. The claim follows by induction on the number of dots using the Leibniz rule.
6.2. Derivations and bubble relations. The remaining relations in U_{q,π} involve dotted bubbles. We first compute the image under the map defined in Lemma 6.1 of the odd bubble defined in (5.12). By a direct computation we have
∂ λ = (a λ−2 + c λ−2 + α 1,λ−2 δ λ,odd ) λ • * +2 if λ ≥ 0 (a λ + c λ + α 1,λ δ λ,odd ) λ • * +2
if λ ≤ 0 (6.15) Lemma 6.3. For the map ∂ defined in Lemma 6.1 to preserve the odd cyclicity relation (5.17)
∂ y y λ • = 2 ∂ λ − ∂ y y λ • we must have c λ = −a λ − δ λ,odd α 1,λ . (6.16)
Proof. Applying ∂ to (5.17) implies
(−2a λ − α 1,λ ) • 2 λ + α 2 • λ = (2c λ + (−1) λ+1 α 1,λ ) • 2 λ + α 2
• λ so comparing coefficients of the basis elements in the weak nondegeneracy conjecture implies 2c λ + (−1) λ+1 α 1,λ = −2a λ − α 1,λ and the result follows.
The lemma implies that any derivation ∂ must kill the odd bubble ∂ λ = 0, (6.17) so that the centrality of the odd bubble relation (5.16) holds trivially. Note that the real odd bubble is equal to the fake odd bubble using the relations of odd 2-category U q,π
λ • * +1 = λ • * +1
for all λ ∈ . This is an immediate consequence of [BE17b, equation (5.8)].
Lemma 6.4. The derivation of an odd labeled (real) bubble is zero. That is, for n ≥ 0,
∂ λ • * +2n+1 = 0, for λ ≥ 0 ∂ λ • * +2n+1 = 0 for λ ≤ 0. (6.18)
Proof. The proof of the statement follows easily using the relation (5.15), the previous Lemma, and the Leibniz rule.
Lemma 6.5. For the map ∂ defined in Lemma 6.1 to preserve the degree zero bubble relation (5.11)
we have − a λ − b λ + c λ − (−1) λ d λ − α 1,λ δ λ,even + (λ + 1)α 2 = 0 (6.19)
for all λ ∈ ℤ, so that any derivation of a dotted bubble must be given by the formulas (6.22) and (6.23) computed in the proof below.

Proof. For n ≥ 0 the image under ∂ of the n-labelled dotted bubble is given by ∂ λ • * +n = δ n,even (a λ−2 + b λ−2 − c λ−2 + (−1) λ d λ−2 + α 1,λ−2 δ λ,even − (n + λ − 1)α 2 ) λ • * +n+1 (6.22) for λ ≥ 0, and
∂ λ • * +n = δ n,even (−a λ − b λ + c λ + (−1) λ+1 d λ − α 1,λ δ λ,even − (n − λ − 1)α 2 ) λ • * +n+1 (6.23)
for λ ≤ 0. The identity by (5.11) then implies that the degree zero bubble vanishes in the image of ∂
0 = ∂ λ • * +0 = a λ−2 + b λ−2 − c λ−2 + (−1) λ d λ−2 + α 1,λ−2 δ λ,even − (λ − 1)α 2 λ for λ ≥ 1, 0 = ∂ λ • * +0
= − a λ − b λ + c λ − (−1) λ d λ − α 1,λ δ λ,even + (λ + 1)α 2 λ for λ ≤ 1, so the result follows.
Remark 6.6. The computations above are technically for real bubbles -those with a positive number of dots. However, using odd infinite Grassmannian relation (5.14) and (5.15) to express fake bubbles in terms of the real bubbles, the same formulas given in Lemma 6.4 and 6.5 will apply to fake bubbles as well.
If we combine (6.16) with the equation (6.19), obtained from requiring that the image under ∂ of the degree zero bubble vanishes, we can express d_λ as

d_λ = (−1)^{λ+1} (2a_λ + α_{1,λ} + b_λ − (λ + 1)α_2) for all λ ∈ ℤ. (6.24)

6.3. Derivations and curl relations. Before proving the odd sl(2)-relations it is convenient to study the image of some of the curl relations under the map ∂. We continue using the definition from Lemma 6.1, imposing the additional constraints from (6.16) and (6.24).
Lemma 6.7. Fix either U even q,π or U odd q,π . For the map ∂ defined in Lemma 6.1 to preserve the curl relations
•−λ λ = λ for λ ≤ 0, y y • λ λ = λ for λ ≥ 0, (6.25)
we must have
α_1 := α_{1,λ} = α_{1,λ+2} = β_{1,λ} = β_{1,λ+2}, for all λ ∈ ℤ. (6.26)
Proof. This is a straightforward computation after deriving the formulas for sideways crossings. For the λ ≥ 0 case we have
∂ y y • λ λ = −a λ • λ + (2a λ−2 + α 1,λ − b λ + b λ−2 + (1 − λ)α 2 + (−1) λ d λ−2 ) λ whereas ∂ λ = −a λ • λ − b λ λ
equating coefficients of the corresponding terms implies
−b λ = 2a λ−2 + α 1,λ − b λ + b λ−2 + (1 − λ)α 2 + (−1) λ d λ−2 or (−1) λ+1 d λ−2 = 2a λ−2 + α 1,λ + b λ−2 + (1 − λ)α 2 for all λ ≥ 0.
Likewise, the λ ≤ 0 case implies
(−1) λ d λ = (λ + 1)α 2 − b λ − 2a λ − α 1,λ+2 for all λ ≤ 0.
Hence, (6.26) must hold for all values of λ. Then combining (6.26) with (6.24) implies α 1 := α 1,λ−2 = α 1,λ which together with (6.12) implies β 1,λ = β 1,λ−2 = α 1 .
6.4. Derivations and odd sl(2) relations.
Lemma 6.8. The map ∂ defined in Lemma 6.1 with the constraints from (6.26) satisfies the following identities:
∂ y y y y λ = (−a λ−2 − α 1 ) y y λ − (a λ−2 + α 1 δ λ,even ) x x λ ∂ y y y y λ = (a λ + α 1 δ λ,even ) y y λ + (a λ + α 1 ) λ
Proof. The sideways crossings take the form
∂ y y λ = (a λ − a λ−2 ) y y • λ + (b λ − b λ−2 − α 2 ) y y λ + (α 1 + a λ ) λ ∂ y y λ = (−a λ + a λ−2 ) • y y λ − (b λ − b λ−2 − α 2 ) y y λ + (−a λ−2 − α 1 δ λ,even )
x x λ (6.27) and the result follows by direct computation.
Lemma 6.9. The map ∂ defined in Lemma 6.1 with the constraints from (6.26) preserves the odd sl(2) relations (5.21) without any additional constraints.
Proof. We prove the first relation in (5.21). The second can be proven similarly. First we compute
∂ r+n+k =λ−1 (−1) k y y • r • * +k • n λ = r ′ +n+k =λ r ′ ≥1
(−1) n (a λ−2 + α 1 δ λ+r,even )
y y • r ′ • * +k • n λ + r+n ′ +k =λ n ′ ≥1 (−1) n ′ +1+k (a λ−2 + α 1 δ n ′ ,even ) y y • r • * +k • n ′ λ + r+n+k ′ =λ (−1) n δ k ′ ,odd (−2a λ−2 − α 1 ) y y • r • * +k ′ • n λ
After simplifying this reduces to = −(a λ−2 + α 1 δ λ,even ) n+k =λ (−1) n y y
• * +k • n λ − (a λ−2 + α 1 ) r+k =λ (−1) 1+k y y • r • * +k λ
The claim follows using Lemma 6.8 and the curl relations (5.32) and (5.33).
6.5. Classification of derivations. We summarize our results up to this point in the following:

Proposition 6.10. Assuming the weak nondegeneracy conjecture from Section 5.4, the most general bidegree (2, 1) derivation ∂ of the odd 2-category U^even_{q,π} or U^odd_{q,π} has the following form on generating 2-morphisms:
∂ y y • λ = α 1 y y • λ 2 + α 2 y y • λ ∂ y y y y λ = α 1 y y y y λ − α 2 y y y y λ (6.28) ∂ λ = a λ−2 • λ + b λ−2 λ ∂ λ = −a λ • λ − b λ x x λ (6.29) ∂ λ = c λ • λ + d λ λ ∂ x x λ = (−1) λ c λ−2 x x • λ − d λ−2 x x λ (6.30)
with relations
c λ = −a λ − δ λ,odd α 1 (6.31) d λ = (−1) λ+1 (2a λ + α 1 + b λ − (λ + 1)α 2 ) (6.32)
7. Differentials and fantastic filtrations

7.1. Classification of differentials.
Proposition 7.1. Assuming the weak nondegeneracy conjecture from Section 5.4, the most general bidegree (2, 1) differential ∂ (i.e. ∂² = 0) on the space of 2-morphisms of the odd 2-category U^even_{q,π} or U^odd_{q,π} has the following form on generating 2-morphisms:
∂ y y • λ = α 1 y y • λ 2 ∂ y y y y λ = α 1 y y y y λ (7.1) ∂ λ = a λ−2 • λ + b λ−2 λ ∂ λ = −a λ • λ − b λ x x λ (7.2) ∂ λ = c λ • λ + d λ λ ∂ x x λ = (−1) λ c λ−2 x x • λ − d λ−2 x x λ (7.3)
with relations
c λ = −a λ − δ λ,odd α 1 (7.4) d λ = (−1) λ+1 (2a λ + α 1 + b λ ) (7.5) a λ (a λ + α 1 ) = 0. (7.6)
Proof. We compute ∂ 2 of each generating 2-morphism from the general derivation in Proposition 6.10 and set the resulting equation equal to zero. This produces the equations α 2 (2 + α 1 ) = 0 α 1 α 2 = 0 a λ (a λ + α 1 ) = 0 a λ α 2 = 0 (a λ + α 1 δ λ,odd )(a λ + α 1 δ λ,even ) = 0 α 2 (a λ + α 1 δ λ,odd ) = 0. (7.7)
Hence, for ∂ 2 = 0 we must have α 2 = 0 and a λ (a λ + α 1 ) = 0.
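The reduction of the system (7.7) can be double-checked mechanically. The following sympy sketch is our own sanity check, not part of the paper's argument; the symbol names are ours, and the two delta-factor equations are expanded into their two parity cases. It verifies that α₂ = 0 together with a_λ ∈ {0, −α₁} kills every equation, while a nonzero α₂ is inconsistent:

```python
import sympy as sp

a, alpha1, alpha2 = sp.symbols('a_lam alpha1 alpha2')

# The system (7.7), with the delta-factor equations expanded
# into their lambda-even / lambda-odd cases.
eqs = [
    alpha2 * (2 + alpha1),
    alpha1 * alpha2,
    a * (a + alpha1),
    a * alpha2,
    a * (a + alpha1),        # (a + alpha1*d_odd)(a + alpha1*d_even), lambda even
    (a + alpha1) * a,        # same product, lambda odd
    alpha2 * a,              # alpha2*(a + alpha1*d_odd), lambda even
    alpha2 * (a + alpha1),   # lambda odd
]

# alpha2 = 0 together with a in {0, -alpha1} satisfies every equation ...
for a_val in (0, -alpha1):
    subs = {a: a_val, alpha2: 0}
    assert all(sp.expand(e.subs(subs)) == 0 for e in eqs)

# ... while the second equation forces alpha2 = 0 for generic alpha1
assert sp.solve(eqs[1], alpha2) == [0]
```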
Note that Lemmas 6.4 and 6.5 imply that the differential kills all dotted bubbles:
∂ λ • * +n = ∂ λ • * +n = 0
for all λ ∈ ℤ and n ≥ 0.
7.2. Fantastic filtrations on EF and FE. In this section we show that the odd sl(2) isomorphisms (5.21) give rise to differentials on U_{q,π} providing fantastic filtrations for EF1_λ and FE1_λ. We refer the reader to Section 4.4 for the preliminaries on fantastic filtrations.
For each λ ∈ ℤ define I = {0, 1, . . . , |λ|}. We define data {u_i, v_i}_{i∈I} giving rise to an idempotent factorization determined by the odd sl(2) relation. We begin with the case λ ≥ 0, corresponding to the first relation in (5.21). Recall the family of 2-categorical differentials defined in Proposition 6.10.
Consider the set of objects
X λ := {EF 1 λ , F E1 λ , 1 λ 1 − λ + 2c |c = 0, 1, . . . λ − 1},
and its endomorphism dg-algebra R = END Uq,π (X λ ). Here our investigation departs from [EQ16a] in that the most natural filtration
u n := r≥0 (−1) (λ+n+r+1) x x • r λ •−n−r−2 (0 ≤ n ≤ λ − 1), u λ := y y λ (7.8) v n := • λ n (0 ≤ n ≤ λ − 1), v λ := −
y y λ on the morphism EF 1 λ leads to a trivial differential when we impose the fantastic filtration condition v i ∂(u j ) = 0, for i ≤ j. (7.9) In Definition 7.2 we define an order ≺ on I for which the maps in (7.8) give rise to fantastic filtrations.
We check v_i ∂(u_j) = 0 for 0 ≤ i ≤ j ≤ λ − 1.
0 = v i ∂(u j ) = r≥0 (−1) λ+j+r+1 (α 1 δ r,odd + (−1) r+λ c λ−2 ) • (r+i+2−λ)+ * λ • * +(λ−j−r−1) + (−1) r+1 d λ−2 • (r+i+1−λ)+ * λ • (λ−j−r−1)+ * = i−j+1 r ′ =max(0,i−λ+2) (−1) i+j+r ′ −1 (α 1 δ r ′ −λ+i,odd + (−1) r ′ +i c λ−2 ) • r ′ + * λ • * +(i−j+1−r ′ ) + i−j r ′ =0 (−1) λ+j d λ−2 • r ′ + * λ • i−j−r ′ + *
where we set r ′ = r − λ + 2 + i in the first sum and r ′ = r − λ + 1 + i in the second. Note that only the even bubbles are nonzero in the second sum by (5.13), so that by (5.14) this term simplifies
= i−j+1 r=max(0,i−λ+2) (−1) i+j+r−1 (α 1 δ r−λ+i,odd + (−1) r+i c λ−2 ) • r+ * λ • * +(i−j+1−r) + δ i,j (−1) λ+j d λ−2 λ = i−j+1 r=max(0,i−λ+2) (−1) i+j+r+1 α 1 δ i+r,odd + (−1) j a λ−2 • r+ * λ • * +(i−j+1−r) (7.10) + δ i,j (−1) j−1 (2a λ−2 + α 1 + b λ−2 ) λ
where we used (7.4) and (7.5) to eliminate c λ−2 and d λ−2 .
If we are interested in the case when i ≤ j then this equation only provides constraints when i = j and when j = i + 1. At i = j we get
−α₁δ_{i,odd} + (−1)^i a_{λ−2} + α₁δ_{i,even} + (−1)^i a_{λ−2} − (−1)^i (2a_{λ−2} + α₁ + b_{λ−2}) = 0, if i ≤ λ − 2,
α₁δ_{i,even} + (−1)^i a_{λ−2} + (−1)^{i−1} (2a_{λ−2} + α₁ + b_{λ−2}) = 0, if i = λ − 1,
which imply
b_{λ−2} = 0 (7.11)
a_{λ−2} = −α₁δ_{λ,even}. (7.12)
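The two parity cases above can be confirmed symbolically; the sketch below is our own check (with a = a_{λ−2} and b = b_{λ−2} as symbols), showing that the i ≤ λ−2 equations force b_{λ−2} = 0, and that the i = λ−1 equation with b = 0 then forces a_{λ−2} = −α₁δ_{λ,even}:

```python
import sympy as sp

a, b, alpha1 = sp.symbols('a b alpha1')

def delta(cond):
    # Kronecker-style delta used in the parity conditions
    return 1 if cond else 0

# i <= lambda - 2: both parities of i occur, giving -b = 0 and b = 0
for i in (0, 1):
    eq = (-alpha1 * delta(i % 2 == 1) + (-1) ** i * a
          + alpha1 * delta(i % 2 == 0) + (-1) ** i * a
          - (-1) ** i * (2 * a + alpha1 + b))
    assert sp.expand(eq) in (b, -b)

# i = lambda - 1, with b = 0: i odd (lambda even) forces a = -alpha1,
# i even (lambda odd) forces a = 0, i.e. a = -alpha1 * delta_{lambda,even}
for i in (0, 1):
    eq = (alpha1 * delta(i % 2 == 0) + (-1) ** i * a
          + (-1) ** (i + 1) * (2 * a + alpha1))
    assert sp.expand(eq) == (a + alpha1 if i % 2 == 1 else -a)
```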
At j = i + 1 ≤ λ − 1 we must have r = 0 in (7.10), which requires
α₁δ_{i,odd} + (−1)^{i+1} a_{λ−2} = 0, (7.13)
or
α₁δ_{i,odd} = −(−1)^{i+1} a_{λ−2} = (−1)^{i+1} α₁δ_{λ,even}. (7.14)
If λ and i are both even, or if they are both odd, this implies that α 1 = 0 and the differential collapses. Note that if i is odd this reduces to (7.12). To avoid the collapse of the differential we modify the total order on I. With this modified order we still must verify that v i+1 ∂(u i ) = 0 when i and λ have the same parity. Expressed in our previous i, j notation this condition says v i ∂(u j ) = 0 when i = j + 1 ≤ λ − 1 and j, λ both even, or both odd. From (7.10) we see that this amounts to checking that 2 r=max(0,j+1−λ+2)
(−1) 2+r α 1 δ j+1+r,odd + (−1) j a λ−2 • r+ * λ • * +(2−r) (7.17)
which requires α 1 δ j+1,odd + (−1) j a λ−2 = 0 (7.18) since the odd bubble squares to zero. Since we assume j and λ have the same parity this agrees with (7.12).
Next we consider the case i = j = λ. Using the derivation of the sideways crossing from (6.27) implies
v λ ∂(u λ ) = (a λ − a λ−2 ) • y y y y λ + (b λ − b λ−2 )
y y y y λ + (a λ−2 + α 1 δ λ,even ) y y λ Together with (7.11) and (7.12) the termwise vanishing of the coefficients above implies that
a λ = a λ−2 = −α 1 δ λ,even b λ = b λ−2 = 0
Then we can further simplify the remaining coefficients from (7.4) and (7.5) to
c λ = (−1) λ α 1 , d λ = α 1 (7.19)
and all the coefficients have been reduced to a single parameter α 1 .
The only remaining cases are v i ∂(u λ ) for i < λ. With the constraints derived thus far it is not hard to show that ∂(u λ ) = 0, so that v i ∂(u λ ) = 0 is satisfied for all i < λ.
Definition 7.3. Define a bidegree (2,1) differential ∂_α on the space of 2-morphisms of the odd 2-category U^{even}_{q,π} or U^{odd}_{q,π} given on generating 2-morphisms by:
∂ α y y • λ = α y y • λ 2 ∂ α y y y y λ = α y y y y λ ∂ α λ = −αδ λ,even • λ ∂ α λ = αδ λ,even • λ ∂ α λ = (−1) λ α • λ + α λ ∂ α x x λ = α x x • λ − α x x λ
Proposition 7.4. Consider either U^{even}_{q,π} or U^{odd}_{q,π} and suppose that ∂_α is as in Definition 7.3. Then the data {u_c, v_c}_{c∈I}, with the total order (I, ≺) from Definition 7.2, yield a fantastic filtration on EF1_λ when λ ≥ 0 and on FE1_λ when λ ≤ 0.
Proof. The requirements
∂(u_n v_n) = 0 for all 0 ≤ n ≤ λ, v_s u_t = 0 for s ≠ t,
for λ > 0 follow immediately from the axioms of U_{q,π} using (5.21), (5.31) and (5.14); see for example [BE17b, Equations (5.13) and (5.14)]. We have proven above that for λ > 0 we have v_i ∂(u_j) = 0. The case λ ≤ 0 is proven similarly using the second equation in (5.21).
8. Covering Kac-Moody algebras
In this section we review the rank one covering Kac-Moody algebra from [CHW13], see also [Cla14].
8.1. Covering quantum group. Set Q(q) π = Q(q)[π]/(π 2 − 1).
Definition 8.1. The covering quantum group U_{q,π} = U_{q,π}(sl(2)) associated to sl(2) is the Q(q)_π-algebra with generators E, F, K, K⁻¹, J, and J⁻¹ and relations
(1) KK⁻¹ = 1 = K⁻¹K, JJ⁻¹ = 1 = J⁻¹J,
(2) KE = q²EK, KF = q⁻²FK,
(3) JE = π²EJ, JF = π⁻²FJ,
(4) EF − πFE = (JK − K⁻¹)/(πq − q⁻¹).
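As a quick sanity check of these relations (ours, not from the paper), one can verify them in the standard two-dimensional representation E = E₁₂, F = E₂₁, K = diag(q, q⁻¹), J = diag(π, π⁻¹); that this representation exists is our assumption for the illustration. Since π² = 1, we may take π ∈ {1, −1} and π⁻¹ = π:

```python
import sympy as sp

q = sp.symbols('q', positive=True)

E = sp.Matrix([[0, 1], [0, 0]])
F = sp.Matrix([[0, 0], [1, 0]])
Z = sp.zeros(2, 2)

for pi in (1, -1):            # pi**2 = 1, so pi**(-1) = pi
    K = sp.diag(q, 1 / q)
    J = sp.diag(pi, pi)       # diag(pi, pi**(-1)) with pi**(-1) = pi
    assert sp.simplify(K * E - q ** 2 * E * K) == Z        # relation (2)
    assert sp.simplify(K * F - q ** (-2) * F * K) == Z
    assert sp.simplify(J * E - pi ** 2 * E * J) == Z       # relation (3)
    assert sp.simplify(pi ** 2 * J * F - F * J) == Z       # JF = pi^(-2) FJ
    lhs = E * F - pi * F * E                               # relation (4)
    rhs = (J * K - K.inv()) / (pi * q - 1 / q)
    assert sp.simplify(lhs - rhs) == Z
```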
Define the (q, π)-analogues of integers, factorials, and binomial coefficients by
[n] = ((πq)^n − q^{−n})/(πq − q^{−1}), [a]! = ∏_{i=1}^{a} [i], \binom{n}{a} = (∏_{i=1}^{a} [n + i − a])/[a]!.
Note as in [CHW13] that \binom{n}{a} = [n]!/([a]![n−a]!) for n ≥ a ≥ 0 and [−n] = −π^n [n]. Let A = ℤ[q, q⁻¹], A_π = ℤ[q, q⁻¹, π]/(π² − 1), and Q(q)_π = Q(q)[π]/(π² − 1).
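These identities are easy to confirm symbolically. The sketch below is our own illustration (function names are ours): it computes the (q,π)-integers at π = ±1 and checks [−n] = −π^n[n] and the agreement of the two binomial descriptions:

```python
import sympy as sp

q = sp.symbols('q', positive=True)

def qp_int(n, pi):
    # (q,pi)-integer [n] = ((pi*q)**n - q**(-n)) / (pi*q - q**(-1))
    return sp.cancel(((pi * q) ** n - q ** (-n)) / (pi * q - 1 / q))

def qp_fact(n, pi):
    out = sp.Integer(1)
    for i in range(1, n + 1):
        out *= qp_int(i, pi)
    return out

for pi in (1, -1):  # pi**2 = 1 in A_pi
    # [-n] = -pi**n [n]
    for n in range(1, 6):
        assert sp.simplify(qp_int(-n, pi) + pi ** n * qp_int(n, pi)) == 0
    # binomial: product formula agrees with [n]!/([a]![n-a]!) for n >= a >= 0
    n, a = 5, 2
    binom_prod = sp.prod([qp_int(n + i - a, pi) for i in range(1, a + 1)]) / qp_fact(a, pi)
    binom_fact = qp_fact(n, pi) / (qp_fact(a, pi) * qp_fact(n - a, pi))
    assert sp.simplify(binom_prod - binom_fact) == 0
```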
The idempotented (or modified) form U̇_{q,π} of the covering algebra U_{q,π} is obtained by replacing the unit of U_{q,π} with a collection of orthogonal idempotents {1_λ : λ ∈ ℤ} indexed by the weight lattice of U_{q,π}. In particular, there is no need for the generators K or J since
K^{±1} 1_λ = q^{±λ} 1_λ, J^{±1} 1_λ = π^{±λ} 1_λ. (8.1)
Definition 8.2. The idempotented form U̇_{q,π} of quantum covering sl(2) is the (non-unital) Q(q)_π-algebra generated by orthogonal idempotents {1_λ : λ ∈ ℤ} and elements
1_{λ+2} E 1_λ = E 1_λ = 1_{λ+2} E, 1_λ F 1_{λ+2} = F 1_{λ+2} = 1_λ F, λ ∈ ℤ, (8.2)
subject to the covering sl 2 relation,
EF 1 λ − πF E1 λ = [λ]1 λ . (8.3)
The integral idempotented form is the A_π-subalgebra _A U̇_{q,π} ⊂ U̇_{q,π} generated by the divided powers
E^{(a)} 1_λ = E^a 1_λ / [a]!, 1_λ F^{(a)} = 1_λ F^a / [a]!. (8.4)
There are direct sum decompositions of algebras
U̇_{q,π} = ⊕_{λ,μ∈ℤ} 1_μ U̇_{q,π} 1_λ, _A U̇_{q,π} = ⊕_{λ,μ∈ℤ} 1_μ (_A U̇_{q,π}) 1_λ,
with 1_μ (_A U̇_{q,π}) 1_λ the ℤ[q, q⁻¹, π]-subalgebra spanned by 1_μ E^{(a)} F^{(b)} 1_λ and 1_μ F^{(b)} E^{(a)} 1_λ for a, b ∈ ℤ_+.
8.2. Canonical basis. Clark and Wang show in [CW13, Theorem 6.2] that the algebra U̇_{q,π} has an A_π-canonical basis Ḃ_{q,π}, extending Lusztig's basis [Lus93, Proposition 25.3.2] for sl(2), given by
(i) E^{(a)} F^{(b)} 1_λ for a, b ∈ ℤ_+, λ ∈ ℤ, λ ≤ b − a,
(ii) π^{ab} F^{(b)} E^{(a)} 1_λ for a, b ∈ ℤ_+, λ ∈ ℤ, λ ≥ b − a,
where E^{(a)} F^{(b)} 1_{b−a} = π^{ab} F^{(b)} E^{(a)} 1_{b−a}.
The importance of this basis is that the structure constants lie in ℕ[q, q⁻¹, π]/(π² − 1). In particular, for x, y ∈ Ḃ_{q,π},
xy = Σ_{z∈Ḃ_{q,π}} m^z_{x,y} z
with m^z_{x,y} ∈ ℕ[q, q⁻¹, π]/(π² − 1). Let _μ(Ḃ_{q,π})_λ denote the set of elements in Ḃ_{q,π} belonging to 1_μ(U̇_{q,π})1_λ. Then the set Ḃ_{q,π} is a union
Ḃ_{q,π} = ⋃_{λ,μ∈ℤ} _μ(Ḃ_{q,π})_λ.
8.3. Quotients of the covering algebra. The following can be found in [CW13, Section 7.3]. For our purposes we take this as the definition of the (super)algebras U̇(sl(2)) and U̇(osp(1|2)).
Proposition 8.3. Specializing π = 1, the quotient U̇_{q,π}/⟨π − 1⟩ is isomorphic to the quantum group U̇(sl(2)). Specializing π = −1, the quotient U̇_{q,π}/⟨π + 1⟩ is isomorphic to U̇(osp(1|2)), the idempotented form of the quantum superalgebra for osp(1|2). The canonical basis of U̇_{q,π} specializes at π = 1, respectively π = −1, to a canonical basis for U̇(sl(2)), resp. U̇(osp(1|2))¹.
We now describe various further specializations of the parameter q. Define a quotient of A_π by R = ℤ[q, q⁻¹, π]/(π² − 1, 1 + q²π). Here we have set q² = −π with π² = 1. Hence, at π = −1 we have q² = 1, so that R = ℤ; at π = 1 we have q² = −1, so that R = ℤ[√−1]. In R we have πq = −q⁻¹, so that the (q, π)-quantum integers become
[n]_R = (π^n q^n − q^{−n})/(πq − q^{−1}) = ((−1)^n q^{−n} − q^{−n})/(−2q^{−1}) = q^{−n+1} δ_{n,odd}. (8.5)
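The computation in (8.5) can be replayed symbolically by imposing π = −q⁻² (equivalent to 1 + q²π = 0); the snippet below is our own verification, with the function name ours:

```python
import sympy as sp

q = sp.symbols('q', positive=True)
pi = -q ** (-2)   # the relation 1 + q**2 * pi = 0 defining R, so pi*q = -q**(-1)

def qp_int_R(n):
    # covering integer [n] specialized to R
    return sp.simplify(((pi * q) ** n - q ** (-n)) / (pi * q - 1 / q))

for n in range(8):
    expected = q ** (-n + 1) if n % 2 == 1 else 0
    assert sp.simplify(qp_int_R(n) - expected) == 0

# in particular [2]_R = 0, consistent with E^2 = [2]E^(2) = 0 in (8.7) below
assert qp_int_R(2) == 0
```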
Since _A U̇_{q,π} has an A_π-canonical basis (see [CW13, Section 7.1]) we change base:
U̇_R := _A U̇_{q,π} ⊗_{A_π} R. (8.6)
Equation (8.5) implies
E² = [2]E^{(2)} = 0, F² = [2]F^{(2)} = 0 (8.7)
in U̇_R. This implies E^{(a)} = F^{(a)} = 0 in U̇_R for a > 1. Further, from the presentation of _A U̇_{q,π} given in [CW13, Proposition 6.1] we see that there are no other relations. Hence, we have the following.
Proposition 8.4. The R-algebra U̇_R has a presentation as the non-unital associative R-algebra with generators {E1_λ, F1_λ, 1_λ : λ ∈ ℤ} subject to the relations
(i) 1_λ 1_μ = δ_{λ,μ} 1_λ,
(ii) E1_λ = 1_{λ+2} E, F1_λ = 1_{λ−2} F,
(iii) EF1_λ − πFE1_λ = [λ]_R 1_λ,
(iv) E² = 0, F² = 0.
Further, U̇_R has an R-basis given by the elements²
B_R := {E^{(a)} F^{(b)} 1_λ | a, b ∈ {0, 1}, λ ≤ b − a} ∪ {π^{ab} F^{(b)} E^{(a)} 1_λ | a, b ∈ {0, 1}, λ ≥ b − a}, (8.8)
over all λ ∈ ℤ, with it understood that E^{(a)} F^{(b)} 1_{b−a} = π^{ab} F^{(b)} E^{(a)} 1_{b−a}.
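To make the basis (8.8) concrete, the short enumeration below (an illustration of ours; the string labels are arbitrary) lists the elements of B_R in a fixed weight column 1_λ, identifying the two descriptions when λ = b − a. Each column turns out to have exactly four basis elements, matching the span of 1_λ, E1_λ, F1_λ and EF1_λ:

```python
def basis_R(lam):
    """Elements of B_R lying in the weight column 1_lam, per (8.8)."""
    elems = set()
    for a in (0, 1):
        for b in (0, 1):
            if lam <= b - a:
                elems.add(('EF', a, b))
            if lam >= b - a:
                # E^(a)F^(b)1_{b-a} = pi^{ab} F^(b)E^(a)1_{b-a} when lam = b - a
                elems.add(('EF', a, b) if lam == b - a else ('FE', a, b))
    return elems

for lam in range(-4, 5):
    assert len(basis_R(lam)) == 4
```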
The algebra U̇_R splits as a direct sum U̇_R = U̇^{even}_R ⊕ U̇^{odd}_R, where U̇^{even}_R, respectively U̇^{odd}_R, corresponds to the subalgebra containing only even, respectively odd, weights λ ∈ ℤ.
8.4. Small quantum sl(2). In this section we connect the covering algebra at parameters (q, π) = (√−1, 1) with the small quantum group. The small quantum group introduced by Lusztig is a finite-dimensional Hopf algebra over the ring of cyclotomic integers [Lus90]. Here we consider the small quantum group at a fourth root of unity.
Let √−1 be a primitive fourth root of unity and consider the ring of cyclotomic integers
ℤ[√−1] = ℤ[q, q⁻¹]/Ψ₄(q) = ℤ[q, q⁻¹]/(1 + q²), (8.9)
where Ψ_n denotes the nth cyclotomic polynomial. Denote by U̇_{ℤ[√−1]} = _A U̇ ⊗_{ℤ[q,q⁻¹]} ℤ[√−1] the idempotented ℤ[√−1]-algebra defined by change of base. Set [k]_{√−1} to be the quantum integer [k] evaluated at √−1. The divided power relation implies that in U̇_{ℤ[√−1]} the elements
E^k 1_λ = [k]_{√−1} E^{(k)} 1_λ, F^k 1_λ = [k]_{√−1} F^{(k)} 1_λ (8.10)
are only nonzero when 0 ≤ k ≤ 2.
² Our use of divided power notation is not needed in the case of the fourth root of unity. We use this notation for ease in converting between the canonical basis at generic q.
The following proposition follows immediately from Propositions 8.3 and 8.4.
Proposition 8.5. The specialization U̇_R|_{π=1} = U̇_{q,π}|_{π=1, q=√−1} is isomorphic to the small quantum group u̇_{√−1}(sl(2)).
8.5. q-less subalgebra. In this section we consider the specialization (q, π) = (−1, −1), corresponding to setting the quantum parameter q = −1 in U̇(osp(1|2)). We show this specialization has a connection with the superalgebra gl(1|1) via its sl(1|1) subalgebras.
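Proposition 8.5 identifies the (q, π) = (√−1, 1) specialization with the small quantum group at a fourth root of unity. As a quick consistency check (ours, not from the paper), the classical quantum integer [2] vanishes at q = √−1, so E² = [2]E^{(2)} = 0 there, in agreement with (8.7):

```python
import sympy as sp

q = sp.I  # a primitive fourth root of unity: Psi_4(q) = 1 + q**2 = 0

def q_int(n):
    # classical quantum integer [n] = (q**n - q**(-n)) / (q - q**(-1))
    return sp.simplify((q ** n - q ** (-n)) / (q - 1 / q))

# [0], [1], [2], [3], [4] at q = sqrt(-1)
assert [q_int(n) for n in range(5)] == [0, 1, 0, -1, 0]
```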
The quantum group U_q(sl(1|1)) is the unital associative Q(q)-algebra with generators E, F, H, H⁻¹ and relations
HH⁻¹ = H⁻¹H = 1, E² = F² = 0, HE = EH, HF = FH, EF + FE = (H − H⁻¹)/(q − q⁻¹). (8.11)
This algebra also admits a modified form [Tia16] given below.
Definition 8.6. The modified form U̇(sl(1|1)) of quantum sl(1|1) is the (non-unital) Q(q)-algebra obtained from U_q(sl(1|1)) by replacing the unit by a collection of orthogonal idempotents 1_λ for λ ∈ ℤ such that
1_λ 1_μ = δ_{λ,μ} 1_λ, H1_λ = 1_λ H = q^λ 1_λ, 1_λ E = E1_λ, 1_λ F = F1_λ,
so that EF1_λ + FE1_λ = [λ]1_λ, where here [λ] denotes the usual quantum integer.
Since the action of E and F does not change the weight space λ, there is clearly a decomposition of algebras U̇(sl(1|1)) = ⊕_{λ∈ℤ} U̇(sl(1|1)) 1_λ.
The algebra U̇(sl(1|1)) admits an integral form _A U̇(sl(1|1)) defined over A = ℤ[q, q⁻¹]. The relations in U̇(sl(1|1)) are very similar to the relations in U̇_R at parameters (q, π) = (−1, −1). However, there is no specialization of q in the usual quantum integers (π = 1) that agrees with the (q, π) = (−1, −1) covering integers [n]_R. Instead, we see from (8.5) that at q = −1 the integers [λ]_R are either 0 or 1.
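The dichotomy can be seen directly from the defining formula; the following check (ours) evaluates the covering integers at (q, π) = (−1, −1):

```python
def covering_int(n, q, pi):
    # (q,pi)-integer [n] = ((pi*q)**n - q**(-n)) / (pi*q - q**(-1))
    return ((pi * q) ** n - q ** (-n)) / (pi * q - q ** (-1))

# at (q, pi) = (-1, -1): pi*q = 1 and the denominator is 2
vals = [covering_int(n, -1, -1) for n in range(8)]
assert vals == [0, 1, 0, 1, 0, 1, 0, 1]  # [lambda]_R is 0 or 1 at q = -1
```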
Proposition 8.7. There are ℤ-algebra isomorphisms
U̇^{even}_R|_{π=−1} = U̇^{even}_{q,π}|_{(q=−1, π=−1)} ≅ U̇(sl(1|1)) 1_0, U̇^{odd}_R|_{π=−1} = U̇^{odd}_{q,π}|_{(q=−1, π=−1)} ≅ U̇(sl(1|1)) 1_1 (8.12)
determined by sending E1_λ, F1_λ ∈ U̇_R to the corresponding element in U̇(sl(1|1)).
Proof. By (8.5) the quantum integer [λ]_R at q = −1 is either 0 or 1. The result follows immediately from Propositions 8.3 and 8.4.
Remark 8.8. In Kauffman and Saleur's work constructing the Alexander-Conway polynomial from U_q(sl(1|1)), they restrict their attention to a specialization (λ = 1 in their notation, see [KS91, Equation (2.1)]) that corresponds in our notation to restricting to U̇(sl(1|1)) 1_1. As noted above, the entire algebra U̇(sl(1|1)) 1_1 has a presentation over ℤ, rather than Q(q). The quantum parameter enters the Alexander story in the work of Kauffman and Saleur via the coproduct on U_q(sl(1|1)).
Recall the modified form of quantum gl(1|1), defined for example in [TVW17, Definition 3.2].
Definition 8.9. The idempotented form U̇(gl(1|1)) of quantum gl(1|1) is the (non-unital) Q(q)-algebra generated by orthogonal idempotents {1_{(λ₁,λ₂)} : (λ₁, λ₂) ∈ ℤ²} with
1_{(λ₁,λ₂)} 1_{(λ₁′,λ₂′)} = δ_{λ₁,λ₁′} δ_{λ₂,λ₂′} 1_{(λ₁,λ₂)},
and elements
1_{(λ₁+1,λ₂−1)} E 1_{(λ₁,λ₂)} = E 1_{(λ₁,λ₂)} = 1_{(λ₁+1,λ₂−1)} E, 1_{(λ₁−1,λ₂+1)} F 1_{(λ₁,λ₂)} = F 1_{(λ₁,λ₂)} = 1_{(λ₁−1,λ₂+1)} F, (8.13)
for (λ₁, λ₂) ∈ ℤ², subject to the relation
EF 1_{(λ₁,λ₂)} + FE 1_{(λ₁,λ₂)} = [λ₁ − λ₂] 1_{(λ₁,λ₂)}. (8.14)
Note that the action of E and F preserves the lines in ℤ² of slope (λ₁ − λ₂). In particular, if we restrict to weights (λ₁, λ₂) such that λ₁ − λ₂ = μ, then this subalgebra of U̇(gl(1|1)) is isomorphic to U̇(sl(1|1)) 1_μ. Hence, we have shown that the covering algebra U̇_{q,π} specializes at (q, π) = (√−1, 1) to the small quantum group for sl(2), and to a "q-less subalgebra" of modified gl(1|1) at parameters (−1, −1).
9. Categorification results
9.1. Divided power modules. In [EKL14] it was shown that ONH_n has a unique graded indecomposable projective module P_n and that there is an algebra isomorphism
ONH_n ≅ Mat_{OΛ_n}(P_n), (9.1)
where OΛ_n is the superalgebra of odd symmetric polynomials. In [EQ16c] they equip P_n with a dg-module structure compatible with the differential on ONH_n and denote the resulting (OPol_n, OΛ_n)-bimodule by Z_n.
Theorem 9.1.
(1) There is an equivalence of dg algebras (Corollary 3.9 [EQ16c]) (ONH n , ∂) −→ END OΛ op n (Z n ). (9.2)
(2) For any n ≥ 0, Z n is a finite-cell right dg-module over OΛ n ([EQ16c, Proposition 3.16]).
(3) If n ≥ 2, then ONH n is an acyclic dg-algebra. Consequently, the derived category D(ONH n ) is equivalent to the zero category ([EQ16c] Proposition 3.16). (4) As a left ONH n dg module, Z n is only cofibrant if n = 0, 1 and is acyclic otherwise [EQ16c, Proposition 3.17].
In light of the above theorem, we denote the dg-module Z_n by E^{(n)}_+, as (9.2) gives a dg-categorification of the divided power relation E^n = [n]! E^{(n)}. Likewise, one has the dg-module F^{(n)}_−, obtained by applying the Chevalley involution ω on U from [BE17b, Section 3].
9.2. The DG-Grothendieck ring. This section closely follows Section 5 of [EQ16a]. Denote the abelian category of DG-modules over (U, ∂) by U_∂−dmod. It decomposes into a direct sum of dg-categories
U ∂ −dmod = λ,µ ( µ U λ ) ∂ −dmod. (9.3)
Composition of 1-morphisms induces induction functors
Ind : (_{λ₄}U_{λ₃} ⊗ _{λ₂}U_{λ₁})_∂−dmod −→ δ_{λ₂,λ₃} (_{λ₄}U_{λ₁})_∂−dmod. (9.4)
Definition 9.2. Fix n ∈ ℕ.
(1) The left super dg-module 1_λ E^{(n)} over (U_{q,π}, ∂) is the induced module 1_λ E^{(n)} := Ind^{_λU}_{ONH_n}(E^{(n)}_+), where the induction comes from the composition of inclusions
ONH_n −→ Sym[d] ⊗ ONH_n −→ END_{_λU_{q,π}}(1_λ E^n) −→ _λU_{λ−2n}.
(2) The left super dg-module F^{(n)} 1_λ over (U_{q,π}, ∂) is the induced module F^{(n)} 1_λ := Ind^{U_λ}_{ONH_n}(F^{(n)}_−), where the induction comes from the composition of inclusions
ONH_n −→ ONH_n ⊗ Sym[d] −→ END_{U_λ}(F^n 1_λ) −→ _{λ−2n}U_λ.
Corollary 9.3. Fix λ ∈ ℤ and n ∈ ℕ.
(1) The representable module 1_λ E^n (resp. F^n 1_λ) admits an n!-step filtration whose subquotients are isomorphic to grading and parity shifts of the divided power module 1_λ E^{(n)} (resp. F^{(n)} 1_λ).
(2) The divided power modules are acyclic whenever n ≥ 2.
(3) The dg supermodule 1 λ E (n) (resp. F (n) 1 λ ) is cofibrant over the dg category ( λ U, ∂) (resp. (U λ )) for n = 0, 1, and its image in the derived category D( λ U, ∂) (resp. D(U λ , ∂)) is compact.
Proof. This follows from the corresponding properties of E^{(n)}_+ and F^{(n)}_− from Theorem 9.1.
Definition 9.4. For any a, b ∈ ℕ and λ ∈ ℤ, define E^{(a)} F^{(b)} 1_λ to be the induced dg-module
E^{(a)} F^{(b)} 1_λ := Ind^{U_λ}_{U_{λ−2b} ⊗ U_λ} (E^{(a)} 1_{λ−2b} ⊠ F^{(b)} 1_λ),
with induction defined along the inclusion
U_{λ−2b} ⊗ U_λ −→ U_λ, ζ₁ 1_{λ−2b} ⊗ 1_μ ζ₂ 1_λ ↦ δ_{λ−2b,μ} ζ₁ ζ₂ 1_λ.
The dg-supermodule F (b) E (a) 1 λ is defined similarly. Following [EQ16a] we refer to these modules as canonical modules over U λ .
The fantastic filtrations on EF1_λ and FE1_λ established in Section 7.2 give rise to a filtration on an arbitrary representable module of the form E^ε 1_λ⟨t⟩ ∈ U_λ by dg modules of the form E^a F^b 1_λ⟨s⟩ or F^b E^a 1_λ⟨s⟩ for a, b ∈ ℕ and s ∈ ℤ. Define
X λ := E (a) F (b) 1 λ | a, b ∈ {0, 1}, λ ≤ b − a ∪ F (b) E (a) 1 λ | a, b ∈ {0, 1}, λ ≥ b − a .
(9.11)
Proposition 9.5. There is a derived equivalence
D(U_λ) ≅ D(END_{U_λ}(X_λ)). (9.12)
Proof. The statements in Corollary 9.3 apply to the modules E^{(a)} F^{(b)} 1_λ and F^{(b)} E^{(a)} 1_λ; in particular, X_λ consists of compact and cofibrant modules. Hence, [EQ16a, Proposition 2.10] provides a dg-Morita equivalence establishing the isomorphism.
The cofibrance of the modules in X λ enables us to compute the derived endomorphism ring D(END U λ (X λ )) in the usual manner. The following lemma then follows as a direct consequence of [EL16, Proposition 8.3], which characterizes dimensions of homs between modules in X λ .
Lemma 9.6. The endomorphism algebra END U λ (X λ ) is a strongly positive DG-algebra.
Recall that by Corollary 3.3 the Grothendieck ring of the (Q, Π)-envelope of a graded 2-supercategory equipped with a differential of bidegree (2,1) is a module over the ring R = ℤ[q, q⁻¹, π]/(π² − 1, 1 + q²π).
Corollary 9.7. For any weight λ ∈ ℤ, the Grothendieck group K₀(U_λ, ∂) of the dg-category U_λ is isomorphic to the corresponding R-span of canonical basis elements
K₀(U_λ) ≅ R⟨Ḃ_R 1_λ⟩, where
Ḃ_R 1_λ := {E^{(a)} F^{(b)} 1_λ | a, b ∈ {0, 1}, λ ≤ b − a} ∪ {π^{ab} F^{(b)} E^{(a)} 1_λ | a, b ∈ {0, 1}, λ ≥ b − a}.
The isomorphism sends the class of E^{(a)} F^{(b)} 1_λ or F^{(b)} E^{(a)} 1_λ from X_λ to the corresponding element in Ḃ_R 1_λ.
As a consequence of strong positivity we also have the following result.
Corollary 9.8. For any weights λ₁, λ₂, λ₃, λ₄ ∈ ℤ, the dg-categories _{λ₄}U_{λ₃} and _{λ₂}U_{λ₁} have the Künneth property
K₀(_{λ₄}U_{λ₃}) ⊗_R K₀(_{λ₂}U_{λ₁}) ≅ K₀(_{λ₄}U_{λ₃} ⊗ _{λ₂}U_{λ₁}).
It follows that K₀(U) := ⊕_{μ,λ∈ℤ} K₀(_μU_λ) is an idempotented R-algebra, with multiplication given by the induction functor:
[Ind] : K 0 (U) ⊗ R K 0 (U) −→ K 0 (U).
Theorem 9.9. There is an isomorphism of R-algebras
U̇_R −→ K₀(U, ∂) (9.13)
that sends E1_λ ↦ [E1_λ] and 1_λF ↦ [1_λF] for any weight λ ∈ ℤ.
Proof. We first must show that the defining relations for U̇_R hold in K₀(U, ∂). The nontrivial relations from Proposition 8.4 to check are (iii) and (iv). The fantastic filtrations on EF1_λ and FE1_λ from Proposition 7.4 give rise to convolution diagrams establishing (iii) in D(U, ∂); see [EQ16a, Remark 2.7, Theorem 6.11]. Relation (iv) follows from the acyclicity results in Corollary 9.3. The resulting homomorphism of algebras is an isomorphism because it sends Ḃ_R 1_λ to the symbols of modules in X_λ, which form a basis for K₀(U, ∂) by Corollary 9.7.
Corollary 9.10. The map sending E1_λ ↦ [E1_λ] and 1_λF ↦ [1_λF] for any weight λ ∈ ℤ defines
(i) an isomorphism of ℤ[√−1]-algebras u̇_{√−1}(sl(2)) −→ K₀(U, ∂)|_{π=1} (9.14)
at π = 1, and
(ii) an isomorphism of ℤ-algebras U̇_R|_{π=−1} −→ K₀(U, ∂)|_{π=−1} (9.15)
at π = −1, where U̇_R|_{π=−1} is a ℤ-subalgebra of U̇(sl(1|1)) by Proposition 8.7.
viewed as a superspace with addition given by x^{n,b}_{m,a} + y^{n,b}_{m,a} := (x + y)^{n,b}_{m,a} and scalar multiplication given by c(x^{n,b}_{m,a}) := (cx)^{n,b}_{m,a}. The degrees are given by deg(x^{n,b}_{m,a}) = deg(x) + n − m, |x^{n,b}_{m,a}| = |x| + a + b. The horizontal composition is given by y^{n,d}_{m,c} ◦ x^{l,b}_{k,a} := (−1)^{c|x|+b|y|+ac+bc} (y ◦ x)^{l+n,b+d}_{k+m,a+c}. The (Q, Π)-envelope of a graded 2-supercategory carries the structure of a (Q, Π)-2-category in the sense of [BE17a, Definition 6.14].
Denote by A−mod the category of left A-modules and define A_H−mod to be the quotient of A−mod by the ideal of morphisms that factor through an A-module of the form H ⊗ N. The category A_H−mod is triangulated [Kho16, Theorem 1] with shift functor inherited from H−mod, defined by sending an object M in A_H−mod to the module T(M) := (H/(kΛ)) ⊗ M. (3.2)
Since H is a subalgebra of A = H#B, we can restrict an A-module to an H-module, which descends to an exact functor from A_H−mod to H−mod. In the context of the H-comodule algebra A = H#B we write C(B, H) = A_H−mod. Define a morphism f : M → N in A_H−mod to be a quasi-isomorphism if it restricts to an isomorphism in H−mod. Denote by D(B, H) the localization of C(B, H) with respect to quasi-isomorphisms. It is shown in [Kho16, Corollary 2] and [Qi14, Corollary 7.15] that D(B, H) is a triangulated category whose Grothendieck group is a module over K(H−mod).
3.2. DG-algebras from the Hopfological perspective. The standard theory of dg-algebras and their modules is equivalent to the Hopfological algebra of the ℤ-graded Hopf superalgebra H = k[D]/D² in the category of super vector spaces. Here deg(D) = 1, and for the super Hopf algebra k[D]/D² the left integral is spanned by Λ = D. For a graded k-superalgebra B, admitting an H-module structure is equivalent to B having a degree 1 map ∂ : B → B satisfying
∂(ab) = ∂(a)b + (−1)^{|a|} a∂(b), ∂²(a) = 0, for all a, b ∈ B.
Hence, an H-module algebra is the same thing as a dg-algebra. In a similar way, if we set A := B#H then an A-comodule algebra is the same thing as a B-dg-module. Further, one can show that C(B, H) = A_H−mod is equivalent to the homotopy category C(B) of B-dg-modules and that D(B, H) is equivalent to the derived category D(B) of B-dg-modules.
3.3. Decategorification from the Hopfological perspective. To have an interesting notion of Grothendieck group for the triangulated categories A_H−mod it is important that we restrict the classes of modules under consideration to avoid pathologies that can arise. In the context of Hopfological algebra the correct notion is that of compact Hopfological modules from [Qi14, Section 7.2]. Denote by D^c(A, H) the strictly full subcategory of compact Hopfological modules in D(A, H).
Definition 3.1 ([Qi14]). Let B be an H-module algebra over a finite-dimensional Hopf algebra H over a base field k. Define the Grothendieck group K₀(D^c(B, H)) to be the abelian group generated by symbols of isomorphism classes of objects in D^c(B, H), modulo the relation
shift is now given by the inclusion of k into H ⊗ k via D ⊗ 1, so that k[1] := k⟨−2⟩Π, and at the level of the Grothendieck group we have [k[1]] = [k⟨−2⟩Π] = q⁻²π[k] = −[k] since 1 + q²π = 0. Hence, the homological shift is multiplication by −1 on K₀.
Just as in the usual theory of dg-modules over a dg-algebra A, to have a sensible notion of Grothendieck group of D(A), we pass to the compact or perfect derived category D^c(A). The category D^c(A) is a subcategory of D(A) consisting of compact dg modules, that is, those super dg modules M such that the functor HOM_{D(A)}(M, −) commutes with infinite direct sums. This is the same as considering D^c(A, H) in the Hopfological setup with H defined in Section 3.4. For our purposes the connection between compact dg modules and finite-cell modules will be of particular relevance; see for example [EQ16a, Example 2.4]. The Grothendieck group K₀(A) of a dg algebra A is the quotient of the free abelian group on the isomorphism classes [M] of compact dg-modules M by the relation [M] = [M₁] + [M₂] whenever M₁ → M → M₂ → M₁[1] is an exact triangle of compact objects in D^c(A). This is the same as D^c(A, H) for H defined in Section 3.4.
4.2. Grothendieck ring of super dg-2-categories.
Definition 4.1. For a dg 2-category (U, ∂) define the homotopy and derived categories as
Theorem 4.2 ([Sch11] and [EQ16a, Corollary 2.6]). Let A be a positive dg algebra, and A⁰ be its homogeneous degree zero part. Then K₀(A) ≅ K₀(A⁰).
Proposition 4.4. Let (A, ∂) be a super dg-algebra, I a finite index set, and u_i, v_i ∈ A (i ∈ I) satisfying the hypotheses of Lemma 4.3. Suppose that e = Σ_i u_i v_i, and let < be a total order on I. An I-indexed super A-module filtration F_• of Ae is defined by F_{≤i} := Σ_{j≤i} A u_j v_j and F_∅ := 0, so that F_{≤i}/F_{<i} ≅ A v_i u_i as A-modules. Then the following conditions are equivalent:
Theorem 4.6. Let A be a dg superalgebra and {u_i, v_i}_{i∈I} a finite set of elements of A satisfying Proposition 4.4. Then there is a fantastic filtration on the dg module Ae if and only if there exists a total order on I such that v_i ∂(u_j) = 0 for j ≥ i. Moreover, in K₀(A), we have the relation [Ae] = Σ_{i∈I} [A v_i u_i].
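The conclusion [Ae] = Σᵢ[Av_iu_i] is modeled on orthogonal idempotent decompositions. The toy (non-dg) example below is entirely ours: it exhibits data u_i, v_i in A = M₂(ℚ) with e = Σᵢu_iv_i, v_su_t = 0 for s ≠ t, and each v_iu_i idempotent, which is the shape of data the theorem filters by:

```python
import sympy as sp

u = [sp.Matrix([[1, 0], [0, 0]]), sp.Matrix([[0, 0], [0, 1]])]
v = [sp.Matrix([[1, 0], [0, 0]]), sp.Matrix([[0, 0], [0, 1]])]

e = u[0] * v[0] + u[1] * v[1]
assert e == sp.eye(2)                                # e = sum_i u_i v_i
for s in range(2):
    for t in range(2):
        if s != t:
            assert v[s] * u[t] == sp.zeros(2, 2)     # v_s u_t = 0 for s != t
for i in range(2):
    p = v[i] * u[i]
    assert p * p == p                                # each v_i u_i is idempotent
```

Here Ae = A decomposes as the direct sum of the column modules A(v₀u₀) and A(v₁u₁), mirroring the K₀ relation.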
Dotted bubbles of negative degree are zero, so that for all m ≥ 0:
(Centrality of odd bubbles) By the super interchange law it follows that the odd bubble squares to zero. Further, we have
the exact form of the dotted curl relation depends on the placement of the dots inside the curl; see for example [BE17b, (5.18)-(5.21)]. Using the adjunctions, the relations
The Leibniz rule for the vertical composition x • y of x and y gives that ∂(x • y) = ∂(x)y + (−1)^{|x|} x ∂(y), and the parity of x is even, |x| = 0. Hence,
Definition 7.2. Define a total order ≺ on the set I = I_λ = {0, 1, . . . , |λ|} by modifying the standard order i < j by declaring that
i + 1 ≺ i if i, λ are both even, or both odd. (7.15)
With the order (I, ≺), the condition (7.9) becomes
v_i ∂(u_j) = 0, for i ≼ j. (7.16)
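A concrete way to realize the order ≺ of Definition 7.2 (an implementation sketch of ours, not from the paper) is via a sort key that swaps i with i + 1 exactly when i and λ share parity:

```python
def order_key(i, lam):
    # i + 1 immediately precedes i when i and lam have the same parity
    return i + 1 if i % 2 == lam % 2 else i - 1

def ordered_I(lam):
    """The set I = {0, 1, ..., |lam|} listed in increasing ≺-order."""
    return sorted(range(abs(lam) + 1), key=lambda i: order_key(i, lam))

assert ordered_I(4) == [1, 0, 3, 2, 4]   # lambda even: each even i swaps with i+1
assert ordered_I(3) == [0, 2, 1, 3]      # lambda odd: each odd i swaps with i+1
```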
M ⊠ N ↦ Ind(M ⊠ N) for any λ₁, λ₂, λ₃, λ₄ ∈ ℤ. At the level of derived categories, the induction functor gives rise to an exact functor
Ind : D(U ⊗ U, ∂) −→ D(U, ∂) (9.5)
and R-linear maps
[Ind] : K₀(D(U ⊗ U, ∂)) → K₀(U, ∂). (9.6)
Let Sym[d] denote the supercommutative superalgebra obtained from the ring of symmetric functions Sym by adjoining an odd generator d with d² = 0. Then there is a surjective superalgebra homomorphism β_λ : Sym[d] −→ End_U(1_λ).
¹ In U_{q,π}; see for example [CW13, Section 6.1] or [Cla14, Definition 3.1].
¹ It is important to note that the positivity of the canonical basis for the superalgebra U̇(osp(1|2)) is quite unexpected and would not be possible without the parameter π.
Proof. Using Lemma 9.6 and [EQ16a, Corollary 2.22] at p = 2, the result follows.
J.A. Baldwin, On the spectral sequence from Khovanov homology to Heegaard Floer homology, Int. Math. Res. Not. IMRN (2011), no. 15, 3426-3470, arXiv:0809.3293.
J. Brundan and A.P. Ellis, Monoidal supercategories, Comm. Math. Phys. 351 (2017), no. 3, 1045-1089, arXiv:1603.05928.
, Super Kac-Moody 2-categories, Proc. Lond. Math. Soc. (3) 115 (2017), no. 5, 925-973, arXiv:1701.04133.
S. Beier, An integral lift, starting in odd Khovanov homology, of Szabó's spectral sequence, arXiv:1205.2256.
J. Bernstein, I.B. Frenkel, and M. Khovanov, A categorification of the Temperley-Lieb algebra and Schur quotients of U(sl(2)) via projective and Zuckerman functors, Selecta Math. (N.S.) 5 (1999), no. 2, 199-241, arXiv:math/0002087.
J.M. Bloom, A link surgery spectral sequence in monopole Floer homology, Adv. Math. 226 (2011), no. 4, 3216-3281, arXiv:0909.0816.
J.A. Baldwin, A.S. Levine, and S. Sarkar, Khovanov homology and knot Floer homology for pointed links, J. Knot Theory Ramifications 26 (2017), no. 2, 1740004, 49 pp., arXiv:1512.05422.
S.C. Blumen, On the U_q(osp(1|2n)) and U_{−q}(so(2n+1)) uncolored quantum link invariants, J. Knot Theory Ramifications 19 (2010), no. 3, 335-353, arXiv:0901.3232.
D. Bar-Natan, On Khovanov's categorification of the Jones polynomial, Algebr. Geom. Topol. 2 (2002), 337-370 (electronic), arXiv:math/0201043.
, Khovanov's homology for tangles and cobordisms, Geom. Topol. 9 (2005), 1443-1499, arXiv:math/0410495.
J. Brundan and C. Stroppel, Highest weight categories arising from Khovanov's diagram algebra I: cellularity, Mosc. Math. J. 11 (2011), no. 4, 685-722, 821-822, arXiv:0806.1532.
, Highest weight categories arising from Khovanov's diagram algebra III: category O, Represent. Theory 15 (2011), 170-243, arXiv:0812.1090.
S. Clark, Z. Fan, Y. Li, and W. Wang, Quantum supergroups III. Twistors, Comm. Math. Phys. 332 (2014), no. 1, 415-436, arXiv:1307.7056.
S. Clark, D. Hill, and W. Wang, Quantum supergroups I. Foundations, Transform. Groups 18 (2013), no. 4, 1019-1053, arXiv:1301.1665.
, Quantum supergroups II. Canonical basis, Represent. Theory 18 (2014), 278-309, arXiv:1304.7837.
S. Cautis and J. Kamnitzer, Knot homology via derived categories of coherent sheaves. I. The sl(2)-case, Duke Math. J. 142 (2008), no. 3, 511-588, arXiv:math/0701194.
Y. Chen and M. Khovanov, An invariant of tangle cobordisms via subquotients of arc rings, 2014, pp. 23-44, arXiv:math/0610054.
S. Clark, Quantum supergroups IV: the modified form, Math. Z. 278 (2014), no. 1-2, 493-528, arXiv:1312.4855.
, Odd knot invariants from quantum covering groups, Algebr. Geom. Topol. 17 (2017), no. 5, 2961-3005.
D. Clark, S. Morrison, and K. Walker, Fixing the functoriality of Khovanov homology, Geom. Topol. 13 (2009), no. 3, 1499-1582, arXiv:0701339.
S. Clark and W. Wang, Canonical basis for quantum osp(1|2), Lett. Math. Phys. 103 (2013), no. 2, 207-231, arXiv:1204.3940.
A.P. Ellis and M. Khovanov, The Hopf algebra of odd symmetric functions, Adv. Math. 231 (2012), no. 2, 965-999, arXiv:1107.5610.
A.P. Ellis, M. Khovanov, and A.D. Lauda, The odd nilHecke algebra and its diagrammatics, Int. Math. Res. Not. IMRN (2014), no. 4, 991-1062, arXiv:1111.1320.
A.P. Ellis and A.D. Lauda, An odd categorification of U_q(sl(2)), Quantum Topol. 7 (2016), no. 2, 329-433, arXiv:1307.7816.
A.P. Ellis, I. Petkova, and V. Vértesi, Quantum gl(1|1) and tangle Floer homology, arXiv:1510.03483.
An approach to categorification of some small quantum groups II. B Elias, Y Qi, arXiv:1302.5478Adv. Math. 288B. Elias and Y. Qi, An approach to categorification of some small quantum groups II, Adv. Math. 288 (2016), 81-151, arXiv:1302.5478.
A categorification of quantum sl(2) at prime roots of unity. arXiv:1503.05114Adv. Math. 299, A categorification of quantum sl(2) at prime roots of unity, Adv. Math. 299 (2016), 863-930, arXiv:1503.05114.
The differential graded odd nilHecke algebra. A P Ellis, Y Qi, arXiv:1504.01712Comm. Math. Phys. 3441A.P. Ellis and Y. Qi, The differential graded odd nilHecke algebra, Comm. Math. Phys. 344 (2016), no. 1, 275-331, arXiv:1504.01712.
A geometric setting for quantum osp(1|2). Z Fan, Y Li, arXiv:1305.0710Trans. Amer. Math. Soc. 36711Z. Fan and Y. Li, A geometric setting for quantum osp(1|2), Trans. Amer. Math. Soc. 367 (2015), no. 11, 7895-7916, arXiv:1305.0710.
Khovanov-Rozansky homology and topological strings. S Gukov, A Schwarz, C Vafa, arXiv:hep-th/0412243Lett. Math. Phys. 741S. Gukov, A. Schwarz, and C. Vafa, Khovanov-Rozansky homology and topological strings, Lett. Math. Phys. 74 (2005), no. 1, 53-74, arXiv:hep-th/0412243.
Triangulated categories in the representation theory of finite-dimensional algebras. D Happel, London Mathematical Society Lecture Note Series. 119Cambridge University PressD. Happel, Triangulated categories in the representation theory of finite-dimensional algebras, London Math- ematical Society Lecture Note Series, vol. 119, Cambridge University Press, Cambridge, 1988.
A rank inequality for the knot Floer homology of double branched covers. K Hendricks, arXiv:1107.2154Algebr. Geom. Topol. 124K. Hendricks, A rank inequality for the knot Floer homology of double branched covers, Algebr. Geom. Topol. 12 (2012), no. 4, 2127-2178, arXiv:1107.2154.
Categorification of quantum Kac-Moody superalgebras. D Hill, W Wang, arXiv:1202.2769Trans. Amer. Math. Soc. 3672D. Hill and W. Wang, Categorification of quantum Kac-Moody superalgebras, Trans. Amer. Math. Soc. 367 (2015), no. 2, 1183-1216, arXiv:1202.2769.
Basic concepts of enriched category theory. G Kelly, London Math. Soc. Lec. Note Ser. 64Cambridge U. PressG. Kelly, Basic concepts of enriched category theory, London Math. Soc. Lec. Note Ser., vol. 64, Cambridge U. Press, 1982.
On differential graded categories. B Keller, arXiv:math/0601185International Congress of Mathematicians. IIEur. Math. Soc.B. Keller, On differential graded categories, International Congress of Mathematicians. Vol. II, Eur. Math. Soc., Zürich, 2006, arXiv:math/0601185, pp. 151-190.
A categorification of the Jones polynomial. M Khovanov, arXiv:9908171Duke Math. J. 1013M. Khovanov, A categorification of the Jones polynomial, Duke Math. J. 101 (2000), no. 3, 359-426, arXiv:9908171.
A functor-valued invariant of tangles. arXiv:0103190Algebr. Geom. Topol. 2, A functor-valued invariant of tangles, Algebr. Geom. Topol. 2 (2002), 665-741 (electronic), arXiv:0103190.
Crossingless matchings and the cohomology of (n, n) Springer varieties. arXiv:math/0202110Commun. Contemp. Math. 64, Crossingless matchings and the cohomology of (n, n) Springer varieties, Commun. Contemp. Math. 6 (2004), no. 4, 561-577, arXiv:math/0202110.
How to categorify one-half of quantum gl(1|2). arXiv:1007.3517Polish Acad. Sci. Inst. Math. 103Banach Center Publ., How to categorify one-half of quantum gl(1|2), Knots in Poland III. Part III, Banach Center Publ., vol. 103, Polish Acad. Sci. Inst. Math., Warsaw, 2014, arXiv:1007.3517, pp. 211-232.
Hopfological algebra and categorification at a root of unity: the first steps. arXiv:math/0509083J. Knot Theory Ramifications. 25326, Hopfological algebra and categorification at a root of unity: the first steps, J. Knot Theory Ramifi- cations 25 (2016), no. 3, 1640006, 26, arXiv:math/0509083.
Supercategorification of quantum Kac-Moody algebras. S.-J Kang, M Kashiwara, S.-J Oh, arXiv:math.RT/1206.5933Adv. Math. 242S.-J. Kang, M. Kashiwara, and S.-J. Oh, Supercategorification of quantum Kac-Moody algebras, Adv. Math. 242 (2013), 116-162, arXiv:math.RT/1206.5933.
Supercategorification of quantum Kac-Moody algebras II. S J Kang, M Kashiwara, S Oh, arXiv:1303.1916Adv. Math. 265S.J. Kang, M. Kashiwara, and S. Oh, Supercategorification of quantum Kac-Moody algebras II, Adv. Math. 265 (2014), 169-240, arXiv:1303.1916.
Quiver Hecke superalgebras. S J Kang, M Kashiwara, S Tsuchioka, arXiv:1107.1039J. Reine Angew. Math. 711S.J. Kang, M. Kashiwara, and S. Tsuchioka, Quiver Hecke superalgebras, J. Reine Angew. Math. 711 (2016), 1-54, arXiv:1107.1039.
A diagrammatic approach to categorification of quantum groups III. M Khovanov, A Lauda, arXiv:0807.3250Quantum Topology. 1M. Khovanov and A. Lauda, A diagrammatic approach to categorification of quantum groups III, Quantum Topology 1 (2010), 1-92, arXiv:0807.3250.
Extended graphical calculus for categorified quantum sl. M Khovanov, A Lauda, M Mackaay, M Stošić, arXiv:1006.2866Memoirs of the AMS. 2192M. Khovanov, A. Lauda, M. Mackaay, and M. Stošić, Extended graphical calculus for categorified quantum sl(2), Memoirs of the AMS 219 (2012), arXiv:1006.2866.
Khovanov homology is an unknot-detector. P B Kronheimer, T S Mrowka, arXiv:1005.4346Publ. Math. Inst. Hauteś Etudes Sci. 113P. B. Kronheimer and T. S. Mrowka, Khovanov homology is an unknot-detector, Publ. Math. Inst. Hauteś Etudes Sci. (2011), no. 113, 97-208, arXiv:1005.4346.
An approach to categorification of some small quantum groups. M Khovanov, Y Qi, arXiv:1208.0616Quantum Topol. 62M. Khovanov and Y. Qi, An approach to categorification of some small quantum groups, Quantum Topol. 6 (2015), no. 2, 185-311, arXiv:1208.0616.
Matrix factorizations and link homology. M Khovanov, L Rozansky, arXiv:0401268Fund. Math. 1991M. Khovanov and L. Rozansky, Matrix factorizations and link homology, Fund. Math. 199 (2008), no. 1, 1-91, arXiv:0401268.
Matrix factorizations and link homology. arXiv:0505056Geom. Topol. II3, Matrix factorizations and link homology. II, Geom. Topol. 12 (2008), no. 3, 1387-1425, arXiv:0505056.
Free fermions and the Alexander-Conway polynomial. L H Kauffman, H Saleur, Comm. Math. Phys. 1412L. H. Kauffman and H. Saleur, Free fermions and the Alexander-Conway polynomial, Comm. Math. Phys. 141 (1991), no. 2, 293-327.
A categorification of the positive half of quantum gl(m|1). M Khovanov, J Sussan, arXiv:1406.1676Trans. Amer. Math. Soc. 3693M. Khovanov and J. Sussan, A categorification of the positive half of quantum gl(m|1), Trans. Amer. Math. Soc. 369 (2017), no. 3, 1627-1664, arXiv:1406.1676.
Hecke-Clifford algebras and spin Hecke algebras I: The classical affine type. T Khongsap, W Wang, arXiv:math.RT/0704.0201Transf. Groups. 13T. Khongsap and W. Wang, Hecke-Clifford algebras and spin Hecke algebras I: The classical affine type, Transf. Groups 13 (2008), 389-412, arXiv:math.RT/0704.0201.
Hecke-Clifford algebras and spin Hecke algebras II: The rational double affine type. arXiv:math.RT/0710.5877Pacific J. Math. 238, Hecke-Clifford algebras and spin Hecke algebras II: The rational double affine type, Pacific J. Math. 238 (2008), 73-103, arXiv:math.RT/0710.5877.
Hecke-Clifford algebras and spin Hecke algebras IV: Odd double affine type. arXiv:math.RT/0810.2068SIGMA. 5, Hecke-Clifford algebras and spin Hecke algebras IV: Odd double affine type, SIGMA 5 (2009), arXiv:math.RT/0810.2068.
A categorification of quantum sl(2). A D Lauda, arXiv:0803.3652Adv. Math. 225A. D. Lauda, A categorification of quantum sl(2), Adv. Math. 225 (2008), 3327-3424, arXiv:0803.3652.
Tour of bordered Floer theory. R Lipshitz, P Ozsváth, D P Thurston, arXiv:1107.5621Proc. Natl. Acad. Sci. USA. Natl. Acad. Sci. USA108R. Lipshitz, P. Ozsváth, and D.P. Thurston, Tour of bordered Floer theory, Proc. Natl. Acad. Sci. USA 108 (2011), no. 20, 8085-8092, arXiv:1107.5621.
Open-closed TQFTS extend Khovanov homology from links to tangles. A D Lauda, H Pfeiffer, arXiv:math/0606331J. Knot Theory Ramifications. 181A. D. Lauda and H. Pfeiffer, Open-closed TQFTS extend Khovanov homology from links to tangles, J. Knot Theory Ramifications 18 (2009), no. 1, 87-150, arXiv:math/0606331.
R Laugwitz, Y Qi, arXiv:1804.01478A categorification of cyclotomic rings. R. Laugwitz and Y. Qi, A categorification of cyclotomic rings, arXiv:1804.01478.
Oddification of the cohomology of type A Springer varieties. A D Lauda, H Russell, arXiv:1203.0797Int. Math. Res. Not. IMRN. 17A.D. Lauda and H. Russell, Oddification of the cohomology of type A Springer varieties, Int. Math. Res. Not. IMRN (2014), no. 17, 4822-4854, arXiv:1203.0797.
A Khovanov stable homotopy type. R Lipshitz, S Sarkar, arXiv:1112.3932J. Amer. Math. Soc. 274R. Lipshitz and S. Sarkar, A Khovanov stable homotopy type, J. Amer. Math. Soc. 27 (2014), no. 4, 983-1042, arXiv:1112.3932.
A refinement of Rasmussen's S-invariant. arXiv:1206.3532Duke Math. J. 1635, A refinement of Rasmussen's S-invariant, Duke Math. J. 163 (2014), no. 5, 923-952, arXiv:1206.3532.
A Steenrod square on Khovanov homology. arXiv:1204.5776J. Topol. 73, A Steenrod square on Khovanov homology, J. Topol. 7 (2014), no. 3, 817-848, arXiv:1204.5776.
Finite-dimensional Hopf algebras arising from quantized universal enveloping algebra. G Lusztig, J. Amer. Math. Soc. 31G. Lusztig, Finite-dimensional Hopf algebras arising from quantized universal enveloping algebra, J. Amer. Math. Soc. 3 (1990), no. 1, 257-296.
Introduction to quantum groups. Progress in Mathematics. 110Birkhäuser Boston Inc, Introduction to quantum groups, Progress in Mathematics, vol. 110, Birkhäuser Boston Inc., Boston, MA, 1993.
On the decategorification of Ozsváth and Szabó's bordered theory for knot floer homology. A Manion, arXiv:1611.08001A. Manion, On the decategorification of Ozsváth and Szabó's bordered theory for knot floer homology, arXiv:1611.08001.
Khovanov-Seidel quiver algebras and Ozsváth-Szabó's bordered theory. arXiv:1605.08082J. Algebra. 488, Khovanov-Seidel quiver algebras and Ozsváth-Szabó's bordered theory, J. Algebra 488 (2017), 110- 144, arXiv:1605.08082.
The multi-variable Alexander polynomial and a one-parameter family of representations of Uq(sl(2, C)) at q 2 = −1, Quantum groups (Leningrad, 1990). J Murakami, Lecture Notes in Math. 1510SpringerJ. Murakami, The multi-variable Alexander polynomial and a one-parameter family of representations of Uq(sl(2, C)) at q 2 = −1, Quantum groups (Leningrad, 1990), Lecture Notes in Math., vol. 1510, Springer, Berlin, 1992, pp. 350-353.
A state model for the multivariable Alexander polynomial. Pacific J. Math. 1571, A state model for the multivariable Alexander polynomial, Pacific J. Math. 157 (1993), no. 1, 109- 135.
Branes and supergroups. V Mikhaylov, E Witten, arXiv:1410.1175Comm. Math. Phys. 3402V. Mikhaylov and E. Witten, Branes and supergroups, Comm. Math. Phys. 340 (2015), no. 2, 699-832, arXiv:1410.1175.
Odd Khovanov homology. P Ozsváth, J Rasmussen, Z Szabó, arXiv:0710.4300Algebr. Geom. Topol. 133P. Ozsváth, J. Rasmussen, and Z. Szabó, Odd Khovanov homology, Algebr. Geom. Topol. 13 (2013), no. 3, 1465-1488, arXiv:0710.4300.
On the Heegaard Floer homology of branched double-covers. P Ozsváth, Z Szabó, arXiv:math/0309170Adv. Math. 1941P. Ozsváth and Z. Szabó, On the Heegaard Floer homology of branched double-covers, Adv. Math. 194 (2005), no. 1, 1-33, arXiv:math/0309170.
P Ozsvath, Z Szabo, arXiv:1707.00597Bordered knot algebras with matchings. P. Ozsvath and Z. Szabo, Bordered knot algebras with matchings, arXiv:1707.00597.
Kauffman states, bordered algebras, and a bigraded knot invariant. P Ozsváth, Z Szabó, arXiv:1603.06559Adv. Math. 328P. Ozsváth and Z. Szabó, Kauffman states, bordered algebras, and a bigraded knot invariant, Adv. Math. 328 (2018), 1088-1198, arXiv:1603.06559.
Hopfological algebra. Y Qi, arXiv:1205.1814Compos. Math. 1501Y. Qi, Hopfological algebra, Compos. Math. 150 (2014), no. 1, 1-45, arXiv:1205.1814.
Categorification at prime roots of unity and hopfological finiteness, Categorification and higher representation theory. Y Qi, J Sussan, arXiv:1509.00438Contemp. Math. 683Amer. Math. SocY. Qi and J. Sussan, Categorification at prime roots of unity and hopfological finiteness, Categorification and higher representation theory, Contemp. Math., vol. 683, Amer. Math. Soc., Providence, RI, 2017, arXiv:1509.00438, pp. 261-286.
Knot polynomials and knot homologies, Geometry and topology of manifolds. J Rasmussen, arXiv:math/0504045Fields Inst. Commun. 47Amer. Math. SocJ. Rasmussen, Knot polynomials and knot homologies, Geometry and topology of manifolds, Fields Inst. Commun., vol. 47, Amer. Math. Soc., Providence, RI, 2005, arXiv:math/0504045, pp. 261-280.
On knot Floer homology in double branched covers. L P Roberts, arXiv:0706.0741Geom. Topol. 171L.P. Roberts, On knot Floer homology in double branched covers, Geom. Topol. 17 (2013), no. 1, 413-467, arXiv:0706.0741.
. R Rouquier, arXiv:0812.5023-Kac-Moody algebrasR. Rouquier, 2-Kac-Moody algebras, 2008, arXiv:0812.5023.
S-and T -matrices for the super U(1, 1) WZW model. Application to surgery and 3-manifolds invariants based on the Alexander-Conway polynomial. L Rozansky, H Saleur, arXiv:hep-th/9203069Nuclear Phys. B. 3892L. Rozansky and H. Saleur, S-and T -matrices for the super U(1, 1) WZW model. Application to surgery and 3-manifolds invariants based on the Alexander-Conway polynomial, Nuclear Phys. B 389 (1993), no. 2, 365-423, arXiv:hep-th/9203069.
Categorification of tensor powers of the vector representation of Uq(gl(1|1)), Selecta Math. A Sartori, arXiv:1305.616222A. Sartori, Categorification of tensor powers of the vector representation of Uq(gl(1|1)), Selecta Math. (N.S.) 22 (2016), no. 2, 669-734, arXiv:1305.6162.
Instantons and odd Khovanov homology. C W Scaduto, arXiv:1401.2093J. Topol. 83C.W. Scaduto, Instantons and odd Khovanov homology, J. Topol. 8 (2015), no. 3, 744-810, arXiv:1401.2093.
Perfect derived categories of positively graded DG algebras. O Schnürer, arXiv:0809.4782Appl. Categ. Structures. 195O. Schnürer, Perfect derived categories of positively graded DG algebras, Appl. Categ. Structures 19 (2011), no. 5, 757-782, arXiv:0809.4782.
Patterns in odd Khovanov homology. A Shumakovitch, arXiv:1101.5607J. Knot Theory Ramifications. 201A. Shumakovitch, Patterns in odd Khovanov homology, J. Knot Theory Ramifications 20 (2011), no. 1, 203-222, arXiv:1101.5607.
A link invariant from the symplectic geometry of nilpotent slices. P Seidel, I Smith, arXiv:0405089Duke Math. J. 1343P. Seidel and I. Smith, A link invariant from the symplectic geometry of nilpotent slices, Duke Math. J. 134 (2006), no. 3, 453-514, arXiv:0405089.
Categorification of the Temperley-Lieb category, tangles, and cobordisms via projective functors. C Stroppel, Duke Math. J. 1263C. Stroppel, Categorification of the Temperley-Lieb category, tangles, and cobordisms via projective functors, Duke Math. J. 126 (2005), no. 3, 547-596.
Parabolic category O, perverse sheaves on Grassmannians, Springer fibres and Khovanov homology. arXiv:math/0608234Compos. Math. 1454, Parabolic category O, perverse sheaves on Grassmannians, Springer fibres and Khovanov homology, Compos. Math. 145 (2009), no. 4, 954-992, arXiv:math/0608234.
2-block Springer fibers: convolution algebras and coherent sheaves. C Stroppel, B Webster, arXiv:0802.1943Comment. Math. Helv. 872C. Stroppel and B. Webster, 2-block Springer fibers: convolution algebras and coherent sheaves, Comment. Math. Helv. 87 (2012), no. 2, 477-520, arXiv:0802.1943.
A geometric spectral sequence in Khovanov homology. Z Szabó, arXiv:1010.4252J. Topol. 84Z. Szabó, A geometric spectral sequence in Khovanov homology, J. Topol. 8 (2015), no. 4, 1017-1044, arXiv:1010.4252.
Y Tian, A categorification of sl(1|1) via contact topology, ProQuest LLC. Ann Arbor, MIThesis (Ph.D.)-University of Southern CaliforniaY. Tian, A categorification of sl(1|1) via contact topology, ProQuest LLC, Ann Arbor, MI, 2014, Thesis (Ph.D.)-University of Southern California.
A categorification of U T (sl(1|1)) and its tensor product representations. arXiv:1301.3986Geom. Topol. 183, A categorification of U T (sl(1|1)) and its tensor product representations, Geom. Topol. 18 (2014), no. 3, 1635-1717, arXiv:1301.3986.
Categorification of Clifford algebras and Uq(sl(1|1)). J. Symplectic Geom. 142, Categorification of Clifford algebras and Uq(sl(1|1)), J. Symplectic Geom. 14 (2016), no. 2, 541-585.
Super q-Howe duality and web categories. D Tubbenhauer, P Vaz, P Wedrich, arXiv:1504.05069Algebr. Geom. Topol. 176D. Tubbenhauer, P. Vaz, and P. Wedrich, Super q-Howe duality and web categories, Algebr. Geom. Topol. 17 (2017), no. 6, 3703-3749, arXiv:1504.05069.
O Viro, arXiv:0204290Quantum Relatives of Alexander Polynomial. O. Viro, Quantum Relatives of Alexander Polynomial, arXiv:0204290 .
Double affine Heke algebras for the spin symmetric group. W Wang, arXiv:math.RT/0608074Math. Res. Lett. 16W. Wang, Double affine Heke algebras for the spin symmetric group, Math. Res. Lett. 16 (2009), 1071-1085, arXiv:math.RT/0608074.
. E Witten, Quantum Topol, arXiv:1101.3216E. Witten, Fivebranes and knots, Quantum Topol. 3 (2012), no. 1, 1-137, arXiv:1101.3216.
Khovanov , arXiv:1108.3103Proceedings of the Freedman Fest. the Freedman FestCoventry18Geom. Topol. Publ., Khovanov homology and gauge theory, Proceedings of the Freedman Fest, Geom. Topol. Monogr., vol. 18, Geom. Topol. Publ., Coventry, 2012, arXiv:1108.3103, pp. 291-308.
Thermodynamic restrictions on linear reversible and irreversible thermo-electro-magneto-mechanical processes

Sushma Santapuri ([email protected])

Department of Applied Mechanics, Indian Institute of Technology Delhi, New Delhi 110016, India
Department of Mechanical Engineering, Polytechnic University of Puerto Rico, San Juan 00918, Puerto Rico
Correspondence to: Department of Applied Mechanics, Indian Institute of Technology Delhi, New Delhi 110016, India

DOI: 10.1016/j.heliyon.2016.e00164
Received: 1 February 2016; Revised: 15 July 2016; Accepted: 14 September 2016
Heliyon 2 (2016) e00164
Keywords: Applied mathematics; Materials science; Thermodynamics

Abstract

A unified thermodynamic framework for the characterization of functional materials is developed. This framework encompasses linear reversible and irreversible processes with thermal, electrical, magnetic, and/or mechanical effects coupled. The comprehensive framework combines the principles of classical equilibrium and nonequilibrium thermodynamics with electrodynamics of continua in the infinitesimal strain regime.

In the first part of this paper, linear Thermo-Electro-Magneto-Mechanical (TEMM) quasistatic processes are characterized. Thermodynamic stability conditions are further imposed on the linear constitutive model and restrictions on the corresponding material constants are derived. The framework is then extended to irreversible transport phenomena including thermoelectric, thermomagnetic and the state-of-the-art spintronic and spin caloritronic effects. Using Onsager's reciprocity relationships and the dissipation inequality, restrictions on the kinetic coefficients corresponding to charge, heat and spin transport processes are derived. All the constitutive models are accompanied by multiphysics interaction diagrams that highlight the various processes that can be characterized using this framework.
1. Introduction
Functional materials are engineered materials that are designed to exhibit desired functionalities (e.g., sensing, actuation, energy harvesting, self-healing) in response to a controllable stimulus. These materials have widespread applications in fields like aerospace, automotive, medicine, electronics and defense [9,39]. Some examples of such materials include multiferroic materials, bio-mimetic materials, semiconductors and spintronic materials.
Design and characterization of functional materials is at the forefront of materials research. These materials often exhibit coupling of various physical effects and are typically tailored to exhibit unusual electrical, magnetic, chemical, optical and/or thermal properties. In order to optimally design such materials, the relationships between processing, structure, property, and performance of the material need to be established [15]. Such relationships are obtained through a combination of experiments, theory and computational models that range from the atomic scale to the macro/continuum scale [27,28].
To this end, this paper aims to address one particular aspect of the characterization of functional materials, i.e., the development of an overarching thermodynamic framework that characterizes the thermal, electrical, magnetic and mechanical effects occurring in these materials. The early models presented in [13,16,18,29,38] for fully coupled Thermo-Electro-Magneto-Mechanical (TEMM) materials are used as a starting point in this work. These seminal mathematical models combine the principles of classical electrodynamics with thermomechanical conservation laws and can be applied to several materials ranging from linear piezoelectric materials, magnetostrictive materials to nonlinear electro-elastic solids, and electro-rheological fluids. Some of these applications were studied in [12,17,21,22,26,30,31,37,42].
In the more recent literature, unified thermodynamic models were developed to characterize a broader range of multiphysical processes. For instance, a thermo-electro-magnetic system with specific application to dielectric materials in the presence of memory effects was studied by Amendola [4] and the conditions for thermodynamic stability were investigated. This work was extended by Yu Li, who studied the uniqueness and reciprocity of coupled thermo-electro-magneto-elastic behavior in smart materials [19]. More recently, Yu and Shen proposed a variational principle for coupled thermal-electrical-chemical-mechanical problems [41]. This framework described heat conduction, mass diffusion, electrochemical reactions and electrostatic processes. Characterization of dissipative functional materials with quasistatic electro-magneto-mechanical couplings was presented by Miehe et al. [23] based on incremental variational principles, and stability analysis was performed on the macroscopic level based on the convexity/concavity of potentials. An internal-variable-based irreversible thermodynamics framework was formulated by Oates et al. [25] for ferroic materials which incorporated hysteretic behavior.
The examples discussed above demonstrate theoretical development for a specific class of materials or processes. To this end, in this paper, a unified approach to thermodynamic modeling of a general thermo-electro-magneto-mechanical system is presented. Specifically, the first-principles-based thermodynamic framework developed in [32,33] is utilized to characterize TEMM processes and subsequently specialized to model linear reversible and irreversible transport processes using classical equilibrium and non-equilibrium thermodynamics principles. Within the quasistatic regime, this work unifies all the known coupled and uncoupled TEMM processes and studies the stability conditions. Within the irreversible regime, this work unifies memoryless heat, charge, as well as the state-of-the-art spin transport phenomena to obtain a comprehensive set of modeling equations [7]. While this work does not deal with higher-order effects like large deformation or hysteresis, these effects could be incorporated into the framework by adding additional independent variables and using similar characterization techniques.
This paper is structured as follows: Section 2 describes the first-principle equations governing a fully coupled thermo-electro-magneto-mechanical medium in a small strain and small electromagnetic field regime. Section 3 deals with the development part of the thermodynamic framework, wherein Section 3.1 describes the constitutive modeling of a near-equilibrium TEMM process. Conditions for stability of the thermodynamic equilibrium are investigated in Section 3.1.1. Utility of this framework is subsequently demonstrated through an example of a multiferroic material of hexagonal crystal symmetry, wherein the restrictions on material constants are derived in Section 3.1.2. The framework is then extended to characterize irreversible transport processes in Section 3.2. These processes include the thermoelectric, galvanomagnetic/thermomagnetic and the spintronic/spin caloritronic effects. In Section 3.2.1, a modified version of the dissipation inequality is posited to incorporate the new variables corresponding to the spin transport phenomenon. The relationships between the various process constants are obtained through the use of Onsager's reciprocity relationships, and the bounds on the process constants are derived from the dissipation inequality in Section 4. Finally, in Section 5 concluding remarks and the overall contributions of this work are discussed.
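As a preview of the kind of restriction Section 4 derives, the sketch below checks numerically that a symmetric (Onsager reciprocity) and positive-definite matrix of kinetic coefficients yields non-negative entropy production for arbitrary thermodynamic forces. The 2×2 thermoelectric matrix and its entries are hypothetical, chosen only to illustrate the structure of the argument.

```python
import numpy as np

# Hypothetical 2x2 kinetic-coefficient matrix coupling charge and heat fluxes
# (thermoelectric case): fluxes = L @ forces, with Onsager symmetry L = L^T.
L = np.array([[4.0, 0.7],
              [0.7, 2.0]])

assert np.allclose(L, L.T)                  # Onsager reciprocity
assert np.all(np.linalg.eigvalsh(L) > 0)    # positive definiteness (dissipation bound)

rng = np.random.default_rng(1)
for _ in range(200):
    forces = rng.normal(size=2)             # arbitrary conjugate forces
    fluxes = L @ forces
    # Bilinear entropy production sigma = fluxes . forces must be non-negative
    assert fluxes @ forces >= 0.0
```

Symmetry alone does not guarantee non-negative dissipation; it is the positive (semi-)definiteness of L, enforced by the second law, that does.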
2. Background
2.1. Description of a fully coupled thermo-electro-magneto-mechanical process
In this section, the fundamental balance laws, governing the evolution of TEMM fields in a deformable, polarizable and magnetizable medium, are presented in Cartesian component notation. These equations include the thermomechanical balance laws and Maxwell's equations specialized to a small strain and small electromagnetic fields regime¹:
ρ = ρ₀ , (Conservation of Mass) (1a)

ρ u_{i,tt} = ρ f_i + σ_{ji,j} , (Balance of Linear Momentum) (1b)

σ_{ij} = σ_{ji} , (Balance of Angular Momentum) (1c)

ρ e_{,t} = σ_{ij} ε_{ij,t} + E_i P_{i,t} + μ₀ h_i M_{i,t} + ρ r + J_i E_i − q_{i,i} , (First Law of Thermodynamics) (1d)

ρ η_{,t} ≥ ρ r / Θ − (q_i / Θ)_{,i} , (Second Law of Thermodynamics) (1e)

b_{i,i} = 0 , (Gauss's Law for Magnetism) (1f)

ϵ_{ijk} E_{k,j} + b_{i,t} = 0 , (Faraday's Law) (1g)

D_{i,i} = ρ_e , (Gauss's Law for Electricity) (1h)

D_{i,t} + J_i = ϵ_{ijk} h_{k,j} . (Ampere-Maxwell Law) (1i)
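As an illustrative aside (not part of the original development), the following symbolic check verifies that a source-free plane wave in vacuo satisfies the electromagnetic subset of the balance laws, i.e., the two Gauss laws, Faraday's law, and the Ampere-Maxwell law. The field profile, wave direction, and the assumption ρ_e = 0, J_i = 0 are chosen for the example.

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t', real=True)
eps0, mu0, E0, k = sp.symbols('epsilon_0 mu_0 E_0 k', positive=True)
c = 1 / sp.sqrt(eps0 * mu0)   # speed of light in vacuo
w = c * k                     # vacuum dispersion relation: omega = c k

# Assumed plane wave travelling along z: E along x, b along y (vacuum relations)
E = sp.Matrix([E0 * sp.cos(k * z - w * t), 0, 0])        # electric field E_i
b = sp.Matrix([0, (E0 / c) * sp.cos(k * z - w * t), 0])  # magnetic induction b_i
D = eps0 * E    # D_i = eps0 E_i in vacuo, so the polarization P_i vanishes
h = b / mu0     # h_i = b_i / mu0 in vacuo, so the magnetization M_i vanishes

def div(F):
    """Divergence F_{i,i} of a 3-vector field."""
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

def curl(F):
    """Curl epsilon_{ijk} F_{k,j} of a 3-vector field."""
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

assert div(b) == 0                                             # Gauss (magnetism)
assert sp.simplify(curl(E) + sp.diff(b, t)) == sp.zeros(3, 1)  # Faraday
assert div(D) == 0                                             # Gauss, rho_e = 0
assert sp.simplify(curl(h) - sp.diff(D, t)) == sp.zeros(3, 1)  # Ampere-Maxwell, J_i = 0
```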
The notation ( )_{,t} denotes partial differentiation with respect to time, e.g.,

u_{i,t} ≡ ∂u_i(x₁, x₂, x₃, t)/∂t .
The TEMM fields appearing in (1a)-(1i) include the density ρ, the specific internal energy e (internal energy per unit mass), the specific entropy η, the absolute temperature Θ, the thermally and electromagnetically induced energy supply rates ρ r and J_i E_i, and the Cartesian components of the displacement u_i, the Cauchy stress tensor σ_{ij}, the specific body force f_i and the heat flux vector q_i. Also, E_i, D_i, h_i, b_i, ρ_e, and J_i represent the Cartesian components of electric field intensity, electric displacement, magnetic field intensity, magnetic induction, free charge density, and free current density, respectively. Additionally,
P_i = D_i − ε₀ E_i , M_i = (1/μ₀) b_i − h_i , (2)
are the Cartesian components of the electric polarization and magnetization vectors. Also, ε₀ and μ₀ are the permittivity and permeability constants in vacuo. Finally, the infinitesimal strain tensor ε_{ij} is related to the displacement u_i as

ε_{ij} = (1/2)(u_{i,j} + u_{j,i}) . (3)

¹ Derivation of the small strain theory of TEMM materials is presented in [32] (cf. Section 9.7.1).
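The kinematic and electromagnetic definitions above can be sketched numerically. In the example below, the displacement gradient and field values are hypothetical, chosen only to exercise the polarization/magnetization relations (2) and the strain definition (3).

```python
import numpy as np

eps0 = 8.8541878128e-12   # vacuum permittivity [F/m]
mu0 = 4e-7 * np.pi        # vacuum permeability [H/m]

def strain(grad_u):
    """Infinitesimal strain eps_ij = (u_{i,j} + u_{j,i}) / 2, Eq. (3)."""
    grad_u = np.asarray(grad_u, dtype=float)
    return 0.5 * (grad_u + grad_u.T)

def polarization(D, E):
    """P_i = D_i - eps0 E_i, Eq. (2)."""
    return np.asarray(D, dtype=float) - eps0 * np.asarray(E, dtype=float)

def magnetization(b, h):
    """M_i = b_i / mu0 - h_i, Eq. (2)."""
    return np.asarray(b, dtype=float) / mu0 - np.asarray(h, dtype=float)

# Example: simple shear u = (gamma * x2, 0, 0) with hypothetical gamma = 1e-3
gamma = 1e-3
grad_u = np.array([[0.0, gamma, 0.0],
                   [0.0, 0.0,   0.0],
                   [0.0, 0.0,   0.0]])
eps = strain(grad_u)
assert np.allclose(eps, eps.T)            # strain is symmetric by construction
assert np.isclose(eps[0, 1], gamma / 2)   # shear component of eq. (3)

# Vacuum fields give zero polarization and magnetization, as eq. (2) requires
E = np.array([1.0, 0.0, 0.0])
assert np.allclose(polarization(eps0 * E, E), 0.0)
h = np.array([0.0, 2.0, 0.0])
assert np.allclose(magnetization(mu0 * h, h), 0.0)
```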
The Cauchy stress (and strain) tensors are typically non-symmetric in the presence of electromagnetically induced body force and body couple. However, their contributions to the balance of linear momentum (1b) and angular momentum (1c) emerge at higher orders and can be ignored for small electromagnetic fields.
To complete the mathematical model, the governing equations need to be supplemented with material-specific constitutive equations as well as boundary conditions. In the subsequent sections, constitutive equations will be developed for fully coupled TEMM reversible and irreversible processes operating in the small strain, small EM (electromagnetic) field regime. Furthermore, the ramifications of the second law of thermodynamics as well as the thermodynamic stability restrictions on the proposed constitutive models will be studied.
Characterization of quasistatic thermo-electro-magneto-mechanical material processes
The development of a continuum thermodynamic framework for fully coupled TEMM materials was presented in [33]. The principles of classical thermodynamics and electrodynamics of continua were utilized to develop the thermodynamics state equations corresponding to various combinations of independent variables. In this paper, starting from the state equations derived in [33], constitutive models are developed for TEMM materials in a linear regime.
As a starting point, the reduced form of the Clausius-Duhem inequality (obtained by combining the first law of thermodynamics (1d) and the second law of thermodynamics (1e) via the elimination of the energy supply rate) is presented below:
$\underbrace{-\dot u + \boldsymbol\sigma\cdot\dot{\boldsymbol\epsilon} + \Theta\,\dot\eta + \boldsymbol e\cdot\dot{\boldsymbol p} + \boldsymbol h\cdot\dot{\boldsymbol m}}_{\text{TEMM conjugate variables}} + \boldsymbol{\mathcal E}\cdot\boldsymbol j - \frac{1}{\Theta}\,\boldsymbol q\cdot\operatorname{grad}\Theta \;\ge\; 0. \qquad (4)$
The reduced Clausius-Duhem inequality can be utilized to develop the thermodynamic state equations corresponding to any free energy with a desired combination of independent variables, as demonstrated in [33]. In this paper, the internal energy based formulation, namely, the free energy
$u \equiv u(\boldsymbol\epsilon, \eta, \boldsymbol p, \boldsymbol m) \qquad (5)$
characterized by the infinitesimal strain $\boldsymbol\epsilon$, entropy $\eta$, polarization $\boldsymbol p$ and magnetization $\boldsymbol m$ as independent variables will be utilized to develop the constitutive formulation.² The corresponding thermodynamic state equations, as demonstrated in [33], include
$\boldsymbol\sigma = \frac{\partial u}{\partial \boldsymbol\epsilon}, \qquad \Theta = \frac{\partial u}{\partial \eta}, \qquad \boldsymbol e = \frac{\partial u}{\partial \boldsymbol p}, \qquad \boldsymbol h = \frac{\partial u}{\partial \boldsymbol m}. \qquad (6)$
3. Methodology
Constitutive model development I: quasistatic TEMM processes
In what follows, starting from the state equations (6), linear TEMM constitutive equations are formulated in the near-equilibrium regime. The functional form of the internal energy is obtained by a Taylor series expansion of $\bar u$ about the equilibrium state, truncated at second order; substituting this expansion into the state equations (6) yields the linear constitutive equations (8)-(11),
where $\bar u = \rho u$ and $\bar\eta = \rho\eta$ represent the internal energy per unit volume and the entropy per unit volume of the system, respectively. The coefficients arising in the linear constitutive equations (8)-(11) are material specific constants corresponding to the different TEMM processes (described in Figure 1). For instance, the coefficient $\left.\left(\partial^{2}\bar u/\partial\epsilon_{ij}\,\partial\epsilon_{kl}\right)\right|_{\mathrm{eq}}$ can be identified as the stiffness matrix or elasticity constant of a material. In Table 1, the nomenclature of all the coefficients is presented. Throughout the subsequent development, internal energy per unit volume and entropy per unit volume are used and the bars are dropped for a simplified presentation.
The resulting free energy function for a fully coupled linear TEMM process is
$u = \tfrac{1}{2} C_{ijkl}\,\epsilon_{ij}\epsilon_{kl} + \tfrac{1}{2}\chi^{E}_{ij}\,p_i p_j + \tfrac{1}{2}\chi^{M}_{ij}\,m_i m_j + \tfrac{1}{2}a\,\eta^{2} + h^{E}_{ijk}\,\epsilon_{ij}p_k + h^{M}_{ijk}\,\epsilon_{ij}m_k + \beta_{ij}\,\epsilon_{ij}\eta + \lambda_{ij}\,p_i m_j + \gamma^{E}_{i}\,p_i\eta + \gamma^{M}_{i}\,m_i\eta. \qquad (12)$
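Because the free energy is quadratic, the state equations (6) automatically return responses that are linear in the state variables: the gradient of $u(x) = \tfrac12 x^{\top}\mathsf H x$ is $\mathsf H x$. A small numerical sketch of this fact (NumPy; the 4×4 coefficient matrix below is an illustrative scalar caricature, not a real material):

```python
import numpy as np

# Illustrative symmetric coefficient matrix for a scalar caricature of (12):
# state vector x = (epsilon, eta, p, m)
H = np.array([[5.0, 0.3, 0.2, 0.1],
              [0.3, 2.0, 0.1, 0.1],
              [0.2, 0.1, 1.0, 0.4],
              [0.1, 0.1, 0.4, 1.5]])

def u(x):
    """Quadratic internal energy, u = x . H . x / 2."""
    return 0.5 * x @ H @ x

def conjugates(x):
    """State equations (6): (sigma, Theta, e, h) = grad u = H x, linear in x."""
    return H @ x

x = np.array([1.0e-3, 0.5, 2.0e-2, 1.0e-2])
# The analytic gradient H @ x matches a central finite difference of u
step = 1e-6
grad_fd = np.array([(u(x + step * np.eye(4)[i]) - u(x - step * np.eye(4)[i]))
                    / (2 * step) for i in range(4)])
assert np.allclose(conjugates(x), grad_fd)
```

The matrix `H` here plays the role of the Hessian that reappears in the stability discussion below.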
The class of materials and coupled processes that can be characterized using this linear framework are described through the Multiphysics Interaction Diagram (MPID) shown in Figure 1. This diagram identifies all the known reversible thermo-electro-magneto-mechanical processes [20,32]. Specifically, the TEMM extensive variables are marked on the corners of the inner quadrilateral and the intensive variables are marked on the outside. The green lines highlight the TEMM processes that couple any two of the four physical effects, whereas the blue lines represent the uncoupled processes. Furthermore, the arrows denote the direction of the processes. For example, piezoelectricity, defined as the accumulation of electric charge in response to an applied stress, is represented in the MPID by the green line that connects the electric polarization $\boldsymbol p$ and the Cauchy stress $\boldsymbol\sigma$. The direction of the arrow signifies the generation of material polarization (effect) in response to an applied mechanical stress (cause). As stated earlier, within the infinitesimal strain regime, the stress and strain tensors are symmetric, i.e., they have only 6 independent components. This allows us to simplify the representation of the stress and strain tensors as well as the corresponding material constants using the Voigt notation, wherein the tensor indices are replaced as shown below:
11 → 1, 22 → 2, 33 → 3, 12, 21 → 4, 23, 32 → 5, 13, 31 → 6.
Using this shorthand notation, the constitutive equations (8)-(11) can be simplified further:
$\sigma_{I} = C_{IJ}\,\epsilon_{J} + h^{E}_{Ij}\,p_{j} + h^{M}_{Ij}\,m_{j} + \beta_{I}\,\eta,$ (13a)
$e_{i} = h^{E}_{Ji}\,\epsilon_{J} + \chi^{E}_{ij}\,p_{j} + \lambda_{ij}\,m_{j} + \gamma^{E}_{i}\,\eta,$ (13b)
$h_{i} = h^{M}_{Ji}\,\epsilon_{J} + \lambda_{ji}\,p_{j} + \chi^{M}_{ij}\,m_{j} + \gamma^{M}_{i}\,\eta,$ (13c)
$\Theta = \beta_{J}\,\epsilon_{J} + \gamma^{E}_{j}\,p_{j} + \gamma^{M}_{j}\,m_{j} + a\,\eta,$ (13d)
where $I, J \in \{1, 2, \dots, 6\}$ and $i, j \in \{1, 2, 3\}$.
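The index contraction above can be encoded directly; a small helper following the paper's ordering (11→1, 22→2, 33→3, 12→4, 23→5, 13→6), returning zero-based positions convenient for array indexing:

```python
# Voigt contraction following the ordering used in the text:
# 11->1, 22->2, 33->3, 12/21->4, 23/32->5, 13/31->6 (returned zero-based)
_VOIGT = {(0, 0): 0, (1, 1): 1, (2, 2): 2,
          (0, 1): 3, (1, 0): 3,
          (1, 2): 4, (2, 1): 4,
          (0, 2): 5, (2, 0): 5}

def voigt(i, j):
    """Map a symmetric tensor index pair (zero-based) to its Voigt slot."""
    return _VOIGT[(i, j)]

# Symmetry of the tensor makes the off-diagonal pairs share one slot
assert voigt(0, 1) == voigt(1, 0) == 3
```

With this map, a fourth-order stiffness tensor `C[i][j][k][l]` collapses to the 6×6 matrix `C_voigt[voigt(i, j)][voigt(k, l)]` used in (13a).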
The linear model presented here characterizes a class of materials called ferroic materials that exhibit spontaneous polarization or magnetization in the presence of external electromagnetic fields. It is noted that the constitutive models developed here have a limited regime of applicability, i.e., within a small perturbation of an equilibrium state, often approximated as a linear, reversible process. Thus, the linearized constitutive models will not be able to predict effects like nonlinearity, irreversibility, dissipation or large deformations. For instance, piezomagnetism, i.e., the magneto-mechanical coupling effect occurring in ferromagnetic materials, is a linear approximation of magnetostriction which is in fact a highly nonlinear and hysteretic (irreversible) effect. In order to predict the complete nonlinear regime of magnetostriction accurately, additional independent variables, known as the internal variables, need to be used to describe the microstructural evolution and the associated losses in a material at lower scales.
Thermodynamic stability
Any spontaneous change in the parameters of a system in stable equilibrium will result in processes that aim to restore the system to its prior equilibrium state [36]. In other words, a thermodynamically stable system cannot grow rapidly from small perturbations about the equilibrium.
In this section, we look into the conditions required for such a stable equilibrium state. A consequence of this requirement is that the internal energy of the material must be a convex function of the extensive variables, which is imposed as follows [6]:
Convexity of $u(\boldsymbol\epsilon, \eta, \boldsymbol p, \boldsymbol m)$ $\Leftrightarrow$ $\mathsf H = \begin{bmatrix} \frac{\partial^{2}u}{\partial\boldsymbol\epsilon\,\partial\boldsymbol\epsilon} & \frac{\partial^{2}u}{\partial\boldsymbol\epsilon\,\partial\eta} & \frac{\partial^{2}u}{\partial\boldsymbol\epsilon\,\partial\boldsymbol p} & \frac{\partial^{2}u}{\partial\boldsymbol\epsilon\,\partial\boldsymbol m} \\ \frac{\partial^{2}u}{\partial\eta\,\partial\boldsymbol\epsilon} & \frac{\partial^{2}u}{\partial\eta^{2}} & \frac{\partial^{2}u}{\partial\eta\,\partial\boldsymbol p} & \frac{\partial^{2}u}{\partial\eta\,\partial\boldsymbol m} \\ \frac{\partial^{2}u}{\partial\boldsymbol p\,\partial\boldsymbol\epsilon} & \frac{\partial^{2}u}{\partial\boldsymbol p\,\partial\eta} & \frac{\partial^{2}u}{\partial\boldsymbol p\,\partial\boldsymbol p} & \frac{\partial^{2}u}{\partial\boldsymbol p\,\partial\boldsymbol m} \\ \frac{\partial^{2}u}{\partial\boldsymbol m\,\partial\boldsymbol\epsilon} & \frac{\partial^{2}u}{\partial\boldsymbol m\,\partial\eta} & \frac{\partial^{2}u}{\partial\boldsymbol m\,\partial\boldsymbol p} & \frac{\partial^{2}u}{\partial\boldsymbol m\,\partial\boldsymbol m} \end{bmatrix}$ is positive definite
for all possible values of the independent variables $\boldsymbol\epsilon$, $\eta$, $\boldsymbol p$ and $\boldsymbol m$ within a small perturbation of the equilibrium state.
$\mathsf H$ is the Hessian matrix corresponding to the linear constitutive equations (8)-(11) and can be expressed as a block matrix consisting of all the coefficient matrices defined in Table 1:
$\mathsf H = \begin{bmatrix} C & \beta & h^{E} & h^{M} \\ \beta^{T} & a & (\gamma^{E})^{T} & (\gamma^{M})^{T} \\ (h^{E})^{T} & \gamma^{E} & \chi^{E} & \lambda \\ (h^{M})^{T} & \gamma^{M} & \lambda^{T} & \chi^{M} \end{bmatrix}. \qquad (14)$
A necessary (but not sufficient) condition for convexity of $u$ is the Legendre-Hadamard condition
$\det(\mathsf H) \ge 0. \qquad (15)$
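The gap between the necessary determinant-type condition and full convexity is easy to see numerically: a matrix can have a non-negative determinant and still fail positive definiteness. A quick illustration (NumPy; a Cholesky factorization succeeds exactly for symmetric positive definite matrices):

```python
import numpy as np

def is_positive_definite(H):
    """Convexity check: Cholesky succeeds iff the symmetric matrix is PD."""
    try:
        np.linalg.cholesky(H)
        return True
    except np.linalg.LinAlgError:
        return False

H_stable = np.array([[2.0, 0.5],
                     [0.5, 1.0]])      # PD: both eigenvalues positive
H_unstable = np.array([[-1.0, 0.0],
                       [0.0, -1.0]])   # det = 1 >= 0, yet negative definite

assert is_positive_definite(H_stable)
assert not is_positive_definite(H_unstable)   # det >= 0 alone would not reject it
assert np.linalg.det(H_unstable) >= 0
```

Full stability therefore requires checking all leading principal minors (or all eigenvalues), which is what generates the pairwise bounds derived next.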
In the following section, we demonstrate the application of these restrictions on a linear multiferroic material with a specified crystallographic symmetry.
Example: multiferroic material with hexagonal symmetry
A multiferroic material exhibits coupling of two or more ferroic orders. In this example, a general multiferroic material with hexagonal crystal symmetry that exhibits fully coupled TEMM behavior is considered. The linear TEMM constitutive equations (8)-(11) reduce to the following form for hexagonal symmetry (6mm crystallographic symmetry and 6m'm' magnetic point symmetry)³:
$\begin{bmatrix}\boldsymbol\sigma \\ \boldsymbol e \\ \boldsymbol h\end{bmatrix} = \mathsf H \begin{bmatrix}\boldsymbol\epsilon \\ \boldsymbol p \\ \boldsymbol m\end{bmatrix}, \qquad (16)$
where $\boldsymbol\sigma = [\sigma_1, \dots, \sigma_6]^{T}$, $\boldsymbol\epsilon = [\epsilon_1, \dots, \epsilon_6]^{T}$, $\boldsymbol e = [e_1, e_2, e_3]^{T}$, $\boldsymbol p = [p_1, p_2, p_3]^{T}$, $\boldsymbol h = [h_1, h_2, h_3]^{T}$, $\boldsymbol m = [m_1, m_2, m_3]^{T}$, and $\mathsf H$ is the $12\times 12$ coefficient (Hessian) matrix carrying the sparsity pattern of hexagonal symmetry, with $C_{66} = \tfrac{1}{2}(C_{11} - C_{12})$. All the material constants follow the same notation as defined in Table 1.
For the linear constitutive model (16), the coefficient matrix coincides with the Hessian $\mathsf H$. Restricting the determinant of $\mathsf H$ to positive values generates bounds on the material constants. This condition must be valid for any combination of the TEMM independent variables. Thus, the stability requirements corresponding to some special cases are presented below:
• For $\boldsymbol\epsilon \ne \boldsymbol 0$, $\boldsymbol p = \boldsymbol m = \boldsymbol 0$, $\eta = 0$, the stability restrictions reduce to
$C_{11} \ge 0, \quad C_{44} \ge 0, \quad -C_{11} \le C_{12} \le C_{11}, \quad 2\,(C_{13})^{2} \le C_{33}\,(C_{11} + C_{12}). \qquad (17)$
• For $\boldsymbol\epsilon \ne \boldsymbol 0$, $\boldsymbol p \ne \boldsymbol 0$, $\boldsymbol m = \boldsymbol 0$, $\eta = 0$, we obtain
$\chi^{E}_{11} \ge 0, \quad \chi^{E}_{33} \ge 0, \quad (h^{E}_{51})^{2} \le \chi^{E}_{11}\,C_{44}, \quad 2\,(h^{E}_{13})^{2} \le \chi^{E}_{33}\,(C_{11} + C_{12}). \qquad (18)$
• Similarly, for $\boldsymbol\epsilon \ne \boldsymbol 0$, $\boldsymbol m \ne \boldsymbol 0$, $\boldsymbol p = \boldsymbol 0$, $\eta = 0$, we obtain
$\chi^{M}_{11} \ge 0, \quad \chi^{M}_{33} \ge 0, \quad (h^{M}_{51})^{2} \le \chi^{M}_{11}\,C_{44}, \quad 2\,(h^{M}_{13})^{2} \le \chi^{M}_{33}\,(C_{11} + C_{12}). \qquad (19)$
• Now considering $\boldsymbol p \ne \boldsymbol 0$, $\boldsymbol m \ne \boldsymbol 0$, $\boldsymbol\epsilon = \boldsymbol 0$, $\eta = 0$, we get
$(\lambda_{11})^{2} \le \chi^{E}_{11}\,\chi^{M}_{11}, \qquad (\lambda_{33})^{2} \le \chi^{E}_{33}\,\chi^{M}_{33}. \qquad (20)$
• Also, for $\eta \ne 0$, $\boldsymbol\epsilon \ne \boldsymbol 0$, $\boldsymbol p = \boldsymbol m = \boldsymbol 0$, we have
$a \ge 0, \qquad (\beta_{3})^{2} \le a\,C_{33}, \qquad (\beta_{1})^{2} \le a\,C_{11}. \qquad (21)$
• Finally, considering $\eta \ne 0$ and $\boldsymbol p \ne \boldsymbol 0$ or $\boldsymbol m \ne \boldsymbol 0$, we obtain
$(\gamma^{E}_{3})^{2} \le a\,\chi^{E}_{33}, \qquad (\gamma^{M}_{3})^{2} \le a\,\chi^{M}_{33}. \qquad (22)$
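Bounds of this pairwise form can be cross-checked numerically: positive definiteness of the 6×6 hexagonal stiffness block fails exactly when a coupling bound of this type is violated. A sketch (NumPy; the standard Voigt ordering is used here for simplicity, and the constants are representative GPa-scale values, not data from the paper):

```python
import numpy as np

def hex_stiffness(C11, C12, C13, C33, C44):
    """6x6 hexagonal stiffness matrix (standard Voigt ordering for simplicity)."""
    C = np.zeros((6, 6))
    C[:3, :3] = [[C11, C12, C13],
                 [C12, C11, C13],
                 [C13, C13, C33]]
    C[3, 3] = C[4, 4] = C44
    C[5, 5] = 0.5 * (C11 - C12)   # the in-plane shear constant
    return C

def stable(C):
    """Elastic stability: all eigenvalues of the stiffness matrix positive."""
    return bool(np.all(np.linalg.eigvalsh(C) > 0))

# Representative constants satisfying 2*C13^2 <= C33*(C11 + C12)
assert stable(hex_stiffness(166, 77, 66, 162, 80))
# Making the coupling constant C13 too large destroys positive definiteness
assert not stable(hex_stiffness(166, 77, 150, 162, 80))
```

This mirrors how (17) arises: the scalar inequalities are just the principal-minor conditions of the block matrix written out in closed form.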
In what follows, we extend the framework to characterize irreversible processes involving heat, charge and spin transport.
Constitutive model development II: transport processes
The framework presented in Section 3.1 assumes a slow and thermodynamically reversible process. In this section, characterization of irreversible transport processes (associated with rates and gradients of physical quantities) will be developed starting from the Clausius-Duhem inequality (4) and utilizing irreversible thermodynamics principles.
Characterization of transport processes
Revisiting the reduced Clausius-Duhem inequality (4):
$-\dot u + \boldsymbol\sigma\cdot\dot{\boldsymbol\epsilon} + \Theta\,\dot\eta + \boldsymbol e\cdot\dot{\boldsymbol p} + \boldsymbol h\cdot\dot{\boldsymbol m} + \boldsymbol{\mathcal E}\cdot\boldsymbol j - \frac{1}{\Theta}\,\boldsymbol q\cdot\operatorname{grad}\Theta \ge 0, \qquad (23)$
where $u$ and $\eta$ now correspond to the internal energy and entropy per unit volume.
While the quasistatic processes are characterized in terms of the TEMM extensive-intensive conjugate variables, the transport processes are characterized in terms of the thermodynamic forces that drive the process, and the resulting thermodynamic flow terms that are generated as a response to the input forces. The free energy formulation described in Section 3.1 is thus extended to include additional independent and dependent variables, i.e.,
wherein the gradient of temperature $\operatorname{grad}\Theta$ and the gradient of electrochemical potential $\operatorname{grad}\mu$ are added as the independent variables (i.e., the thermodynamic forces), whereas the electric current density $\boldsymbol j$ and the heat current density $\boldsymbol q$ are added as the dependent variables (i.e., the thermodynamic flow terms). As is customary, we now apply the chain rule to the free energy function (24)₁:
$\dot u = \frac{\partial u}{\partial\boldsymbol\epsilon}\cdot\dot{\boldsymbol\epsilon} + \frac{\partial u}{\partial\eta}\,\dot\eta + \frac{\partial u}{\partial\boldsymbol p}\cdot\dot{\boldsymbol p} + \frac{\partial u}{\partial\boldsymbol m}\cdot\dot{\boldsymbol m} + \frac{\partial u}{\partial(\operatorname{grad}\Theta)}\cdot\frac{d}{dt}(\operatorname{grad}\Theta) + \frac{\partial u}{\partial(\operatorname{grad}\mu)}\cdot\frac{d}{dt}(\operatorname{grad}\mu). \qquad (25)$
Substituting in the Clausius-Duhem inequality and using the Coleman and Noll approach [10], we obtain
$\left(\boldsymbol\sigma - \frac{\partial u}{\partial\boldsymbol\epsilon}\right)\cdot\dot{\boldsymbol\epsilon} + \left(\Theta - \frac{\partial u}{\partial\eta}\right)\dot\eta + \left(\boldsymbol e - \frac{\partial u}{\partial\boldsymbol p}\right)\cdot\dot{\boldsymbol p} + \left(\boldsymbol h - \frac{\partial u}{\partial\boldsymbol m}\right)\cdot\dot{\boldsymbol m} - \frac{\partial u}{\partial(\operatorname{grad}\Theta)}\cdot\frac{d}{dt}(\operatorname{grad}\Theta) - \frac{\partial u}{\partial(\operatorname{grad}\mu)}\cdot\frac{d}{dt}(\operatorname{grad}\mu) + \boldsymbol{\mathcal E}\cdot\boldsymbol j - \frac{1}{\Theta}\,\boldsymbol q\cdot\operatorname{grad}\Theta \ge 0. \qquad (26)$
Since the rates $\dot{\boldsymbol\epsilon}$, $\dot\eta$, $\dot{\boldsymbol p}$ and $\dot{\boldsymbol m}$ are mutually independent and may be varied arbitrarily, it follows from (26) that the coefficients of the rates must vanish, i.e.,
$\boldsymbol\sigma = \frac{\partial u}{\partial\boldsymbol\epsilon}, \quad \boldsymbol e = \frac{\partial u}{\partial\boldsymbol p}, \quad \boldsymbol h = \frac{\partial u}{\partial\boldsymbol m}, \quad \Theta = \frac{\partial u}{\partial\eta},$ (27a)
$\frac{\partial u}{\partial(\operatorname{grad}\Theta)} = \boldsymbol 0, \qquad \frac{\partial u}{\partial(\operatorname{grad}\mu)} = \boldsymbol 0,$ (27b)
along with the residual inequality,
$\boldsymbol{\mathcal E}\cdot\boldsymbol j - \frac{1}{\Theta}\,\boldsymbol q\cdot\operatorname{grad}\Theta \ge 0. \qquad (28)$
It is evident from (27b) that the free energy is independent of $\operatorname{grad}\Theta$ and $\operatorname{grad}\mu$.
Thus, the irreversible transport processes are characterized using the functional descriptions for $\boldsymbol j$ and $\boldsymbol q$ of the form (24)₂,₃ and subsequently restricted by the residual inequality (28).
Classification of transport processes
Three types of memoryless transport processes are studied in this paper, namely,
3. Spin-induced processes or Spintronics:
The transport processes resulting from a net polarization of the spin-up and the spin-down electrons.⁴ In most materials, electron spins are equally present in both the up ($s = +1/2$) and the down ($s = -1/2$) states. An imbalance between these states can be created by putting a magnetic material in a large magnetic field (Zeeman effect) or by utilizing the exchange energy present in a ferromagnet [14,35]. In what follows, the characterization of transport processes will be modified to incorporate additional variables arising from spin-polarization of the electron population.
Characterization of spin transport
The thermodynamic formulation presented in Section 3.2.1 does not characterize spin transport. In order to extend the formulation to spin-dependent processes, additional independent and dependent variables are required for the complete 4 Spin of an electron is associated with its intrinsic angular momentum, which is different from the angular momentum generated by the electron orbital motion. Experimental evidence suggests that the spin of electron can exist in two possible states, namely, the spin-up and the spin-down states. characterization. The following spin-dependent current and force quantities are thus defined:
• The charge and spin-induced currents defined by
$\boldsymbol j_c = \boldsymbol j_\uparrow + \boldsymbol j_\downarrow \quad\text{and}\quad \boldsymbol j_s = \boldsymbol j_\uparrow - \boldsymbol j_\downarrow, \qquad (29)$
where $\boldsymbol j_\uparrow$ and $\boldsymbol j_\downarrow$ denote the currents generated due to the motion of the spin-up and the spin-down charges, respectively. Also, $\boldsymbol j_c$ and $\boldsymbol j_s$ are defined as the charge current and the spin-polarized current, respectively [5]. In a spin-independent system $\boldsymbol j_\uparrow = \boldsymbol j_\downarrow$, which leads to zero spin-currents.
• The charge and spin chemical potentials are defined as [5]
$\mu_c = \frac{\mu_\uparrow + \mu_\downarrow}{2}, \quad\text{and}\quad \mu_s = \mu_\uparrow - \mu_\downarrow, \qquad (30)$
such that
$\operatorname{grad}\mu_c = \boldsymbol{\mathcal E}_c, \qquad \operatorname{grad}\frac{\mu_s}{2} = \boldsymbol{\mathcal E}_s, \qquad (31)$
wherein $\mu_\uparrow$ and $\mu_\downarrow$ denote the electrochemical potentials induced by the motion of the spin-up and the spin-down electrons, respectively. Also, $\boldsymbol{\mathcal E}_s$ and $\boldsymbol{\mathcal E}_c$ are defined as the spin-induced and charge-induced electric fields, respectively.
Modified dissipation inequality
The modified form of dissipation inequality (28) that incorporates spin transport is now posited as
$\boldsymbol{\mathcal E}_\uparrow\cdot\boldsymbol j_\uparrow + \boldsymbol{\mathcal E}_\downarrow\cdot\boldsymbol j_\downarrow - \frac{1}{\Theta}\,\boldsymbol q\cdot\operatorname{grad}\Theta \;\equiv\; \boldsymbol{\mathcal E}_c\cdot\boldsymbol j_c + \boldsymbol{\mathcal E}_s\cdot\boldsymbol j_s - \frac{1}{\Theta}\,\boldsymbol q\cdot\operatorname{grad}\Theta \;\ge\; 0. \qquad (32)$
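The equality of the two dissipation forms follows from the definitions of the charge/spin combinations; a numerical spot-check (NumPy; assuming, as in (29)-(31), $\mu_c = (\mu_\uparrow + \mu_\downarrow)/2$, $\mu_s = \mu_\uparrow - \mu_\downarrow$, $\boldsymbol{\mathcal E}_c = \operatorname{grad}\mu_c$ and $\boldsymbol{\mathcal E}_s = \operatorname{grad}(\mu_s/2)$):

```python
import numpy as np

rng = np.random.default_rng(0)
E_up, E_dn = rng.normal(size=3), rng.normal(size=3)   # grad mu_up, grad mu_dn
j_up, j_dn = rng.normal(size=3), rng.normal(size=3)

# Charge/spin combinations per (29)-(31)
j_c, j_s = j_up + j_dn, j_up - j_dn
E_c = 0.5 * (E_up + E_dn)          # grad mu_c
E_s = 0.5 * (E_up - E_dn)          # grad (mu_s / 2)

lhs = E_up @ j_up + E_dn @ j_dn    # spin-resolved Joule term
rhs = E_c @ j_c + E_s @ j_s        # charge/spin-channel Joule term
assert np.isclose(lhs, rhs)
```

The identity holds for any field and current values, which is why the heat-conduction term is the only part of (32) unchanged by the change of variables.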
The equivalence of the two forms of the dissipation inequality can be proved using the relationships (29)-(31). The choice of spin-dependent variables and the corresponding second law statement presented in this work are consistent with the spintronic formulations in [5,7,34,40]. The thermodynamic driving force vector $\boldsymbol F$, consisting of the complete set of independent variables associated with thermal, electromagnetic, and spin transport processes, is defined as
$\boldsymbol F = \left\{\operatorname{grad}\Theta,\ \operatorname{grad}\mu_c,\ \operatorname{grad}\mu_s\right\}. \qquad (33)$
Additional independent variables like the external magnetic field $\boldsymbol b$ or the spin-polarization vector $\hat{\boldsymbol\sigma}$ may be required for complete characterization, depending on the physical process. These are usually accommodated within the process constants called the kinetic coefficients. The corresponding dependent variables, i.e., the flow vector, is given by
$\boldsymbol J = \left\{\boldsymbol q,\ \boldsymbol j_c,\ \boldsymbol j_s\right\}. \qquad (34)$
Linear constitutive model
The constitutive equations describing the transport phenomena can now be posited in the general form
$J_i = \sum_{j} L_{ij}(\boldsymbol b, \hat{\boldsymbol\sigma})\,F_j + \frac{1}{2!}\sum_{j}\sum_{k} L_{ijk}(\boldsymbol b, \hat{\boldsymbol\sigma})\,F_j F_k + \dots \qquad (35)$
where $J_i$ and $F_j$ represent the components of the thermodynamic flow and the thermodynamic force vectors described by (34) and (33), respectively. Also, $\hat{\boldsymbol\sigma}$ represents the spin-polarization unit vector and $L_{ij}$, $L_{ijk}$ denote the kinetic coefficients that correlate the fluxes and the driving forces. These coefficients depend on the material properties as well as other external factors like the applied magnetic field or spin-polarization.
Irreversible transport processes with no memory are known as Markovian processes and can be described using only the leading order terms in (35), i.e.,
$J_i = \sum_{j} L_{ij}(\boldsymbol b, \hat{\boldsymbol\sigma})\,F_j. \qquad (36)$
The constitutive equation (36) will now be specialized to linear spin, charge and current transport processes. Since the purely thermoelectric processes occur in the absence of external magnetic fields and spin-polarization, the corresponding kinetic coefficients are assumed to be material specific constants. On the other hand, the kinetic coefficients corresponding to the thermomagnetic and spintronic processes are dependent on external factors like applied magnetic field or spin-polarization.
Equation (36) can be considerably simplified for these processes by noting that only the components of magnetic field and spin-polarization vectors orthogonal to both the flow and force quantities contribute to the transport phenomenon, i.e., they only appear as cross-product terms in the constitutive model. The constitutive equations are thus specialized to the form
$L_{ij} = \underbrace{L^{(1)}_{ij}}_{\text{thermoelectric}} + \underbrace{L^{(2)}_{ijk}\,b_k}_{\text{thermomagnetic/galvanomagnetic}} + \underbrace{L^{(3)}_{ijk}\,\hat\sigma_k}_{\text{spin-induced}} \qquad (37)$
wherein the constants $L^{(1)}_{ij}$ describe the thermoelectric effects, $L^{(2)}_{ijk}$ represent the process constants corresponding to galvanomagnetic and thermomagnetic effects, and $L^{(3)}_{ijk}$ describe the material constants for the spin-induced processes. Substituting (37) into (36) and specializing to the transport phenomena highlighted in Figure 2, the constitutive equations reduce to
$\boldsymbol j_c = \boldsymbol\sigma\cdot\boldsymbol{\mathcal E}_c + \boldsymbol\sigma^{H}\cdot(\boldsymbol b\times\boldsymbol{\mathcal E}_c) + \boldsymbol\sigma^{SH}\cdot(\hat{\boldsymbol\sigma}\times\boldsymbol{\mathcal E}_s) + \boldsymbol S\cdot\operatorname{grad}\Theta + \boldsymbol N\cdot(\boldsymbol b\times\operatorname{grad}\Theta),$ (38)
$\boldsymbol j_s = \boldsymbol\sigma'^{SH}\cdot(\hat{\boldsymbol\sigma}\times\boldsymbol{\mathcal E}_c) + \boldsymbol\sigma^{s}\cdot\boldsymbol{\mathcal E}_s + \boldsymbol S^{s}\cdot\operatorname{grad}\Theta + \boldsymbol N^{s}\cdot(\hat{\boldsymbol\sigma}\times\operatorname{grad}\Theta),$ (39)
$\boldsymbol q = \boldsymbol S'\cdot\boldsymbol{\mathcal E}_c + \boldsymbol N'\cdot(\boldsymbol b\times\boldsymbol{\mathcal E}_c) + \boldsymbol S'^{s}\cdot\boldsymbol{\mathcal E}_s + \boldsymbol N'^{s}\cdot(\hat{\boldsymbol\sigma}\times\boldsymbol{\mathcal E}_s) + \boldsymbol\kappa\cdot\operatorname{grad}\Theta + \boldsymbol R\cdot(\boldsymbol b\times\operatorname{grad}\Theta),$ (40)
wherein all the process constants are described in Table 2. Specializing further to an isotropic material, the constitutive equations (38)-(40) can be presented in the matrix form
$\begin{bmatrix}\boldsymbol j_c \\ \boldsymbol j_s \\ \boldsymbol q\end{bmatrix} = \mathsf L \begin{bmatrix}\boldsymbol{\mathcal E}_c \\ \boldsymbol{\mathcal E}_s \\ (\operatorname{grad}\Theta)/\Theta\end{bmatrix}, \qquad (41)$
where $\mathsf L$ is the $9\times 9$ matrix of kinetic coefficients assembled from the process constants in Table 2.
Results and discussion
Restrictions imposed by the Clausius-Duhem inequality and Onsager equations
In what follows, we derive the restrictions imposed by the dissipation inequality (32) and Onsager Reciprocal Relations on the system of equations (41).
Onsager's reciprocal relations
Onsager reciprocal relations express the equality of certain ratios between the thermodynamic flows and forces in a linear transport process [8]. Specifically, Onsager relations state that the kinetic coefficients corresponding to these processes can be related as
$L_{ij} = L_{ji} \quad\text{for}\quad \boldsymbol b = \boldsymbol 0,\ \hat{\boldsymbol\sigma} = \boldsymbol 0,$ (42)
$L_{ij}(\boldsymbol b, \hat{\boldsymbol\sigma}) = L_{ji}(-\boldsymbol b, -\hat{\boldsymbol\sigma}) \quad\text{for}\quad \boldsymbol b \ne \boldsymbol 0 \ \text{or}\ \hat{\boldsymbol\sigma} \ne \boldsymbol 0.$ (43)
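The field-reversal relation can be exercised numerically: for a kinetic matrix built from a symmetric conduction part plus a Hall-type part linear in $\boldsymbol b$, reversing the field is equivalent to transposition. A sketch (NumPy; the scalar coefficients are illustrative placeholders):

```python
import numpy as np

def L_matrix(b, sigma=2.0, sigma_H=0.3):
    """Kinetic matrix: symmetric conduction plus a Hall part linear in b."""
    b = np.asarray(b, dtype=float)
    # Antisymmetric matrix A such that A @ E = b x E
    A = np.array([[0.0, -b[2], b[1]],
                  [b[2], 0.0, -b[0]],
                  [-b[1], b[0], 0.0]])
    return sigma * np.eye(3) + sigma_H * A

b = np.array([0.1, -0.4, 0.7])
# Onsager-Casimir symmetry: L_ij(b) = L_ji(-b)
assert np.allclose(L_matrix(b), L_matrix(-b).T)
```

The check passes identically because the symmetric part is even in $\boldsymbol b$ and the antisymmetric (gyroscopic) part is odd, which is exactly the structure the reciprocal relations enforce.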
Crystallographic symmetry of the material comes into play when deducing the inverse relations using Onsager equations [2,3]. Thus, in order to simplify the presentation, isotropic crystal symmetry is assumed here. We now apply these relationships to the process constants in (41), working through each pair of direct and inverse effects in turn (equations (44)-(51)); the final comparison, of the spin-induced Seebeck coefficient with its inverse, gives
$S^{s}_{11} = S'^{s}_{11} \;\Rightarrow\; S^{s} = S'^{s}. \qquad (52)$
Thus, the Onsager relationships reduce all the kinetic coefficients for an isotropic material to scalar quantities. The transport equations (38)- (40) are thus reduced to
$\boldsymbol j_c = \sigma\,\boldsymbol{\mathcal E}_c + \sigma^{H}\,\boldsymbol b\times\boldsymbol{\mathcal E}_c + \sigma^{SH}\,\hat{\boldsymbol\sigma}\times\boldsymbol{\mathcal E}_s + S\,\operatorname{grad}\Theta + N\,\boldsymbol b\times\operatorname{grad}\Theta,$ (53)
$\boldsymbol j_s = -\sigma^{SH}\,\hat{\boldsymbol\sigma}\times\boldsymbol{\mathcal E}_c + \sigma^{s}\,\boldsymbol{\mathcal E}_s + S^{s}\,\operatorname{grad}\Theta + N^{s}\,\hat{\boldsymbol\sigma}\times\operatorname{grad}\Theta,$ (54)
$\boldsymbol q = S\,\boldsymbol{\mathcal E}_c + N\,\boldsymbol b\times\boldsymbol{\mathcal E}_c + S^{s}\,\boldsymbol{\mathcal E}_s - N^{s}\,\hat{\boldsymbol\sigma}\times\boldsymbol{\mathcal E}_s + \kappa\,\operatorname{grad}\Theta + R\,\boldsymbol b\times\operatorname{grad}\Theta.$ (55)
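With illustrative scalar coefficients, the reduced laws can be evaluated directly; the sketch below (NumPy) reproduces the classic Hall geometry, where a charge field along $x$ with $\boldsymbol b$ along $z$ deflects the charge current into $y$. All coefficient names and values here are placeholders, not material data from the paper:

```python
import numpy as np

def transport(E_c, E_s, gradT, b, s_hat,
              sigma=1.0, sigma_H=0.1, sigma_SH=0.05, sigma_s=0.8,
              S=0.02, N=0.01, S_s=0.005, N_s=0.002, kappa=-1.5, R=0.01):
    """Charge, spin and heat fluxes from the reduced isotropic transport laws."""
    j_c = (sigma * E_c + sigma_H * np.cross(b, E_c)
           + sigma_SH * np.cross(s_hat, E_s)
           + S * gradT + N * np.cross(b, gradT))
    j_s = (-sigma_SH * np.cross(s_hat, E_c) + sigma_s * E_s
           + S_s * gradT + N_s * np.cross(s_hat, gradT))
    q = (S * E_c + N * np.cross(b, E_c) + S_s * E_s
         - N_s * np.cross(s_hat, E_s)
         + kappa * gradT + R * np.cross(b, gradT))
    return j_c, j_s, q

# Hall geometry: E_c along x, b (and spin polarization) along z
j_c, j_s, q = transport(E_c=np.array([1.0, 0.0, 0.0]),
                        E_s=np.zeros(3), gradT=np.zeros(3),
                        b=np.array([0.0, 0.0, 1.0]),
                        s_hat=np.array([0.0, 0.0, 1.0]))
assert np.allclose(j_c, [1.0, 0.1, 0.0])   # drift along x, Hall deflection into y
```

The same call also produces a transverse spin current (inverse spin-Hall term) and a Peltier-type heat current, showing how a single driving force excites all three flows at once.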
Restrictions imposed by the second law of thermodynamics
The dissipation inequality (32) is now imposed on the reduced constitutive equations (53)-(55). The dissipation inequality is rewritten as
$\Gamma(\boldsymbol F) \equiv \boldsymbol j_c(\boldsymbol F)\cdot\boldsymbol{\mathcal E}_c + \boldsymbol j_s(\boldsymbol F)\cdot\boldsymbol{\mathcal E}_s - \frac{1}{\Theta}\,\boldsymbol q(\boldsymbol F)\cdot\operatorname{grad}\Theta \ge 0, \qquad (56)$
wherein equality occurs only at equilibrium. Thus, at equilibrium the function $\Gamma(\boldsymbol F)$ is minimized with respect to the independent variables $\boldsymbol{\mathcal E}_c$, $\boldsymbol{\mathcal E}_s$ and $(\operatorname{grad}\Theta)/\Theta$, i.e.,
$\left.\frac{\partial\Gamma}{\partial\boldsymbol{\mathcal E}_c}\right|_{\mathrm{eq}} = \left.\frac{\partial\Gamma}{\partial\boldsymbol{\mathcal E}_s}\right|_{\mathrm{eq}} = \left.\frac{\partial\Gamma}{\partial\left((\operatorname{grad}\Theta)/\Theta\right)}\right|_{\mathrm{eq}} = \boldsymbol 0, \qquad (57)$
where $(\cdot)|_{\mathrm{eq}}$ denotes the value of the enclosed quantity at equilibrium. Substituting (53)-(55) into the equilibrium conditions (57) and solving the resulting system of equations, we obtain
$\boldsymbol{\mathcal E}_c|_{\mathrm{eq}} = \boldsymbol{\mathcal E}_s|_{\mathrm{eq}} = (\operatorname{grad}\Theta)|_{\mathrm{eq}} = \boldsymbol 0 \;\Leftrightarrow\; \boldsymbol j_c|_{\mathrm{eq}} = \boldsymbol j_s|_{\mathrm{eq}} = \boldsymbol q|_{\mathrm{eq}} = \boldsymbol 0 \qquad (58)$
at equilibrium. Rewriting the dissipation inequality (56) using the constitutive equations (53)-(55) and rearranging the terms,
$\Gamma(\boldsymbol F) \equiv \boldsymbol j_c\cdot\boldsymbol{\mathcal E}_c + \boldsymbol j_s\cdot\boldsymbol{\mathcal E}_s - \frac{1}{\Theta}\,\boldsymbol q\cdot\operatorname{grad}\Theta \ge 0 \qquad (59)$
$\Rightarrow \left(\sigma\boldsymbol{\mathcal E}_c + \sigma^{H}\boldsymbol b\times\boldsymbol{\mathcal E}_c + \sigma^{SH}\hat{\boldsymbol\sigma}\times\boldsymbol{\mathcal E}_s + S\operatorname{grad}\Theta + N\boldsymbol b\times\operatorname{grad}\Theta\right)\cdot\boldsymbol{\mathcal E}_c + \left(-\sigma^{SH}\hat{\boldsymbol\sigma}\times\boldsymbol{\mathcal E}_c + \sigma^{s}\boldsymbol{\mathcal E}_s + S^{s}\operatorname{grad}\Theta + N^{s}\hat{\boldsymbol\sigma}\times\operatorname{grad}\Theta\right)\cdot\boldsymbol{\mathcal E}_s - \left(S\boldsymbol{\mathcal E}_c + N\boldsymbol b\times\boldsymbol{\mathcal E}_c + S^{s}\boldsymbol{\mathcal E}_s - N^{s}\hat{\boldsymbol\sigma}\times\boldsymbol{\mathcal E}_s + \kappa\operatorname{grad}\Theta + R\boldsymbol b\times\operatorname{grad}\Theta\right)\cdot\frac{\operatorname{grad}\Theta}{\Theta} \ge 0 \qquad (60)$
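One structural feature of the expanded inequality is worth noting: cross-product (Hall-type) terms drop out of the quadratic form whenever the force multiplying them is the same one being dotted, since $(\boldsymbol b\times\boldsymbol v)\cdot\boldsymbol v = 0$. The second law therefore bounds only the coefficients of the square terms, leaving the Hall and Righi-Leduc coefficients unconstrained. A quick check of the vector identity (NumPy, random vectors):

```python
import numpy as np

rng = np.random.default_rng(1)
b, s_hat, E = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)

# A gyroscopic (cross-product) response does no work against its own
# driving force, so such terms contribute nothing to the dissipation:
assert np.isclose(np.cross(b, E) @ E, 0.0)
assert np.isclose(np.cross(s_hat, E) @ E, 0.0)
```

This is the same mechanism by which a magnetic field deflects charges without doing work on them.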
A note on the governing equations for transport phenomena
In order to utilize the thermodynamic framework for transport device modeling, the constitutive equations need to be supplemented with the appropriate governing equations and boundary conditions. For instance, the complete model for heat transport (in the absence of charge or spin transport) includes the constitutive equation for heat conduction, the conservation of energy (1d) and the appropriate material boundary conditions (e.g., insulation). Similarly, for charge transport conservation of charge is invoked, which is a result of the combination of the Gauss's law for electricity (1h) and the Ampère-Maxwell law (1i):
$\frac{\partial \sigma_f}{\partial t} + \operatorname{div}\boldsymbol j = 0,$
which reduces to
$\operatorname{div}\boldsymbol j = 0 \qquad (66)$
in the absence of time varying charge density.
Spin transport: Spin transport differs from charge transport in that spin is a non-conserved quantity in solids, due to the spin-flip mechanism of decay for a spin-polarized electron population. The evolution of the spin voltage is instead described through the phenomenological Valet-Fert equation [34], where $\hat{\boldsymbol m}$ is the magnetic moment vector and $G_r$ is the real part of the spin-mixing conductance at the NM|FM interface [1].
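In its standard one-dimensional form the Valet-Fert description is the spin-diffusion equation $\mu_s'' = \mu_s/\lambda_{sf}^2$ (assumed here as the governing form), whose solutions decay exponentially over the spin-flip diffusion length. A finite-difference sanity check that $\mu_s(x) = e^{-x/\lambda_{sf}}$ satisfies the equation:

```python
import numpy as np

lam = 2.0                       # spin-flip diffusion length (arbitrary units)
x = np.linspace(0.0, 10.0, 2001)
h = x[1] - x[0]
mu_s = np.exp(-x / lam)         # candidate spin-voltage profile, mu_s(0) = 1

# Central-difference second derivative on the interior points
d2 = (mu_s[2:] - 2.0 * mu_s[1:-1] + mu_s[:-2]) / h**2

# Spin diffusion: mu_s'' = mu_s / lam^2, i.e. decay over the length lam
assert np.allclose(d2, mu_s[1:-1] / lam**2, rtol=1e-4)
```

In a device model this decay profile is what gets matched to the vanishing-spin-current and spin-mixing boundary conditions described above.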
Conclusion
A unified thermodynamic framework was developed for the characterization of functional materials exhibiting thermo-electro-magneto-mechanical (TEMM)
behavior. Particularly, this overarching framework combines electrodynamics of continua, classical equilibrium and non-equilibrium thermodynamics principles to enable the characterization of a broad range of linear reversible and irreversible TEMM processes highlighted in Figures 1-2.
In the first part of the paper, starting from the state equations presented in [33], a constitutive modeling framework was developed for a fully coupled reversible (or quasistatic) TEMM system. Stability conditions were further imposed on the resulting internal energy function. The utility of this framework was demonstrated by specializing the TEMM material to a multiferroic with hexagonal crystal symmetry and subsequently deducing the bounds on the material constants. In the second part of the paper, principles of irreversible thermodynamics were used to develop constitutive models for linear charge, heat and spin transport phenomena wherein the dissipation inequality was modified to incorporate the spin-polarized physical quantities. As a result of this modification, in addition to the standard thermoelectric, thermomagnetic and galvanomagnetic transport phenomena, the characterization of spintronic and spin caloritronic effects emerged as a part of this formalism. Onsager's reciprocal relations and second law of thermodynamics were invoked to deduce bounds on the kinetic coefficients.
Applications of this framework are envisioned in design and characterization of functional materials. Also, the restrictions derived in this work, like (17)- (22) and (61)-(65), can be imposed as design constraints while optimizing the material properties.
Declarations
Author contribution statement
Figure 1. Multiphysics interaction diagram demonstrating linear thermo-electro-magneto-mechanical effects [32].
$u = \tilde u(\boldsymbol\epsilon, \eta, \boldsymbol p, \boldsymbol m, \operatorname{grad}\Theta, \operatorname{grad}\mu), \quad \boldsymbol j = \tilde{\boldsymbol j}(\boldsymbol\epsilon, \eta, \boldsymbol p, \boldsymbol m, \operatorname{grad}\Theta, \operatorname{grad}\mu), \quad \boldsymbol q = \tilde{\boldsymbol q}(\boldsymbol\epsilon, \eta, \boldsymbol p, \boldsymbol m, \operatorname{grad}\Theta, \operatorname{grad}\mu), \qquad (24)$
1. Thermoelectric processes: The transport phenomena associated with the flow of electric current and heat current in the absence of external magnetic field. These include physical effects like heat conductivity, electrical conductivity, Seebeck effect and Peltier effect.

2. Thermomagnetic and Galvanomagnetic processes: The transport processes that arise in the presence of an externally applied magnetic field. These effects are a result of the Lorentz forces acting on the moving free electrons, which in turn are generated due to the thermal or electrical potential gradients orthogonal to the applied magnetic field. Examples include Nernst effect, Ettinghausen effect, Hall effect and Righi-Leduc effect (or Thermal Hall effect).
Motion of such a spin-polarized population of electrons can result in a plethora of spin-induced transport phenomena that include the Spin Hall effect, the Inverse Spin Hall effect, Spin-dependent Seebeck effect, Spin-dependent Peltier effect and the Spin Nernst effect. The multiphysics interaction diagrams (MPID) corresponding to all the transport processes described above are highlighted in the irreversible multiphysics interaction diagram, Figure 2. The irreversible MPID is divided into two parts wherein Figure 2(a) describes the purely thermoelectric transport processes (i.e., no magnetic field or spin polarization), whereas Figure 2(b) describes the thermomagnetic, galvanomagnetic and spintronic transport processes (i.e., non-zero external magnetic field or spin-polarization). The flow terms are marked on the inside and the thermodynamic force terms are marked on the outside. Similar to Figure 1, the individual processes are represented by lines connecting the appropriate flow-force quantities and the arrow signifies the direction of the process. Also, the uncoupled processes are represented by the blue lines whereas the coupled processes are marked in green.
Figure 2. Multiphysics interaction diagram demonstrating (a) thermoelectric transport processes in the absence of external magnetic fields, and (b) galvanomagnetic, thermomagnetic, and spin-induced transport processes in the presence of external magnetic fields.
where $\lambda_{sf}$ is the spin-flip diffusion length and $\boldsymbol\mu_s$ is the spin voltage. Boundary conditions: At a free or vacuum interface the spin current vanishes, i.e., $\boldsymbol j_s = \boldsymbol 0$. Across the nonmagnetic (NM) and ferromagnetic (FM) boundary, the spin current is described as
$\boldsymbol j_s = G_r\,\hat{\boldsymbol m}\times\boldsymbol\mu_s\times\hat{\boldsymbol m}, \qquad (68)$
http://dx.doi.org/10.1016/j.heliyon.2016.e00164
2405-8440/© 2016 The Author. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/). Article No. e00164
Table 2. Process constants for thermoelectric, thermomagnetic and spin-induced effects.

Thermoelectric: $\sigma$ - electrical conductivity; $\kappa$ - thermal conductivity; $S$ - Seebeck effect; $S'$ - Peltier effect.
Thermomagnetic: $\sigma^{H}$ - Hall effect; $\sigma'^{H}$ - inverse Hall effect; $N$ - Nernst effect; $N'$ - inverse Nernst effect; $R$ - Righi-Leduc effect.
Spin-induced: $\sigma^{SH}$ - spin-Hall effect; $\sigma'^{SH}$ - inverse spin-Hall effect; $\sigma^{s}$ - spin conductivity; $N^{s}$ - spin-induced Nernst; $N'^{s}$ - inverse spin Nernst; $S^{s}$ - spin-induced Seebeck; $S'^{s}$ - spin-induced Peltier.
Substituting the flow and force vectors (33)-(34) into the Onsager relations (42)-(43), the process constants in (41) are related as follows:

• The Hall coefficients are related to each other using (43), i.e.,
$\sigma^{H}_{12}(\boldsymbol b) = -\sigma^{H}_{21}(-\boldsymbol b) \;\Rightarrow\; \sigma^{H}_{12} = \sigma^{H}_{21}, \qquad (44)$
and similarly $\sigma^{H}_{12} = \sigma^{H}_{31}$. (45)
Thus, $\sigma^{H}_{12} = \sigma^{H}_{21} = \sigma^{H}_{31} = \sigma^{H}$. (46)

• The Thermal Hall (or Righi-Leduc) coefficients can be related in a similar manner, i.e., $R_{12} = R_{21} = R_{31} = R$.

• Onsager relations can also be used to relate inverse processes. For instance, the Seebeck $S$ and Peltier $S'$ coefficients can be related using (42) as
$S_{11} = S'_{11} \;\Rightarrow\; S = S'. \qquad (47)$
Similarly, the Nernst coefficient $N$ is compared to the inverse Nernst coefficient $N'$ as
$N_{12} = N_{21} = N_{31} = N'_{12} = N'_{21} = N'_{31} \;\Rightarrow\; N = N'. \qquad (48)$

• For spin transport processes, we first compare the spin-Hall $\sigma^{SH}$ and inverse spin-Hall $\sigma'^{SH}$ coefficients for an isotropic material, using (42), i.e.,
$-\sigma^{SH}_{12}\,\hat\sigma = \sigma'^{SH}_{21}\,\hat\sigma \;\Rightarrow\; \sigma^{SH}_{12} = -\sigma'^{SH}_{21}. \qquad (49)$
Applying similar arguments to all components, we obtain
$\sigma^{SH}_{12} = \sigma^{SH}_{21} = \sigma^{SH}_{31} = -\sigma'^{SH}_{12} = -\sigma'^{SH}_{21} = -\sigma'^{SH}_{31} \;\Rightarrow\; \sigma^{SH} = -\sigma'^{SH}. \qquad (50)$

• Finally, comparing the spin-induced Seebeck $S^{s}$ and spin Nernst $N^{s}$ effects to their respective inverses $S'^{s}$ and $N'^{s}$, we obtain
$N^{s}_{12} = N^{s}_{21} = N^{s}_{31} = -N'^{s}_{12} = -N'^{s}_{21} = -N'^{s}_{31} \;\Rightarrow\; N^{s} = -N'^{s}, \qquad (51)$
and, for the diagonal components, $S^{s}_{11} = S'^{s}_{11}$, so that $S^{s} = S'^{s}$ (equation (52)).
³ The coefficient matrices corresponding to hexagonal symmetry are derived in [24].
The inequality (60) must be valid for any arbitrary values of $\boldsymbol{\mathcal E}_c$, $\boldsymbol{\mathcal E}_s$ and $(\operatorname{grad}\Theta)/\Theta$. This is utilized to derive the restrictions on kinetic coefficients using the techniques presented in Section 3.1.2. The inequality (32) is investigated for the following special cases:

• Case I: For $\boldsymbol{\mathcal E}_c \ne \boldsymbol 0$, $\boldsymbol{\mathcal E}_s = \boldsymbol 0$ and $\operatorname{grad}\Theta = \boldsymbol 0$ the inequality reduces to $\sigma\,|\boldsymbol{\mathcal E}_c|^{2} \ge 0$, i.e., $\sigma \ge 0$. (61) Restrictions on the spin conductivity and thermal conductivity are derived using similar arguments, i.e., $\sigma^{s} \ge 0$ and $\kappa \le 0$. (62) Since the thermal conductivity is always less than or equal to zero, we define $k \equiv -\kappa \ge 0$. (63)

• Case II: When any two or more of the three independent variables are non-zero, the inequality (60) can be rewritten as a quadratic form in the magnitudes of the driving forces (equation (64)), where $\theta_1$ is the angle between the spin-polarization vector $\hat{\boldsymbol\sigma}$ and the charge electric field $\boldsymbol{\mathcal E}_c$, and $\theta_2$ is the angle between the magnetic field $\boldsymbol b$ and $\boldsymbol{\mathcal E}_c$. Also, the magnitude of the spin polarization vector is $|\hat{\boldsymbol\sigma}| = 1$ and $k$ is defined by (63). In order for the inequality (64) to hold for any values of $\boldsymbol{\mathcal E}_c$, $\boldsymbol{\mathcal E}_s$ and $\operatorname{grad}\Theta/\Theta$, the coefficients of the square terms must always be positive, yielding the bounds (65) along with the conditions derived in (61)-(62).

Acknowledgements

The author would like to thank Professor Joseph Heremans, Professor Stephen Bechtel, Dr. Robert Lowe and Professor Ernesto Ulloa for valuable discussions and input.

Sushma Santapuri: Conceived and designed the analysis; Analyzed and interpreted the data; Contributed analysis tools or data; Wrote the paper.

Funding statement

This work was supported by the Institute for Functional Nanomaterials, University of Puerto Rico and NSF grant (EPS-1002410).

Competing interest statement

The authors declare no conflict of interest.

Additional information

No additional information is available for this paper.
astro-ph/9902261
ASTRONOMY AND ASTROPHYSICS Unusual radio variability in the BL Lac object 0235+164
18 Feb 1999
A. Kraus (1), A. Quirrenbach (2,3), A. P. Lobanov (1), T. P. Krichbaum (1), M. Risse (1), P. Schneider (4), S. J. Qian (1,5), S. J. Wagner (6), A. Witzel (1), J. A. Zensus (1), J. Heidt (6), H. Bock (6), M. Aller (7), and H. Aller (7)

(1) Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn, Germany
(2) Max-Planck-Institut für Extraterrestrische Physik, Giessenbachstr., Postfach 1603, 85740 Garching, Germany
(3) Dept. of Physics, Center for Astrophysics and Space Sciences, University of California San Diego, Mail Code 0424, La Jolla, CA 92093-0424, USA
(4) Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85740 Garching, Germany
(5) Beijing Astronomical Observatory, Chinese Academy of Science, 100080 Beijing, China
(6) Landessternwarte Heidelberg, Königstuhl, 69117 Heidelberg, Germany
(7) Astronomy Department, University of Michigan, 830 Dennison Building, Ann Arbor, MI 48109-1090, USA
A&A manuscript (received; accepted). Thesaurus codes: 03 (11.01.2; 11.02.2 AO 0235+164; 13.18.1)

Key words: Galaxies: active - BL Lacertae objects: individual: AO 0235+164 - Radio continuum: galaxies
We present radio observations at three frequencies and contemporaneous optical monitoring of the peculiar BL Lac object AO 0235+164. During a three-week campaign with the VLA we observed intraday variability in this source and found a distinct peak which can be identified throughout the radio frequencies and tentatively connected to the R-band variations. This event is characterized by unusual properties: its strength increases, and its duration decreases with wavelength, and it peaks earlier at 20 cm than at 3.6 and 6 cm. We discuss several generic models (a "standard" shock-in-jet model, a precessing beam, free-free-absorption in a foreground screen, interstellar scattering, and gravitational microlensing), and explore whether they can account for our observations. Most attempts at explaining the data on 0235+164 require an extremely small source size, which can be reconciled with the 10 12 K inverse Compton limit only when the Doppler factor of the bulk flow is of order 100. However, none of the models is completely satisfactory, and we suggest that the observed variability is due to a superposition of intrinsic and propagation effects.
Introduction
The radio source AO 0235+164 was identified by Spinrad & Smith (1975) as a BL Lac object, due to its almost featureless optical spectrum at the time of the observation and due to its pronounced variability. Long-term flux density monitoring in the radio and optical regimes has revealed strong variations and repeated outbursts with large amplitudes and timescales ranging from years down to weeks (e.g. Chu et al. 1996, O'Dell et al. 1988, Teräsranta et al. 1992, Schramm et al. 1994, Webb et al. 1988, this paper, Fig. 2). Furthermore, intraday variability in the radio (Quirrenbach et al. 1992, Romero et al. 1997), in the IR (Takalo et al. 1992), and in the optical regime (Heidt & Wagner 1996, Rabbette et al. 1996) has also been observed in this object. In the high energy regime, 0235+164 was detected with EGRET on board the CGRO (v. Montigny et al. 1995), showing variability between the individual observations. Madejski et al. (1996) report variability by a factor of 2 in the soft X-rays during a ROSAT PSPC observation in 1993. VLBI observations (e.g. Shen et al. 1997, Chu et al. 1996, Bååth 1984, Jones et al. 1984) reveal a very compact structure and superluminal motion with extremely high apparent velocities (perhaps up to β_app ≃ 30).
Three distinct redshifts have been measured towards 0235+164 (e.g. Cohen et al. 1987). Whereas the emission lines at z = 0.940 have been attributed to the object itself, two additional systems are present in absorption (z = 0.851) and in emission and absorption (z = 0.524). Smith et al. (1977) observed a faint object located about 2" south of 0235+164, and measured narrow emission lines at a redshift of z = 0.524. Continued studies of the field of 0235+164 have revealed a number of faint galaxies, mostly at a redshift of z = 0.524, including an object located 1".3 to the east and 0".5 to the south (e.g. Stickel et al. 1988, Yanny et al. 1989). Recently, Nilsson et al. (1996) investigated 0235+164 during a faint state and found prominent hydrogen lines at the object redshift of z = 0.940. They note that 0235+164 - at least when in a faint state - shows the spectral characteristics of an HPQ. Furthermore, through HST observations of 0235+164 and its surrounding field, Burbidge et al. (1996) discovered about 30 faint objects around 0235+164 and broad QSO absorption lines in the southern companion, indicating that the latter is an AGN-type object. Due to the presence of several foreground objects, gravitational microlensing might play a role in the characteristics of the variability of 0235+164, as was suggested by Abraham et al. (1993).
In this paper, we further investigate the radio variability of 0235+164, and attempt to determine the most likely physical mechanisms behind the observed flux density variations. The plan of the paper is as follows. In Section 2 we describe the observations and the data reduction; subsequently we analyze the lightcurves and point out some of their special properties. In Section 3 we explore different scenarios which could explain the variability: we discuss relativistic shocks, a precessing beam model, free-free absorption, interstellar scattering, and gravitational microlensing. Finally, in Section 4 we conclude with a summary of the failures and successes of these models.
Throughout the paper we assume a cosmological interpretation of the redshift, and use H_0 = 100 h km s^-1 Mpc^-1 and q_0 = 0.5, which gives for the redshift of 0235+164 a luminosity distance of 3280 h^-1 Mpc; 1 mas corresponds to 4.2 h^-1 pc. The radio spectral index α is defined by S_ν ∝ ν^α.
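For q_0 = 0.5 (an Einstein-de Sitter universe) these distances follow from the standard Mattig relation; as a cross-check, a short sketch (the variable names are ours, not from the paper):

```python
import math

# Einstein-de Sitter case (q0 = 0.5), H0 = 100 h km/s/Mpc with h = 1.
H0 = 100.0            # km s^-1 Mpc^-1
c = 299792.458        # km s^-1
z = 0.940

# Mattig relation for Omega = 1: D_L = (2c/H0) * (1 + z - sqrt(1 + z))
D_L = (2.0 * c / H0) * (1.0 + z - math.sqrt(1.0 + z))   # luminosity distance, h^-1 Mpc
D_A = D_L / (1.0 + z) ** 2                              # angular-diameter distance

mas = math.radians(1.0 / 3.6e6)                         # 1 milliarcsecond in radians
scale_pc = D_A * 1e6 * mas                              # h^-1 pc per mas, ~4.2
```

This reproduces the quoted values of 3280 h^-1 Mpc and 4.2 h^-1 pc per mas.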
Observations and data reduction
Radio observations
From Oct 2 to Oct 23, 1992, we observed 0235+164 with a five-antenna subarray of the VLA during and after a reconfiguration of the array from D to A. The aim of these observations was to search for short-timescale variations in several sources. The complete data set will be presented elsewhere (Kraus et al., in preparation). In parallel, optical observations were performed in the R-band (see Section 2.2). Data for 0235+164 were taken at 1.49, 4.86, and 8.44 GHz (λ = 20, 6, 3.6 cm) every two hours around transit, i.e., six times per day. These three sets of receivers have the lowest system temperatures and highest aperture efficiencies of those available at the VLA (see Crane & Napier 1989), and data in these bands are less susceptible to problems with poor tropospheric phase stability than those at higher frequencies. In addition, intraday variability of compact flat-spectrum radio sources appeared most markedly in this frequency range in previous observations (Quirrenbach et al. 1992). During the first week, the antennae included in our subarray were changed repeatedly due to the ongoing reconfiguration; however, an attempt was made to maintain an approximately constant set of baselines. Since 0235+164 and the calibrator sources used are extremely compact (cf. VLA calibrator list), the effect of the ongoing reconfiguration on the measurements is negligible.
After correlation and elimination of erroneous data intervals, we first performed phase calibration. Subsequently, a (one-day) mean amplitude gain factor was derived using non-variable sources such as 1311+678, which have been linked to an absolute flux density scale (Baars et al. 1977, Ott et al. 1994) by frequent observations of 3C 286 and 3C 48. After a second pass of editing spurious sections of the data, the visibilities of each scan were incoherently averaged over time, baselines, polarization, and IFs. Because of the point-like structure of the sources, the mean source visibility is proportional to the flux density. Eventually, systematic elevation- and time-dependent effects in the lightcurves were removed, using polynomial corrections derived from observations of the calibrator sources 0836+710 and 1311+678.
The errors are composed of the statistical errors from the averaging and a contribution from the residual fluctuations of the non-variable sources 3C 286, 1311+678 and 0836+710. The level of these fluctuations was estimated from a running standard deviation of the calibrator measurements over a two-day period. Over the full three-week period, the standard deviations were found to be 0.5, 0.5, and 0.7 % of the mean value at 1.5, 4.9, and 8.4 GHz respectively (with no significant difference between the three non-variable sources).
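The running standard deviation used to gauge the residual calibrator fluctuations can be sketched as follows (a generic illustration with invented array names and a simple top-hat window, not the authors' actual code):

```python
import numpy as np

def running_std(t, s, window=2.0):
    """Standard deviation of the normalized flux density within a sliding
    window of `window` days centred on each measurement epoch."""
    s_norm = s / s.mean()                 # express fluctuations in units of the mean
    out = np.empty_like(s_norm)
    for i, ti in enumerate(t):
        sel = np.abs(t - ti) <= window / 2.0
        out[i] = s_norm[sel].std()
    return out
```

Applied to a non-variable calibrator, the mean of this quantity estimates the fractional measurement scatter (the 0.5-0.7 % quoted above).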
The resulting lightcurves for the three frequencies are displayed in the top panels of Fig. 1. The mean flux densities are 1.57 Jy, 4.05 Jy, and 5.22 Jy for ν = 1.49, 4.86, and 8.44 GHz, respectively. Therefore, 0235+164 had a highly inverted spectrum at the time of the observations, with spectral indices α(1.5-4.9 GHz) = 0.80 and α(4.9-8.4 GHz) = 0.46.
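These two-point spectral indices follow directly from the definition S_ν ∝ ν^α applied to the mean flux densities; a quick numerical check:

```python
import math

def alpha(S1, nu1, S2, nu2):
    """Two-point spectral index for S_nu ∝ nu^alpha."""
    return math.log(S2 / S1) / math.log(nu2 / nu1)

a_low = alpha(1.57, 1.49, 4.05, 4.86)    # 1.5 GHz -> 4.9 GHz, ~0.80
a_high = alpha(4.05, 4.86, 5.22, 8.44)   # 4.9 GHz -> 8.4 GHz, ~0.46
```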
Optical observations
The radio data were supplemented by observations at 650 nm (R-band filters) taken at the following telescopes: 0.7 m Telescope, Landessternwarte Heidelberg, Germany; 1.2 m Telescope, Observatoire de Haute Provence, France; 1.2 m Telescope, Calar Alto, Spain; 2.1 m Telescope, Cananea, Mexico.
Owing to limited observing time per source and weather limitations, the optical data are sampled more sparsely. They cover the first week of the radio observations, leave a gap for ten days, and continue for a total of thirty days, thus ending ten days after the radio monitoring. After the usual CCD reduction process (see e.g. Heidt & Wagner 1996), we performed relative photometry referencing the measurements to three stars within the field. The corresponding lightcurve is plotted in the bottom panel of Fig. 1. The measurement errors are smaller than the symbol size. In addition, we include three data points, taken from the long-term monitoring by Schramm et al. (1994). Those are marked by triangles.
Lightcurve analysis
[Fig. 1 caption, fragment: "... Table 1). For the optical lightcurve we included three data points (marked by triangles) measured by Schramm et al. (1994)."]

As evident from Fig. 1, 0235+164 is variable in all three radio bands and in the optical. A major flare around JD 2448905 can be identified throughout the radio frequencies, and may be tentatively connected with the optical maximum at the beginning of the observation. We note, however, that the exact position of the latter cannot be determined precisely due to the sparse sampling of the optical data. Therefore, we consider the connection between radio and optical variations as possible, but not definitive.
In addition, a second flare towards the end is present in the 21 cm-data, possibly corresponding to the increase at 6 cm, and the sharp peak by a factor of two in the optical. A corresponding feature at 3.6 cm would be expected well inside the observation period but is definitely not present.
The lightcurve at 6 cm shows additional faster variations which have no corresponding features at the other wavelengths. These faster variations (which are not shown by the calibrator sources and therefore are real) could for example be caused by scattering in the ISM as we will discuss later. But we note that the "global" behavior is very similar at all three radio wavelengths.
We focus in this paper on the first flare (JD ≲ 2448910), which is pronounced in all three radio frequencies and could be connected to the optical increase around JD 2448900. We assume that all four observed lightcurves are caused by the same physical event in the source. In order to describe this major feature, we fit a linear background and one Gaussian component to the radio lightcurves (using all data points before JD 2448910) according to
S(t) = a_0 + a_1 · t + a_2 · exp[ -(t - a_3)^2 / a_4^2 ],    (1)
where S(t) is the measured flux density. The parameters and estimated errors are listed in Table 1.

Table 1. Fit to the radio lightcurves by a linear background and one Gaussian component (Eq. 1):

λ [cm]   a_1 [Jy/d]             a_2 [Jy]       a_3 [JD - 2440000]   a_4 [d]
20       (4.58 ± 0.002)·10^-3   0.318 ± 0.01   8904.3 ± 0.1         1.67 ± 0.1
6        (2.18 ± 0.002)·10^-2   0.299 ± 0.01   8905.0 ± 0.1         2.44 ± 0.2
3.6      (1.99 ± 0.003)·10^-2   0.243 ± 0.02   8904.8 ± 0.3         3.65 ± 0.9

The fits reveal three properties which make the radio variability quite unusual. First, the relative amplitude of the flare becomes larger with increasing wavelength. Second, the duration of the event (i.e., the width of the Gaussian, given by the parameter a_4) decreases with increasing wavelength. And third, no monotonic wavelength dependence of the time of the peak can be found. Including the sparse optical data, the sequence is rather: 650 nm → 20 cm → 6/3.6 cm (the peaks at 6 cm and 3.6 cm are simultaneous within the errors). We determined the time lags between the peaks by deriving cross-correlation functions (CCFs) for the radio data sets (again using only data before JD 2448910.0). The CCFs were computed using an interpolation method (e.g. White & Peterson 1994 and references therein). The time lags were then determined by calculating a weighted mean of the CCF (i.e., its center-of-mass point), using all values ≥ 0.5. The resulting time lags are τ [...], with an error of about 0.2 days in each case. The differences between the time lags derived from the Gaussian fits and from the CCFs are within the errors, and are probably due to the fact that the flares are not perfectly Gaussian (this also explains why τ(20 cm → 6 cm) + τ(6 cm → 3.6 cm) ≠ τ(20 cm → 3.6 cm)). The deviation from the Gaussian shape is particularly obvious in the lightcurve at 6 cm. Nevertheless, the CCF analysis confirms the result that the sequence of the flares is unusual, since the 20 cm peak clearly precedes the peaks in the other bands, while the time lag between the 6 cm and 3.6 cm data does not appear to be significant.
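The interpolation CCF and the centroid lag described above can be sketched generically as follows (our own minimal implementation, not the authors' code; with this sign convention a negative lag means the second series peaks later):

```python
import numpy as np

def iccf(t1, f1, t2, f2, lags):
    """Interpolation cross-correlation: shift series 2 by each trial lag,
    interpolate it onto the series-1 time grid, and compute Pearson's r."""
    r = []
    for lag in lags:
        ts = t2 + lag
        sel = (t1 >= ts.min()) & (t1 <= ts.max())   # overlap region only
        g = np.interp(t1[sel], ts, f2)
        r.append(np.corrcoef(f1[sel], g)[0, 1])
    return np.array(r)

def centroid_lag(lags, r, thresh=0.5):
    """Weighted mean ('center of mass') of the CCF over all values >= thresh."""
    sel = r >= thresh
    return np.sum(lags[sel] * r[sel]) / np.sum(r[sel])
```

For two identical Gaussian flares offset by 0.7 d, the centroid comes out near -0.7 d with this convention.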
To check the significance of the time lags between the maxima, we carried out Monte Carlo simulations for the cross-correlations between the radio frequencies. As a starting model for the lightcurves we used the Gaussian fit parameters with the original sampling, and added Gaussian noise by a random process. In a second step, we allowed the sampling pattern to be shifted in time randomly and independently for every single simulation. This procedure confirmed that the peak at 20 cm significantly precedes the other two.
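The spirit of such Monte Carlo tests can be illustrated with a parametric bootstrap on synthetic Gaussian flares: inject a known peak offset, re-measure the peak times in many noisy realizations, and inspect the scatter of the recovered lag (all numbers below are illustrative choices of ours, not the fit values):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 10.0, 1.0 / 6.0)      # six samples per day, as in the campaign

def peak_time(t, s, half_width=2.0):
    """Parabola fit around the sampled maximum; returns the vertex position."""
    sel = np.abs(t - t[np.argmax(s)]) <= half_width
    a, b, _ = np.polyfit(t[sel], s[sel], 2)
    return -b / (2.0 * a)

delta = []
for _ in range(300):
    f1 = 0.3 * np.exp(-((t - 4.5) ** 2) / 2.0 ** 2) + rng.normal(0.0, 0.01, t.size)
    f2 = 0.3 * np.exp(-((t - 5.2) ** 2) / 2.0 ** 2) + rng.normal(0.0, 0.01, t.size)
    delta.append(peak_time(t, f2) - peak_time(t, f1))
delta = np.array(delta)                   # recovered lags; the injected value is 0.7 d
```

A lag is significant when the injected offset is recovered with a scatter well below the measured lag, which is the logic applied to the 20 cm peak above.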
Long-term variability
In Fig. 2 we present radio data at 4.8, 8.0 and 14.5 GHz obtained within the Michigan Monitoring Program (e.g. Aller 1999 and references therein) from January 1991 to November 1995. Our VLA observations (indicated by the arrow in the bottom panel of Fig. 2) coincide with the peak of a large flux density outburst.
Also in the mm- and cm-radio data published by Stevens et al. (1994), a maximum at the time of the VLA observations can be seen, at least at 22, 37, 90 and 150 GHz. The optical data presented by Schramm et al. (1994) also give a clear indication of an outburst at visible wavelengths right before our VLA observations. Three data points which are close to our observations are included in our R-band lightcurve (Fig. 1, marked as triangles). The long-term monitoring implies that our observations took place when 0235+164 was in a very bright state.
Discussion
Problems
On the basis of the collected data, and the analysis in the previous section, we note several properties of the observed variability:

- The sequence of the flares is rather unusual. The 20 cm maximum precedes the maxima at 3.6 cm and 6 cm. The first optical maximum - if connected to the radio events - is about four days earlier.
- The peaks become narrower and stronger with increasing radio wavelength - a unique behavior which is not seen in other sources and is not easily explained in any of the "standard" physical models.
- In case of an intrinsic origin of the variability, one can derive the corresponding source brightness temperature from the duration of the event (e.g. Wagner & Witzel 1995). For λ = 20 cm this yields T_B ≃ 7·10^17 K, far in excess of the inverse Compton limit (Kellermann & Pauliny-Toth 1969).
- Our observations show that variations are present at both radio and optical wavelengths, with very similar timescales. The gaps in the optical lightcurve do not allow us to establish a one-to-one correspondence between individual events in both wavelength ranges, but it seems plausible that they are caused by a common physical mechanism. This is a severe difficulty for models that attribute the variations to strongly wavelength-dependent propagation effects (free-free absorption and interstellar scintillation).
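The brightness-temperature estimate can be reproduced at the order-of-magnitude level from a light-travel-time argument; conventions differ by redshift- and geometry-dependent factors of order unity, so the sketch below (our own) is only illustrative:

```python
import math

c = 2.998e8           # m/s
k_B = 1.381e-23       # J/K
Mpc = 3.086e22        # m

def t_bright(S_jy, nu_hz, tau_s, D_L_mpc, z):
    """Brightness temperature if causality limits the source radius to
    R = c * tau (one common convention; factors of (1+z) are neglected)."""
    S = S_jy * 1e-26                        # Jy -> W m^-2 Hz^-1
    D_A = D_L_mpc * Mpc / (1.0 + z) ** 2    # angular-diameter distance
    omega = math.pi * (c * tau_s / D_A) ** 2
    return S / omega * c ** 2 / (2.0 * k_B * nu_hz ** 2)

# 20 cm flare: amplitude 0.318 Jy, duration a_4 = 1.67 d, D_L = 3280 h^-1 Mpc
TB = t_bright(0.318, 1.49e9, 1.67 * 86400.0, 3280.0, 0.94)   # ~10^17-10^18 K
```

This lands within a factor of a few of the quoted 7·10^17 K, and in any convention far above the ~10^12 K inverse Compton limit.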
In the following, we discuss various models which could describe the variations and take into consideration at least some of the peculiar properties mentioned above.
Relativistic shocks
Propagation of a relativistic shock front through the jet is commonly accepted as one of the possible causes of flux-density variability in AGN (e.g. Blandford & Königl 1979, Marscher & Gear 1985). The time scales usually involved in these models are of the order of weeks to months (corresponding to source sizes in the range of light weeks to months), and are consequently significantly longer than the ones observed here. Following Marscher & Gear (1985), the characteristics of the flux density evolution in the case of a moving shock within the jet can be described as follows. Starting at high frequencies (in the sub-mm regime), the outburst propagates to longer wavelengths while the peak of the synchrotron spectrum follows a very specific path in the S_m–ν_m plane. This path can be described by three power laws S_m ∝ ν_m^k (with different exponents k), distinguishing three different stages of the evolution (see also Marscher 1990). During the synchrotron or the adiabatic expansion stages, which are likely to be found in this wavelength range, the spectral maximum is expected to move from higher to lower frequencies, with the peak flux density being either constant or decreasing with decreasing frequency. Thus, for this "standard model", we expect that the flux density reaches its maximum at higher frequencies first, and that the amplitude of the peak decreases towards lower frequencies. This is contrary to our observations.
In contrast, the canonical behavior for a shock-in-jet model is seen in the long-term lightcurve (Fig. 2): The amplitude increases with increasing frequency, resulting in a strongly inverted spectrum during the outburst and the sequence of the peaks (determined from CCFs) follows the expectations: 14.5 GHz → 8.0 GHz → 4.8 GHz.
It should be noted that the model of Marscher & Gear (1985) is based on three assumptions: (i) the instantaneous injection of relativistic electrons, (ii) the assumption that the variable component is optically thick at the beginning of the process, and (iii) that the jet flow is adiabatic. Therefore, this model describes a transition from large (τ > 1) to small (τ < 1) optical depths for each frequency. It is possible, however, that 0235+164 is initially optically thin at our observing wavelengths, and that the optical depth increases with time, e.g. due to continuous injection of electrons or field magnification, or through compression. In this case, τ may reach unity (and the flux density its maximum) at lower frequencies earlier than at higher ones (e.g. Qian et al. 1996), as observed. A similar behavior was discussed for CTA 26 by Pacholczyk (1977), although on longer time scales. In this model, we expect the maximum at 4.9 GHz to precede the one at 8.4 GHz, or that they are reached at the same time. The latter may be true within the uncertainty. However, the different amplitudes and the durations of the event cannot be explained without additional assumptions.
Alternatively, the observed variations may be explained with a thin sheet of relativistic electrons moving along magnetic field lines with a very high Lorentz factor (γ ≃ 20–25). In this case, a slight change of the viewing angle (e.g. from 0° to 2–3°) may give rise to dramatic variations of the aberration angle and therefore of the observed synchrotron emission (Qian et al., in preparation). Additionally, this should cause significant changes in the linear polarization (strength and position angle), which may be studied in future observations.
Precessing beam model
We now investigate a scenario in which the observed effect is caused by the variable Doppler boosting of an emitting region moving along a curved three-dimensional path. If the observed turnover frequency of such a region falls between 1.5 and 8.4 GHz, peaks in the lightcurves can be displaced relative to each other. The Doppler factor variations required to reproduce the observed time lags may be caused by a perturbed relativistic beam (cf. Roland et al. 1994, see also Camenzind & Krockenberger 1992). The jet is assumed to consist of an ultra-relativistic (γ ≃ 10) beam surrounded by a thermal outflow with speed β ≃ 0.4. The relativistic beam precesses with period P_0 and opening angle Ω_0. The period of the precession may vary from a few seconds to hundreds of days. Roland et al. (1994) show that this model can explain the observed short-term variability of 3C 273, and also makes plausible predictions about the kinematics of superluminal features in parsec-scale jets. We use a similar approach to describe the flux evolution of 0235+164. The trajectory of an emitting component inside the relativistic beam is determined by collimation in the magnetic field of the perturbed beam, and can be described by a helical path. In the coordinate system (x,y,z) with z-axis coinciding with the rotational axis of the helix, the component's position is given by
$$
x = r(z)\cos(\omega t - kz + \phi_0), \qquad
y = r(z)\sin(\omega t - kz + \phi_0), \qquad
z = z(t), \tag{2}
$$
where r(z) describes the amplitude of the helix, and can be approximated as r(z) = r_0 z/(a_0 + z). For a precessing beam, a_0 = r_0/tan Ω_0. The form of the function z(t) should be determined from the evolution of the velocity β_b of the relativistic component. β_b can be conveniently expressed as a function of z, and in the simplest case assumed to be constant. Then, under the condition of instantaneous acceleration of the beam (dz/dt > 0 for z → 0+), the component trajectory is determined by
$$
t(z) = t_0 + \int_{z_0}^{z} \frac{dz'}{\dot z(z')}
     = t_0 + \int_{z_0}^{z} \frac{C_2(z')\, dz'}{k\omega r^2(z') + C_3(z')}\,, \tag{3}
$$
with C_1(z) = [r'(z)]^2 + 1, C_2(z) = C_1(z) + k^2 r^2(z), and C_3(z) = [C_2(z) β_b^2 − ω^2 r^2(z) C_1(z)]^{1/2}. (This follows directly from Equation (7) in Roland et al. (1994).) Generally, both ω and k can also vary. Their variations should then be represented with respect to z, and ω(z) and k(z) be used in Equation (3).
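As a sanity check, Equation (3) can be integrated numerically. The sketch below uses illustrative dimensionless parameters (with c = 1, chosen only so that C_3 stays real, i.e. C_2 β_b² > ω²r²C_1 everywhere along the path), not the values fitted in the text, and verifies that t(z) increases monotonically as the component moves outward:

```python
import math

# Illustrative, dimensionless parameters (c = 1); NOT the values used in the text.
r0, a0 = 0.1, 1.0      # helix amplitude: r(z) = r0*z/(a0 + z)
k, omega = 1.0, 0.5    # wavenumber and angular frequency of the helical perturbation
beta_b = 0.9           # (constant) speed of the relativistic component

def r(z):
    return r0 * z / (a0 + z)

def r_prime(z):
    return r0 * a0 / (a0 + z) ** 2

def integrand(z):
    """dt/dz of Equation (3): C2(z) / (k*omega*r(z)^2 + C3(z))."""
    C1 = r_prime(z) ** 2 + 1.0
    C2 = C1 + (k * r(z)) ** 2
    C3 = math.sqrt(C2 * beta_b ** 2 - (omega * r(z)) ** 2 * C1)
    return C2 / (k * omega * r(z) ** 2 + C3)

def t_of_z(z, z0=1e-3, t0=0.0, n=2000):
    """Trapezoidal integration of Equation (3) from z0 to z."""
    h = (z - z0) / n
    s = 0.5 * (integrand(z0) + integrand(z))
    s += sum(integrand(z0 + i * h) for i in range(1, n))
    return t0 + h * s

# t(z) must grow monotonically with z: the component moves steadily outward.
ts = [t_of_z(z) for z in (0.5, 1.0, 2.0, 4.0)]
print(ts)
```

Since the integrand is strictly positive wherever C_3 is real, the trajectory z(t) is single-valued, which is what makes the lightcurve modeling of the next paragraph well defined.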
We describe the emission of the perturbed beam by a homogeneous synchrotron spectrum with spectral index α = −0.5 and rest-frame turnover frequency ν'_m = 150 MHz. The beam precession period is P_0 = 200 days, with Ω_0 = 5.7°. r(z) is described by r_0 = 0.1 pc and a_0 = 1 pc. The corresponding lightcurves are plotted in Figure 3.
We can see that the model is capable of reproducing the observed time lag between 1.4 GHz and the higher radio frequencies. One can speculate that a more complex physical setting (e.g. spectral evolution of the underlying emission, or inhomogeneity of the emitting plasma) may be required for explaining the apparent discrepancy between the modeled and observed widths of the flare.
Free-free absorption by a foreground medium
Here we consider the effect of free-free absorption in a foreground medium, either in the host of the BL Lac object itself, or in one of the intervening redshift systems. To keep the discussion simple, we neglect the cosmological redshift, i.e., factors (1 + z). The optical depth for free-free absorption of a plasma is approximately given by (see e.g. Lang 1974):
$$
\tau = 8.235\times 10^{-2}\; T^{-1.35}\, \nu^{-2.1} \int N^2\, dl\,, \tag{4}
$$
where T is the electron temperature in K, ν is measured in GHz, and the emission measure ∫N² dl in pc cm⁻⁶. Thus, the absorption of radiation by a foreground medium can be described by e^{−cλ²}, where c is a constant. We assume the following scenario. The source is moving with transverse speed v behind a patchy foreground medium, so that changes in the emission measure towards the source produce variable absorption. To lowest order, we describe gaps between the clouds by
$$
\tau = k^2 \lambda^2 x^2 \tag{5}
$$
(x being the axis perpendicular to the line of sight). Since the observed flux density of a point source is given by S_obs = S e^{−τ}, a moving source seen through such a gap in the foreground medium will show peaked lightcurves with roughly Gaussian shape. The width of the peaks will decrease with increasing wavelength. However, there are two major problems that need to be addressed. Firstly, in this model the maxima for all frequencies are reached at the same time, and secondly, the observed durations (i.e., the widths of the Gaussians fitted according to Equation (1)) do not follow the expected behavior a_4 ∝ λ^{−1}. The time lags between the peaks of the observed lightcurves can be explained for an extended source by a slight shift of the brightness center depending on the frequency.
To deal with the second problem, we assume that the source is not point-like, but has a circular Gaussian shape, with the source size proportional to the wavelength. Thus, the flux density is given by
$$
S(x, t) = S_0 \exp\left[-\frac{(x - vt)^2}{\sigma_\lambda^2}\right], \tag{6}
$$
with σ_λ = σ_0 · λ (i.e., σ_0 is the source size at 1 m wavelength).
Assuming that the angular size of the variable region is much smaller than the antenna beam, the observed flux density is given by the integral
$$
S_{\rm obs}(t) = \int_{-\infty}^{+\infty} S(x, t)\, e^{-\tau(x)}\, dx\,. \tag{7}
$$
Evaluating the above integral gives (since S_obs is the integral of a product of two Gaussians)
$$
S_{\rm obs}(t) = S' \exp\left[-\frac{v^2 t^2}{\sigma_\lambda^2 + 1/(k^2\lambda^2)}\right], \tag{8}
$$
with a new normalization constant S ′ . Therefore, the square of the width of the Gaussian is
$$
a_4^2 = \frac{1}{v^2}\left(\sigma_\lambda^2 + \frac{1}{k^2\lambda^2}\right)
      = \frac{\sigma_0^2}{v^2}\,\lambda^2 + \frac{1}{k^2 v^2}\,\frac{1}{\lambda^2}\,, \tag{9}
$$
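The step from Equation (7) to Equation (8) is just the convolution of two Gaussians. A quick numerical check, with arbitrary dimensionless parameters, confirms that the closed form (including the width σ_λ² + 1/(k²λ²)) matches the integral:

```python
import math

# Arbitrary illustrative values (dimensionless), not fitted to the data.
S0, v, sigma_lam, k_lam = 1.0, 1.0, 0.7, 1.3   # k_lam stands for the product k*lambda

def S_obs_numeric(t, n=4000, xlim=20.0):
    """Equation (7): trapezoidal integral of the source Gaussian times e^{-tau(x)},
    with tau(x) = (k*lambda*x)^2 as in Equation (5)."""
    h = 2 * xlim / n
    total = 0.0
    for i in range(n + 1):
        x = -xlim + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * S0 * math.exp(-(x - v * t) ** 2 / sigma_lam ** 2) \
                   * math.exp(-(k_lam * x) ** 2)
    return total * h

def S_obs_closed(t):
    """Equation (8): Gaussian in t with squared width (sigma_lam^2 + 1/k_lam^2)/v^2."""
    a = 1.0 / sigma_lam ** 2          # exponent coefficient of the source Gaussian
    b = k_lam ** 2                    # exponent coefficient of the absorption Gaussian
    S_prime = S0 * math.sqrt(math.pi / (a + b))
    return S_prime * math.exp(-v ** 2 * t ** 2 / (sigma_lam ** 2 + 1.0 / k_lam ** 2))

for t in (0.0, 0.5, 1.5):
    print(t, S_obs_numeric(t), S_obs_closed(t))  # the two columns agree
```

The explicit normalization S' = S_0 √(π/(a+b)) is the usual Gaussian-product prefactor; it cancels in the width analysis of Equation (9).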
and it should depend on wavelength like A·λ² + B·λ⁻². By adjusting the parameters A and B to fit the measured values of a_4² at the three observing wavelengths, we derive values for σ_0²/v² and k²v². We assume here that the transverse speed is dominated by superluminal motion with v/c = β_app and obtain a source size of 0.0067 β_app pc, corresponding to an angular size of 1.6 β_app µas at λ = 1 m. We note that such a small source diameter, even for β_app = 10, results in a brightness temperature of about 10^15 K, and therefore violates the inverse Compton limit. For higher velocities, as observed in this source (e.g. Chu et al. 1996), the observed size can be larger. However, to reconcile our observational findings with the inverse Compton limit, Doppler factors of the order of 100 are needed.
The second term in Equation (9) gives the size of the gap in the foreground medium, i.e., the distance between the points where τ = 1. Since we assumed τ = k²λ²x², this distance is Δx = 2/(kλ), which is about 2.3×10⁻⁴ β_app pc at λ = 1 m. (Note that this is true only for the case where the absorber is at the redshift of the BL Lac object; the ratio of the angular diameter distances of emitter and screen has to be applied as a correction factor in the case of an intervening absorber.)
We still have to check whether Equation (4) gives a sufficient optical depth for reasonable choices of electron temperature and emission measure. The strongest constraints come from the data at 3.6 cm: to explain the observed amplitude of 0.24 Jy at a source flux of 5 Jy, τ must be at least 0.05 at this wavelength. For an electron temperature of 5000 K, an emission measure of 5×10⁶ pc cm⁻⁶ is needed. The thickness of the absorber cannot be much larger than the transverse scale derived above, which is 0.06 pc at 3.6 cm for β_app = 10; this gives an electron density of 10⁴ cm⁻³. These values are within the range found in Galactic H II regions and planetary nebulae.
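These numbers can be checked directly against Equation (4). Taking ν = 8.44 GHz for the 3.6 cm band (the observing frequency quoted for Fig. 1), T = 5000 K and an emission measure of 5×10⁶ pc cm⁻⁶:

```python
def tau_ff(T_K, nu_GHz, EM_pc_cm6):
    """Free-free optical depth, Equation (4):
    tau = 8.235e-2 * T^-1.35 * nu^-2.1 * EM  (T in K, nu in GHz, EM in pc cm^-6)."""
    return 8.235e-2 * T_K ** -1.35 * nu_GHz ** -2.1 * EM_pc_cm6

# Numbers quoted in the text for the 3.6 cm constraint:
tau = tau_ff(T_K=5000.0, nu_GHz=8.44, EM_pc_cm6=5e6)
print(tau)  # ≈ 0.047, i.e. roughly the 0.05 needed to absorb 0.24 Jy of a 5 Jy source
```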
We conclude that this model can explain the observed shorter duration of the flares at longer wavelengths, and, under the assumption of slightly different spatial locations of the brightness center at the observed wavelengths, also the sequence of the peaks. It predicts that the amplitude of the peaks increases more strongly with wavelength than observed, but it is consistent with the data when an underlying non-variable component is taken into account. However, in the possible case of a connection between the radio and the optical variations this model fails, since the optical radiation would not be affected by free-free absorption.
Interstellar scattering (ISS)
Scattering processes in the interstellar medium are well known to cause flux density variations at radio frequencies (e.g. Rickett 1990). In this section we investigate the possibility that ISS is the cause of the variations seen in our observations. We will follow mainly the considerations and notations of Rickett et al. (1995). For a point-like source, the spatial scale of flux density variations caused by refractive ISS (RISS) is given by (L is the path length through the medium, θ_scat the scattering angle)
$$
r_{0,\lambda} \simeq 0.25\, L\, \theta_{\rm scat}\,, \tag{10}
$$
which is proportional to λ^{2.2} for a Kolmogorov-type medium (Rickett et al. 1984), and therefore also θ_scat ∝ λ^{2.2} (cf. Cordes et al. 1984). The spatial scale of an extended source (assuming a Gaussian shape of width σ_λ) is then given by
$$
r_{\theta,\lambda} = \sqrt{r_{0,\lambda}^2 + (0.5\, L\, \sigma_\lambda)^2}\,. \tag{11}
$$
Then, the scintillation index m_θ,λ and the variability timescale τ_θ,λ for the extended source can be derived from
$$
m_{\theta,\lambda} = m_{0,\lambda}\, \frac{r_{0,\lambda}}{r_{\theta,\lambda}}\,, \qquad
\tau_{\theta,\lambda} = \frac{r_{\theta,\lambda}}{V}\,, \tag{12}
$$
where V is the velocity of the Earth (i.e., the observer) relative to the scattering medium, and m_0,λ is the (wavelength-dependent) scintillation index of a point source.
We assume a source diameter proportional to λ as in the previous section, thus σ_λ = σ_0 λ, and use θ_scat = θ_0 λ^{2.2} (see above). This gives
$$
m_{\theta,\lambda} = m_{0,\lambda}\, \frac{\theta_0 \lambda^{2.2}}{\sqrt{\theta_0^2 \lambda^{4.4} + 4\sigma_0^2 \lambda^2}}
\qquad {\rm and} \qquad
\tau_{\theta,\lambda} = \frac{L}{4V} \sqrt{\theta_0^2 \lambda^{4.4} + 4\sigma_0^2 \lambda^2}\,. \tag{13}
$$
Therefore, it is clear that, independent of the wavelength dependence of m_0,λ, the timescales of the variations become shorter for decreasing wavelengths. This is contrary to our observational findings (see Table 1), implying that this simple model is unlikely to explain the observations. Additionally, interstellar scattering cannot cause variability in the optical regime. Hence, in this case again, a possible connection of the optical and the radio variations would rule out ISS as the only cause of the observed variability. However, owing to the small source diameters involved here, ISS can be present as an additional effect. As an example, we calculate the scintillation index and the timescales with the following assumptions. Following Rickett (1986), the path length in the interstellar medium of our galaxy is L ≃ 500 pc · csc|b| ≃ 788 pc (the source galactic latitude is −40°). With σ_0 = 1.2 mas (which corresponds to T_B ≃ 10^12 K), θ_0 = 60 mas and a typical velocity (of the observer) V = 50 km/s this yields:
λ [cm]    m_θ,λ    τ_θ,λ [d]
20        0.48     12.2
6         0.32     1.28
3.6       0.21     0.64

Therefore, the faster variations which are clearly seen at higher frequencies (especially in the 6 cm lightcurve) may be due to ISS.
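This table follows directly from Equation (13). A short sketch with the stated parameters (L = 788 pc, σ_0 = 1.2 mas, θ_0 = 60 mas, V = 50 km/s) reproduces the timescale column to within rounding; the index column is recovered as well if one additionally assumes a point-source index m_0,λ ≈ 0.5, a value not quoted in the text:

```python
import math

PC_M = 3.086e16                       # parsec in metres
MAS_RAD = math.pi / (180 * 3600e3)    # one milliarcsecond in radians

L = 788 * PC_M        # path length through the scattering medium
sigma0 = 1.2          # source size at lambda = 1 m [mas]
theta0 = 60.0         # scattering angle at lambda = 1 m [mas]
V = 50e3              # observer speed relative to the medium [m/s]
m0 = 0.5              # ASSUMED point-source scintillation index (not stated in the text)

def scint(lam_m):
    """Scintillation index m and timescale tau [days] of Equation (13),
    for wavelength lam_m in metres."""
    root_mas = math.sqrt(theta0**2 * lam_m**4.4 + 4 * sigma0**2 * lam_m**2)
    m = m0 * theta0 * lam_m**2.2 / root_mas
    tau_d = L / (4 * V) * (root_mas * MAS_RAD) / 86400.0
    return m, tau_d

for lam in (0.20, 0.06, 0.036):
    m, tau = scint(lam)
    print(lam, round(m, 2), round(tau, 2))
```

The output matches the tabulated m values, and gives timescales of roughly 12.3, 1.29 and 0.65 d, close to the quoted 12.2, 1.28 and 0.64 d.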
Gravitational Microlensing
Another possible explanation for the origin of the observed variations is gravitational microlensing (ML) by stars in a foreground galaxy. ML effects have been unambiguously observed in the multiple QSO 2237+0305 (Irwin et al. 1989, Houde & Racine 1994), and most likely also in other multiply-imaged QSOs (see Wambsganss 1993, and references therein). The possibility that ML can cause AGN variability has long been predicted (Paczyński 1986, Kayser et al. 1986, Schneider & Weiss 1987), but it remains unclear whether ML causes a substantial fraction of the observed variability in QSOs (e.g. Schneider 1993). 0235+164 has a foreground galaxy (z = 0.524) situated within two arcseconds from the line of sight (Spinrad & Smith 1975), and an additional galaxy 0.″5 away from the source (Stickel et al. 1988, see also additional components reported in Yanny et al. 1989). Additionally, a nearby absorption system was observed at λ = 21 cm by Wolfe, Davis & Briggs (1982). All three objects may host microlenses affecting the emission from 0235+164. Thus, for 0235+164 the probability for ML is expected to be high (Narayan & Schneider 1990), so that sometimes ML events should be present in the lightcurves.
We will show now how ML can modulate the underlying long-term lightcurve and explain faster variations of long-wavelength flux compared to short-wavelength radiation, even when the longer wavelength radiation comes from a larger source (component). Since the available data do not permit a detailed account of possible ML situations, the attention here is restricted to two simple situations: an isolated point-mass lens in the deflector, and a cusp singularity, formed by an ensemble of microlenses (Schneider & Weiss 1987, Wambsganss 1990). In fact, both cases yield similar predicted ML lightcurves. The scales of the source size and the lens mass necessary to yield a flux variation of the observed kind can be estimated for both cases together.
We assume an elliptically shaped emitting feature that moves relativistically in the direction roughly coinciding with the minor axis of the ellipse. Such a component can be formed by relativistic electrons which are locally accelerated by a shock front inside a superluminal jet. The shape of the source component and its orientation is then determined by the flow inside the jet. A Gaussian brightness profile is assumed, with component size ∝ λ (see Fig. 4 for details). We postulate that the emission peaks at all three wavelengths are displaced relative to each other, but that the peaks of shorter-wavelength components are situated within the half intensity contour of longer-wavelength components.
Let βc be the apparent effective transverse velocity of the source component; using the redshift z_s = 0.94 of the object, this corresponds to an angular velocity of v_a = 2βh × 10⁻⁴ mas/day. If a source component moves along a track in the source plane, and the component size is much smaller than the minimum angular separation d_a from the singularity, as indicated in Fig. 4 (solid ellipse), then the timescale of variation is given roughly by the ratio d_a/v_a. On the other hand, if a strongly elongated source component moves so that parts of it cross the line of sight to the singularity (as indicated in Fig. 4, dashed ellipse), then the shortest possible timescale is roughly the ratio between the transverse angular source size a_λ (the minor semi-axis at wavelength λ) and the angular velocity. Now assume that the former case approximates the 3.6 cm source and the latter case approximates the 20 cm source. If Δt_3.6 ∼ 4 days, Δt_6 ∼ 3 days and Δt_20 ∼ 2 days are the variability timescales for the three wavelengths considered, we have
$$
d_a \sim v_a\, \Delta t_{3.6} \sim 8\beta h \times 10^{-4}\ {\rm mas}, \tag{14}
$$
and
$$
r\, a_{20} \sim 4\beta h \times 10^{-4}\ {\rm mas}, \tag{15}
$$
where r ≤ 1 is the axis ratio of the Gaussian source component. In order for the 20 cm source to experience appreciable variations, the closest separation of its center from the singularity cannot be larger than its major semi-axis, i.e., a_20 ≳ d_a, and this inequality can be satisfied for r ≲ 0.5. Since the relative contribution of the moving component to the total flux of the source is unknown, we cannot use the observed lightcurves to determine the magnification of the component emission. The magnification of a point source at separation θ from the point singularity is
$$
\mu_p = \frac{x^2 + 2}{x\sqrt{x^2 + 4}}\,, \tag{16}
$$
where x = θ/θ_0, and θ_0 is the angular scale induced by a point-mass lens of mass M:
$$
\theta_0 = \sqrt{\frac{4GM}{c^2}\, \frac{D_{ds}}{D_s D_d}}\,, \tag{17}
$$
where D_d, D_s, and D_ds denote, respectively, the angular diameter distances to the lens, the source, and from the lens to the source, and m = M/M_⊙ is the lens mass in units of the solar mass. Assuming that the lens is situated at z = 0.525,
$$
\theta_0 = 1.87\sqrt{mh} \times 10^{-3}\ {\rm mas}. \tag{18}
$$
Approximating the point-source magnification by µ_p ≃ 1/x (for x ≪ 1), and assuming as before that the size a_3.6 is much smaller than the closest separation of the source from the point-like singularity, the maximum magnification of this source component becomes
$$
\mu_{3.6,\rm max} \sim \frac{\theta_0}{d_a} \simeq 2.34\, m^{1/2} h^{-1/2} \beta^{-1}. \tag{19}
$$
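Equations (16), (18) and (19) are easy to verify numerically: for x ≪ 1 the point-lens magnification approaches 1/x, and the ratio θ_0/d_a reproduces the coefficient 2.34 for m = h = β = 1:

```python
import math

def mu_point(x):
    """Point-source magnification by a point-mass lens, Equation (16)."""
    return (x**2 + 2) / (x * math.sqrt(x**2 + 4))

# For x << 1 the magnification approaches 1/x:
print(mu_point(0.01))  # ≈ 100

def mu_36_max(m, h, beta):
    """Peak magnification of the 3.6 cm component, Equation (19):
    theta0 = 1.87*sqrt(m*h)*1e-3 mas (Eq. 18) over d_a = 8*beta*h*1e-4 mas (Eq. 14)."""
    theta0 = 1.87 * math.sqrt(m * h) * 1e-3
    d_a = 8 * beta * h * 1e-4
    return theta0 / d_a

print(mu_36_max(1.0, 1.0, 1.0))  # ≈ 2.34, the coefficient in Equation (19)
```

This confirms the order-of-magnitude statement in the text: a solar-mass lens (m = 1) magnifies the smallest component by roughly a factor of 2 for β ≈ 1.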
Hence, a solar-mass star would yield a magnification of the order of 2 for the smallest source component moving at roughly the speed of light, and in general can produce lightcurves similar to the observed variations. In Fig. 5, we plot numerically determined ML lightcurves for a moving source with an axis ratio r = 0.4, minimum separation d_a = 8βh × 10⁻⁴ mas, and semi-major axis of the 20 cm source component of a_20 = βh × 10⁻³ mas. The lens mass is m = 0.4β²h. The source sizes are chosen to be proportional to wavelength, and the brightness peaks of the 6 cm and 20 cm components are displaced relative to the peak of the 3.6 cm component by 0.4 of their corresponding sizes. As can be seen from the modeled lightcurves, the variability timescale of the 20 cm component is considerably shorter than that of the shorter-wavelength components, in accordance with our analytical estimates. In addition, the observed shift of the brightness peak at 20 cm before those at smaller wavelengths can be accounted for in our model by a slight tilt of the direction of motion of the source relative to the minor axis of the surface brightness ellipses, in the sense of the large component crossing the caustic point before the closest approach of the 3.6 cm component to that point. Nevertheless, we note that the small source sizes needed (in the range of µas) will result in brightness temperatures of the order of 10^15 K, i.e., three orders of magnitude above the inverse Compton limit. A more detailed modeling of the lightcurves by a microlensing scenario is not warranted at this stage, given the large number of degrees of freedom. Nevertheless, the above considerations have demonstrated that the basic qualitative features can be understood in the microlensing picture without very specific assumptions.
Conclusions
We have observed the BL Lac object 0235+164 at three radio wavelengths and in the optical R-band and found rapid variations in all frequency bands. One single event that can be identified at all radio wavelengths shows very peculiar properties. The brightness peak is reached first at 20 cm wavelength, and afterwards at 3.6 and 6 cm. The amplitudes of the flares decrease from longer to shorter radio wavelengths, and the timescales become longer. The event in the radio regime might be connected to the bright peak in the optical lightcurve, although this connection remains questionable due to the sparse sampling of the R-band data. In the previous sections, we have discussed some models and to what extent they can explain the observed variations.
While the conventional application of the shock-in-jet model has difficulties in reproducing the observations, the assumption of an increasing optical depth (e.g. due to continuous injection of relativistic electrons) can cause a delay of the maximum at high frequencies with respect to the lower frequencies, and therefore explain at least one of the special features.
Variable Doppler boosting can cause simultaneous short-term variability in all observed wave bands. Fairly pronounced time lags between the different frequencies can be caused by turnover frequency variations in the observed spectrum of a moving source. However, broader peaks are expected at longer wavelengths.
Free-free absorption and interstellar scattering are only capable of explaining radio variations, not variability in the optical regime. Therefore, if the connection between the optical and the radio variability is real, these models are ruled out as the only cause for the variations. Furthermore, the dependence of the timescales on wavelength argues against an explanation of the flare by interstellar scattering. The absorption by a patchy foreground medium can easily describe the shape and the widths of the flares (in the radio) and can -if we assume different locations for the brightness center -also explain the time sequence of the brightness peaks.
Gravitational Microlensing -in combination with a wavelength-dependent source size and a slight displacement of the brightness peak -provides a possible explanation for the observed variations in the radio regime. One would also expect fairly strong variability in the visible range, because of the much smaller source size. Microlensing thus appears to be a viable explanation of the observations, which is also quite attractive because of the known foreground objects.
It is quite remarkable that these attempts to explain the rapid radio variability in 0235+164, different as they are, all imply that the intrinsic source size is very small. To reconcile the observations with the 10^12 K inverse Compton limit, a Doppler factor substantially higher than the "canonical" value of 10 (see e.g. Ghisellini et al. 1993, Zensus 1997) is required. Most scenarios that we have investigated imply D ≃ 100. In this context it is interesting to note that circumstantial evidence for superluminal motion with β_app ∼ 30 has been found in this source (Chu et al. 1996). The variations in 0235+164 are also among the strongest and fastest of all sources in the Michigan monitoring program (e.g. Hughes et al. 1992). This suggests that the distribution of Doppler factors in compact radio sources has a tail extending to D ≃ 100, and that 0235+164, and perhaps more generally the sources showing strong intraday radio variability, belong to this tail. The implied extremely small source size can allow rapid intrinsic variations, and at the same time favor propagation effects. It is therefore plausible that the observed variability is caused by a superposition of both mechanisms.
Fig. 1. Intensity variations of 0235+164 in Oct. 1992 at 1.49 GHz (λ = 20 cm), 4.86 GHz (λ = 6 cm), 8.44 GHz (λ = 3.6 cm), and in the optical R-band (λ = 650 nm) (from top to bottom). Plotted is the flux density (in Jy for the radio data, in magnitudes for the optical data) versus Julian Date. For the radio lightcurves, Gaussians fitted according to Equation (1) are included (see Section 2.3 and
Fig. 2. Long-term monitoring of 0235+164 since 1991 with the UMRAO 26-meter telescope at 4.8 GHz, 8 GHz, and 14.5 GHz (from top to bottom). An arrow indicates the epoch of our VLA observations, close to the peak of the large flux density outburst.
Fig. 3. Model lightcurves at 1.4, 4.9, and 8.4 GHz for P_0 = 200 days, Ω_0 = 5.7°.
Fig. 4. Geometry of the proposed microlensing scenario. Elliptical surface brightness contours are drawn for the 3.6 cm (solid), 6 cm (dotted), and 20 cm (dashed) source components. The caustic point of the microlens is at the origin, denoted by 'X', and the source components are assumed to move along the solid line as indicated. The scales indicated on the axes correspond to the synthetic lightcurves shown in Fig. 5.
Fig. 5. Microlensing lightcurves obtained from the model discussed in the text. The three curves correspond to the 3.6 cm (solid), 6 cm (dotted) and 20 cm (dashed) source components.
The Very Large Array, New Mexico, is operated by Associated Universities, Inc., under contract with the National Science Foundation.
Acknowledgements. We thank I. Pauliny-Toth and E. Ros for critically reading the manuscript, the referee, J.R. Mattox, for valuable comments, C.E. Naundorf and R. Wegner for help with the observations, and B.J. Rickett for stimulating discussions. The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under a cooperative agreement by Associated Universities, Inc. This research has made use of data from the University of Michigan Radio Astronomy Observatory which is supported by the National Science Foundation and by funds from the University of Michigan.
References

Abraham, R.G., Crawford, C.S., Merrifield, M.R., et al., 1993, ApJ 415, 101
Aller, M.F., 1999, In: Takalo, L. & Valtaoja, E. (eds.) BL Lac phenomenon. ASP conference series, in press
Baars, J.W.M., Genzel, R., Pauliny-Toth, I.I.K., Witzel, A., 1977, A&A 61, 99
Bååth, L.B., 1984, VLBI Monitoring of BL Lac Objects. In: Fanti, R., Kellermann, K., Setti, G. (eds.) VLBI and Compact Radio Sources. IAU Symp. 110, Reidel, Dordrecht, 127
Blandford, R.D., Königl, A., 1979, ApJ 232, 34
Burbidge, E.M., Beaver, E.A., Cohen, R.D., et al., 1996, AJ 112, 2533
Camenzind, M., Krockenberger, M., 1992, A&A 255, 59
Chu, H.S., Bååth, L.B., Rantakyrö, F.T., et al., 1996, A&A 307, 15
Cohen, R.D., Smith, H.E., Junkkarinen, V.T., Burbidge, E.M., 1987, ApJ 318, 577
Cordes, J.M., Ananthakrishnan, S., Dennison, B., 1984, Nature 309, 689
Crane, P.C., Napier, P.J., 1989, Sensitivity. In: Perley, R.A., Schwab, F.R., Bridle, A.H. (eds.) Synthesis Imaging in Radio Astronomy, ASP Conf. Ser. 6, 139
Ghisellini, G., Padovani, P., Celotti, A., Maraschi, L., 1993, ApJ 407, 65
Heidt, J., Wagner, S.J., 1996, A&A 305, 42
Houde, M., Racine, R., 1994, AJ 107, 466
Hughes, P.A., Aller, H.D., Aller, M.F., 1992, ApJ 396, 469
Irwin, M.J., Hewett, P.C., Corrigan, R.T., et al., 1989, AJ 98, 1989
Jones, D.L., Unwin, S.C., Bååth, L.B., Davis, M.M., 1984, ApJ 284, 60
Kayser, R., Refsdal, S., Stabell, R., 1986, A&A 166, 36
Kellermann, K.I., Pauliny-Toth, I.I.K., 1969, ApJ 155, L71
Lang, K.R., 1974, Astrophysical Formulae, Springer-Verlag, Berlin, Heidelberg
Madejski, G., Takahashi, T., Tashiro, M., et al., 1996, ApJ 459, 156
Marscher, A.P., 1990, Interpretation of Compact Jet Observations. In: Zensus, J.A., Pearson, T.J. (eds.) Parsec-Scale Radio Jets. Cambridge University Press, Cambridge, 236
Marscher, A.P., Gear, W.K., 1985, ApJ 298, 114
v. Montigny, C., Bertsch, D.L., Chiang, J., et al., 1995, ApJ 440, 525
Narayan, R., Schneider, P., 1990, MNRAS 243, 192
Nilsson, K., Charles, P.A., Pursimo, T., et al., 1996, A&A 314, 754
O'Dell, S.L., Dennison, B., Broderick, J.J., et al., 1988, ApJ 326, 668
Ott, M., Witzel, A., Quirrenbach, A., et al., 1994, A&A 284, 331
Pacholczyk, A.G., 1977, Radio Galaxies, Pergamon Press, Oxford
Paczyński, B., 1986, ApJ 301, 503
Qian, S.J., Li, X.C., Wegner, R., et al., 1996, Chin. Astron. Astroph. 20, 15
Quirrenbach, A., Witzel, A., Krichbaum, T.P., et al., 1992, A&A 258, 279
Rabbette, M., McBreen, B., Steel, S., Smith, N., 1996, A&A 310, 1
Rickett, B.J., 1986, ApJ 307, 564
Rickett, B.J., 1990, ARAA 28, 561
Rickett, B.J., Coles, W.A., Bourgois, G., 1984, A&A 134, 390
Rickett, B.J., Quirrenbach, A., Wegner, R., et al., 1995, A&A 293, 479
Roland, J., Teyssier, R., Roos, N., 1994, A&A 290, 357
Romero, G.E., Combi, J.A., Benagli, P., et al., 1997, A&A 326, 77
Schneider, P., 1993, A&A 279, 1
Schneider, P., Weiss, A., 1987, A&A 171, 49
Schramm, K.-J., Borgeest, U., Kuehl, D., et al., 1994, A&AS 106, 349
Shen, Z.-Q., Wan, T.-S., Moran, J.M., et al., 1997, AJ 114, 1999
Smith, H.E., Burbidge, E.M., Junkkarinen, V.T., 1977, ApJ 218, 611
Spinrad, H., Smith, H.E., 1975, ApJ 201, 275
Stevens, J.A., Litchfield, S.J., Robson, E.I., et al., 1994, ApJ 437, 91
Stickel, M., Fried, J.W., Kühr, H., 1988, A&A 198, L13
Takalo, L.O., Kidger, M.R., de Diego, J.A., et al., 1992, AJ 104, 40
Teräsranta, H., Tornikoski, M., Valtaoja, E., et al., 1992, A&AS 94, 121
Wagner, S.J., Witzel, A., 1995, ARAA 33, 163
Wambsganss, J., 1990, Gravitational Microlensing. In: MPA Report 550, MPA, Garching
Wambsganss, J., 1993, In: Surdej, J., Fraipont-Crao, D., Gosset, E., Refsdal, S., & Remy, M. (eds.) Gravitational Lenses in the Universe, Universite de Liege, 369
. J R Webb, A G Smith, R J Leacock, AJ. 95374Webb, J.R., Smith, A.G., Leacock, R.J. et al., 1988, AJ 95, 374
. R J White, B M Peterson, PASP. 106879White, R.J., Peterson, B.M., 1994, PASP 106, 879
. A M Wolfe, M M Davis, F H Briggs, ApJ. 259495Wolfe, A.M., Davis, M.M., Briggs, F.H., 1982, ApJ 259, 495
. B Yanny, D G York, J S Gallagher, ApJ. 338735Yanny, B., York, D.G., Gallagher, J.S., 1989, ApJ 338, 735
. J A Zensus, ARAA. 35607Zensus, J.A., 1997, ARAA 35, 607
|
[] |
[
"Characterizing and Modeling Citation Dynamics",
"Characterizing and Modeling Citation Dynamics"
] |
[
"Young-Ho Eom \nComplex Networks and Systems Lagrange Laboratory\nInstitute for Scientific Interchange\nTorinoItaly\n",
"Santo Fortunato [email protected] \nComplex Networks and Systems Lagrange Laboratory\nInstitute for Scientific Interchange\nTorinoItaly\n"
] |
[
"Complex Networks and Systems Lagrange Laboratory\nInstitute for Scientific Interchange\nTorinoItaly",
"Complex Networks and Systems Lagrange Laboratory\nInstitute for Scientific Interchange\nTorinoItaly"
] |
[] |
Citation distributions are crucial for the analysis and modeling of the activity of scientists. We investigated bibliometric data of papers published in journals of the American Physical Society, searching for the type of function which best describes the observed citation distributions. We used the goodness of fit with Kolmogorov-Smirnov statistics for three classes of functions: log-normal, simple power law and shifted power law. The shifted power law turns out to be the most reliable hypothesis for all citation networks we derived, which correspond to different time spans. We find that citation dynamics is characterized by bursts, usually occurring within a few years since publication of a paper, and the burst size spans several orders of magnitude. We also investigated the microscopic mechanisms for the evolution of citation networks, by proposing a linear preferential attachment with time dependent initial attractiveness. The model successfully reproduces the empirical citation distributions and accounts for the presence of citation bursts as well.
|
10.1371/journal.pone.0024926
| null | 5,381,357 |
1110.2153
|
4d43ba9878c1d13c508b1c48d2da3fb88039da1e
|
Characterizing and Modeling Citation Dynamics
Published September 22, 2011
Young-Ho Eom
Complex Networks and Systems Lagrange Laboratory
Institute for Scientific Interchange
TorinoItaly
Santo Fortunato [email protected]
Complex Networks and Systems Lagrange Laboratory
Institute for Scientific Interchange
TorinoItaly
Characterizing and Modeling Citation Dynamics
Published September 22, 2011. Received August 1, 2011; Accepted August 19, 2011. Citation: Eom Y-H, Fortunato S (2011) Characterizing and Modeling Citation Dynamics. PLoS ONE 6(9): e24926. Editor: Matjaz Perc, University of Maribor, Slovenia. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing Interests: The authors have declared that no competing interests exist.
Citation distributions are crucial for the analysis and modeling of the activity of scientists. We investigated bibliometric data of papers published in journals of the American Physical Society, searching for the type of function which best describes the observed citation distributions. We used the goodness of fit with Kolmogorov-Smirnov statistics for three classes of functions: log-normal, simple power law and shifted power law. The shifted power law turns out to be the most reliable hypothesis for all citation networks we derived, which correspond to different time spans. We find that citation dynamics is characterized by bursts, usually occurring within a few years since publication of a paper, and the burst size spans several orders of magnitude. We also investigated the microscopic mechanisms for the evolution of citation networks, by proposing a linear preferential attachment with time dependent initial attractiveness. The model successfully reproduces the empirical citation distributions and accounts for the presence of citation bursts as well.
Introduction
Citation networks are compact representations of the relationships between research products, both in the sciences and the humanities [1,2]. As such they are a valuable tool to uncover the dynamics of scientific productivity and have been studied for a long time, since the seminal paper by De Solla Price [3]. In recent years, in particular, the increasing availability of large bibliographic datasets and computational resources has made it possible to build large networks and analyze them to an unprecedented level of accuracy.
In a citation network, each vertex represents a paper and there is a directed edge from paper A to paper B if A includes B in its list of references. Citation networks are then directed, by construction, and acyclic, as papers can only point to older papers, so directed loops cannot be obtained. A large part of the literature on citation networks has focused on the characterization of the probability distribution of the number of citations received by a paper, and on the design of simple microscopic models able to reproduce the distribution. The number of citations of a paper is the number of incoming edges (indegree) k_in of the vertex representing the paper in the citation network. So the probability distribution of citations is just the indegree distribution P(k_in). There is no doubt that citation distributions are broad, as there are papers with many citations together with many poorly cited (including many uncited) papers. However, as of today, the functional shape of citation distributions is still elusive. This is because the question is ill-defined. In fact, one may formulate it in a variety of different contexts, which generally yield different answers. For instance, one may wish to uncover the distribution from the global citation network including all papers published in all journals at all times. Otherwise, one may wish to specialize the query to specific disciplines or years. The role of the discipline considered is important and is liable to affect the final result. For instance, it is well known that papers in Biology are, on average, much more cited than papers in Mathematics. One may argue that this evidence may still be consistent with similar functional distributions for the two disciplines, defined on ranges of different sizes. Also, the role of time is important. It is unlikely that citation distributions maintain the exact same shape regardless of the specific time window considered.
The dynamics of scientific production has changed considerably in recent years. It is well known, for instance, that the number of published papers per year has been increasing exponentially until now [4]. This, together with the much quicker publication times of modern journals, has deeply affected the dynamics of citation accumulation of papers. Moreover, if the dataset under study includes papers published in different years, older papers tend to have more citations than recent ones just because they have been exposed for a longer time, not necessarily because they are better works: the age of a paper is an important factor.
So, the question of which function best describes the citation distributions is meaningless if one does not define precisely the set of publications examined. Redner [5] considered all papers published in Physical Review D up to 1997, along with all articles indexed by Thomson Scientific in the period 1981-1997, and found that the right tail of the distribution, corresponding to highly cited papers, follows a power law with exponent γ ≈ 3, in accord with the conclusions of Price [3]. Laherrère and Sornette [6] studied the top 1120 most cited physicists during the period 1981-1997, whose citation distribution is more compatible with a stretched exponential P(k_in) ∝ exp[−(k_in)^β], with β ≈ 0.3. Tsallis and de Albuquerque [7] analyzed the same datasets used by Redner with an additional one including all papers published up to 1999 in Physical Review E, and found that the Tsallis distribution P(k_in) = P(0)/[1 + (b − 1)λ k_in]^(b/(b−1)), with λ ≈ 0.1 and b ≈ 1.5, consistently fits the whole distribution of citations (not just the tail). More recently Redner performed an analysis over all papers published in the 110 years long history of journals of the American Physical Society (APS) [8], concluding that the log-normal distribution

P(k_in) = [1/(k_in √(2πσ²))] exp{−[ln(k_in) − μ]²/(2σ²)}    (1)

is more adequate than a power law. In other studies distributions of citations have been fitted with various functional forms: power law [9-14], log-normal [12,15,16], Tsallis distribution [17,18], modified Bessel function [19,20] or more complicated distributions [21].
In this paper we want to examine citation networks more in depth. We considered networks including all papers and their mutual citations within several time windows. We have performed a detailed analysis of the shape of the distributions, by computing the goodness of fit with Kolmogorov-Smirnov statistics for three model functions: simple power law, shifted power law and log-normal. Moreover, we have also examined dynamic aspects of the process of citation accumulation, revealing the existence of ''bursts'', i.e. of rapid accretions of the number of citations received by papers. Citation bursts are not compatible with standard models of citation accumulation based on preferential attachment [22], in which the accumulation is smooth and papers may attract many cites long after publication. Therefore, we propose a model in which the citation attractiveness of a paper depends both on the number of cites already collected by the paper and on some intrinsic attractiveness that decays in time. The resulting picture delivers both the citation distribution and the presence of bursts.
Results
The distribution of cites
For our analysis we use the citation database of the American Physical Society (APS), described in Materials and Methods. We get the best fit for the empirical citation distributions from the goodness of fit test with Kolmogorov-Smirnov (KS) statistics [23]. The KS statistic D is the maximum distance between the cumulative distribution function (CDF) of the empirical data and the CDF of the fitted model:
D = max_{k_in ≥ k_min} |S(k_in) − P(k_in)|    (2)
Here S(k_in) is the CDF of the empirical indegrees and P(k_in) is the CDF of the model that best fits the empirical data in the region k_in ≥ k_min. By searching the parameter space, the best hypothetical model is the one with the smallest value of D from the empirical data. To test the statistical significance of the hypothetical model, we cannot use the values of the KS statistics directly though, as the model has been derived from a best fit on the empirical data, rather than being an independent hypothesis. So, following Ref. [23], we generate synthetic datasets from the model corresponding to the best fit curve. For instance, if the best fit is the power law ax^(−b), the datasets are generated from this distribution. Each synthetic dataset will give a value D_synth for the KS statistics between the dataset and the best fit curve. These D_synth-values are compared with D_emp, i.e. the D-value between the original empirical data and the best fit curve, in order to define a p-value. The p-value is the fraction of D_synth-values larger than D_emp. If p is large (close to 1), the model is a plausible fit to the empirical data; if it is close to 0, the hypothetical model is not a plausible fit. We applied this goodness of fit test to three hypothetical model distributions: log-normal, simple power law and shifted power law. The log-normal distribution for the indegree k_in is given by
P(k_in) ∝ [1/(k_in √(2πσ²))] exp{−[log(k_in) − μ]²/(2σ²)},    (3)
the simple power law distribution by
P(k_in) ∝ k_in^(−γ),    (4)
and the shifted power law by
P(k_in) ∝ (k_in + k_0)^(−γ).    (5)
We used 1000 synthetic distributions to calculate the p-value for each empirical distribution. Fig. 1 shows some fits for datasets corresponding to several time windows (see Materials and Methods). The detailed summary of the goodness of fit results is shown in Table 1. The simple power law gives a high p-value only when one considers the right tail of the distribution (usually k_in > 20). The log-normal distribution gives a high p-value for early years (before 1970), but after 1970 the p-value is smaller than 0.2. As shown in Figs. 1a and 1b, there is a clear discrepancy in the tail between the best fit log-normal distribution and the empirical distribution. The shifted power law distribution gives significant p-values (higher than 0.2) for all observation periods. The values of the exponent γ of the shifted power law decrease in time, from 5.6 (1950) to 3.1 (2008).
We conclude that the shifted power law is the best distribution to fit the data.
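The fit-and-test procedure described above can be sketched in Python. This is an illustrative reimplementation of ours (not the authors' code), assuming a discrete shifted power law truncated at an arbitrary k_max:

```python
import numpy as np

def spl_pmf(gamma, k0, k_min, k_max):
    """Normalized shifted power law P(k) ∝ (k + k0)^(-gamma) on k_min..k_max."""
    k = np.arange(k_min, k_max + 1)
    p = (k + k0) ** (-gamma)
    return k, p / p.sum()

def ks_distance(sample, gamma, k0, k_min, k_max):
    """KS statistic D (Eq. 2): maximum gap between empirical and model CDFs."""
    k, p = spl_pmf(gamma, k0, k_min, k_max)
    model_cdf = np.cumsum(p)
    sample = np.sort(np.asarray(sample))
    emp_cdf = np.searchsorted(sample, k, side="right") / len(sample)
    return np.max(np.abs(emp_cdf - model_cdf))

def ks_pvalue(sample, gamma, k0, k_min, k_max, n_synth=200, seed=0):
    """p-value: fraction of synthetic datasets, drawn from the fitted model,
    whose KS distance from the model exceeds the empirical one."""
    rng = np.random.default_rng(seed)
    k, p = spl_pmf(gamma, k0, k_min, k_max)
    d_emp = ks_distance(sample, gamma, k0, k_min, k_max)
    d_synth = np.array([
        ks_distance(rng.choice(k, size=len(sample), p=p), gamma, k0, k_min, k_max)
        for _ in range(n_synth)
    ])
    return float(np.mean(d_synth > d_emp))
```

In practice one would first scan the parameters (γ, k_0, k_min) to minimize D, and only then bootstrap the p-value, exactly as in the procedure described above.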
The distribution of citation bursts
We now turn our attention to citation ''bursts''. While there has been a sizeable activity in the analysis of bursty behavior in human dynamics [24-26], we are not aware of similar investigations for citation dynamics. We compute the relative rate Δk/k = [k_in^i(t + δt) − k_in^i(t)]/k_in^i(t), where k_in^i(t) is the number of citations of paper i at time t. The distributions of Δk/k with t = 1949, 1969, 1989, 2007 and δt = 1 year are shown in Fig. 2a. They are visibly broad, spanning several orders of magnitude. Similar heavy tails of burst size distributions were observed in the dynamics of popularity in Wikipedia and the Web [27]. It is notable that the largest bursts take place in the first years after publication of a paper. This is manifest in Fig. 2b, where we show distributions derived from the same dataset as in Fig. 2a, but including only papers older than 5 (squares) and 10 years (triangles): the tail disappears. In general, more than 90% of large bursts (Δk/k > 3.0) occur within the first 4 years since publication.
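As a concrete illustration (our sketch, not the paper's code), the burst sizes for a single paper can be computed from its cumulative citation counts per year:

```python
import numpy as np

def burst_sizes(cumulative_citations, dt=1):
    """Relative rates Dk/k = [k(t+dt) - k(t)] / k(t) for one paper, given its
    cumulative citation count per year; years where k(t) = 0 are skipped,
    since Dk/k is undefined while the paper is still uncited."""
    k = np.asarray(cumulative_citations, dtype=float)
    prev, nxt = k[:-dt], k[dt:]
    mask = prev > 0
    return (nxt[mask] - prev[mask]) / prev[mask]
```

Pooling these values over all papers with a given reference year t yields the distributions of Fig. 2; a burst with Δk/k > 3.0 means the paper more than quadrupled its citation count within the window.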
Preferential attachment and age-dependent attractiveness
For many growing networks, cumulative advantage [28,29], or preferential attachment [22], has proven to be a reliable mechanism to explain the fat-tailed distributions observed. In the context of citation dynamics, it is reasonable to assume that, if a paper is very cited, it will have an enhanced chance to receive citations in the future with respect to poorly cited papers. This can be formulated by stating that the probability that a paper gets cited is proportional to the number of citations it already received. That was the original idea of Price [30] and led to the development of the first dynamic mechanism for the generation of power law distributions in citation networks. In later refinements of the model, one has introduced an attractiveness for the vertices, indicating their own appeal to attract edges, regardless of degree. In particular, one has introduced the so-called linear preferential attachment [31,32], in which the probability for a vertex to receive a new edge is proportional to the sum of the attractiveness of the vertex and its degree. In this Section we want to check whether this hypothesis holds for our datasets. This issue has been addressed in other works on citation analysis, like Refs. [13,33].
We investigated the dependence of the kernel function Π(k_in) on the indegree k_in [34,35]. The kernel is the rate with which a vertex i with indegree k_in^i acquires new incoming edges. For linear preferential attachment the kernel is
Π(k_in^i) = (k_in^i + A_i) / Σ_j (k_in^j + A_j).    (6)
In Eq. 6 the constant A_i indicates the attractiveness of vertex i. Computing the kernel directly for each indegree class (i.e. for all vertices with equal indegree k_in) is not ideal, as the result may heavily fluctuate for large values of the indegree, due to poor statistics. One then considers the cumulative kernel
Π_w(k_in) ∝ k_in² + ⟨A⟩ k_in.    (7)
In Eq. 7 ⟨A⟩ is the average attractiveness of the vertices. In order to estimate Π_w(k_in), we need to compute the probability that vertices with equal indegree have gotten edges over a given time window, and sum the results over all indegree values from the smallest one to a given k_in. The time window has to be small enough in order to preserve the structure of the network but not too small in order to have enough citation statistics. In Fig. 3 we show the cumulative kernel function Π_w(k_in) as a function of indegree for a time window from 2007 to 2008. The profile of the curve (empty circles) is compatible with linear preferential attachment with an average attractiveness ⟨A⟩ ≈ 7.0 over a large range, although the final part of the tail is missed. Still, the slope of the tail, apart from the final plateau, is close to 2, like in Eq. 7. Our result is consistent with that of Jeong et al. [34], who considered a citation network of papers published in Physical Review Letters in 1988, which are part of our dataset as well. We have repeated this analysis for several datasets, from 1950 until 2008, by keeping a time window of one year in each case. The resulting values of ⟨A⟩ are reported in Table 2, along with the number of vertices and mean degree of the networks. The average value of the attractiveness across all datasets is 7.1. This value is much bigger than the average indegree in the early ages of the network, for example from 1950 to 1960. Hence, in the tradeoff between indegree and attractiveness of Eq. 6, the latter is quite important for old papers. In general, for low indegrees, attractiveness dominates over preferential attachment. As we see in Fig. 3, in fact, for low indegrees there is no power law dependence of the kernel on indegree. Finally we investigated the time dependence of the kernel. As shown in Fig. 3, when we limit the analysis to papers older than 5 years (squares) or 10 years (triangles), the kernel has a pure quadratic dependence on indegree in the initial part, without linear terms, so the attractiveness does not affect the citation dynamics. This means that the attractiveness has a significant influence on the evolution of the citation network only within the first few years after publication of the papers. The presence of vertex attractiveness had been considered by Jeong et al. as well [34].
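The estimate can be sketched as follows (an illustrative snippet of ours, with an assumed input format): for every paper one records its indegree at the start of the window and the citations it collects during the window, then sums the per-class citation rates up to each indegree:

```python
import numpy as np

def cumulative_kernel(indegree_at_t, cites_in_window):
    """Cumulative kernel: sum over k' <= k of the per-paper citation rate of
    the indegree class k' (citations collected in the window divided by the
    number of papers that had indegree k' at the start of the window)."""
    k = np.asarray(indegree_at_t)
    c = np.asarray(cites_in_window, dtype=float)
    ks = np.unique(k)
    rate = np.array([c[k == kk].mean() for kk in ks])  # per-class rate
    return ks, np.cumsum(rate)
```

For a kernel linear in k_in, this cumulative sum grows quadratically in k_in, which is the signature tested in Fig. 3 against Eq. 7.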
The model
We would like to design a microscopic model that reflects the observed properties of our citation networks. Preferential attachment does not account for the fact that the probability to receive citations may depend on time. In the Price model, for instance, papers keep collecting citations independently of their age, while it is empirically observed [33,36,37] that the probability for an article to get cited decreases as the age of the same article increases. In addition, we have seen that citation bursts typically occur in the early life of a paper. Some sophisticated growing network models include the aging of vertices as well [33,37-40]. We propose a mechanism based on linear preferential attachment, where papers have individual values of the attractiveness, and the latter decays in time.
The model works as follows. At each time step t, a new vertex joins the network (i.e., a new paper is published). The new vertex/paper has m references to existing vertices/papers. The probability Π(i→j, t) that the new vertex i points to a target vertex j with indegree k_in^j reads
Π(i→j, t) ∝ k_in^j + A_j(t),    (8)
where A_j(t) is the attractiveness of j at time t. If A_j(t) were constant and equal for all vertices we would recover the standard linear preferential attachment [31,32]. We instead assume that it decays exponentially in time
A(t) = A_0 exp[−(t − t_0)/τ].    (9)
In Eq. 9 A_0 is the initial attractiveness of the vertex, and t_0 is the time at which the vertex first appears in the network; τ is the time scale of the decay, after which the attractiveness lowers considerably and loses importance for citation dynamics. Since citation bursts occur in the initial phase of a paper's life (Fig. 2b), when vertex attractiveness is most relevant, we expect the values of the initial attractiveness to be heterogeneously distributed, to account for the broad distribution of burst sizes (Fig. 2a). We assume a power law distribution for the initial attractiveness (Eq. 10).
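A minimal simulation of this growth rule (Eqs. 8 and 9, with power-law distributed initial attractiveness) can be written as follows. The parameter values and the one-paper-per-step clock are illustrative assumptions of ours, not the calibrated settings used for the figures:

```python
import numpy as np

def simulate_citations(n_papers=2000, m=5, alpha=2.5, tau=50.0,
                       a_min=5.0, a_max=400.0, seed=7):
    """Grow a citation network: the paper published at step t cites m distinct
    older papers j with probability proportional to k_in^j + A_j(t) (Eq. 8),
    where A_j(t) = A0_j * exp(-(t - t0_j)/tau) (Eq. 9) and A0_j is drawn from
    a bounded power law P(A0) ~ A0^(-alpha)."""
    rng = np.random.default_rng(seed)
    # inverse-transform sampling of the bounded power law on [a_min, a_max)
    u = rng.random(n_papers)
    a0 = (a_min ** (1 - alpha)
          + u * (a_max ** (1 - alpha) - a_min ** (1 - alpha))) ** (1 / (1 - alpha))
    k_in = np.zeros(n_papers)
    t0 = np.arange(n_papers, dtype=float)  # one new paper per time step
    for t in range(m, n_papers):  # start once m potential targets exist
        w = k_in[:t] + a0[:t] * np.exp(-(t - t0[:t]) / tau)
        targets = rng.choice(t, size=m, replace=False, p=w / w.sum())
        k_in[targets] += 1
    return k_in
```

Even at this small size the resulting indegree distribution is strongly right-skewed, and papers with a large A_0 tend to collect their citations shortly after publication, before the exponential decay sets in.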
Discussion
We investigated citation dynamics for networks of papers published in journals of the American Physical Society. Kolmogorov-Smirnov statistics along with goodness of fit tests lead us to conclude that the best ansatz for the distribution of citations (from old times up to any given year) is a shifted power law. The latter beats both simple power laws, which are acceptable only on the right tails of the distributions, and log-normals, which are better than simple power laws on the left part of the curve, but are not accurate in the description of the right tails. We have also studied dynamic properties of citation flows, and found that the early life of papers is characterized by citation bursts, as already found for popularity dynamics in Wikipedia and the Web.
The existence of bursts is not compatible with traditional models based on preferential attachment, which are capable of accounting for the skewed citation distributions observed, but in which citation accumulation is smooth. Therefore we have introduced a variant of linear preferential attachment, with two new features: 1) the attractiveness decays exponentially in time, so it plays a role only in the early life of papers, after which it is dominated by the number of citations accumulated; 2) the attractiveness is not the same for all vertices but it follows a heterogeneous (power-law) distribution. We have found that this simple model is accurate in the description of the distributions of citations and burst sizes, across very different scientific ages. Moreover, the model is fairly robust with respect to the choice of the observation window for the bursts.
Materials and Methods
Figure 1. Empirical citation distributions and best fit model distributions obtained through the goodness of fit with Kolmogorov-Smirnov statistics. PL: power law. SPL: shifted power law. LN: log-normal. doi:10.1371/journal.pone.0024926.g001
Figure 2. Distributions of citation burst size. (a) The four curves correspond to 1949, 1969, 1989 and 2007; the observation window is δt = 1 year. (b) Here the reference year is 2007, but the burst statistics is limited to the papers published until 2003 (squares) and 1998 (triangles). For comparison, the full curve comprising all papers (circles, as in (a)) is also shown. doi:10.1371/journal.pone.0024926.g002

P(A_0) ∝ A_0^(−α).    (10)

We performed numerical simulations of the model with parameters obtained from the empirical data. We use α = 2.5, τ = 1 year and A_min ≤ A_0 < 0.002 N(t), where N(t) is the number of papers at time t. The upper bound represents the largest average indegree of our citation networks, expressed in terms of the number of vertices. The value of A_min depends on the value of the attractiveness obtained from the empirical data. We set A_min = 25.0 for most years; for 1950 we set A_min = 14.5, because ⟨A⟩ is smaller than 7.1. The result is however not very sensitive to the minimum and maximum value of A_0. Fig. 4 shows the citation distributions of the empirical data versus the model prediction. The model reproduces the empirical distributions very well at different phases in the evolution of the APS citation network, from the remote 1950 (panel d) until the very recent 2008 (panel a). The distributions of citation burst magnitude Δk/k for the data and the model are shown in Fig. 5a. For a better comparison between data and model we ''evolve'' the network according to the model by starting from the structure of the empirical citation network at the beginning of the time window for the detection of the bursts. We stop the evolution after the observation time δt elapses. In Fig. 5a we consider 1989 and 2007, with a time window of 1 year for the burst detection. The model successfully reproduces the empirical distributions of burst size. In Fig. 5b we consider much longer observation periods for the bursts, of 5 and 10 years.
Still, the model gives an accurate description of the tail of the empirical curve in both cases.
Figure 3. Cumulative kernel function of the citation network from 2007 to 2008. The continuous line is C k_in (k_in + ⟨A⟩) with ⟨A⟩ = 7.0, where C is a constant. The dashed line corresponds to the case without attractiveness (⟨A⟩ = 0.0). doi:10.1371/journal.pone.0024926.g003
The citation database includes all papers published in journals of the American Physical Society (APS) from 1893 to 2008, except papers published in Reviews of Modern Physics. There are 3 992 736 citations among 414 977 papers at the end of 2008. The journals we considered are Physical Review (PR), Physical Review Letters (PRL), Physical Review A (PRA), Physical Review B (PRB), Physical Review C (PRC), Physical Review D (PRD), Physical
Figure 4. Comparison of the citation distributions from the empirical data and our model. For all cases, we used α = 2.5 and τ = 1 year. (a) For 2008, N = 441595, ⟨k⟩ = 9.0. (b) For 1990, N = 180708, ⟨k⟩ = 6.5. (c) For 1970, N = 62382, ⟨k⟩ = 5.6. (d) For 1950, N = 1950, ⟨k⟩ = 3.1. Here N is the number of vertices/papers and ⟨k⟩ the average number of citations/indegree. doi:10.1371/journal.pone.0024926.g004
Figure 5. Comparison of the distributions of citation burst size from the empirical data and the model. The exponent α of the distribution of initial attractiveness is 2.5, as in Fig. 4. (a) The reference years are 1989 (squares) and 2007 (circles); the observation window for the bursts is δt = 1 year in both cases. (b) Here the reference years are 1998 (squares) and 2003 (circles) and the observation windows for the bursts are of 10 and 5 years, respectively. doi:10.1371/journal.pone.0024926.g005
Table 1. Summary of the results of the goodness of fit test with Kolmogorov-Smirnov statistics on the empirical citation distributions for three test functions: log-normal (LN), simple power law (PL) and shifted power law (SPL). The fits are done for indegree larger than k_min, whose values are also reported in the table.

Year | LN p-value | LN k_min | PL p-value | PL k_min | SPL p-value | SPL k_min
1950 | 0.717 | 2  | 0.001 | 6  | 0.832 | 2
1955 | 0.734 | 3  | 0.955 | 16 | 0.777 | 2
1960 | 0.892 | 7  | 0.056 | 9  | 0.49  | 2
1965 | 0.998 | 14 | 0.321 | 19 | 1.00  | 14
1970 | 0.201 | 2  | 0.022 | 12 | 0.943 | 9
1975 | 0.105 | 2  | 0.127 | 17 | 0.958 | 12
1980 | 0.19  | 2  | 0.204 | 20 | 0.49  | 2
1985 | 0.119 | 3  | 0.784 | 39 | 0.728 | 2
1990 | 0.194 | 2  | 0.686 | 46 | 0.909 | 2
1995 | 0.194 | 2  | 0.412 | 39 | 1.00  | 2
2000 | 0.096 | 2  | 0.362 | 43 | 0.797 | 3
2005 | 0.05  | 2  | 0.619 | 47 | 0.989 | 6
2008 | 0.064 | 2  | 0.44  | 47 | 0.99  | 5
Table 2. Statistics of the empirical citation networks: N is the number of vertices in the network; ⟨k⟩ is the average indegree of the network; ⟨A⟩ is the average attractiveness, determined from the tests of linear preferential attachment discussed in the text. doi:10.1371/journal.pone.0024926.t002

Year | N      | ⟨k⟩ | ⟨A⟩
1950 | 15880  | 2.2 | 4.2
1955 | 23350  | 3.1 | 5.3
1960 | 30996  | 3.7 | 6.2
1965 | 42074  | 4.3 | 5.4
1970 | 62382  | 5.1 | 7.2
1975 | 85590  | 5.6 | 7.9
1980 | 108794 | 6.0 | 7.8
1985 | 138206 | 6.2 | 9.0
1990 | 180708 | 6.5 | 7.4
1995 | 238142 | 7.0 | 7.3
2000 | 305570 | 7.7 | 6.8
2005 | 386569 | 8.5 | 6.4
2008 | 441595 | 9.0 | 7.0
PLoS ONE | www.plosone.org
September 2011 | Volume 6 | Issue 9 | e24926
Acknowledgments

We thank the American Physical Society for letting us use their citation database.

Author Contributions

Conceived and designed the experiments: YHE SF. Performed the experiments: YHE. Analyzed the data: YHE. Contributed reagents/materials/analysis tools: YHE. Wrote the paper: YHE SF.
Review E (PRE), Physical Review - Series I (PRI), Physical Review Special Topics - Accelerators and Beams (PRSTAB), and Physical Review Special Topics - Physics Education Research (PRSTPER). From these data, we constructed time-aggregated citation networks from 1950 to a year x, with x = 1951, 1952, ..., 2007, 2008.
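The aggregation step can be sketched as follows (our illustration; the record format of (citing_year, cited_id, cited_year) triples is an assumption, not the actual APS data layout):

```python
from collections import Counter

def indegrees_up_to(citation_records, year):
    """Indegree of every paper in the network aggregating all publications and
    citations up to the given year. A citation counts only if both the citing
    and the cited paper were published by that year."""
    k_in = Counter()
    for citing_year, cited_id, cited_year in citation_records:
        if citing_year <= year and cited_year <= year:
            k_in[cited_id] += 1
    return k_in

# tiny example with made-up records
records = [(1995, "A", 1990), (1996, "A", 1990), (1996, "B", 1994), (2001, "A", 1990)]
```

Applying this for each cutoff year x gives the sequence of time-aggregated networks analyzed in the Results section.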
[1] Garfield E (1955) Citation indexes for science: A new dimension in documentation through association of ideas. Science 122: 108-111.
[2] Garfield E (1979) Citation Indexing. Its Theory and Applications in Science, Technology and Humanities. New York, USA: Wiley.
[3] de Solla Price DJ (1965) Networks of scientific papers. Science 169: 510-515.
[4] de Solla Price DJ (1975) Science since Babylon. London: Yale University Press.
[5] Redner S (1998) How popular is your paper? An empirical study of the citation distribution. Eur Phys J B 4: 131-134.
[6] Laherrère J, Sornette D (1998) Stretched exponential distributions in nature and economy: "fat tails" with characteristic scales. Eur Phys J B 2: 525-539.
[7] Tsallis C, de Albuquerque MP (2000) Are citations of scientific papers a case of nonextensivity? Eur Phys J B 13: 777-780.
[8] Redner S (2005) Citation statistics from 110 years of Physical Review. Physics Today 58: 49-54.
[9] Seglen PO (1992) The skewness of science. Journal of the American Society for Information Science 43: 628-638.
[10] Vazquez A (2001) Statistics of citation networks. E-print arXiv:cond-mat/0105031.
[11] Lehmann S, Lautrup B, Jackson AD (2003) Citation networks in high energy physics. Phys Rev E 68: 026113.
[12] Bommarito MJ, Katz DM (2010) A mathematical approach to the study of the United States Code. Physica A 389: 4195-4200.
[13] Perc M (2010) Zipf's law and log-normal distributions in measures of scientific output across fields and institutions: 40 years of Slovenia's research as an example. J Informetrics 4: 358-364.
[14] Rodríguez-Navarro A (2011) A simple index for the high-citation tail of citation distribution to quantify research performance in countries and institutions. PLoS ONE 6: e20510.
[15] Stringer MJ, Sales-Pardo M, Amaral LAN (2008) Effectiveness of journal ranking schemes as a tool for locating information. PLoS ONE 3: e1683.
[16] Radicchi F, Fortunato S, Castellano C (2008) Universality of citation distributions: towards an objective measure of scientific impact. Proc Natl Acad Sci USA 105: 17268-17272.
[17] Wallace ML, Larivière V, Gingras Y (2009) Modeling a century of citation distributions. J Informetrics 3: 296-303.
Tsallis q-exponential describes the distribution of scientific citations -a new characterization of the impact. A D Anastasiadis, M P De Albuquerque, M P De Albuquerque, D B Mussi, Scientometrics. 83Anastasiadis AD, de Albuquerque MP, de Albuquerque MP, Mussi DB (2010) Tsallis q-exponential describes the distribution of scientific citations -a new characterization of the impact. Scientometrics 83: 205-218.
Two-step competition process leads to quasi powerlawincome distributions -Application to scientific publication and citation distributions. Afj Van Raan, Physica A. 298Van Raan AFJ (2001) Two-step competition process leads to quasi power- lawincome distributions -Application to scientific publication and citation distributions. Physica A 298: 530-536.
Competition amongst scientists for publication status: toward a model of scientific publication and citation distributions. Afj Van Raan, Scientometrics. 51Van Raan AFJ (2001) Competition amongst scientists for publication status: toward a model of scientific publication and citation distributions. Sciento- metrics 51: 347-357.
We cite as we communicate: A communication model for the citation process. V V Kryssanov, E L Kuleshov, F J Rinaldo, H Ogawa, arXiv:cs/0703115E-printsKryssanov VV, Kuleshov EL, Rinaldo FJ, Ogawa H (2007) We cite as we communicate: A communication model for the citation process. E-prints arXiv:cs/ 0703115.
Emergence of scaling in random networks. A L Barabási, R Albert, Science. 286Barabási AL, Albert R (1999) Emergence of scaling in random networks. Science 286: 509-512.
Power-law distributions in empirical data. A Clauset, C R Shalizi, Mej Newman, SIAM Reviews. 51Clauset A, Shalizi CR, Newman MEJ (2009) Power-law distributions in empirical data. SIAM Reviews 51: 661-703.
The origin of bursts and heavy tails in human dynamics. A L Barabási, Nature. 435Barabási AL (2005) The origin of bursts and heavy tails in human dynamics. Nature 435: 207-211.
Exact results for the barabási model of human dynamics. A Vázquez, Phys Rev Lett. 95248701Vázquez A (2005) Exact results for the barabási model of human dynamics. Phys Rev Lett 95: 248701.
Modeling bursts and heavy tails in human dynamics. A Vázquez, J G Oliveira, Z Dezsö, K I Goh, I Kondor, Phys Rev E. 7336127Vázquez A, Oliveira JG, Dezsö Z, Goh KI, Kondor I, et al. (2006) Modeling bursts and heavy tails in human dynamics. Phys Rev E 73: 036127.
Characterizing and modeling the dynamics of online popularity. J Ratkiewicz, S Fortunato, A Flammini, F Menczer, A Vespignani, Phys Rev Lett. 105158701Ratkiewicz J, Fortunato S, Flammini A, Menczer F, Vespignani A (2010) Characterizing and modeling the dynamics of online popularity. Phys Rev Lett 105: 158701.
A mathematical theory of evolution, based on the conclusions of dr. j. c. willis, f.r.s. Yule Gu, Phil Trans R Soc London Series B. 213Yule GU (1925) A mathematical theory of evolution, based on the conclusions of dr. j. c. willis, f.r.s. Phil Trans R Soc London Series B 213: 21-87.
Models of man: social and rational: mathematical essays on rational human behavior in a social setting. H A Simon, WileySimon HA (1957) Models of man: social and rational: mathematical essays on rational human behavior in a social setting. Wiley.
A general theory of bibliometric and other cumulative advantage processes. D D Price, J Am Soc Inform Sci. 27Price DD (1976) A general theory of bibliometric and other cumulative advantage processes. J Am Soc Inform Sci 27: 292-306.
Connectivity of growing random networks. P L Krapivsky, S Redner, F Leyvraz, Phys Rev Lett. 85Krapivsky PL, Redner S, Leyvraz F (2000) Connectivity of growing random networks. Phys Rev Lett 85: 4629-4632.
Structure of growing networks with preferential linking. S N Dorogovtsev, Jff Mendes, A N Samukhin, Phys Rev Lett. 85Dorogovtsev SN, Mendes JFF, Samukhin AN (2000) Structure of growing networks with preferential linking. Phys Rev Lett 85: 4633-4636.
Measuring the preferential attachment mechanism in citation networks. M Wang, G Yu, D Yu, Physica A. 387Wang M, Yu G, Yu D (2008) Measuring the preferential attachment mechanism in citation networks. Physica A 387: 4692-4698.
Measuring preferential attachment in evolving networks. H Jeong, Z Néda, A L Barabási, Europhys Lett. 61567Jeong H, Néda Z, Barabási AL (2003) Measuring preferential attachment in evolving networks. Europhys Lett 61: 567.
Evolution of weighted scale-free networks in empirical data. Y H Eom, C Jeon, H Jeong, B Kahng, Phys Rev E. 7756105Eom YH, Jeon C, Jeong H, Kahng B (2008) Evolution of weighted scale-free networks in empirical data. Phys Rev E 77: 056105.
Phase transitions in an aging network. K B Hajra, P Sen, Phys Rev E. 7056103Hajra KB, Sen P (2004) Phase transitions in an aging network. Phys Rev E 70: 056103.
Aging in citation networks. K B Hajra, P Sen, Physica A. 346Hajra KB, Sen P (2005) Aging in citation networks. Physica A 346: 44-48.
Evolution of networks with aging of sites. S N Dorogovtsev, Jff Mendes, Phys Rev E. 62Dorogovtsev SN, Mendes JFF (2000) Evolution of networks with aging of sites. Phys Rev E 62: 1842-1845.
Scaling properties of scale-free evolving networks: Continuous approach. S N Dorogovtsev, Jff Mendes, Phys Rev E. 6356125Dorogovtsev SN, Mendes JFF (2001) Scaling properties of scale-free evolving networks: Continuous approach. Phys Rev E 63: 056125.
Effect of aging on network structure. H Zhu, X Wang, J Y Zhu, Phys Rev E. 6856121Zhu H, Wang X, Zhu JY (2003) Effect of aging on network structure. Phys Rev E 68: 056121.
Spinor representation of Lorentzian surfaces in R 2,2

Pierre Bayard, Victor Patty

27 Oct 2014. arXiv:1410.7313; doi:10.1016/j.geomphys.2015.05.002.

Keywords: Lorentzian surfaces, Dirac operator, isometric immersions, Weierstrass representation. 2010 Mathematics Subject Classification: 53B25, 53C27, 53C42, 53C50.

Abstract. We prove that an isometric immersion of a simply connected Lorentzian surface in R 2,2 is equivalent to a normalised spinor field solution of a Dirac equation on the surface. Using the quaternions and the Lorentz numbers, we also obtain an explicit representation formula of the immersion in terms of the spinor field. We then apply the representation formula in R 2,2 to give a new spinor representation formula for Lorentzian surfaces in 3-dimensional Minkowski space. Finally, we apply the representation formula to the local description of the flat Lorentzian surfaces with flat normal bundle and regular Gauss map in R 2,2, and show that these surfaces locally depend on four real functions of one real variable, or on one holomorphic function together with two real functions of one real variable, depending on the sign of a natural invariant.
Introduction
Let R^{2,2} be the space R^4 endowed with the metric of signature (2,2)

g = −dx_0^2 + dx_1^2 − dx_2^2 + dx_3^2.
A surface M ⊂ R 2,2 is said to be Lorentzian if the metric g induces on M a Lorentzian metric, i.e. a metric of signature (1,1): the tangent and the normal bundles of M are then equipped with fibre Lorentzian metrics. The purpose of the paper is to study the spinor representation of the Lorentzian surfaces in R 2,2; the main result is the following: if M is an abstract Lorentzian surface, E is a bundle of rank 2 on M, with a Lorentzian fibre metric and a compatible connection, and H ∈ Γ(E) is a section of E, then an isometric immersion of M into R 2,2, with normal bundle E and mean curvature vector H, is equivalent to a normalised section ϕ ∈ Γ(Σ), solution of a Dirac equation Dϕ = H · ϕ on the surface, where Σ = ΣE ⊗ ΣM is the spinor bundle of M twisted by the spinor bundle of E and D is a natural Dirac operator acting on Σ (we assume that spin structures are given on TM and E). We moreover define a natural closed 1-form ξ in terms of ϕ, with values in R 2,2, such that F := ∫ ξ is the immersion.

As a first application of this representation, we derive an easy proof of the fundamental theorem of the theory of Lorentzian surfaces immersed in R 2,2: a symmetric bilinear map B : TM × TM → E is the second fundamental form of an immersion of M into R 2,2 if and only if it satisfies the equations of Gauss, Codazzi and Ricci. We then deduce from the general representation in R 2,2 spinor representations for Lorentzian surfaces in the 3-dimensional Minkowski spaces R 1,2 and R 2,1, and also obtain new explicit representation formulas; the representations appear to be simpler than the representations obtained before by M.-A. Lawn [10, 11] and by M.-A. Lawn and J. Roth [12], since only one spinor field is involved in the formulas.
Our last application concerns the flat Lorentzian surfaces with flat normal bundle and regular Gauss map in R 2,2: the general spinor representation formula makes it possible to study their local structure; they locally depend on four real functions of one real variable if a natural invariant ∆ is positive, and on one holomorphic function together with two real functions of one real variable if ∆ is negative. We note that a spinor representation for surfaces in 4-dimensional pseudo-Riemannian spaces already appeared in [20]; the representation formula obtained in that paper seems to be different, since the normal bundle and the Clifford action are not explicitly involved in the formula.
We quote the following related papers: the spinor representation of surfaces in R 3 was studied by many authors, especially by Th. Friedrich in [7], who interpreted a spinor field representing a surface in R 3 as a constant spinor field of R 3 restricted to the surface; following this approach, the spinor representation of Lorentzian surfaces in 3-dimensional Minkowski space was studied by M.-A. Lawn [10, 11] and M.-A. Lawn and J. Roth [12]. M.-A. Lawn, J. Roth and the first author then studied the spinor representation of surfaces in 4-dimensional Riemannian space forms in [4], and the first author studied the spinor representation of spacelike surfaces in 4-dimensional Minkowski space in [2]. Recently, P. Romon and J. Roth studied in [17] the relation between this abstract approach and more explicit representation formulas existing in the literature for surfaces in R 3 and R 4. Finally, the local description of the flat surfaces with flat normal bundle and regular Gauss map in 4-dimensional Euclidean and Minkowski spaces was studied in [6].
The outline of the paper is as follows: the first section is devoted to preliminaries concerning the Clifford algebra of R 2,2 , the spin representation, and the spin geometry of Lorentzian surfaces in R 2,2 . We use quaternions and Lorentz numbers to obtain concise formulas. Section 2 is devoted to the spinor representation formula of Lorentzian surfaces in R 2,2 . We indicate at the end of the section how to obtain the representation formulas for surfaces in R 1,2 and R 2,1 . We then apply the representation formula to the local description of the flat Lorentzian surfaces with flat normal and regular Gauss map in Section 3. An appendix ends the paper.
1 Preliminaries

1.1 Clifford algebra of R 2,2 and the spin representation
To describe the Clifford algebra of R 2,2 , it will be convenient to consider the Lorentz numbers
A = {x + σy : x, y ∈ R},

where σ is a formal element such that σ^2 = 1, the complexified Lorentz numbers A_C := A ⊗ C ≃ {x + σy : x, y ∈ C}, and the quaternions with coefficients in A_C,

H_{A_C} := {q_0 1 + q_1 I + q_2 J + q_3 K : q_0, q_1, q_2, q_3 ∈ A_C},

where I, J and K are such that

I^2 = J^2 = K^2 = −1,   IJ = −JI = K.
For all q = q_0 1 + q_1 I + q_2 J + q_3 K ∈ H_{A_C}, we denote

\bar{q} := \bar{q}_0 1 + \bar{q}_1 I + \bar{q}_2 J + \bar{q}_3 K,

where, if a = x + σy belongs to A_C, \bar{a} := x − σy. The map

γ : R^{2,2} → H_{A_C}(2),

(x_0, x_1, x_2, x_3) ↦
[ 0                                        σi x_0 1 + x_1 I + i x_2 J + x_3 K ]
[ −σi x_0 1 + x_1 I + i x_2 J + x_3 K      0                                  ],   (1)

where H_{A_C}(2) stands for the set of 2 × 2 matrices with entries belonging to H_{A_C}, is a Clifford map, that is, it satisfies

γ(x)^2 = − ⟨x, x⟩ [ 1  0 ; 0  1 ]

for all x ∈ R^{2,2}, and thus identifies

Cl(2,2) ≃ { [ a  b ; \bar{b}  \bar{a} ] : a ∈ H_0, b ∈ H_1 },   (2)

where H_0 := {η_0 1 + iη_1 I + η_2 J + iη_3 K : η_0, η_1, η_2, η_3 ∈ A} and H_1 := {iη_0 1 + η_1 I + iη_2 J + η_3 K : η_0, η_1, η_2, η_3 ∈ A}.
Here and below we also simply denote by ⟨x, x⟩ the norm g(x, x) = −x_0^2 + x_1^2 − x_2^2 + x_3^2 of x ∈ R^{2,2}. Note that H_0 naturally identifies to the para-quaternions described in [10], but with coefficients in the Lorentz numbers A. Using (2), the sub-algebra of elements of even degree is

Cl^0(2,2) ≃ { [ a  0 ; 0  \bar{a} ] : a ∈ H_0 } ≃ H_0   (3)

and the set of elements of odd degree is

Cl^1(2,2) ≃ { [ 0  b ; \bar{b}  0 ] : b ∈ H_1 } ≃ H_1.   (4)
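The Clifford-map property γ(x)^2 = −⟨x, x⟩ Id above can be checked numerically, realising the quaternion units I, J, K as 2 × 2 complex matrices and σ as an extra commuting Kronecker factor (this matrix realisation is an illustrative choice for verification only, not notation from the paper):

```python
import numpy as np

# Quaternion units as 2x2 complex matrices (I^2 = J^2 = K^2 = -1, IJ = -JI = K),
# and sigma (sigma^2 = 1, central) as an extra Kronecker factor.
I2 = np.eye(2)
Iq = np.array([[1j, 0], [0, -1j]])
Jq = np.array([[0, 1], [-1, 0]], dtype=complex)
Kq = Iq @ Jq
X = np.array([[0, 1], [1, 0]], dtype=complex)   # realises sigma

def b_of(x, sign=+1):
    """sign*sigma*i*x0*1 + x1*I + i*x2*J + x3*K as a 4x4 matrix; sign=-1 gives the bar-conjugate."""
    x0, x1, x2, x3 = x
    return (sign * 1j * x0 * np.kron(X, I2) + x1 * np.kron(I2, Iq)
            + 1j * x2 * np.kron(I2, Jq) + x3 * np.kron(I2, Kq))

def gamma(x):
    """The Clifford map (1): a 2x2 matrix over H_{A_C}, realised as an 8x8 complex matrix."""
    Z4 = np.zeros((4, 4), dtype=complex)
    return np.block([[Z4, b_of(x, +1)], [b_of(x, -1), Z4]])

def norm22(x):
    return -x[0]**2 + x[1]**2 - x[2]**2 + x[3]**2

x = np.array([0.3, -1.2, 0.7, 0.5])
assert np.allclose(gamma(x) @ gamma(x), -norm22(x) * np.eye(8))

# The volume element e0.e1.e2.e3 is represented by sigma*1, i.e. block-diagonal (sigma, -sigma):
e = np.eye(4)
V = gamma(e[0]) @ gamma(e[1]) @ gamma(e[2]) @ gamma(e[3])
S = np.kron(X, I2)
Z4 = np.zeros((4, 4))
assert np.allclose(V, np.block([[S, Z4], [Z4, -S]]))
```

The last assertion illustrates the remark below that the volume element e_0 · e_1 · e_2 · e_3 is represented by σ1.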
Let us consider the map

H : H_{A_C} × H_{A_C} → A_C,   (ξ, ξ′) ↦ q_0 q′_0 + q_1 q′_1 + q_2 q′_2 + q_3 q′_3,

where ξ = q_0 1 + q_1 I + q_2 J + q_3 K and ξ′ = q′_0 1 + q′_1 I + q′_2 J + q′_3 K. It is obviously A_C-bilinear and symmetric. If we consider the restriction of this map to H_0,

H(q, q′) = η_0 η′_0 − η_1 η′_1 + η_2 η′_2 − η_3 η′_3 ∈ A,   (5)

where q = η_0 1 + iη_1 I + η_2 J + iη_3 K and q′ = η′_0 1 + iη′_1 I + η′_2 J + iη′_3 K belong to H_0, the spin group is given by

Spin(2,2) := {q ∈ H_0 : H(q, q) = 1} ⊂ Cl^0(2,2).
Now, if we consider the identification

R^{2,2} ≃ {σi x_0 1 + x_1 I + i x_2 J + x_3 K : x_0, x_1, x_2, x_3 ∈ R} ≃ {ξ ∈ H_1 : \bar{\hat{ξ}} = −ξ},   (6)

where, if ξ = q_0 1 + q_1 I + q_2 J + q_3 K ∈ H_{A_C}, \hat{ξ} := q_0 1 − q_1 I − q_2 J − q_3 K is its conjugate in H_{A_C}, we get the double cover

Φ : Spin(2,2) → SO(2,2),   q ↦ (ξ ∈ R^{2,2} ↦ q ξ \bar{q}^{-1} ∈ R^{2,2}).   (7)

Here and below SO(2,2) stands for the component of the identity of the orthogonal group O(2,2) (elementary properties of this group may be found in [19]). Let us denote by ρ : Cl(2,2) → End(H_0) the complex representation of Cl(2,2) on H_0 given by

ρ( [ a  b ; \bar{b}  \bar{a} ] ) : ξ ∈ H_0 ≃ ( ξ ; σi \bar{ξ} ) ↦ [ a  b ; \bar{b}  \bar{a} ] ( ξ ; σi \bar{ξ} ) ≃ aξ + σi b \bar{ξ} ∈ H_0,   (8)

where the complex structure on H_0 is given by the multiplication by J on the right. The spinorial representation of Spin(2,2) is the restriction to Spin(2,2) of the representation ρ and simply reads

ρ|_{Spin(2,2)} : Spin(2,2) → End_C(H_0),   a ↦ (ξ ∈ H_0 ↦ aξ ∈ H_0).   (9)
Since ρ(σ1) 2 = id H0 , this representation splits into
H 0 = Σ + ⊕ Σ − ,
where Σ + := {ξ ∈ H 0 : σξ = ξ} and Σ − := {ξ ∈ H 0 : σξ = −ξ}. Explicitly, we have
Σ + = (1 + σ) {(R ⊕ RJ) + iI(R ⊕ RJ)}(10)
and
Σ − = (1 − σ) {(R ⊕ RJ) + iI(R ⊕ RJ)} .(11)
Note that, if (e 0 , e 1 , e 2 , e 3 ) stands for the canonical basis of R 2,2 , σ1 ∈ H 0 represents the volume element e 0 · e 1 · e 2 · e 3 , which thus acts as +id on Σ + and as −id on Σ − .
1.2 Spinors under the splitting R 2,2 = R 1,1 × R 1,1
We consider the splitting R 2,2 = R 1,1 × R 1,1, such that the first factor corresponds to the coordinates (x_0, x_1) and the second factor to the coordinates (x_2, x_3); the metrics on the factors are thus −dx_0^2 + dx_1^2 and −dx_2^2 + dx_3^2 respectively. We also consider the corresponding natural inclusion SO(1,1) × SO(1,1) ⊂ SO(2,2). We are first interested in the description of the set
Φ −1 (SO(1, 1) × SO(1, 1)) ⊂ Spin(2, 2)
where Φ is the double cover (7). To this end, it is convenient to first introduce some A-valued maps, already considered in [9]. Let z ∈ A; writing
z = (1+σ)/2 (u + v) + (1−σ)/2 (u − v),   u, v ∈ R,

and using the properties

((1+σ)/2)^2 = (1+σ)/2,   ((1−σ)/2)^2 = (1−σ)/2,   ((1+σ)/2)((1−σ)/2) = 0,   (12)

we have

z^n = (1+σ)/2 (u + v)^n + (1−σ)/2 (u − v)^n   for all n ∈ N.
Thus we can define the exponential map exp : A → A by

exp(z) := Σ_{n=0}^{∞} z^n / n! = (1+σ)/2 e^{u+v} + (1−σ)/2 e^{u−v}   (13)

for all z = u + σv ∈ A, where e^{(·)} is the usual exponential map, and we also define the A-valued hyperbolic sine and cosine functions by the usual formulas

cosh(z) := (exp(z) + exp(−z))/2   and   sinh(z) := (exp(z) − exp(−z))/2.
It is easy to check the following identities:

cosh(z) = cosh(u) cosh(v) + σ sinh(u) sinh(v),   sinh(z) = sinh(u) cosh(v) + σ cosh(u) sinh(v)   (14)
for all z = u + σv ∈ A. Using the definition (7) of Φ, it is easy to get
Φ −1 (SO(1, 1) × SO(1, 1)) = {±(cosh(z) + i sinh(z)I) : z ∈ A} =: S 1 A ⊂ Spin(2, 2);(15)
more precisely, writing z = u + σv ∈ A and using the identities (14), we get
cosh(z) + i sinh(z)I = (cosh(v) + σi sinh(v)I).(cosh(u) + i sinh(u)I),
and Φ(±(cosh(z)+ i sinh(z)I)) appears to be the transformation of R 2,2 which consists of a Lorentz rotation of angle −2v in the first factor R 1,1 and of angle −2u in the second factor R 1,1 . Thus, setting
Spin ′ (1, 1) := {±(cosh(v) + σi sinh(v)I) : v ∈ R},(16)
and
Spin(1, 1) := {±(cosh(u) + i sinh(u)I) : u ∈ R},(17)
we have
S 1 A = Spin ′ (1, 1).Spin(1, 1) ≃ Spin ′ (1, 1) × Z2 Spin(1, 1)(18)
and the double cover Φ :
S 1 A −→ SO(1, 1) × SO(1, 1).(19)
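The Lorentz-number identities (13)–(14), and the fact that the elements cosh(z) + i sinh(z) I of S^1_A satisfy H(q, q) = cosh(z)^2 − sinh(z)^2 = 1 in A and hence lie in Spin(2,2), can be verified numerically with a small Lorentz-number arithmetic (an illustration only; the class A below is our own sketch, not notation from the paper):

```python
import math

class A:
    """Lorentz number u + sigma*v with sigma^2 = 1 (components stored as a pair)."""
    def __init__(self, u, v=0.0):
        self.u, self.v = u, v
    def __mul__(self, o):
        # (u + sv)(u' + sv') = (uu' + vv') + s(uv' + vu')
        return A(self.u * o.u + self.v * o.v, self.u * o.v + self.v * o.u)
    def __add__(self, o):
        return A(self.u + o.u, self.v + o.v)
    def __neg__(self):
        return A(-self.u, -self.v)
    def close(self, o, tol=1e-9):
        return abs(self.u - o.u) < tol and abs(self.v - o.v) < tol

def exp_A(z):
    # formula (13): exp(z) = (1+sigma)/2 e^{u+v} + (1-sigma)/2 e^{u-v}
    p, m = math.exp(z.u + z.v), math.exp(z.u - z.v)
    return A((p + m) / 2, (p - m) / 2)

def cosh_A(z):
    e, em = exp_A(z), exp_A(-z)
    return A((e.u + em.u) / 2, (e.v + em.v) / 2)

def sinh_A(z):
    e, em = exp_A(z), exp_A(-z)
    return A((e.u - em.u) / 2, (e.v - em.v) / 2)

u, v = 0.7, -0.3
z = A(u, v)
# identities (14)
assert cosh_A(z).close(A(math.cosh(u) * math.cosh(v), math.sinh(u) * math.sinh(v)))
assert sinh_A(z).close(A(math.sinh(u) * math.cosh(v), math.cosh(u) * math.sinh(v)))
# q = cosh(z) + i sinh(z) I has H(q, q) = cosh(z)^2 - sinh(z)^2 = 1 in A
c, s = cosh_A(z), sinh_A(z)
assert ((c * c) + (-(s * s))).close(A(1.0, 0.0))
```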
Now, if we consider the spinorial representation ρ of Spin(2, 2) restricted to S 1 A ⊂ Spin(2, 2), H 0 = Σ + ⊕ Σ − splits into the sum of four complex lines
Σ + = Σ ++ ⊕ Σ −− , Σ − = Σ +− ⊕ Σ −+ ,(20)
where
Σ ++ = (1 + σ)(1 + iI)(R ⊕ RJ), Σ −− = (1 + σ)(1 − iI)(R ⊕ RJ), Σ +− = (1 − σ)(1 − iI)(R ⊕ RJ) and Σ −+ = (1 − σ)(1 + iI)(R ⊕ RJ)
(recall that the complex structure such that the representation is C-linear is given by the right multiplication by J). Note that e_0 · e_1 acts as +id on Σ^{++} and on Σ^{+−}, and as −id on Σ^{−−} and on Σ^{−+}, whereas e_2 · e_3 acts as +id on Σ^{++} and on Σ^{−+}, and as −id on Σ^{−−} and on Σ^{+−}. Moreover, it is not difficult to show that the representations of S^1_A on Σ^{++}, Σ^{−−}, Σ^{+−} and Σ^{−+} are respectively equivalent to the multiplication by

±e^{v+u},   ±e^{−v−u},   ±e^{v−u}   and   ±e^{−v+u}   on C.
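These weights can be checked directly in the 4 × 4 matrix model of H_{A_C} used above (quaternion units as 2 × 2 complex matrices, σ as a commuting Kronecker factor; an illustrative realisation, not notation from the paper): multiplying a generator of each line Σ^{±±} by g = (cosh v + σ i sinh v I).(cosh u + i sinh u I) rescales it by the corresponding exponential.

```python
import numpy as np

# Matrix model of H_{A_C}
I2 = np.eye(2)
Iq = np.array([[1j, 0], [0, -1j]])
Jq = np.array([[0, 1], [-1, 0]], dtype=complex)
Kq = Iq @ Jq
S = np.kron(np.array([[0, 1], [1, 0]], dtype=complex), I2)   # sigma
One = np.eye(4, dtype=complex)
Im = np.kron(I2, Iq)                                         # the unit I

def g_elem(u, v):
    """g = (cosh v + sigma*i*sinh v * I).(cosh u + i*sinh u * I), an element of S^1_A."""
    g1 = np.cosh(v) * One + 1j * np.sinh(v) * (S @ Im)
    g2 = np.cosh(u) * One + 1j * np.sinh(u) * Im
    return g1 @ g2

u, v = 0.4, -0.9
g = g_elem(u, v)

# generators of the four complex lines of (20)
xi_pp = (One + S) @ (One + 1j * Im)   # in Sigma^{++}
xi_mm = (One + S) @ (One - 1j * Im)   # in Sigma^{--}
xi_pm = (One - S) @ (One - 1j * Im)   # in Sigma^{+-}
xi_mp = (One - S) @ (One + 1j * Im)   # in Sigma^{-+}

# S^1_A acts by multiplication by e^{v+u}, e^{-v-u}, e^{v-u}, e^{-v+u} respectively
assert np.allclose(g @ xi_pp, np.exp(v + u) * xi_pp)
assert np.allclose(g @ xi_mm, np.exp(-v - u) * xi_mm)
assert np.allclose(g @ xi_pm, np.exp(v - u) * xi_pm)
assert np.allclose(g @ xi_mp, np.exp(-v + u) * xi_mp)
```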
Remark 1. Let ρ_1 = ρ_1^+ ⊕ ρ_1^− and ρ_2 = ρ_2^+ ⊕ ρ_2^− be the spinorial representations of Spin′(1,1) and Spin(1,1) respectively. The representation

ρ_1 ⊗ ρ_2 = ρ_1^+ ⊗ ρ_2^+ ⊕ ρ_1^− ⊗ ρ_2^− ⊕ ρ_1^+ ⊗ ρ_2^− ⊕ ρ_1^− ⊗ ρ_2^+   (21)

of Spin′(1,1) × Spin(1,1) is also the sum of the natural representations ±e^{v+u}, ±e^{−v−u}, ±e^{v−u}, ±e^{−v+u} on C, where v ∈ R describes the Spin′(1,1)-factor and u ∈ R the Spin(1,1)-factor of Spin′(1,1) × Spin(1,1) as in (16)–(17). Thus, the representation

Spin′(1,1) × Spin(1,1) → End_C(H_0),   (g_1, g_2) ↦ ρ(g) : ξ ↦ gξ,   (22)

where g = g_1 g_2 ∈ S^1_A = Spin′(1,1).Spin(1,1), is equivalent to the representation ρ_1 ⊗ ρ_2, and the decomposition (20) of Σ^+ and Σ^− corresponds to (21).
1.3 Spin geometry of a Lorentzian surface in R 2,2
Fundamental equations
Let M be a Lorentzian surface in R 2,2 . Let us denote by E its normal bundle and by B : T M × T M → E its second fundamental form defined by
B(X, Y) = ∇̄_X Y − ∇_X Y

for all X, Y ∈ TM, where ∇ and ∇̄ are the Levi-Civita connections of M and R^{2,2} respectively. We moreover assume that TM and E are oriented, both in space and in time: we assume that the bundles TM and E are oriented, and that, for all p ∈ M, a component of {X ∈ T_pM : g(X, X) < 0} and a component of {X ∈ E_p : g(X, X) < 0} are distinguished, in a continuous manner; a vector tangent or normal to M belonging to one of these distinguished components will be called future-directed. We will moreover adopt the following convention: a basis (u, v) of T_pM or E_p will be said to be positively oriented (in space and in time) if it has the orientation of T_pM or E_p, and if g(u, u) < 0 and g(v, v) > 0 with u future-directed. The second fundamental form satisfies the following equations (see e.g. [19]):

1. K = |B(e_2, e_3)|^2 − ⟨B(e_2, e_2), B(e_3, e_3)⟩ (Gauss equation),

2. K_N = ⟨(S_{e_0} ∘ S_{e_1} − S_{e_1} ∘ S_{e_0})(e_2), e_3⟩ (Ricci equation),

3. (∇̃_X B)(Y, Z) − (∇̃_Y B)(X, Z) = 0 (Codazzi equation),

where K and K_N are the curvatures of M and E (E is equipped with the normal connection), (e_2, e_3) and (e_0, e_1) are orthonormal, positively oriented bases of TM and E respectively, and where ∇̃ is the natural connection induced on T*M^{⊗2} ⊗ E. As usual, if ν ∈ E, S_ν stands for the symmetric operator on TM such that

⟨S_ν(X), Y⟩ = ⟨B(X, Y), ν⟩   for all X, Y ∈ TM.
Remark 2. Assume that (M, g) is a surface equipped with a Lorentzian metric, and E is a bundle on M, of rank 2, with a fibre Lorentzian metric and a compatible connection. Suppose moreover that M and E are oriented, in space and in time. Then, if B : T M × T M → E is a bilinear and symmetric map satisfying the equations (1), (2) and (3) above, the fundamental theorem says that, locally, there is an isometric immersion of M into R 2,2 with normal bundle E and second fundamental form B. The immersion is moreover unique up to the rigid motions of R 2,2 . We will obtain a spinorial proof of this theorem below (Corollary 1).
Spinorial Gauss formula
We assume here that the tangent and the normal bundles of M ⊂ R 2,2 are oriented (in space and in time), with given spin structures. There is a natural identification between ΣR 2,2 |M , the spinor bundle of R 2,2 restricted to M, and Σ := ΣE ⊗ ΣM, the spinor bundle of M twisted by the normal bundle; see [1] and also Remark 1. Moreover, exactly as in the Riemannian case, we have a spinorial Gauss formula (see [1,8,20]): for any ϕ ∈ Σ and any X ∈ T M,
∇̄_X ϕ = ∇_X ϕ + (1/2) Σ_{j=2}^{3} ε_j e_j · B(X, e_j) · ϕ,   (23)

where ε_j = ⟨e_j, e_j⟩, ∇̄ is the spinorial connection of ΣR^{2,2}, ∇ is the spinorial connection of Σ defined by ∇ := ∇^{ΣE} ⊗ id_{ΣM} + id_{ΣE} ⊗ ∇^{ΣM} and the dot "·" is the Clifford action of R^{2,2} (∇^{ΣE} and ∇^{ΣM} denote the spinorial connections on ΣE and ΣM). Thus, if ϕ ∈ ΣR^{2,2} is parallel, i.e. is such that ∇̄ϕ = 0, then its restriction to M satisfies

∇_X ϕ = −(1/2) Σ_{j=2}^{3} ε_j e_j · B(X, e_j) · ϕ.   (24)
Taking the trace, we get the following Dirac equation
Dϕ = H · ϕ,(25)
where Dϕ := −e_2 · ∇_{e_2} ϕ + e_3 · ∇_{e_3} ϕ and where H = (1/2) tr_g B ∈ E is the mean curvature vector of M in R^{2,2}. Finally, corresponding to the splittings (20)–(21), we have Σ = Σ^+ ⊕ Σ^− with Σ^+ = Σ^{++} ⊕ Σ^{−−} and Σ^− = Σ^{+−} ⊕ Σ^{−+}, where Σ^{++} = Σ^+E ⊗ Σ^+M, Σ^{−−} = Σ^−E ⊗ Σ^−M, Σ^{+−} = Σ^+E ⊗ Σ^−M and Σ^{−+} = Σ^−E ⊗ Σ^+M.
The inverse construction
Let (M, g) be a Lorentzian surface and E a bundle of rank 2 on M, equipped with a fibre Lorentzian metric and a compatible connection; we assume that M and E are oriented (in space and in time), with given spin structures. We consider the spinor bundle over M twisted by E and defined by Σ := ΣE ⊗ ΣM.
We endow Σ with the spinorial connection
∇ := ∇ ΣE ⊗ id ΣM + id ΣE ⊗ ∇ ΣM .
We also define the Clifford product " · " by
X · ϕ = (X ·_E α) ⊗ β if X ∈ Γ(E),   X · ϕ = \bar{α} ⊗ (X ·_M β) if X ∈ Γ(TM),

where ϕ = α ⊗ β belongs to Σ, ·_E and ·_M denote the Clifford actions on ΣE and ΣM respectively, and where \bar{α} := α^+ − α^− ∈ ΣE = Σ^+E ⊕ Σ^−E. Finally we define the Dirac operator

Dϕ := −e_2 · ∇_{e_2} ϕ + e_3 · ∇_{e_3} ϕ,   (26)

where (e_2, e_3) is an orthogonal basis tangent to M such that |e_2|^2 = −1 and |e_3|^2 = 1. Denoting by Q̃_E and Q̃_M the given spin structures of E and TM, with natural projections p_E and p_M onto M, we consider the principal bundle

Q̃ := Q̃_E ×_M Q̃_M = {(s̃_1, s̃_2) ∈ Q̃_E × Q̃_M : p_E(s̃_1) = p_M(s̃_2)}.
Remark 3. Σ is the vector bundle associated to the principal bundle Q̃ and to the spinor representation ρ_1 ⊗ ρ_2 ≃ ρ of the structure group Spin′(1,1) × Spin(1,1); see Remark 1.
Since the group S^1_A = Spin′(1,1).Spin(1,1) is contained in Spin(2,2), which preserves the A-bilinear map H defined on H_0 by (5), the spinor bundle Σ is also equipped with an A-bilinear map H and with a real scalar product ⟨·, ·⟩ := ℜe H(·, ·) of signature (4,4) (here ℜe means that we consider the coefficient of 1 in the decomposition A ≃ R1 ⊕ Rσ). We may also define an H_1-valued scalar product on Σ by ψ,
ψ ′ := σi ξ ′ ξ,(27)
where ξ and ξ ′ ∈ H 0 are respectively the components of ψ and ψ ′ in some local section ofQ. This scalar product is A-bilinear, and satisfies the following properties: for all ψ, ψ ′ ∈ Σ and for all
X ∈ E ⊕ T M ψ, ψ ′ = ψ ′ , ψ and X · ψ, ψ ′ = − ψ, X · ψ ′ .(28)
Note that, by definition, H(ψ, ψ ′ ) is the coefficient of σi1 in the decomposition of ψ, ψ ′ in the basis σi1, I, iJ, K of H 1 (basis as a module over A), and that (28) yields
H(ψ, ψ ′ ) = H(ψ ′ , ψ) and H(X · ψ, ψ ′ ) = H(ψ, X · ψ ′ ),(29)
for all ψ, ψ ′ ∈ Σ and for all X ∈ E ⊕ T M, where the conjugation is here the conjugation in A. In particular, the real scalar product satisfies
ψ, ψ ′ = ψ ′ , ψ and X · ψ, ψ ′ = ψ, X · ψ ′(30)
for all ψ, ψ ′ ∈ Σ and for all X ∈ E ⊕ T M.
Notation
We will use the following notation: if s̃ ∈ Q̃ is a given spinorial frame, the brackets [·] will denote the coordinates in H_0 of the spinor fields in the frame s̃, that is, for ϕ ∈ Σ,

ϕ ≃ [s̃, [ϕ]] ∈ Σ ≃ Q̃ × H_0 / ρ_1 ⊗ ρ_2.
We will also use the brackets to denote the coordinates in s̃ of the elements of the Clifford algebra Cl(E ⊕ TM): X ∈ Cl^0(E ⊕ TM) and Y ∈ Cl^1(E ⊕ TM) will be respectively represented by [X] ∈ H_0 and [Y] ∈ H_1 such that, in s̃,

X ≃ [ [X]  0 ; 0  \bar{[X]} ]   and   Y ≃ [ 0  [Y] ; \bar{[Y]}  0 ].

Note that [X · ϕ] = [X][ϕ] and [Y · ϕ] = σi [Y] \bar{[ϕ]}, and that, in a spinorial frame s̃ ∈ Q̃ such that π(s̃) = (e_0, e_1, e_2, e_3), where π : Q̃ → Q_E ×_M Q_M is the natural projection onto the bundle of the orthonormal frames of E ⊕ TM adapted to the splitting, e_0, e_1, e_2 and e_3 ∈ Cl^1(E ⊕ TM) are respectively represented by σi1, I, iJ and K ∈ H_1 (recall (1) and (8)).
2 Spinor representation of Lorentzian surfaces

2.1 The main result
In this section we present the principal theorem concerning the spinor representation of Lorentzian surfaces immersed in R 2,2. This extends to the signature (2,2) the main results of [4] and [2].

Theorem 1. The following statements are equivalent:

1. There is a spinor field ϕ ∈ Γ(Σ) with H(ϕ, ϕ) = 1 solution of the Dirac equation

Dϕ = H · ϕ.

2. There is a spinor field ϕ ∈ Γ(Σ) with H(ϕ, ϕ) = 1 solution of

∇_X ϕ = −(1/2) Σ_{j=2}^{3} ε_j e_j · B(X, e_j) · ϕ   (31)

for all X ∈ TM, where ε_j = g(e_j, e_j) and B : TM × TM → E is bilinear symmetric with (1/2) tr_g B = H.

3. There is an isometric immersion F of (M, g) into R 2,2 with normal bundle E and mean curvature vector H.

Proposition 2.1. Assume that ϕ ∈ Γ(Σ) is a solution of the Dirac equation

Dϕ = H · ϕ   (32)

with H(ϕ, ϕ) = 1. Then the bilinear map B : TM × TM → E defined by

⟨B(X, Y), ν⟩ = −2 ⟨X · ∇_Y ϕ, ν · ϕ⟩   (33)

for all X, Y ∈ Γ(TM) and all ν ∈ Γ(E) is symmetric, satisfies the Gauss, Codazzi and Ricci equations and is such that H = (1/2) tr_g B.
In the proposition and below, we use the same notation ·, · to denote the scalar products on T M, on E, and on Σ. As in [7] (and after in [11], [12], [15], [18] in codimension one, and in [4] and [2] in codimension two) the proof of this proposition relies on the fact that such a spinor field is necessarily a solution of (31), with this bilinear map B:
Lemma 2.2.
If ϕ is a solution of the Dirac equation (32) with H(ϕ, ϕ) = 1, then ϕ solves the Killing type equation (31) where B is the bilinear map defined in (33).
Proof. We consider the A-module structure σ := e 0 · e 1 · e 2 · e 3 , defined on the Clifford bundle Cl(E ⊕ T M ) by the multiplication on the left, and on spinor bundle Σ by the Clifford action. The map H : Σ × Σ → A is A-bilinear with respect to this A-module structure, whereas the Clifford action satisfies
σ · (X · ϕ) = (σ · X) · ϕ = −X · (σ · ϕ),
for all ϕ ∈ Σ and X ∈ E ⊕ T M. Now, we consider the following spinors:
{ϕ, e 2 · e 3 · ϕ, e 3 · e 1 · ϕ, e 1 · e 2 · ϕ}.
Using the identities in (29), it is easy to show that these spinors are orthogonal with respect to the form H, with norm 1, −1, 1, −1 respectively; in particular, ∀X ∈ T M,
∇ X ϕ = H(∇ X ϕ, ϕ)ϕ − H(∇ X ϕ, e 2 · e 3 · ϕ)e 2 · e 3 · ϕ + H(∇ X ϕ, e 3 · e 1 · ϕ)e 3 · e 1 · ϕ − H(∇ X ϕ, e 1 · e 2 · ϕ)e 1 · e 2 · ϕ.
We claim that

H(∇_X ϕ, ϕ) = 0   and   H(∇_X ϕ, e_2 · e_3 · ϕ) = 0.   (34)
The first identity is a direct consequence of H(ϕ, ϕ) = 1. The second one is a consequence of the Dirac equation (32): assuming that X = e_2 (the proof is analogous if X = e_3), we have

H(∇_{e_2} ϕ, e_2 · e_3 · ϕ) = H(e_2 · ∇_{e_2} ϕ, e_3 · ϕ) = H(e_3^2 · ∇_{e_3} ϕ, ϕ) − H(H · ϕ, e_3 · ϕ) = −H(∇_{e_3} ϕ, ϕ) − H(ϕ, H · e_3 · ϕ).
But H(∇ e3 ϕ, ϕ) = 0 and
H(ϕ, H · e 3 · ϕ) = H(e 3 · H · ϕ, ϕ) = −H( H · e 3 · ϕ, ϕ) = −H(ϕ, H · e 3 · ϕ),
that is H(ϕ, H · e 3 · ϕ) = 0, and the second identity in (34) follows. We thus get
∇ X ϕ = η(X) · ϕ with η(X) := H(∇ X ϕ, e 3 · e 1 · ϕ)e 3 · e 1 − H(∇ X ϕ, e 1 · e 2 · ϕ)e 1 · e 2 .
Using the relations σ · e_3 · e_1 = e_2 · e_0 and σ · e_1 · e_2 = e_0 · e_3, we get that η(X) has the form

η(X) = e_2 · ν_2 + e_3 · ν_3,   (35)
for some ν 2 , ν 3 ∈ E. Now, recalling (30), for each ν ∈ E and j = 2, 3,
B(e j , X), ν = −2 e j · ∇ X ϕ, ν · ϕ = −2 ∇ X ϕ, e j · ν · ϕ = −2 η(X) · ϕ, e j · ν · ϕ ,
which, using (35), yields
B(e j , X), ν = −2 e 2 · ν 2 · ϕ, e j · ν · ϕ − 2 e 3 · ν 3 · ϕ, e j · ν · ϕ .(36)
We note that for all ν, ν ′ ∈ E we have
e 2 · e 3 · ϕ, ν · ν ′ · ϕ = 0;(37)
the proof is analogous to the proof of Lemma 3.1 in [2] and is omitted here. Thus (36) reads
B(e 2 , X), ν = −2 ν 2 · ϕ, ν · ϕ = 2 ν 2 , ν and B(e 3 , X), ν = 2 ν 3 · ϕ, ν · ϕ = −2 ν 3 , ν .
Indeed, these last identities hold since, for i = 2, 3,
ν i · ϕ, ν · ϕ = ν · ν i · ϕ, ϕ = − ν i · ν · ϕ, ϕ − 2 ν i , ν ϕ, ϕ = − ν · ϕ, ν i · ϕ − 2 ν i , ν and thus ν i · ϕ, ν · ϕ = − ν i , ν .
Hence ν_2 = (1/2) B(e_2, X) and ν_3 = −(1/2) B(e_3, X), and (35) implies formula (31). The symmetry of B is a consequence of (30) and (37). The equations of Gauss, Codazzi and Ricci appear to be the integrability conditions of (31): the proof is completely analogous to that given in [2, Theorem 2], and is therefore omitted.
In the next section we prove the second part of Theorem 1.
Weierstrass representation
We assume that ϕ ∈ Γ(Σ) is a spinor field such that Dϕ = H · ϕ and H(ϕ, ϕ) = 1, and we define the H 1 -valued 1-form ξ by
ξ(X) = X · ϕ, ϕ ∈ H 1(38)
where the pairing ., . : Σ × Σ → H 1 is defined in (27).
Lemma 2.3. The 1-form ξ satisfies the following properties:
1. ξ̄ = −ξ, that is ξ takes its values in R 2,2 ⊂ H 1 ; 2. ξ is closed, that is dξ = 0.
Proof. 1. Using the properties (28), we get
ξ̄(X) = ( X · ϕ, ϕ )¯ = − ϕ, X · ϕ = − X · ϕ, ϕ = − ξ(X);
the result then follows from (6).
2. By a straightforward computation, we get
dξ(e 2 , e 3 ) = e 3 · ∇ e2 ϕ, ϕ − e 2 · ∇ e3 ϕ, ϕ + e 3 · ϕ, ∇ e2 ϕ − e 2 · ϕ, ∇ e3 ϕ .
Now, the last two terms satisfy e 3 · ϕ, ∇ e2 ϕ = − e 3 · ∇ e2 ϕ, ϕ and e 2 · ϕ, ∇ e3 ϕ = − e 2 · ∇ e3 ϕ, ϕ .
Moreover
e 3 · ∇ e2 ϕ, ϕ − e 2 · ∇ e3 ϕ, ϕ = − e 2 · e 3 · ∇ e2 ϕ, e 2 · ϕ − e 3 · e 2 · ∇ e3 ϕ, e 3 · ϕ = e 2 · ∇ e2 ϕ − e 3 · ∇ e3 ϕ, e 2 · e 3 · ϕ = − Dϕ, e 2 · e 3 · ϕ = − H · ϕ, e 2 · e 3 · ϕ ,
and thus
dξ(e 2 , e 3 ) = − H · ϕ, e 2 · e 3 · ϕ + H · ϕ, e 2 · e 3 · ϕ .
Noting finally that H · ϕ, e 2 · e 3 · ϕ = − ϕ, H · e 2 · e 3 · ϕ = − ϕ, e 2 · e 3 · H · ϕ = e 2 · e 3 · ϕ, H · ϕ = H · ϕ, e 2 · e 3 · ϕ ,
we get that dξ = 0.
We now assume that M is simply connected; then, there exists a function
F : M → R 2,2
such that dF = ξ. The next theorem follows from the properties of the Clifford action; its proof is analogous to that of [2, Theorem 3] and is therefore omitted.
Theorem 2. 1. The map F = (F 0 , F 1 , F 2 , F 3 ) : M → R 2,2 is an isometry. 2. The map Φ E : E −→ M × R 2,2 X ∈ E m −→ (F (m), ξ 0 (X), ξ 1 (X), ξ 2 (X), ξ 3 (X))
is an isometry between E and the normal bundle N (F (M )) of F (M ) in R 2,2 , preserving connections and second fundamental forms.
Remark 5.
If M is a Lorentzian surface in R 2,2 , the immersion may be obtained from the constant spinor fields σ1 or −σ1 ∈ H 0 restricted to the surface: indeed, for one of these spinor fields ϕ, and for all X ∈ T M ⊂ M × R 2,2 , we have
ξ(X) = X · ϕ, ϕ = −[ϕ][X] [ϕ] = [X],
where here the brackets [X] ∈ H 1 and [ϕ] = ±σ1 ∈ H 0 represent X and ϕ in one of the two spinorial frames of R 2,2 which are above the canonical basis (recall (27) and Section 1.5). Identifying [X] ∈ R 2,2 ⊂ H 1 to X ∈ R 2,2 , F = ∫ ξ identifies to the identity.
Similarly to the Euclidean and Minkowski cases ( [4] and [2]), Theorem 2 gives a spinorial proof of the fundamental theorem given in Remark 2: Corollary 1. We may integrate the Gauss, Ricci and Codazzi equations in two steps:
1. first solving ∇ X ϕ = η(X) · ϕ (39) where η(X) = −(1/2) Σ j=2,3 ǫ j e j · B(X, e j ),
(there is a solution ϕ in Γ(Σ) such that H(ϕ, ϕ) = 1, unique up to the natural right-action of Spin(2, 2) on Γ(Σ)),
then solving
dF = ξ(40)
where ξ(X) = X · ϕ, ϕ (the solution is unique, up to translations in R 2,2 ⊂ H 1 ).
Note that the multiplication on the right by a constant belonging to Spin(2, 2) in the first step, and the addition of a constant belonging to R 2,2 in the second step, correspond to a rigid motion in R 2,2 .
Lorentzian surfaces in R 1,2 and R 2,1
The aim of this section is to deduce spinor characterisations of immersions of Lorentzian surfaces in R 1,2 and R 2,1 ; the characterisations we obtain are different from those given by M.-A. Lawn [10,11] and by M.-A. Lawn and J. Roth [12]. Keeping the notation of Section 1, we consider the map β : H 0 −→ H 0 given by
β(ξ) = iσξI.
This map is A-linear and satisfies
β 2 = id H0
and β(ξJ) = −β(ξ)J for all ξ ∈ H 0 ; β is thus a real structure on H 0 . We note that β is Spin(2, 2)-equivariant, and thus induces a real structure β : Σ → Σ on the spinor bundle: it satisfies
β 2 = id Σ and β(iϕ) = −iβ(ϕ)
for all ϕ belonging to Σ (here i stands for the natural complex structure on Σ; in coordinates, this is the right-multiplication by J, see Section 1). Moreover, β is anti-linear with respect to the Clifford action of E ⊕ T M : for all X ∈ E ⊕ T M and ϕ ∈ Σ,
β(X · ϕ) = −X · β(ϕ).
Finally, for all ϕ = ϕ + + ϕ − ∈ Σ = Σ + ⊕ Σ − and all X ∈ T M, we have
β(ϕ ± ) = β(ϕ) ± , H(β(ϕ), β(ϕ)) = −H(ϕ, ϕ) and ∇ X β(ϕ) = β(∇ X ϕ).(41)
Throughout the section, we suppose that E = Re 0 ⊕ Re 1 where e 0 and e 1 are unit, orthogonal and parallel sections of E such that e 0 , e 0 = −1 and e 1 , e 1 = 1; we moreover assume that e 0 is future-directed, and that (e 0 , e 1 ) is positively oriented. We consider the isometric embeddings of R 1,2 and R 2,1 in R 2,2 ⊂ H 1 given by
R 1,2 = (e 0 0 ) ⊥ and R 2,1 = (e 0 1 ) ⊥ ,
where e 0 0 = σi1 and e 0 1 = I are the first two vectors of the canonical basis of R 2,2 ⊂ H 1 . We note that the signatures of R 1,2 and R 2,1 are (+, −, +) and (−, −, +) respectively. Let H be a section of E and ϕ ∈ Γ(Σ) be a solution of
Dϕ = H · ϕ and H(ϕ, ϕ) = 1.(42)
According to Section 2.2, the spinor field ϕ defines an isometric immersion M ϕ ֒→ R 2,2 (unique, up to translations), with normal bundle E and mean curvature vector H. We give a characterisation of the isometric immersions in R 1,2 and R 2,1 (up to translations) in terms of ϕ :
Proposition 2.4. 1-Assume that H = He 1 and e 0 · ϕ = ϕ.(43) Then the isometric immersion M ϕ ֒→ R 2,2 belongs to R 1,2 .
2-Assume that H = He 0 and e 1 · ϕ = −β(ϕ).(44)
Then the isometric immersion M ϕ ֒→ R 2,2 belongs to R 2,1 . Reciprocally, if M ϕ ֒→ R 2,2 belongs to R 1,2 (resp. to R 2,1 ), then (43) (resp. (44)) holds for some unit, orthogonal and parallel sections (e 0 , e 1 ) of E.
Proof. 1-Assuming that (43) holds, we compute e 0 · ϕ, ϕ = σi1.
Thus, the constant vector e 0 0 = σi1 ∈ R 2,2 ⊂ H 1 is normal to the immersion (by Theorem 2, (2), since this is ξ(e 0 )), and the immersion thus belongs to R 1,2 . For the converse statements, we choose (e 0 , e 1 ) such that e 0 · ϕ, ϕ = σi1 in the first case and such that e 1 · ϕ, ϕ = I in the second case. Writing these identities in some frames, we easily deduce (43) and (44).
If we assume that M ⊂ H ⊂ R 2,2 , where H is the hyperplane R 1+r,2−r with r = 0, 1 (i.e. H is R 1,2 if r = 0 and R 2,1 if r = 1), and if we consider e 0 and e 1 timelike and spacelike unit vector fields such that
R 2,2 = Re r ⊕ ⊥ T H and T H = Re 1−r ⊕ ⊥ T M,
then the intrinsic spinors of M identify with the spinors of H restricted to M, which in turn identify with the positive spinors of R 2,2 restricted to M : this identification is the content of Propositions 2.5 and 2.6 below, which, together with the previous results, will give the representation of surfaces in R 1,2 and R 2,1 by means of spinors of ΣM only.
Lorentzian surfaces in R 1,2
We first deduce from Proposition 2.4 1-a spinor representation for Lorentzian surfaces in R 1,2 . We can define a scalar product on C 2 by setting
⟨(a + ib, c + id), (a ′ + ib ′ , c ′ + id ′ )⟩ := (ad ′ + a ′ d − bc ′ − b ′ c)/2 ;
it is of signature (2,2). This scalar product is Spin(1, 1)-invariant (the action of ±e u ∈ Spin(1, 1) is the multiplication by ±e u on the first and by ±e −u on the second component of the spinors) and thus induces a scalar product ., . on the spinor bundle ΣM. It satisfies the following properties: for all ψ, ψ ′ ∈ ΣM and all X ∈ T M,
ψ, ψ ′ = ψ ′ , ψ and X · M ψ, ψ ′ = − ψ, X · M ψ ′ .(45)
This is the scalar product on ΣM that we use in this section (and in this section only). We moreover define |ψ| 2 := ψ, ψ . The following proposition is analogous to [4, Proposition 6.1] (see also [15,Proposition 2.1], and the references there), and is proved in [16]:
Proposition 2.5.
There is an identification
ΣM ∼ −→ Σ + |M ψ −→ ψ * ,
C−linear, and such that, for all X ∈ T M and all ψ ∈ ΣM,
(∇ X ψ) * = ∇ X ψ * ,
the Clifford actions are linked by
(X · M ψ) * = X · e 1 · ψ * and H(ψ * , ψ * ) = (1 + σ)/2 |ψ| 2 .(46)
Using this identification, the intrinsic Dirac operator on M, defined by
D M ψ := −e 2 · M ∇ e2 ψ + e 3 · M ∇ e3 ψ, is linked to D by (D M ψ) * = −e 1 · Dψ * .
We suppose that ϕ is a solution of equation (42) such that (43) holds (the immersion belongs to R 1,2 ), and we choose ψ ∈ ΣM such that ψ * = ϕ + (note that ψ ≠ 0, since H(ϕ, ϕ) = 1); it satisfies (D M ψ) * = −e 1 · Dψ * = −e 1 · H · ψ * = −e 1 · He 1 · ψ * = Hψ * , and thus, using (43) and (46),
D M ψ = Hψ with |ψ| 2 = 1(47)
(since H(ϕ + , ϕ + ) = (1+σ)/2 if H(ϕ, ϕ) = 1; recall (10)-(11)).
Reciprocally, if ψ ∈ ΣM is a solution of (47), we can define ϕ + := ψ * and ϕ − := e 0 · ψ * , and get ϕ := ϕ + + ϕ − ∈ Σ, a solution of (42) with H = He 1 (we recall that e 0 and e 1 are parallel sections of E such that e 0 , e 0 = −1 and e 1 , e 1 = 1); since e 0 · ϕ = ϕ we obtain an isometric immersion of M in R 1,2 (Proposition 2.4 1-). A solution of (47) is thus equivalent to an isometric immersion in R 1,2 . We thus obtain a spinorial characterisation of an isometric immersion of a Lorentzian surface in R 1,2 , which is simpler than the characterisation obtained in [11], where two spinor fields are involved.
Remark 6.
We also obtain an explicit representation formula: for all ψ ∈ ΣM, we denote by α(ψ) the spinor field whose coordinates in a given spinorial frame are the complex conjugates of the coordinates of ψ in this frame, and by ψ := ψ + − ψ − , the usual conjugation in ΣM. If we suppose that ψ ∈ ΣM is a solution of (47), setting χ := ψ we can show that
χ, α(χ), iχ, iα(χ)(48)
is ., . -orthonormal with signature (−, +, −, +), and in particular is a real basis of ΣM (i is the natural complex structure of ΣM, which is such that the Clifford action is C−linear). For all X ∈ T M and ϕ = ψ * + e 0 · ψ * , where ψ ∈ ΣM satisfies (47), a computation yields
ξ(X) = X · ϕ, ϕ = − X · M ψ, α(χ) I + X · M ψ, iχ (iJ) − X · M ψ, iα(χ) K.
We note that X · M ψ, χ = 0, and thus that ξ(X) may be interpreted as the coordinates of X · M ψ in the orthonormal basis (48). The formula F = ∫ ξ represents the immersion. For the sake of brevity we do not include the proof, and refer to [16] for details.
Lorentzian surfaces in R 2,1
In this section we deduce from Proposition 2.4 2-a spinor representation for Lorentzian surfaces in R 2,1 . We consider here the following scalar product on ΣM, given in coordinates by
⟨(a + ib, c + id), (a ′ + ib ′ , c ′ + id ′ )⟩ := −(ac ′ + a ′ c + bd ′ + b ′ d)/2 ;
it is of signature (2,2). Moreover, for all ψ, ψ ′ ∈ ΣM and all X ∈ T M we have
ψ, ψ ′ = ψ ′ , ψ and X · M ψ, ψ ′ = ψ, X · M ψ ′ .(49)
We moreover write |ψ| 2 := ψ, ψ and still denote by i the natural complex structures on Σ and on ΣM.
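As a quick consistency check, not part of the paper, both C 2 -products above (written in the real coordinates (a, b, c, d) of (a + ib, c + id)) indeed have signature (2,2); a few lines of numpy confirm that each Gram matrix has two positive and two negative eigenvalues:

```python
import numpy as np

# Gram matrices of the two bilinear forms on C^2 = R^4, in the real
# coordinates (a, b, c, d) for (a + ib, c + id).
# First form  (surfaces in R^{1,2}):  <x, y> = (ad' + a'd - bc' - b'c)/2
G1 = 0.5 * np.array([[0.,  0.,  0.,  1.],
                     [0.,  0., -1.,  0.],
                     [0., -1.,  0.,  0.],
                     [1.,  0.,  0.,  0.]])
# Second form (surfaces in R^{2,1}):  <x, y> = -(ac' + a'c + bd' + b'd)/2
G2 = -0.5 * np.array([[0., 0., 1., 0.],
                      [0., 0., 0., 1.],
                      [1., 0., 0., 0.],
                      [0., 1., 0., 0.]])

for G in (G1, G2):
    eig = np.linalg.eigvalsh(G)
    # two negative and two positive eigenvalues: signature (2, 2)
    assert sum(e < 0 for e in eig) == 2 and sum(e > 0 for e in eig) == 2
```

The associated quadratic forms are ad − bc and −(ac + bd), each a difference of two hyperbolic (split) planes, which is another way to read off the signature.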
Proposition 2.6. There is an identification
ΣM ∼ −→ Σ + |M ψ −→ ψ * ,
C−linear, and such that, for all X ∈ T M and all ψ ∈ ΣM,
(∇ X ψ) * = ∇ X ψ * ,
the Clifford actions are linked by
(X · M ψ) * = ie 0 · X · ψ * and H(ψ * , ψ * ) = −(1 + σ)/2 |ψ| 2 .(50)
The detailed proof is given in [16]. Using this identification, we have
(D M ψ) * = ie 0 · Dψ *
for all ψ ∈ ΣM. If we suppose that ϕ is a solution of (42), we can choose ψ ∈ ΣM, ψ ≠ 0, such that ψ * = ϕ + ; moreover, if (44) holds, ψ satisfies
(D M ψ) * = ie 0 · Dψ * = ie 0 · H · ψ * = ie 0 · He 0 · ψ * = iHψ * ,
and, using (41) and (50),
D M ψ = iHψ, |ψ| 2 = −1.(51)
Reciprocally, if we suppose that ψ ∈ ΣM satisfies (51), we can define ϕ + := ψ * and ϕ − := e 1 ·β(ψ * ), and set ϕ := ϕ + + ϕ − ∈ Σ; using (41), it is not difficult to see that ϕ satisfies (42), and since e 1 · ϕ = −β(ϕ), defines an isometric immersion of M into R 2,1 (Proposition 2.4 2-). A solution of (51) is thus equivalent to an isometric immersion of the Lorentzian surface into R 2,1 . Here again, we obtain a spinor characterisation of an isometric immersion of a Lorentzian surface in R 2,1 , which is simpler than the characterisation obtained in [12] where two spinor fields are needed.
Remark 7.
We also obtain an explicit representation formula: for ψ ∈ ΣM, we may consider ψ and α(ψ) as in the previous section, and show that
α(ψ), iψ, iα(ψ), ψ(52)
is ., . -orthonormal with signature (−, +, −, +); in particular this is a real basis of ΣM. Setting ϕ := ψ * + e 1 · β(ψ * ) where ψ ∈ ΣM is a solution of (51), a computation yields
ξ(X) = X · ϕ, ϕ = X · M ψ, α(ψ) σi1 − X · M ψ, iα(ψ) iJ − X · M ψ, ψ K
for all X ∈ T M. Since X · M ψ, iψ = 0, ξ(X) may be interpreted as the coordinates of X · M ψ in the orthonormal basis (52). Finally, F = ∫ ξ represents the immersion.

Grassmannian of the Lorentzian planes in R 2,2

The Grassmannian of the oriented Lorentzian planes in R 2,2 identifies to
Q = {u 1 · u 2 : u 1 , u 2 ∈ R 2,2 , |u 1 | 2 = −|u 2 | 2 = −1} ⊂ Cl 0 (2, 2).
Setting
ℑm H 0 := iAI ⊕ AJ ⊕ iAK
and since e 2 · e 3 ≃ iI, e 3 · e 1 ≃ J and e 1 · e 2 ≃ iK in the identification Cl 0 (2, 2) ≃ H 0 given in (3), we easily get
Q ≃ {ξ ∈ ℑm H 0 : H(ξ, ξ) = −1}.
We define the cross product of two vectors ξ, ξ ′ ∈ ℑm H 0 by
ξ × ξ ′ := (1/2)(ξξ ′ − ξ ′ ξ) ∈ ℑm H 0 . It is such that ξ ξ ′ = σi H(ξ, ξ ′ )1 + σi ξ × ξ ′ for all ξ, ξ ′ ∈ ℑm H 0 .
We also define the mixed product of three vectors ξ, ξ ′ , ξ ′′ ∈ ℑm H 0 by
[ξ, ξ ′ , ξ ′′ ] := H(ξ × ξ ′ , ξ ′′ ) ∈ A;
it is also easily seen to be, up to sign, the determinant of the vectors ξ, ξ ′ , ξ ′′ ∈ ℑm H 0 in the basis (iI, J, iK) of ℑm H 0 (considered as an A-module). The mixed product is an A-valued volume form on ℑm H 0 , and induces a natural A-valued area form ω Q on Q by
ω Q (p)(ξ, ξ ′ ) := [ξ, ξ ′ , p],
for all p ∈ Q and all ξ, ξ ′ ∈ T p Q.
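The identification of the mixed product with a determinant rests on the Lagrange-type identity ⟨u × v, w⟩ = det(u, v, w), which only uses that A is a commutative ring. The sketch below, not from the paper, checks this over A, with the naive componentwise dot product standing in for H (the pairing in the text presumably differs from this model only by fixed signs attached to the basis vectors, which affect the identity at most by a sign):

```python
# Lorentz numbers a = (x, y) <-> x + sigma*y, with sigma^2 = 1.
def mul(p, q):
    return (p[0]*q[0] + p[1]*q[1], p[0]*q[1] + p[1]*q[0])

def add(p, q):
    return (p[0] + q[0], p[1] + q[1])

def sub(p, q):
    return (p[0] - q[0], p[1] - q[1])

def dot(u, v):  # naive A-bilinear dot product on A^3
    s = (0, 0)
    for a, b in zip(u, v):
        s = add(s, mul(a, b))
    return s

def cross(u, v):  # componentwise cross-product formula, over A
    return (sub(mul(u[1], v[2]), mul(u[2], v[1])),
            sub(mul(u[2], v[0]), mul(u[0], v[2])),
            sub(mul(u[0], v[1]), mul(u[1], v[0])))

def det(u, v, w):  # cofactor expansion along the first row
    return add(sub(mul(u[0], sub(mul(v[1], w[2]), mul(v[2], w[1]))),
                   mul(u[1], sub(mul(v[0], w[2]), mul(v[2], w[0])))),
               mul(u[2], sub(mul(v[0], w[1]), mul(v[1], w[0]))))

u = ((1, 2), (0, 1), (3, -1))
v = ((2, 0), (1, 1), (-1, 4))
w = ((0, 3), (2, -2), (1, 0))
assert dot(cross(u, v), w) == det(u, v, w)   # [u, v, w] = det(u, v, w)
assert cross(u, v) == tuple(sub((0, 0), c) for c in cross(v, u))  # antisymmetry
```

Since `mul` is commutative, the assertions hold identically, not just for these sample vectors.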
The Gauss map of a Lorentzian surface in R 2,2
Let M be an oriented Lorentzian surface in R 2,2 . We consider its Gauss map
G : M → Q x → u 1 · u 2
where, at x ∈ M, (u 1 , u 2 ) is a positively oriented orthogonal basis of T x M such that |u 1 | 2 = −|u 2 | 2 = −1. The pull-back by the Gauss map of the area form ω Q is given by the following proposition:
Proposition 3.1. We have G * ω Q = (K + σK N ) ω M ,
where ω M is the area form, K is the Gauss curvature and K N is the normal curvature of M. In particular, assuming moreover that
dG x : T x M → T G(x) Q(53)
is one-to-one at some point x ∈ M, then K = K N = 0 at x if and only if the linear space dG x (T x M ) is some A-line in T G(x) Q, i.e. dG x (T x M ) = {a U : a ∈ A}(54)
where U is some vector belonging to T G(x) Q ⊂ H 0 .
Proof. The first part of the proposition may be obtained by a direct computation, exactly as in [2, Proposition 6.3]; see also [3, Proposition 3.1] for a similar statement. The second part of the proposition is a consequence of Lemma A.2 in the appendix at the end of the paper.
As a consequence of Proposition 3.1, if K = K N = 0 and G : M → Q is a regular map, there is a unique Lorentz structure σ on M such that
dG x (σ u) = σ dG x (u)(55)
for all x ∈ M and all u ∈ T x M. Indeed, (54) implies that dG x (T x M ) is stable by multiplication by σ, and we may define σ u := dG −1 x (σ dG x (u)) . See Appendix A.3 for a brief account on Lorentz structures.
The invariant ∆ of a Lorentzian surface in R 2,2
If the Gauss map of M is viewed as a map G : M → Λ 2 R 2,2 , we define
δ(u) := (1/2) dG x (u) ∧ dG x (u) ∈ Λ 4 R 2,2
for all u ∈ T x M ; using the canonical volume element e 0 ∧ e 1 ∧ e 2 ∧ e 3 , we may identify Λ 4 R 2,2 to R and thus consider δ as a quadratic form on T x M. Its determinant with respect to the natural metric on M, ∆ := det g δ, is a second order invariant of the surface; it is positive if and only if the surface admits two distinct asymptotic directions at every point (since an asymptotic direction is by definition a vector on which δ vanishes, and the sign of ∆ is the opposite of the sign of the discriminant of δ), see [3]. This invariant was introduced for surfaces in 4-dimensional Euclidean space in [14].
Local description of the flat Lorentzian surfaces with flat normal bundle
In this section we suppose that M is simply connected and that the bundles T M and E are flat (K = K N = 0). We recall that the bundle Σ := ΣE ⊗ ΣM is associated to the principal bundle Q̃ and to the representation ρ of the structure group Spin ′ (1, 1) × Spin(1, 1) in H 0 given by (22). Since the curvatures K and K N are zero, the spinorial connection on the bundle Q̃ is flat, and Q̃ admits a parallel local section s̃; since M is simply connected, the section s̃ is in fact globally defined. We consider ϕ ∈ Γ(Σ) a solution of
Dϕ = H · ϕ(56)
such that H(ϕ, ϕ) = 1 and g = [ϕ] : M → Spin(2, 2) the coordinates of ϕ in s̃ :
ϕ = [s̃, g] ∈ Σ = Q̃ × H 0 /ρ.
Note that, by Theorem 1, ϕ also satisfies
∇ X ϕ = η(X) · ϕ(57)
for all X ∈ T M, where
η(X) = −(1/2) Σ j=2,3 ǫ j e j · B(X, e j )(58)
for some bilinear map B : T M × T M → E. In the following, we will denote by (e 0 , e 1 ) and (e 2 , e 3 ) the parallel, orthonormal and positively oriented frames, respectively normal, and tangent to M, corresponding to s̃, i.e. such that π(s̃) = (e 0 , e 1 , e 2 , e 3 ) where π : Q̃ → Q E × Q M is the natural projection. We moreover assume that the Gauss map G of the immersion defined by ϕ is regular, and consider the Lorentz structure σ induced on M by G, defined by (55). We now show that g is in fact a conformal map admitting a special parametrization, and that, in such a special parametrization, g depends on a single conformal map ψ : U ⊂ A → A (see Appendix A.3 for the notion of conformal map on a Lorentz surface). To establish this result, we will first need some preliminary lemmas; since they are analogous to lemmas given in [2], we only give very brief indications of their proofs, and refer to this paper for details. Proof of Lemma 3.2. This is the identity G = e 2 · ϕ, ϕ e 3 · ϕ, ϕ written in a section of Q̃ above (e 2 , e 3 ).
Lemma 3.3. Denoting by [η] ∈ Ω 1 (M, H 0 ) the 1-form which represents η in s̃, we have
[η] = dg g −1 = η 1 J + iη 2 K,(60)
where η 1 and η 2 are 1-forms on M with values in A.
Proof. This is (57) in the parallel frames, taking into account the special form (58) of η for the last equality.
Lemma 3.4. The 1-form η̃ := σi η · ϕ, ϕ (61) satisfies η̃ = −(1/2) G −1 dG = −g −1 dg.
Proof. Identity (61) in s̃ together with (60) implies that η̃ = −g −1 dg. The other identity is an easy consequence of (59).
The properties (59) and (60) may be rewritten as follows:
Lemma 3.5. Consider the projection p :
Spin(2, 2) ⊂ H 0 −→ Q ⊂ ℑm H 0 g −→ i g −1 Ig
as a S 1 A -principal bundle, where the action of S 1 A on Spin(2, 2) is given by the multiplication on the left. It is equipped with the horizontal distribution given at every g ∈ Spin(2, 2) by
H g := d(R g −1 ) −1 g (AJ ⊕ iAK) ⊂ T g Spin(2, 2),(62)
where R g −1 stands for the right-multiplication by g −1 on Spin(2, 2). The distribution (H g ) g∈Spin(2,2) is H-orthogonal to the fibers of p, and, for all g ∈ Spin(2, 2), dp g : H g → T p(g) Q is an isomorphism which preserves σ and such that H(dp g (u), dp g (u)) = −4H(u, u)(63)
for all u ∈ H g . With these notations, we have
G = p • g,(64)
and the map g : M → Spin(2, 2) appears to be a horizontal lift to Spin(2, 2) of the Gauss map G : M → Q.
Remark 8. The fibration described in the lemma above generalises the Lorentzian Hopf fibration of pseudo-spheres studied in [13]. See also [2, Lemma 6.6] for a similar result in 4-dimensional Minkowski space.
To proceed further, we need to assume that the invariant ∆ does not vanish; we first suppose ∆ > 0, and only mention at the end of the section, and without proof, the similar results concerning the case ∆ < 0 (see also Remark 9 below, where we recall the results obtained in [3] concerning the case ∆ = 0).
G := {a −→ ±a + b : b ∈ A},
which is compatible with the orientation of M and such that g : U ⊂ A → Spin(2, 2) satisfies
H(g ′ , g ′ ) ≡ ±1;(65)
2-there exists a conformal map ψ : U ⊂ A → A such that
g ′ g −1 = cosh ψJ + i sinh ψK or g ′ g −1 = sinh ψJ + i cosh ψK,(66)
where a : U ⊂ A → M is a chart defined in 1-.
Proof. Let a : U ⊂ A → M be a chart given by the Lorentz structure induced by G and compatible with the orientation of M (see Appendix A.3). By Lemma 3.5, g : U → Spin(2, 2) is a conformal map (since so are G and p in (64)). We consider g ′ : U → H 0 such that dg = g ′ da (see Appendix A.3). If µ : A → A is a conformal map, we have
H((g • µ) ′ , (g • µ) ′ ) = µ ′2 H(g ′ , g ′ ).
We observe that we may find µ such that
µ ′2 H(g ′ , g ′ ) = ±1.(67)
Indeed, since g is a conformal map,
H(g ′ , g ′ ) = (1 + σ)/2 u(s) + (1 − σ)/2 v(t)(68)
for some functions u and v, where s and t ∈ R are such that a = (1+σ)/2 s + (1−σ)/2 t (see Appendix A.3); we observe that ∆ > 0 is equivalent to u(s)v(t) > 0 : by (63)-(64),
H(dG, dG) = −4H(g ′ , g ′ )da 2 = −2[(u ds 2 + v dt 2 ) + σ(u ds 2 − v dt 2 )]; since H(dG, dG) = dG, dG − σ dG ∧ dG (see Appendix A.1), we deduce that δ := (1/2) dG ∧ dG = u ds 2 − v dt 2
and thus that the discriminant of δ has the sign of −uv; the result follows since this sign is also the opposite of ∆ (see Section 3.3). Setting
µ ′ = (1 + σ)/2 · (1/√|u|) + (1 − σ)/2 · (1/√|v|) ,
we have by (68)
µ ′2 H(g ′ , g ′ ) = (1 + σ)/2 sign(u) + (1 − σ)/2 sign(v) = sign(u),
where sign(u) is +1 if u > 0 and is −1 if u < 0. We then define
µ = (1 + σ)/2 ∫_{s_0}^{s} (1/√|u|) ds + (1 − σ)/2 ∫_{t_0}^{t} (1/√|v|) dt.(69)
µ is clearly a diffeomorphism, and, considering g • µ instead of g, we get a solution of (65). Since all the solutions of (67) preserving orientation are of the form ±µ + b, b ∈ A, we also obtain the uniqueness of a solution up to the group G. We now prove the last claim of the theorem. Writing
g = (1 + σ)/2 g 1 + (1 − σ)/2 g 2
with g 1 = g 1 (s) and g 2 = g 2 (t) belonging to R1 ⊕ iRI ⊕ RJ ⊕ iRK (g is a conformal map) we get
g ′ g −1 = (1 + σ)/2 g ′ 1 g 1 −1 + (1 − σ)/2 g ′ 2 g 2 −1 , with H(g ′ 1 g 1 −1 , g ′ 1 g 1 −1 ) = H(g ′ 2 g 2 −1 , g ′ 2 g 2 −1 ) = ±1. Since g ′ 1 g 1 −1 and g ′ 2 g 2 −1 belong to RJ ⊕ iRK (Lemma 3.3), we deduce that g ′ 1 g 1 −1 = cosh ψ 1 J + i sinh ψ 1 K and g ′ 2 g 2 −1 = cosh ψ 2 J + i sinh ψ 2 K, or g ′ 1 g 1 −1 = sinh ψ 1 J + i cosh ψ 1 K and g ′ 2 g 2 −1 = sinh ψ 2 J + i cosh ψ 2 K, for ψ 1 = ψ 1 (s) and ψ 2 = ψ 2 (t) ∈ R. The function ψ := (1 + σ)/2 ψ 1 (s) + (1 − σ)/2 ψ 2 (t)
satisfies (66).
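The manipulations with cosh ψ and sinh ψ for ψ = θ 1 + σθ 2 ∈ A (used again in the proof of Lemma 3.6 below) rest on the split-complex addition formulas cosh(θ 1 + σθ 2 ) = cosh θ 1 cosh θ 2 + σ sinh θ 1 sinh θ 2 and sinh(θ 1 + σθ 2 ) = sinh θ 1 cosh θ 2 + σ cosh θ 1 sinh θ 2 . A sympy sketch, not from the paper (A-elements are modelled as pairs (p 0 , p 1 ) standing for p 0 + σp 1 ), verifies them:

```python
import sympy as sp

t1, t2 = sp.symbols('theta1 theta2', real=True)

def mul(p, q):  # (p0 + sigma p1)(q0 + sigma q1), with sigma^2 = 1
    return (sp.expand(p[0]*q[0] + p[1]*q[1]), sp.expand(p[0]*q[1] + p[1]*q[0]))

def exp_A(a):   # exp(a0 + sigma a1) = e^{a0} (cosh a1 + sigma sinh a1)
    return (sp.exp(a[0]) * sp.cosh(a[1]), sp.exp(a[0]) * sp.sinh(a[1]))

def zero(e):
    return sp.simplify(sp.expand(e.rewrite(sp.exp))) == 0

psi = (t1, t2)   # psi = theta1 + sigma theta2
neg = (-t1, -t2)
cosh_psi = tuple((u + v) / 2 for u, v in zip(exp_A(psi), exp_A(neg)))
sinh_psi = tuple((u - v) / 2 for u, v in zip(exp_A(psi), exp_A(neg)))

# the closed forms of the addition formulas
assert all(zero(a - b) for a, b in
           zip(cosh_psi, (sp.cosh(t1)*sp.cosh(t2), sp.sinh(t1)*sp.sinh(t2))))
assert all(zero(a - b) for a, b in
           zip(sinh_psi, (sp.sinh(t1)*sp.cosh(t2), sp.cosh(t1)*sp.sinh(t2))))

# cosh^2 psi - sinh^2 psi = 1 also holds in A
sq = tuple(x - y for x, y in zip(mul(cosh_psi, cosh_psi), mul(sinh_psi, sinh_psi)))
assert zero(sq[0] - 1) and zero(sq[1])
```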
We now study the metric of the surface in the special chart a = x + σy adapted to g, given by Theorem 3. We recall that (e 0 , e 1 ) and (e 2 , e 3 ) are the parallel, orthonormal and positively oriented frames, respectively normal, and tangent to M, corresponding to s̃. Let us write
H = h 0 e 0 + h 1 e 1 .
We also consider the tangent lightlike vectors N 1 and N 2 ; they are such that N 1 , N 2 = 1. Finally, we suppose that ψ : U ⊂ A → A is the conformal map defined in Theorem 3 above, and we write ψ = θ 1 + σ θ 2 with θ 1 and θ 2 ∈ R.
Lemma 3.6. We have
N 1 = ±(e θ1 /√2)((1/λ) ∂ x + (1/µ) ∂ y ) and N 2 = (e −θ1 /√2)((1/λ) ∂ x − (1/µ) ∂ y )(70)
where λ, µ ∈ R ∗ satisfy 1/µ = −(h 0 cosh θ 2 + h 1 sinh θ 2 ) and 1/λ = −(h 0 sinh θ 2 + h 1 cosh θ 2 ).(71)
Proof. The Dirac equation (56), written in s̃ and recalling Section 1.5, gives Jdg(e 2 ) + iKdg(e 3 ) = (σh 0 1 + ih 1 I)g;
since dg(e 2 )g −1 = g ′ g −1 e 2 and dg(e 3 )g −1 = g ′ g −1 e 3 and using the first or the second identity in (66), this may be written
e 2 = −(cosh ψ · σh 0 + sinh ψ · h 1 ) and e 3 = −(sinh ψ · σh 0 + cosh ψ · h 1 ), or e 2 = sinh ψ · σh 0 + cosh ψ · h 1 and e 3 = cosh ψ · σh 0 + sinh ψ · h 1 .
Setting c := −h 0 sinh θ 2 − h 1 cosh θ 2 and d := −h 0 cosh θ 2 − h 1 sinh θ 2 , these identities read e 2 = c sinh θ 1 + σd cosh θ 1 , e 3 = c cosh θ 1 + σd sinh θ 1 , or e 2 = −c cosh θ 1 − σd sinh θ 1 , e 3 = −c sinh θ 1 − σd cosh θ 1 (recall (14)). Since e 2 and e 3 represent the independent vectors e 2 , e 3 , we have cd ≠ 0; setting λ = 1/c and µ = 1/d , we finally easily get (70) and (71). Proposition 3.7. In the chart a = x + σy of Theorem 3, the metric reads ± (λ 2 dx 2 − µ 2 dy 2 );
(72) moreover, λ and µ are solutions of the hyperbolic system
∂ x µ = −λ ∂ x θ 2 , ∂ y λ = −µ ∂ y θ 2 .(73)
Proof. Since the vectors N 1 , N 2 given in (∂ x , ∂ y ) by (70) satisfy |N 1 | 2 = |N 2 | 2 = 0 and N 1 , N 2 = 1, we have ( 0 1 ; 1 0 ) = P t AP,
where A is the matrix of the metric in (∂ x , ∂ y ), and where
P = (1/√2) ( ±e θ1 /λ , e −θ1 /λ ; ±e θ1 /µ , −e −θ1 /µ ) is the matrix representing (N 1 , N 2 ) in the basis (∂ x , ∂ y ); thus A = ± ( λ 2 , 0 ; 0 , −µ 2 ),
which is (72). We then compute the Christoffel symbols of this metric using the Christoffel formulas, and easily get
Γ x xx = (1/λ) ∂ x λ, Γ x yx = (1/λ) ∂ y λ, Γ y xy = (1/µ) ∂ x µ, Γ y yy = (1/µ) ∂ y µ and Γ y xx = (λ/µ 2 ) ∂ y λ, Γ x yy = (µ/λ 2 ) ∂ x µ.
Writing finally that (N 1 , N 2 ), given by (70), is parallel with respect to the metric (72) (since so is (e 2 , e 3 )), we easily get (73).
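The Christoffel symbols in the proof above can be verified symbolically; the sympy sketch below, not part of the paper, computes them for the metric (72) (the overall ± sign cancels in the Christoffel formula):

```python
import sympy as sp

x, y = sp.symbols('x y')
lam = sp.Function('lam')(x, y)
mu = sp.Function('mu')(x, y)
coords = (x, y)

g = sp.Matrix([[lam**2, 0], [0, -mu**2]])  # metric (72); the global sign drops out
ginv = g.inv()

def Gamma(k, i, j):
    # standard formula: Gamma^k_ij = (1/2) g^{kl} (d_i g_lj + d_j g_li - d_l g_ij)
    return sp.simplify(sum(ginv[k, l] * (sp.diff(g[l, j], coords[i])
                                         + sp.diff(g[l, i], coords[j])
                                         - sp.diff(g[i, j], coords[l]))
                           for l in range(2)) / 2)

# the six symbols listed in the proof of Proposition 3.7
assert sp.simplify(Gamma(0, 0, 0) - sp.diff(lam, x) / lam) == 0          # Gamma^x_xx
assert sp.simplify(Gamma(0, 1, 0) - sp.diff(lam, y) / lam) == 0          # Gamma^x_yx
assert sp.simplify(Gamma(1, 0, 1) - sp.diff(mu, x) / mu) == 0            # Gamma^y_xy
assert sp.simplify(Gamma(1, 1, 1) - sp.diff(mu, y) / mu) == 0            # Gamma^y_yy
assert sp.simplify(Gamma(1, 0, 0) - lam * sp.diff(lam, y) / mu**2) == 0  # Gamma^y_xx
assert sp.simplify(Gamma(0, 1, 1) - mu * sp.diff(mu, x) / lam**2) == 0   # Gamma^x_yy
```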
We now state the main result of the section:
Theorem 4. Let ψ : U ⊂ A → A be a conformal map, and θ 1 , θ 2 : U → R be such that
ψ = θ 1 + σθ 2 ;
suppose that λ and µ are solutions of (73) such that λµ ≠ 0, and define N 1 = ±(e θ1 /√2)(1/λ + σ/µ) and N 2 = (e −θ1 /√2)(1/λ − σ/µ).(74)
Then, if g : U → Spin(2, 2) ⊂ H 0 is a conformal map solving
g ′ g −1 = cosh ψJ + i sinh ψK or g ′ g −1 = sinh ψJ + i cosh ψK,(75)
and if we set
ξ := i g −1 ( ((w 2 − w 1 )/√2) J + ((w 2 + w 1 )/√2) iK ) ĝ (76)
where w 1 , w 2 : T U → R are the dual forms of N 1 , N 2 ∈ Γ(T U), the function F = ∫ ξ defines a Lorentzian immersion U → R 2,2 with K = K N = 0 and ∆ > 0. Reciprocally, the Lorentzian immersions of M into R 2,2 such that K = K N = 0, ∆ > 0 and with regular Gauss map are locally of this form.
Proof. We first prove the direct statement. We consider the metric on U such that the vectors N 1 ≃ N 1 , N 2 ≃ N 2 ∈ Γ(T U) defined by (74) form a frame of lightlike vectors of T U such that N 1 , N 2 = 1 : this is the metric (72). Since (λ, µ) is a solution of (73), the frame (N 1 , N 2 ) is parallel, and the metric on U is flat. We also consider the trivial bundle E = R 1,1 × U with its trivial metric and its trivial connection: the canonical basis (e 0 , e 1 ) of R 1,1 defines orthonormal and parallel sections of E. We moreover define e 2 := (N 1 − N 2 )/√2 and e 3 := (N 1 + N 2 )/√2 , a parallel and orthogonal frame with e 2 , e 2 = −1 and e 3 , e 3 = 1. We write s = (e 0 , e 1 , e 2 , e 3 ) ∈ Q = (SO(1, 1) × SO(1, 1)) × U, and fix s̃ ∈ Q̃ = S 1 A × U such that π(s̃) = s, where π : Q̃ → Q is the natural double covering. We then consider ϕ ∈ Σ = Q̃ × H 0 /ρ such that [ϕ] = g in s̃. By construction (equations (75)), ϕ is a solution of the Dirac equation Dϕ = H · ϕ, where H = h 0 e 0 + h 1 e 1 is defined by (71). Moreover, the form defined by (76) is such that ξ(X) = X · ϕ, ϕ ; this is thus a closed 1-form, and F = ∫ ξ is an isometric immersion of M into R 2,2 whose normal bundle identifies to E. Thus it is a flat immersion in R 2,2 , with flat normal bundle; moreover ∆ > 0, as it is easily seen using the criterion in the proof of Theorem 3 (H(g ′ , g ′ ) = ±1 by (75), that is u = v = ±1 in (68)).
Reciprocally, if F : M → R 2,2 is the immersion of a flat Lorentzian surface with flat normal bundle, ∆ > 0, and regular Gauss map, we have
F = ∫ ξ, with ξ(X) = X · ϕ, ϕ ,(77)
and that ν, ρ are solutions of the system
∂ s (ρ 2 − ν 2 ) + 2∂ t (νρ) = −2(ν 2 + ρ 2 )∂ s θ 2 , 2∂ s (νρ) − ∂ t (ρ 2 − ν 2 ) = −2(ν 2 + ρ 2 )∂ t θ 2 .
Setting z = s + it, f = ρ − iν and F = f 2 , this system reads ∂F/∂z̄ = 2b|F| with b = −∂ s θ 2 + i∂ t θ 2 , and thus simplifies to
∂f/∂z̄ = b f̄ .(80)
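Spelled out (with ∂/∂z̄ the conjugate derivative and f̄ the complex conjugate, as in the standard Vekua equation, and assuming f has no zeros so that division by f is allowed), the reduction to (80) is the one-line computation:

```latex
\frac{\partial F}{\partial \bar z}
   = \frac{\partial (f^{2})}{\partial \bar z}
   = 2 f \, \frac{\partial f}{\partial \bar z},
\qquad
2 b \,\lvert F\rvert = 2 b \,\lvert f\rvert^{2} = 2 b \, f \bar f,
\qquad\text{hence}\qquad
\frac{\partial f}{\partial \bar z} = b \, \bar f .
```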
Solutions of (80) are special cases of generalised analytic functions (also called pseudoanalytic functions) and are known to be in 1-1 correspondence with analytic functions; see e.g. [5], Section 9. As in Theorem 4 and Corollary 2 above, we get the following Corollary 3. A flat Lorentzian surface with flat normal bundle, regular Gauss map and such that ∆ < 0 locally depends on one analytic function and on two real functions of one real variable.
Remark 9. Finally, if ∆ = 0, then | H| 2 = 0 (because G is regular, see [3]) and the four natural invariants K, K N , | H| 2 , ∆ are zero. Moreover, if we suppose that the surface does not belong to any degenerate hyperplane of R 2,2 , it is umbilic or quasi-umbilic (see [3,Section 5]): it has a parametrization of the form ψ(s, t) = γ(s) + tT (s)
where γ is a lightlike curve in R 2,2 and T is some lightlike vector field along γ such that γ ′ (s) and T (s) are independent for all value of s (Theorem 5.1 and Remark 5.4 in [3]).
Using the Clifford map (1), the quaternions iI, σiI, J, σJ, iK, σiK represent the bivectors e 2 ∧ e 3 , e 0 ∧ e 1 , e 3 ∧ e 1 , e 2 ∧ e 0 , e 1 ∧ e 2 and e 0 ∧ e 3 respectively, and η ≃ x 1 e 2 ∧ e 3 + y 1 e 0 ∧ e 1 + x 2 e 3 ∧ e 1 + y 2 e 2 ∧ e 0 + x 3 e 1 ∧ e 2 + y 3 e 0 ∧ e 3 .
Here e 0 , e 1 , e 2 , e 3 is the canonical basis of R 2,2 . It is then straightforward to verify that the term (82) is η, η − σ η ∧ η.
A.2 Vanishing of the area form on the Grassmannian
We keep here the notation of Section 3.1.
Lemma A.2. If ξ, ξ ′ ∈ T p Q ⊂ ℑm H 0 are such that ω Qp (ξ, ξ ′ ) = 0 then ξ ′ = λξ, ξ = µξ ′ or ξ + ξ ′ = ±σ(ξ − ξ ′ )
for some λ, µ ∈ A. In particular the real vector space generated by ξ and ξ ′ belongs to a A-line in T p Q.
Proof. First, it is easy to see that ω Qp (ξ, ξ ′ ) = 0 if and only if ξ × ξ ′ = 0.
If we write ξ = (1 + σ)/2 ξ 1 + (1 − σ)/2 ξ 2 and ξ ′ = (1 + σ)/2 ξ ′ 1 + (1 − σ)/2 ξ ′ 2 ,
where ξ 1 , ξ 2 , ξ ′ 1 , ξ ′ 2 belong to iRI ⊕ RJ ⊕ iRK ≃ R 3 , then ξ × ξ ′ = 0 if and only if ξ 1 × ξ ′ 1 = ξ 2 × ξ ′ 2 = 0, where the cross product is here the usual cross product in R 3 . We then assume that ξ and ξ ′ are not zero (else, the result is trivial), and consider the following cases: 1-If ξ 1 and ξ 2 are not zero, then ξ ′ 1 = αξ 1 and ξ ′ 2 = βξ 2 for some α, β ∈ R; setting λ = (1+σ)/2 α + (1−σ)/2 β we have ξ ′ = λξ. 2-If ξ 1 ≠ 0 and ξ 2 = 0, then, a-assuming ξ ′ 1 ≠ 0 and ξ ′ 2 = 0, we have ξ ′ 1 = αξ 1 for some α ∈ R, and thus ξ ′ = λξ with λ = (1+σ)/2 α; b-assuming ξ ′ 1 = 0 and ξ ′ 2 ≠ 0, we have ξ + ξ ′ = σ(ξ − ξ ′ ) by a direct computation. The other cases are similar. Finally, if ξ ′ = λξ or ξ = µξ ′ , the real vector space generated by ξ and ξ ′ obviously belongs to an A-line in T p Q; this result also holds if ξ + ξ ′ = ±σ(ξ − ξ ′ ) since this space is also generated by ξ + ξ ′ and ξ − ξ ′ .
such that the transition functions
ϕ β • ϕ −1 α : ϕ α (U α ∩ U β ) ⊂ A → ϕ β (U α ∩ U β ) ⊂ A, α, β ∈ S
are conformal maps in the following sense: for all a ∈ ϕ α (U α ∩ U β ) and h ∈ A,
d (ϕ β • ϕ −1 α ) a (σ h) = σ d (ϕ β • ϕ −1 α ) a (h). A Lorentz structure is also equivalent to a smooth family of maps
σ x : T x M → T x M,
with σ 2 x = id TxM , σ x ≠ ±id TxM . This definition coincides with the definition of a Lorentz surface given in [21]: a Lorentz structure is equivalent to a conformal class of Lorentzian metrics on the surface, that is to a smooth family of cones in every tangent space of the surface, with distinguished lines. Indeed, the cone at x ∈ M is
Ker(σ x − id TxM ) ∪ Ker(σ x + id TxM )
where the sign of the eigenvalues ±1 permits to distinguish one of the lines from the other. If M is moreover oriented, we will say that the Lorentz structure is compatible with the orientation of M if the charts ϕ α : U α → A, α ∈ S preserve the orientations (the positive orientation in A = {x + σy, x, y ∈ R} is naturally given by (∂ x , ∂ y )). In that case, the transition functions are conformal maps A → A preserving orientation.
If M is a Lorentz surface, a smooth map ψ : M → A (or A n , or a Lorentz surface) will be said to be a conformal map if dψ preserves the Lorentz structures, that is if
dψ x (σ x u) = σ ψ(x) (dψ x (u))
for all x ∈ M and u ∈ T x M. In a chart A = {x + σy, x, y ∈ R}, a conformal map satisfies ∂ψ/∂y = σ ∂ψ/∂x.(83)
Defining the coordinates (s, t) such that
x + σ y = (1 + σ)/2 s + (1 − σ)/2 t (84) (s and t are parameters along the distinguished lines) and writing ψ = (1 + σ)/2 ψ 1 + (1 − σ)/2 ψ 2 with ψ 1 , ψ 2 ∈ R, (83) reads ∂ t ψ 1 = ∂ s ψ 2 = 0, and we get ψ 1 = ψ 1 (s) and ψ 2 = ψ 2 (t).
A conformal map is thus equivalent to two functions of one variable. We finally note that if ψ : M → A n is a conformal map, we have, in a chart a : U ⊂ A → M, dψ = ψ ′ da,
where da = dx + σdy and ψ ′ belongs to A n ; this is a direct consequence of (83).
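The decomposition (84) can be illustrated concretely: the map a = x + σy ↦ (s, t) = (x + y, x − y) is a ring isomorphism from A onto R × R with componentwise multiplication, which is exactly why a conformal map splits into two one-variable functions ψ 1 (s), ψ 2 (t). A small Python check, not from the paper:

```python
import random

# Lorentz numbers x + sigma*y as pairs (x, y), with sigma^2 = 1
def mul(p, q):
    return (p[0]*q[0] + p[1]*q[1], p[0]*q[1] + p[1]*q[0])

def to_null(p):
    # null coordinates s = x + y, t = x - y, i.e. the decomposition
    # a = (1+sigma)/2 s + (1-sigma)/2 t used throughout the paper
    return (p[0] + p[1], p[0] - p[1])

# multiplication in A becomes componentwise multiplication in R x R
random.seed(0)
for _ in range(100):
    p = (random.uniform(-5, 5), random.uniform(-5, 5))
    q = (random.uniform(-5, 5), random.uniform(-5, 5))
    s1, t1 = to_null(p)
    s2, t2 = to_null(q)
    sm, tm = to_null(mul(p, q))
    assert abs(sm - s1*s2) < 1e-9 and abs(tm - t1*t2) < 1e-9

# the idempotents e_+ = (1+sigma)/2 and e_- = (1-sigma)/2
e_plus, e_minus = (0.5, 0.5), (0.5, -0.5)
assert mul(e_plus, e_plus) == e_plus
assert mul(e_minus, e_minus) == e_minus
assert mul(e_plus, e_minus) == (0.0, 0.0)
```

The idempotent relations at the end are the ones used silently in the computations of Section 3.4, e.g. when squaring µ′ in the proof of Theorem 3.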
If we denote by Q E and Q M the SO(1, 1) principal bundles of the oriented and orthonormal frames of E and T M, by Q̃ E → Q E and Q̃ M → Q M the given spin structures on E and T M, and by p E : Q̃ E → M and p M : Q̃ M → M the natural projections, we define the principal bundle Q̃ over M
Theorem 1. Let (M, g) be a simply connected Lorentzian surface and E a Lorentzian bundle of rank 2 on M equipped with a compatible connection. We assume that M and E are oriented (in space and in time), with given spin structures. Let Σ = ΣE ⊗ ΣM be the twisted spinor bundle and D its Dirac operator, defined in (26). Let H ∈ Γ(E) be a section of E. The three following statements are equivalent:
second fundamental form B and mean curvature H. Moreover, F = ∫ ξ, where ξ is the closed 1-form on M with values in R^{2,2} defined by ξ(X) := ⟨X · ϕ, ϕ⟩ for all X ∈ TM. The claims (3) ⇒ (2) ⇒ (1) are direct consequences of the spinorial Gauss formula (Section 1.3.2). We now prove (1) ⇒ (3) using the fundamental theorem of submanifolds (see Remark 2) and the following

Proposition 2.1. Let M, E, Σ and H be as in Theorem 1. Assume that there exists a spinor field ϕ ∈ Γ(Σ), solution of the Dirac equation
Remark 4. The proof given here does not use any decomposition of the spinor fields; using the same ideas, it should be possible to simplify the proofs of [4, Lemma 3.1] and [2, Lemma 2.1].

Proof of Proposition 2.1. The bilinear map B is symmetric in view of the Dirac equation (32) together with the properties
H = H e_1 and e_0 · ϕ = ϕ.
2- Analogously, assuming that (44) holds, we have

⟨e_1 · ϕ, ϕ⟩ = −⟨β(ϕ), ϕ⟩ = −σi[ϕ][β(ϕ)] = −σi[ϕ][ϕ]σiI = I,

where [ϕ] ∈ H⁰ represents the spinor field ϕ in some frame s̃ ∈ Q̃. The constant vector e_1⁰ = I is thus normal to the immersion, and the result follows.
Grassmannian of the Lorentzian planes in R 2,2
Lemma 3.2. The Gauss map of the immersion defined by ϕ is given by

G : M → Q ⊂ ℑm H⁰,  x ↦ i g⁻¹ I g,    (59)

where g = [ϕ] : M → Spin(2, 2) ⊂ H⁰ represents ϕ in some local section of Q̃.
Theorem 3. In addition to the assumptions given at the beginning of the section, we suppose that ∆ is positive on M; we then have: 1- the map g : M → Spin(2, 2) ⊂ H⁰ is a conformal map, and, at each point of M, there is a local chart a : U ⊂ A → M, unique up to the action of
Proof. In the chart a : U ⊂ A → M introduced above, e_2, e_3 are represented by two functions e_2, e_3 : U ⊂ A → A. In s̃, the Dirac equation (56) reads

−[e_2][∇_{e_2}ϕ] + [e_3][∇_{e_3}ϕ] = [H][ϕ],

that is, recalling Section 1.5,
Acknowledgements: This work is part of the second author's PhD thesis; he thanks CONACYT for support. The first author was partially supported by the project CIC-UMSNH 4.5.

Corollary 2. A flat Lorentzian surface with flat normal bundle, regular Gauss map and such that ∆ > 0 locally depends on 4 real functions of one real variable.

Proof. We first note that the function ψ in Theorem 4 depends on two functions of one variable: since ψ : A → A is a conformal map, writing ψ = ((1 + σ)/2) ψ_1 + ((1 − σ)/2) ψ_2 we have ψ_1 = ψ_1(s) and ψ_2 = ψ_2(t), where the coordinates (s, t) are defined by x + σy = ((1 + σ)/2) s + ((1 − σ)/2) t; see Appendix A.3. We then write the system (73) in the coordinates (s, t) and get (78); this is a hyperbolic system, and we may solve a Cauchy problem: once ψ_1 and ψ_2 are given, a solution of (78) depends on two functions µ(0, t), λ(0, t) of the variable t. By Theorem 4, the surface depends on ψ_1(s), ψ_2(t), µ(0, t) and λ(0, t).

We now briefly describe the case ∆ < 0: a theorem similar to Theorem 3 holds, replacing (65) by H(g′, g′) = ±σ and (66) by

A Appendix

A.1 The norm H on bivectors

We keep the notation of Section 1.1. In this formula, ⟨., .⟩ stands for the natural scalar product on Λ²R^{2,2}, and we use the identification Λ⁴R^{2,2} ≃ R given by the canonical volume element e_0 ∧ e_1 ∧ e_2 ∧ e_3 to see the term η ∧ η as a real number.

Proof. This is merely a computation: if η = iη_1 I + η_2 J + iη_3 K belongs to ℑm H⁰, writing η_j = x_j + σy_j, x_j, y_j ∈ R, for j = 1, 2, 3, we get

A.3 Lorentz surfaces and Lorentz numbers

In this appendix we present elementary results concerning Lorentz surfaces and Lorentz numbers. We will say that a surface M is a Lorentz surface if there is a covering by open subsets M = ∪_{α∈S} U_α and charts ϕ_α : U_α → A, α ∈ S
where ϕ is the restriction to M of the constant spinor field σ1 of R^{2,2}. In a parallel frame s̃, we have ϕ = [s̃, g], where g : M → Spin(2, 2) ⊂ H⁰ is a horizontal and conformal map (Lemma 3.5 and Theorem 3). In a chart compatible with the Lorentz structure induced by the Gauss map and adapted to g (Theorem 3), ξ is of the form (76), where (w_1, w_2) is the dual basis of the basis defined by (74) and where in this last expression (λ, µ) are solutions of (73).
[1] C. Bär, Extrinsic bounds for the eigenvalues of the Dirac operator, Ann. Glob. Anal. Geom. 16 (1998) 573-596.
[2] P. Bayard, On the spinorial representation of spacelike surfaces into 4-dimensional Minkowski space, J. Geom. Phys. 74 (2013) 289-313.
[3] P. Bayard, V. Patty, F. Sánchez-Bringas, On timelike surfaces in R 2,2, in preparation.
[4] P. Bayard, M.-A. Lawn, J. Roth, Spinorial representation of surfaces into 4-dimensional space forms, Ann. Global Analysis and Geometry 44:4 (2013) 433-453.
[5] L. Bers, An outline of the theory of pseudoanalytic functions, Bulletin of the American Mathematical Society 62:4 (1956) 291-331.
[6] M. Dajczer and R. Tojeiro, On flat surfaces with flat normal bundle in space forms, Houston Math. J. 21 (1995) 319-338.
[7] Th. Friedrich, On the spinor representation of surfaces in Euclidean 3-space, J. Geom. Phys. 28 (1998) 143-157.
[8] O. Hijazi, X. Zhang, Lower bounds for the eigenvalues of the Dirac operator, Part II. The submanifold Dirac operator, Ann. Glob. Anal. Geom. 20 (2001) 163-181.
[9] J. Konderak, A Weierstrass representation theorem for Lorentz surfaces, Complex Variables 50:5 (2005) 319-332.
[10] M.-A. Lawn, Spinorial methods, para-complex and para-quaternionic geometry in the theory of submanifolds, PhD Thesis, Université Henri Poincaré - Nancy I, D.F.D. Mathématiques, 2006.
[11] M.-A. Lawn, A spinorial representation for Lorentzian surfaces in R 2,1, J. Geom. Phys. 58:6 (2008) 683-700.
[12] M.-A. Lawn and J. Roth, Spinorial characterisation of surfaces in pseudo-Riemannian space forms, Math. Phys. Anal. and Geom. 14:3 (2011) 185-195.
[13] M.-A. Leon, Clasificación de toros llanos Lorentzianos en espacios tridimensionales, Tesis Doctoral, Universidad de Murcia, Departamento de Matemáticas, 2012.
[14] J. A. Little, On the singularities of submanifolds of higher dimensional Euclidean spaces, Annali Mat. Pura et Appl. 83:4A (1969) 261-336.
[15] B. Morel, Surfaces in S 3 and H 3 via spinors, Actes du séminaire de théorie spectrale, Institut Fourier, Grenoble 23 (2005) 9-22.
[16] V. Patty, Representación espinorial de superficies de tipo tiempo en R 2,2, Tesis de Doctorado, Posgrado Conjunto UNAM-UMSNH, in preparation.
[17] P. Romon, J. Roth, The spinor representation formulas in 3 and 4 dimensions, Pure and Applied Differential Geometry, Proceedings of the conference PADGE 2012, Shaker Verlag, Aachen (2013) 261-282.
[18] J. Roth, Spinorial characterisations of surfaces into 3-homogeneous manifolds, J. Geom. Phys. 60 (2010) 1045-1061.
[19] B. O'Neill, Semi-Riemannian Geometry with applications to relativity, Pure and Applied Mathematics, 1983.
[20] V. V. Varlamov, Spinor representations of surfaces in 4-dimensional pseudo-Riemannian manifolds, 2000, arXiv:math/0004056.
[21] T. Weinstein, An introduction to Lorentz surfaces, de Gruyter Expositions in Mathematics 22, Walter de Gruyter, 1996.
DOI: 10.1103/physreva.86.062112
arXiv: 1301.1891
Deformed Heisenberg algebra with minimal length and equivalence principle
9 Jan 2013
V M Tkachuk [email protected]
Department for Theoretical Physics
Ivan Franko National University of Lviv
12 Drahomanov StUA-79005LvivUkraine
Studies in string theory and quantum gravity lead to the Generalized Uncertainty Principle (GUP) and suggest the existence of a fundamental minimal length which, as was established, can be obtained within the deformed Heisenberg algebra. The first look on the classical motion of bodies in a space with corresponding deformed Poisson brackets in a uniform gravitational field can give an impression that bodies of different mass fall in different ways and thus the equivalence principle is violated. Analyzing the kinetic energy of a composite body we find that the motion of its center of mass in the deformed space depends on some effective parameter of deformation. It gives a possibility to recover the equivalence principle in the space with deformed Poisson brackets and thus GUP is reconciled with the equivalence principle. We also show that the independence of kinetic energy on composition leads to the recovering of the equivalence principle in the space with deformed Poisson brackets.
Introduction
Recently, lots of attention has been devoted to studies of different systems in a space with a deformed Heisenberg algebra that takes into account the quantum nature of space on the phenomenological level. These works are motivated by several independent lines of investigations in string theory and quantum gravity (see, e.g., [1,2,3]) which lead to the Generalized Uncertainty Principle (GUP)
∆X ≥ (ℏ/2)(1/∆P + β∆P)    (1)
and suggest the existence of a fundamental minimal length ∆X_min = ℏ√β, which is of the order of the Planck length l_p = √(ℏG/c³) ≃ 1.6 × 10⁻³⁵ m. It was established that the minimal length can be obtained within a small quadratic modification (deformation) of the Heisenberg algebra [4,5],

[X, P] = iℏ(1 + βP²).    (2)
In the classical limit ℏ → 0 the quantum-mechanical commutator of operators is replaced by the Poisson bracket of the corresponding classical variables,

(1/iℏ)[X, P] → {X, P},    (3)

which in the deformed case reads

{X, P} = (1 + βP²).    (4)
We point out that, historically, the first algebra of this kind in the relativistic case was proposed by Snyder in 1947 [6]. But only after the investigations in string theory and quantum gravity did considerable interest appear in the studies of physical properties of classical and quantum systems in spaces with deformed algebras.
The observation that GUP can be obtained from the deformed Heisenberg algebra opens the possibility to study the influence of the minimal length on the properties of physical systems at the quantum level as well as at the classical one.
Deformed commutation relations bring new difficulties into quantum mechanics as well as into classical mechanics. Only a few problems are known to be solved exactly. They are: the one-dimensional harmonic oscillator with minimal uncertainty in position [4] and also with minimal uncertainty in position and momentum [7,8], the D-dimensional isotropic harmonic oscillator [9,10], the three-dimensional Dirac oscillator [11], the (1+1)-dimensional Dirac oscillator within Lorentz-covariant deformed algebra [12], the one-dimensional Coulomb problem [13], and the singular inverse square potential with a minimal length [14,15]. The three-dimensional Coulomb problem with deformed Heisenberg algebra was studied within perturbation theory [16,17,18,19,20]. In [21] the scattering problem in the deformed space with minimal length was studied. Ultra-cold neutrons in a gravitational field with minimal length were considered in [22,23,24]. The influence of minimal length on Lamb's shift, Landau levels, and the tunneling current in a scanning tunneling microscope was studied in [25,26]. The Casimir effect in a space with minimal length was examined in [27]. In [28] the effect of noncommutativity and of the existence of a minimal length on the phase space of a cosmological model was investigated. The authors of paper [29] studied various physical consequences which follow from the noncommutative Snyder space-time geometry. Classical mechanics in a space with deformed Poisson brackets was studied in [30,31,32]. The composite system (N-particle system) in the deformed space with minimal length was studied in [33,34].
Note that the deformation of the Heisenberg algebra brings not only technical difficulties in solving the corresponding equations but also problems of a fundamental nature. One of them is the violation of the equivalence principle in a space with minimal length [35]. This is the result of the assumption that the parameter of deformation for macroscopic bodies of different mass is unique. In paper [33] we showed that the center of mass of a macroscopic body in deformed space is described by an effective parameter of deformation, which is essentially smaller than the parameters of deformation of the particles that constitute the body. Using the result of [33] for the effective parameter of deformation, we show that the equivalence principle in the space with minimal length can be recovered. In section 3 we reproduce the result of [33] concerning the effective parameter of deformation for the center of mass on the classical level and, in addition, show that the independence of the kinetic energy of the composition leads to the recovery of the equivalence principle in the space with deformed Poisson brackets.
Free fall of particle in a uniform gravitational field
The Hamiltonian of a particle (a macroscopic body which we consider as a point particle) of mass m in a uniform gravitational field reads
H = P²/(2m) − mgX,    (5)
where the gravitational field, characterized by the acceleration g, is directed along the x axis. Note that here the inertial mass (m in the first term) is equal to the gravitational mass (m in the second one). The Hamiltonian equations of motion in the space with deformed Poisson brackets are as follows:

Ẋ = {X, H} = (P/m)(1 + βP²),    (6)
Ṗ = {P, H} = mg(1 + βP²).    (7)
We impose zero initial conditions for position and momentum, namely X = 0, and P = 0 at t = 0. These equations can be solved easily. From the second equation we find
P = (1/√β) tan(√β mgt).    (8)
From the first equation we obtain for the velocity

Ẋ = (1/(m√β)) tan(√β mgt)/cos²(√β mgt),    (9)
and for the position

X = (1/(2gm²β)) tan²(√β mgt).    (10)
One can verify that the motion is periodic with period T = π/(√β mg). The particle moves from X = 0 to X = ∞, then reflects from ∞ and moves in the opposite direction back to X = 0. But from the physical point of view this solution is correct only for times t ≪ T, when the velocity of the particle is much smaller than the speed of light. In other cases, relativistic mechanics must be used.
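As a sanity check, the closed-form solution (8)-(10) can be compared with a direct numerical integration of the deformed equations of motion (6)-(7). The following sketch is our own illustration (the parameter values are arbitrary) using a standard fourth-order Runge-Kutta integrator:

```python
import math

# deformed equations of motion (6)-(7): illustrative parameters (our choice)
beta, m, g = 0.01, 1.0, 1.0

def rhs(X, P):
    f = 1.0 + beta * P * P
    return (P / m) * f, m * g * f        # (dX/dt, dP/dt)

# classical RK4 from the zero initial conditions X(0) = P(0) = 0
X = P = 0.0
n, dt = 20000, 1e-4
for _ in range(n):
    k1x, k1p = rhs(X, P)
    k2x, k2p = rhs(X + 0.5 * dt * k1x, P + 0.5 * dt * k1p)
    k3x, k3p = rhs(X + 0.5 * dt * k2x, P + 0.5 * dt * k2p)
    k4x, k4p = rhs(X + dt * k3x, P + dt * k3p)
    X += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
    P += dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6

t = n * dt                                # final time, well below T
theta = math.sqrt(beta) * m * g * t
P_exact = math.tan(theta) / math.sqrt(beta)             # eq. (8)
X_exact = math.tan(theta)**2 / (2 * g * m**2 * beta)    # eq. (10)
assert abs(P - P_exact) < 1e-8
assert abs(X - X_exact) < 1e-8
```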
It is instructive to write out the results for the velocity and the coordinate in the first order in β:

Ẋ = gt (1 + (4/3) βm²g²t²),    (11)
X = (gt²/2)(1 + (2/3) βm²g²t²).    (12)
In the limit β → 0 we reproduce the well-known results

Ẋ = gt,  X = gt²/2,    (13)
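The coefficients 4/3 and 2/3 in (11)-(12) come from expanding tan(θ)/θ with θ = √β mgt; a quick numerical verification (our own illustration):

```python
import math

# From eqs. (9)-(10), with theta = sqrt(beta)*m*g*t:
#   Xdot = g*t * tan(theta)*(1 + tan(theta)^2)/theta
#   X    = (g*t^2/2) * (tan(theta)/theta)^2
# whose leading corrections should be (4/3)*theta^2 and (2/3)*theta^2.
theta = 1e-3
c_vel = (math.tan(theta) * (1 + math.tan(theta)**2) / theta - 1) / theta**2
c_pos = ((math.tan(theta) / theta)**2 - 1) / theta**2
assert abs(c_vel - 4/3) < 1e-5
assert abs(c_pos - 2/3) < 1e-5
```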
where the kinematic characteristics, such as the velocity and position of a free-falling particle, depend only on the initial position and velocity of the particle and do not depend on the composition and mass of the particle. This is in agreement with the weak equivalence principle, also known as the universality of free fall or the Galilean equivalence principle. Note that in the nondeformed case, when the Newtonian equation of motion in a gravitational field is fulfilled, the weak equivalence principle is nothing else than the statement of the equivalence of inertial and gravitational masses.
As we see from (9) and (10), or (11) and (12), in the deformed space the trajectory of a point mass in the gravitational field depends on the mass of the particle if we suppose that the parameter of deformation is the same for all bodies. So, in this case the equivalence principle is violated. In paper [33] we showed on the quantum level that in fact the motion of the center of mass of a composite system in deformed space is governed by an effective parameter (in [33] it is denoted as β̃_0; here we denote it as β). So, the parameter of deformation for a macroscopic body is
β = Σ_i µ_i³ β_i,    (14)

where µ_i = m_i / Σ_j m_j, and m_i and β_i are the masses and deformation parameters of the particles which form the composite system (body). Note that in the next section we derive this result by considering the kinetic energy of a body consisting of N particles. First, let us consider the special case m_i = m_1 and β_i = β_1, when the body consists of identical elementary particles. Then we find
β = β_1/N²,    (15)

where N is the number of particles of the body with mass m = Nm_1. Note that expressions (9) and (10) contain the combination √β m. Substituting the effective parameter of deformation β_1/N² instead of β, we find
√β m = √β_1 m/N = √β_1 m_1.    (16)
As a result, the trajectory now does not depend on the mass of the macroscopic body but depends on √β_1 m_1, which is the same for bodies of different mass. So, the equivalence principle is recovered.
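The cancellation in (16) is elementary but worth checking numerically; in this sketch (our own, with arbitrary illustrative values) the combination √β m computed from the effective parameter (14)-(15) is independent of N:

```python
import math

# identical constituents: mu_i = 1/N, so eq. (14) gives beta = N*(1/N)^3*beta_1
beta1, m1 = 1e-2, 1.0        # single-particle deformation parameter and mass
for N in (1, 10, 1000):
    beta_eff = N * (1.0 / N)**3 * beta1   # eq. (14)
    m = N * m1                            # total mass of the body
    assert abs(beta_eff - beta1 / N**2) < 1e-15          # eq. (15)
    # sqrt(beta)*m is the combination entering (9)-(10): it is N-independent
    assert abs(math.sqrt(beta_eff) * m - math.sqrt(beta1) * m1) < 1e-12  # eq. (16)
```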
The general case, when a body consists of different elementary particles, is more complicated. Then a situation is possible in which different combinations of elementary particles lead to the same mass but to different effective parameters of deformation, so that the motion of bodies of equal mass but different composition would be different. This also violates the weak equivalence principle. The equivalence principle can be recovered when we suppose that
√β_1 m_1 = √β_2 m_2 = ... = √β_N m_N = γ.    (17)
Indeed, then the effective parameter of deformation for a macroscopic body is

β = Σ_i m_i³ β_i / (Σ_i m_i)³ = γ²/(Σ_i m_i)² = γ²/m²,    (18)
and thus

√β m = γ,    (19)
which is the same as (17). Note that the trajectory of motion in this case does not depend on the mass and depends only on γ, which takes the same value for all bodies. It means that bodies of different mass and different composition move in a gravitational field in the same way, and thus the weak equivalence principle is not violated when (17) is satisfied. Equation (17) brings one new fundamental constant γ; note that the parameter 1/γ has the dimension of velocity. The parameters of deformation β_i of particles or macroscopic bodies of mass m_i are determined by the fundamental constant γ as follows:

β_i = γ²/m_i².    (20)

So, the parameter of deformation is completely determined by the mass of a particle. In the next section we derive formula (14) on the classical level and give some arguments concerning the relation (17).
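Relation (20) makes the effective parameter (14) depend only on the total mass, whatever the partition into constituents; the following sketch (our own, with arbitrary masses and a hypothetical value of γ) checks this:

```python
import math

gamma = 2.5e-3   # hypothetical value of the fundamental constant
partitions = ([1.0], [0.3, 0.7], [0.1, 0.25, 0.65], [0.2] * 5)
for masses in partitions:
    M = sum(masses)
    # eq. (20): each constituent's deformation parameter from its mass
    betas = [gamma**2 / mi**2 for mi in masses]
    # eq. (14): effective parameter of the composite body
    beta_eff = sum((mi / M)**3 * bi for mi, bi in zip(masses, betas))
    # composition drops out: beta_eff = gamma^2/M^2, i.e. sqrt(beta_eff)*M = gamma
    assert abs(beta_eff * M**2 - gamma**2) < 1e-18
    assert abs(math.sqrt(beta_eff) * M - gamma) < 1e-12
```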
Kinetic energy of a composite system in deformed space and parameter of deformation
In this section we use the natural assumption that the kinetic energy has the additivity property and does not depend on the composition of a body but only on its mass. First, we consider the additivity property of the kinetic energy. Let us consider N particles with masses m_i and deformation parameters β_i. This is equivalent to the situation when a macroscopic body is divided into N parts, each of which can be treated as a point particle with the corresponding mass and parameter of deformation. We consider the case when each particle of the system moves with the same velocity as the whole system.
Let us rewrite the kinetic energy as a function of velocity. From the relation between velocity and momentum (6) in the first approximation over β we find
P = mẊ(1 − βm²Ẋ²).    (21)
Then the kinetic energy as a function of velocity in the first order approximation over β reads
T = mẊ²/2 − βm³Ẋ⁴.    (22)
The kinetic energy of the whole system is given by (22) with m = Σ_i m_i. On the other hand, the kinetic energy of the whole system is the sum of the kinetic energies of the particles which constitute it:

T = Σ_i T_i = mẊ²/2 − Σ_i β_i m_i³ Ẋ⁴,    (23)
where we take into account that the velocities of all particles are the same as the velocity of the whole system, Ẋ_i = Ẋ, i = 1, ..., N. Comparing (22) and (23) we obtain (14). Now let us consider the independence of the kinetic energy of the composition of a body. It is enough to consider a body of fixed mass consisting of two parts (particles) with masses m_1 = mµ and m_2 = m(1 − µ), where 0 ≤ µ ≤ 1.
The parameters of deformation for the first and second particles are β_1 = β_µ and β_2 = β_{1−µ}; here we write explicitly that the parameters of deformation are some function of the mass (µ = m_1/m is the dimensionless mass). The particles with different masses constitute a body with the same mass m = m_1 + m_2. So, in this situation we have a body of the same mass but with different composition.
The kinetic energy of the whole body is given by (22) with the parameter of deformation

β = β_µ µ³ + β_{1−µ} (1 − µ)³.    (24)
Since the kinetic energy does not depend on the composition, the parameter of deformation for the whole body must be fixed, β = const, for different µ.
Thus (24) is the equation for β_µ as a function of µ at fixed β. One can verify that the solution reads

β_µ = β/µ².    (25)
Taking into account that µ = m_1/m we find

β_1 m_1² = β m²,    (26)
which corresponds to (17). So, the independence of the kinetic energy of the composition leads to the one fundamental constant γ² = βm². Then the parameters of deformation β_i of particles or composite bodies of different masses m_i are β_i = γ²/m_i², in agreement with relation (20).
Conclusions
One of the main results of the paper is the expression for the parameter of deformation for particles or bodies of different mass (20), which recovers the equivalence principle; thus the equivalence principle is reconciled with the generalized uncertainty principle. It is necessary to stress that expression (20) was also derived in section 3 from the condition of the independence of the kinetic energy of the composition. Note that (20) contains the same constant γ for different particles, and the parameter of deformation is inversely proportional to the squared mass. The constant γ has dimension inverse to velocity. Therefore, it is convenient to introduce the dimensionless constant γc, where c is the speed of light. In order to make some speculations concerning the possible value of γc, we suppose that for the electron the parameter of deformation β_e is related to Planck's length, namely ℏ√β_e = l_p = √(ℏG/c³).
Then we obtain

γc = c √β_e m_e = √(α Gm_e²/e²) ≃ 4.2 × 10⁻²³,    (27)

where α = e²/ℏc is the fine structure constant. Fixing the parameter of deformation for the electron, we can calculate the parameter of deformation for particles or bodies of different mass. It is more instructive to write the minimal length for the space where a composite body of mass m lives:

ℏ√β = (m_e/m) ℏ√β_e = (m_e/m) l_p.
As an example let us consider nucleons (the proton or the neutron). The parameter of deformation for nucleons β_nuc, or the minimal length for nucleons, reads ℏ√β_nuc ≃ l_p/1840. So, the effective minimal length for nucleons is three orders of magnitude smaller than that for electrons.
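The estimate γc ≃ 4.2 × 10⁻²³ and the nucleon scale can be reproduced from rounded values of the physical constants; in SI units γc = √(Gm_e²/ℏc), which coincides with √(αGm_e²/e²) in Gaussian units. A back-of-the-envelope sketch (our own; constants rounded):

```python
import math

# rounded SI constants (illustrative values, not from the paper)
G, hbar, c = 6.674e-11, 1.0546e-34, 2.9979e8
m_e, m_nuc = 9.109e-31, 1.6726e-27     # electron and proton masses, kg

# gamma*c = sqrt(beta_e)*m_e*c with hbar*sqrt(beta_e) = l_p = sqrt(hbar*G/c^3)
gamma_c = math.sqrt(G * m_e**2 / (hbar * c))
assert abs(gamma_c - 4.2e-23) / 4.2e-23 < 0.01   # reproduces the quoted estimate

# Planck length, and the ~1/1840 reduction of the nucleon minimal length
l_p = math.sqrt(hbar * G / c**3)
assert abs(l_p - 1.6e-35) / 1.6e-35 < 0.02
assert abs(m_nuc / m_e / 1840 - 1) < 0.01        # hbar*sqrt(beta_nuc) ~ l_p/1840
```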
[1] D. J. Gross and P. F. Mende, Nucl. Phys. B 303, 407 (1988).
[2] M. Maggiore, Phys. Lett. B 304, 65 (1993).
[3] E. Witten, Phys. Today 49, 24 (1996).
[4] A. Kempf, G. Mangano, R. B. Mann, Phys. Rev. D 52, 1108 (1995).
[5] A. Kempf, Phys. Rev. D 54, 5174 (1996).
[6] H. S. Snyder, Phys. Rev. 71, 38 (1947).
[7] C. Quesne and V. M. Tkachuk, J. Phys. A 36, 10373 (2003).
[8] C. Quesne and V. M. Tkachuk, J. Phys. A 37, 10095 (2004).
[9] L. N. Chang, D. Minic, N. Okamura and T. Takeuchi, Phys. Rev. D 65, 125027 (2002).
[10] I. Dadić, L. Jonke and S. Meljanac, Phys. Rev. D 67, 087701 (2003).
[11] C. Quesne and V. M. Tkachuk, J. Phys. A 38, 1747 (2005).
[12] C. Quesne and V. M. Tkachuk, J. Phys. A 39, 10909 (2006).
[13] T. V. Fityo, I. O. Vakarchuk and V. M. Tkachuk, J. Phys. A 39, 2143 (2006).
[14] D. Bouaziz, M. Bawin, Phys. Rev. A 76, 032112 (2007).
[15] D. Bouaziz, M. Bawin, Phys. Rev. A 78, 032110 (2008).
[16] F. Brau, J. Phys. A 32, 7691 (1999).
[17] S. Benczik, L. N. Chang, D. Minic and T. Takeuchi, Phys. Rev. A 72, 012104 (2005).
[18] M. M. Stetsko and V. M. Tkachuk, Phys. Rev. A 74, 012101 (2006).
[19] M. M. Stetsko, Phys. Rev. A 74, 062105 (2006).
[20] M. M. Stetsko and V. M. Tkachuk, Phys. Lett. A 372, 5126 (2008).
[21] M. M. Stetsko, V. M. Tkachuk, Phys. Rev. A 76, 012707 (2007).
[22] F. Brau, F. Buisseret, Phys. Rev. D 74, 036002 (2006).
[23] K. Nozari, P. Pedram, EPL 92, 50013 (2010).
[24] P. Pedram, K. Nozari, S. H. Taheri, JHEP 1103:093 (2011).
[25] S. Das, E. C. Vagenas, Phys. Rev. Lett. 101, 221301 (2008).
[26] A. F. Ali, S. Das, E. C. Vagenas, Phys. Rev. D 84, 044013 (2011).
[27] A. M. Frassino, O. Panella, Phys. Rev. D 85, 045030 (2012).
[28] B. Vakili, Phys. Rev. D 77, 044023 (2008).
[29] M. V. Battisti, S. Meljanac, Phys. Rev. D 79, 067505 (2009).
[30] S. Benczik, L. N. Chang, D. Minic, N. Okamura, S. Rayyan, T. Takeuchi, Phys. Rev. D 66, 026003 (2002).
[31] A. M. Frydryszak, V. M. Tkachuk, Czechoslovak Journal of Physics 53, No. 11, 5556 (2003).
[32] Z. K. Silagadze, Phys. Lett. A 373, 2643 (2009).
[33] C. Quesne, V. M. Tkachuk, Phys. Rev. A 81, 012106 (2010).
[34] F. Buisseret, Phys. Rev. A 82, 062102 (2010).
[35] A. F. Ali, Class. Quant. Grav. 28, 065013 (2011).
DOI: 10.1145/2957792.2957810
arXiv: 1607.00905
Observing Custom Software Modifications: A Quantitative Approach of Tracking the Evolution of Patch Stacks
4 Jul 2016
Ralf Ramsauer [email protected]
Daniel Lohmann [email protected]
Wolfgang Mauerer [email protected]
Technical University of Applied Sciences Regensburg
Friedrich-Alexander University
Erlangen-Nuremberg
Technical University of Applied Sciences Regensburg Siemens AG
Munich
Observing Custom Software Modifications: A Quantitative Approach of Tracking the Evolution of Patch Stacks
4 Jul 2016
Modifications to open-source software (OSS) are often provided in the form of "patch stacks" -sets of changes (patches) that modify a given body of source code. Maintaining patch stacks over extended periods of time is problematic when the underlying base project changes frequently. This necessitates a continuous and engineering-intensive adaptation of the stack. Nonetheless, long-term maintenance is an important problem for changes that are not integrated into projects, for instance when they are controversial or only of value to a limited group of users.We present and implement a methodology to systematically examine the temporal evolution of patch stacks, track non-functional properties like integrability and maintainability, and estimate the eventual economic and engineering effort required to successfully develop and maintain patch stacks. Our results provide a basis for quantitative research on patch stacks, including statistical analyses and other methods that lead to actionable advice on the construction and long-term maintenance of custom extensions to OSS.
INTRODUCTION
Special-purpose software, like industrial control, medical analysis, or other domain-specific applications, is often composed of contributions from general-purpose projects that provide basic building blocks. Custom modifications implemented on top of them fulfill certain additional requirements, while the development of mainline, the primary branch of the base project, proceeds independently.
Especially for software with high dependability requirements, it is crucial to keep up to date with mainline: latest fixes must be applied and new general features have to be introduced, as diverging software branches are hard to maintain and lead to inflexible systems [6]. Parallel development often evolves in the form of patch stacks: feature-granular modifications of mainline releases. Because of the dynamics exhibited by modern software projects, maintaining patch stacks can become a significant issue in terms of effort and costs.
Our toolkit PaStA (Patch Stack Analysis, https://github.com/lfd/PaStA) quantitatively analyses the evolution of patch stacks by mining git [5] repositories and produces data that can serve as input for statistical analysis. It compares different releases of stacks and groups similar patches (patches that lead to similar modifications) into equivalence classes. This allows us to compare those classes against the base project to measure integrability and influence of the patch stack on the base project. Patches that remain on the external stack across releases are classified as invariant and are hypothesised to reflect the maintenance cost of the whole stack. A fine-grained classification of different patch types that depends on the actual modifications could function as a measure for the invasiveness of the stack.
In summary, we claim the following contributions:
• We provide an approach and tool for observing the evolution of patch stacks.
• We propose a language-independent semi-automatic algorithm based on string distances that is suitable for detecting similar patches on patch stacks.
• We provide a case study on Preempt-RT [10], a realtime extension of the Linux kernel that enjoys widespread use in industrial appliances for more than a decade, yet has not been integrated into standard Linux. We measure its influence on mainline and visualise the development dynamics of the stack.
APPROACH
In general, a patch stack (also known as patch set) is defined as a set of patches (commits) that are developed and maintained independently of the base project. Well-known examples include the Preempt-RT Linux realtime extension, the Linux LTSI (Long Term Support Initiative) kernel, and vendor-specific Android stacks needed to port the system to a particular hardware. In many cases, patch stacks are applied on top of individual releases of an upstream version, but they do not necessarily have to be developed in a linear way [1]. The commits of the patched version of a base project are identified as the set of commit hashes that do not occur in the mainline project.
Our analysis is based on the following assumptions:
• Mainline upstream development takes place in one single branch.
• Every release of the patch stack is represented by a separate branch.
The work flow of PaStA consists of the following steps: (1) Set up a repository containing all releases of the patch stacks. (2) Identify and group similar patches across different versions of the patch stacks. (3) Compare representatives of those groups against mainline. (4) Use statistical methods to draw conclusions on the development and evolution of the patch stacks.

A commit hash provides a unique identifier for every commit. In the following, U is the set of all commit hashes of the base project, while Pi is the set of commit hashes of release i of the patch stacks. P ≡ ∪i Pi denotes all commit hashes on the patch stacks. Note that P ∩ U = ∅. Let H ≡ P ∪ U be the set of all commit hashes of interest. A semi-automatic classification function comp : P × H → {True, False} decides whether two patches are similar or not. A detailed description of the function comp can be found in Section 2.3.
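As a minimal illustration of these definitions, the sets U, Pi, P and H can be modeled directly with Python sets. The function name and the toy hashes below are ours, not part of PaStA:

```python
def partition_hashes(mainline, stack_releases):
    """Model the sets defined above: U = mainline hashes, P_i = hashes of
    stack release i, P = union of all P_i, H = P ∪ U, with P ∩ U = ∅."""
    U = set(mainline)
    P = set()
    for release in stack_releases:
        # Only commits absent from mainline belong to the patch stack.
        P |= set(release) - U
    H = P | U
    assert not (P & U)  # P and U are disjoint by construction
    return U, P, H
```

For example, `partition_hashes(["m1", "m2"], [["a", "m1"], ["a", "b"]])` yields P = {"a", "b"}: the shared commit "m1" is attributed to mainline, not to the stack.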
In the implementation, PaStA mines git repositories. Without loss of generality, we focus on this particular version control system because it is widely employed in current OSS development.
Grouping Similar Patches
Patch stacks change as they are being aligned with the changes in the base project, and they additionally integrate or lose functionality. New patches are pushed on top of the stack, existing patches may be amended to follow up with API changes, or patches are dropped. Because of the rapid dynamics and growth of Open Source projects [3], a significant number of patches must be manually ported from one release of the base project to the next. Since the base project changes over time, it is necessary to continuously adapt the details of individual patches. Those adaptations can be classified into textual and higher-order conflicts [2]. Textual conflicts can be solved by manually porting the patch to the next version. In a series of patches, patches may depend on each other, so that textual conflicts in one patch lead to follow-up conflicts in further patches. Higher-order conflicts occur when a patch acquires a new (erroneous) semantic meaning after the base project has diverged, despite the absence of textual conflicts. Both types are known to induce high maintenance cost [9].
Even if the semantics of patches remain invariant over time (e.g., a patch introduces identical functional modifications in subsequent revisions of the patch), their textual content can change considerably. To track patches with unchanged semantics over time, we introduce the classifier function comp that places similar patches into equivalence classes Rj, so that P = ∪j Rj. If comp were able to track the exact semantics of patches, it would hold that comp(a, b) = yes ⇔ a ∼ b. But as comp can only compare textual changes, it follows that comp(a, b) = yes ⇒ a ∼ b. This results from the fact that two similar patches in two successive versions usually have fewer textual changes than the first and last occurrence of the same patch. We approximate P ≈ ∪j R̃j.
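One straightforward way to build such classes from a pairwise classifier is a greedy pass that compares each patch against one representative per existing class. This sketch is our own simplification, not PaStA's actual bookkeeping:

```python
def group_similar(patches, comp):
    """Partition patches into classes; each patch joins the first class
    whose representative (first member) it is similar to."""
    classes = []
    for patch in patches:
        for cls in classes:
            if comp(patch, cls[0]):
                cls.append(patch)
                break
        else:  # no break: no similar class found, open a new one
            classes.append([patch])
    return classes
```

Because comp is only applied to one representative, the result depends on patch order; a transitive-closure (union-find) variant would remove that dependence at the cost of more comparisons.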
Comparing Groups Against Mainline
After grouping all patches on the stacks into equivalence classes R̃j, a complete representative system R ⊆ P is chosen and compared against the commits in the base project. As representative of an equivalence class, we choose the patch with the latest version. Q = {(r, u) | r ∈ R, u ∈ U, comp(r, u) = True} denotes the set of all patches that are found in the base project.
Detecting Similar Patches
To group patches into equivalence classes and find them in the base project, it is necessary to detect similar commits. Generally, a commit consists of a unique hash, a descriptive message that informally summarises the modifications, and so-called diffs [8] that describe the actual changes of the code.
Existing work on detecting similar code fragments primarily targets detecting code duplicates [4] or revealing code plagiarism. Possible approaches include language-dependent lexical analysis, code fingerprinting [11], or the comparison of abstract syntax trees [7]. However, all these approaches concentrate on the comparison of code fragments and not on the comparison of similar diffs or commits, as required in our case.
A diff of a file consists of a sequence of hunks that describe the changes at a textual level. Every hunk h is introduced by range information that determines the location of the changes within a file and contains a section heading h_head. Section headings display "the nearest unchanged line that precedes each hunk" [8] and are determined by a regular expression. The range information is followed by the actual changes: lines h+ that are added to the new resulting file are preceded by '+', lines h− that are removed from the original file are preceded by '−', and lines h° that did not change are preceded by a whitespace ' '.
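The hunk structure described above can be recovered from a unified diff with a few lines of parsing. The regular expression and dictionary layout below are illustrative choices, not the exact representation used by PaStA:

```python
import re

# "@@ -l,s +l,s @@ <section heading>" introduces a hunk.
HUNK_RE = re.compile(r"^@@ -\d+(?:,\d+)? \+\d+(?:,\d+)? @@ ?(?P<head>.*)$")

def parse_hunks(diff_lines):
    """Split one file's unified diff into hunks holding the section
    heading (h_head) plus added (+), removed (-) and context ( ) lines."""
    hunks = []
    for line in diff_lines:
        match = HUNK_RE.match(line)
        if match:
            hunks.append({"head": match.group("head"),
                          "+": [], "-": [], " ": []})
        elif hunks and line[:1] in ("+", "-", " "):
            hunks[-1][line[0]].append(line[1:])
    return hunks
```

With this representation, Dist can later be applied separately to the "+" and "-" lists of two hunks whose headings match.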
For the projects considered in the case study, we observed the following properties:
• Commit messages of upstream patches tend to be more verbose, but still are similar to those on patch stacks.
• Variable and identifier names do not significantly change between different versions.
• Range information of similar hunks changes between different releases.
• Section headings tend to stay similar between different releases.
In contrast to the detection of code plagiarism or of code duplicates, in our case the textual content of diffs between successive releases of the patch stack tends to stay very close. For this case, string or edit distances provide an easy but powerful language-independent method for detecting similar code fragments.
Comparing n diffs against each other requires O(n²) comparison operations. As the necessary string operations are computationally intensive, we employ a coarse-grained pre-evaluation that serves as a filter: two commits can only be similar if both touch at least one common file. If the sets of touched files are disjoint, the two commits are automatically considered to be dissimilar.
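The file-based filter can even be applied before forming pairs: indexing commits by touched file directly enumerates only the candidate pairs that share at least one file, instead of all O(n²) combinations. The sketch below is our own illustration of this idea:

```python
from collections import defaultdict
from itertools import combinations

def candidate_pairs(touched):
    """touched maps commit hash -> set of changed files. Return only the
    pairs of commits that touch at least one common file."""
    by_file = defaultdict(set)
    for commit, files in touched.items():
        for f in files:
            by_file[f].add(commit)
    pairs = set()
    for commits in by_file.values():
        pairs.update(combinations(sorted(commits), 2))
    return pairs
```

Commits touching no common file never enter the expensive string comparison at all.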
Our algorithm calculates a rating for the similarity of the commit message and a rating for the similarity of the diff. When comparing diffs, only similar hunks of commonly changed files are compared. Insertions and deletions are compared independently.
Algorithm 1 describes the evaluation of two patches. The algorithm calculates two ratings, a message rating rm ∈ [0, 1] and a diff rating rd ∈ [0, 1]. r is the weighted arithmetic mean of rm and rd, weighted by a heuristic factor w ∈ [0, 1]. If the resulting rating r < ti, the two commit hashes are classified as dissimilar; if ti ≤ r < ta, manual evaluation is required; and if r ≥ ta, the commits are classified as similar. Given a commit hash, GetCommit returns the corresponding message and diff. StripTags removes all tags (CC:, Signed-off-by:, Acked-by:, ...) as they are not relevant for comparing the content of commit messages. Given the diff of a commit, ChangedFiles returns all touched files of the diff. GetHunks returns all hunks of the diff of a file, while HunkByHeading, given a section heading x and the diff of a file, searches for the closest hunk whose heading matches x with a rating of at least th. Dist takes either two strings or two lists of strings and returns a rating between 0 and 1, where 0 denotes no commonalities and 1 denotes absolute similarity. Our implementation uses the Levenshtein distance, which is a well-known metric for measuring the similarity of strings.
Algorithm 1 Detection of similar patches
 1: function comp(a, b, ta, ti, th, w)
 2:   if not PreEval(a, b) then
 3:     return False
 4:   (msga, diffa) ← GetCommit(a)
 5:   (msgb, diffb) ← GetCommit(b)
 6:   rm ← Dist(StripTags(msga), StripTags(msgb))
 7:   rd ← []
 8:   for each file ← ChangedFiles(diffa) do
 9:     hunksa ← GetHunks(diffa, file)
10:     hunksb ← GetHunks(diffb, file)
11:     rf ← []
12:     for each lhunk ← hunksa do
13:       rhunk ← HunkByHeading(hunksb, lhunk_head, th)
14:       if rhunk is None then
15:         continue
16:       rf.append(Dist(lhunk+, rhunk+))
17:       rf.append(Dist(lhunk−, rhunk−))
18:     rd.append(Mean(rf))
19:   rd ← Mean(rd)
20:   r ← w · rm + (1 − w) · rd
21:   if r ≥ ta then
22:     return True
23:   else if r ≥ ti then
24:     return InteractiveReview(a, b)
25:   return False
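The Dist, weighting and thresholding steps of Algorithm 1 can be sketched in plain Python. The Levenshtein implementation is the standard dynamic program; the weight w and the thresholds in the usage below are illustrative values of ours, not the tuned ones used for the case study:

```python
def levenshtein(a, b):
    """Edit distance between two strings (row-by-row dynamic program)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def dist(a, b):
    """Similarity rating in [0, 1]: 1 = identical, 0 = nothing in common."""
    n = max(len(a), len(b))
    return 1.0 if n == 0 else 1.0 - levenshtein(a, b) / n

def rate(r_m, r_d, w=0.4):
    """Weighted arithmetic mean of message rating and diff rating."""
    return w * r_m + (1 - w) * r_d

def classify(r, t_i, t_a):
    """Map a combined rating to the three outcomes of Algorithm 1."""
    if r >= t_a:
        return "similar"
    if r >= t_i:
        return "interactive"   # requires manual review
    return "dissimilar"
```

For instance, `dist("kitten", "sitting")` evaluates to 1 − 3/7, and `classify(rate(1.0, 0.5), 0.4, 0.8)` lands in the interactive band.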
DISCUSSION
After grouping all patches into equivalence classes and optionally linking them to commits of the base project, we can distinguish two temporal situations: (1) patches that first appeared on the patch stack and later appeared in the base project (ports or forwardports), and (2) patches that first appeared in the base project and were ported back to older versions of the stack (backports). Patches that are not linked to a commit of the base project are called invariant, as they only appear on the stack.
Across two releases of the patch stack, we observe a flow of patches: (1) inflow: new patches on the patch stack and backports; (2) outflow: patches that went upstream or patches that were dropped; (3) invariant: patches that remain on the stack.
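Given the equivalence classes present in two successive releases, the three flow categories follow from plain set algebra (a sketch with hypothetical class identifiers):

```python
def patch_flow(prev_release, next_release):
    """Classify patch classes between two stack releases: inflow (new on
    the stack), outflow (dropped or went upstream), invariant (kept)."""
    prev_classes, next_classes = set(prev_release), set(next_release)
    return {
        "inflow": next_classes - prev_classes,
        "outflow": prev_classes - next_classes,
        "invariant": prev_classes & next_classes,
    }
```

Distinguishing the two outflow causes (upstreamed versus dropped) additionally requires the link to mainline commits computed by the grouping step.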
In the following, we consider the evolution of the Preempt-RT patch stack as a case study. First, we inspect the temporal evolution of the patch stack size, which is visualised in Figure 1. Among all 554 releases of the patch stack published since July 2011 (which in total consist of almost 173 000 patches), we detected 1042 different groups of patches. 195 of those groups were classified as backports, 153 groups were classified as forwardports.
Knowledge of the stack history allows us to determine the composition of older patch stacks in terms of the direction of flow of their constituents. Retroactively, we can determine which patches of the stack went upstream at a later point in time, and compute the number of backported and invariant patches. Figure 3 shows the composition of the latest releases of major versions of the Preempt-RT [10] patch stack. Green bars show the number of patches on the stack that are eventually integrated into the upstream code base, red bars show the number of backports, and blue bars give the number of invariant patches.
Another covariate of interest is the time a patch needs to go upstream (i.e., the time between its first appearance on the patch stack and its integration into the base project). Figure 2 shows the result of this analysis for the Preempt-RT project. Positive values on the x-axis describe forwardports, negative values describe backports. There is a prominent hot spot around zero days. We interpret this spot as indicating close cooperation with the base project: backporting of many patches takes only a few days, while the author lists of forwardport and backport patches overlap.
CONCLUSIONS
We presented an approach and implementation for the quantitative analysis of patch stacks and a semi-automatic method for identifying similar commits. An evaluation and visualisation of the Preempt-RT patch stack was presented as case study.
In future work, we will concentrate on deeper statistical analysis and on comparing the properties and software-engineering implications of patch stacks for various projects. We are also working on a measure to quantify the invasiveness of patches and patch stacks, which will allow us to draw conclusions on the eventual maintenance cost of such stacks.
Figure 1: Preempt-RT patch stack: Evolution of the stack size since Linux kernel version 3.0
Figure 2: Preempt-RT patch stack: Distribution of integration times (in days) for patches that are eventually integrated in mainline. Positive values indicate forwardports, negative values indicate backports.
Figure 3: Preempt-RT patch stack: Comparing the composition of the last major releases of the patch stacks
C. Bird, P. C. Rigby, E. T. Barr, D. J. Hamilton, D. M. German, and P. Devanbu. The promises and perils of mining git. In 6th IEEE International Working Conference on Mining Software Repositories (MSR '09), pages 1-10, May 2009.
Y. Brun, R. Holmes, M. D. Ernst, and D. Notkin. Proactive detection of collaboration conflicts. In Proceedings of the 19th ACM SIGSOFT Symposium and the 13th European Conference on Foundations of Software Engineering (ESEC/FSE '11), pages 168-178, New York, NY, USA, 2011. ACM.
A. Deshpande and D. Riehle. The total growth of open source. In Open Source Development, Communities and Quality: IFIP 20th World Computer Congress, Working Group 2.3 on Open Source Software, September 7-10, 2008, Milano, Italy, pages 197-209. Springer US, Boston, MA, 2008.
S. Ducasse, M. Rieger, and S. Demeyer. A language independent approach for detecting duplicated code. In Proceedings of the IEEE International Conference on Software Maintenance (ICSM '99), pages 109-118. IEEE, 1999.
M. L. Guimarães and A. R. Silva. Improving early detection of software merge conflicts. In Proceedings of the 34th International Conference on Software Engineering (ICSE '12), pages 342-352, Piscataway, NJ, USA, 2012. IEEE Press.
L. Jiang, G. Misherghi, Z. Su, and S. Glondu. Deckard: Scalable and accurate tree-based detection of code clones. In Proceedings of the 29th International Conference on Software Engineering, pages 96-105. IEEE Computer Society, 2007.
D. MacKenzie, P. Eggert, and R. Stallman. Comparing and Merging Files, 2013. http://www.gnu.org/software/diffutils/manual/diffutils.pdf
H. Munakata and T. Shibata. The Economic Value of the Long-Term Support Initiative (LTSI). Linux Foundation, 2013.
Preempt-RT Wiki. https://rt.wiki.kernel.org/.
R. Smith and S. Horwitz. Detecting and measuring similarity in code clones. In Proceedings of the International Workshop on Software Clones (IWSC), 2009.
|
[
"https://github.com/lfd/PaStA"
] |
[
"The role of electron-phonon interaction in a magnetically driven mechanism for superconductivity",
"The role of electron-phonon interaction in a magnetically driven mechanism for superconductivity"
] |
[
"H Bakrim \nDépartement de physique\nRegroupement Québecois sur les Matériaux de Pointe\nUniversité de Sherbrooke\nJ1K-2R1SherbrookeQuébecCanada\n",
"C Bourbonnais \nDépartement de physique\nRegroupement Québecois sur les Matériaux de Pointe\nUniversité de Sherbrooke\nJ1K-2R1SherbrookeQuébecCanada\n"
] |
[
"Département de physique\nRegroupement Québecois sur les Matériaux de Pointe\nUniversité de Sherbrooke\nJ1K-2R1SherbrookeQuébecCanada",
"Département de physique\nRegroupement Québecois sur les Matériaux de Pointe\nUniversité de Sherbrooke\nJ1K-2R1SherbrookeQuébecCanada"
] |
[] |
We use the renormalization group method to examine the effect of phonon-mediated interaction on d-wave superconductivity, as driven by spin fluctuations in a quasi-one-dimensional electron system. The influence of a tight-binding electron-phonon interaction on the spin-density-wave and d-wave superconducting instability lines is calculated for arbitrary temperature, phonon frequency and antinesting of the Fermi surface. The domain of electron-phonon coupling strength where spin-density-wave order becomes unstable against the formation of a bond-order-wave or Peierls state is determined at weak antinesting. We show the existence of a positive isotope effect for spin-density-wave and d-wave superconducting critical temperatures which scales with the antinesting distance from the quantum critical point where the two instabilities merge. We single out a low phonon frequency zone where the bond-order-wave ordering gives rise to triplet f-wave superconductivity under nesting alteration, with both orderings displaying a negative isotope effect. We also study the electron-phonon strengthening of spin fluctuations at the origin of extended quantum criticality in the metallic phase above superconductivity. The impact of our results on quasi-one-dimensional organic conductors like the Bechgaard salts, where a Peierls distortion is absent and superconductivity emerges near a spin-density-wave state under pressure, is emphasized.
|
10.1103/physrevb.90.125119
|
[
"https://arxiv.org/pdf/1406.6086v2.pdf"
] | 119,254,843 |
1406.6086
|
6b5bc203a96d411d64db36e3b2851d0231b5af0d
|
The role of electron-phonon interaction in a magnetically driven mechanism for superconductivity
H Bakrim
Département de physique
Regroupement Québecois sur les Matériaux de Pointe
Université de Sherbrooke
J1K-2R1SherbrookeQuébecCanada
C Bourbonnais
Département de physique
Regroupement Québecois sur les Matériaux de Pointe
Université de Sherbrooke
J1K-2R1SherbrookeQuébecCanada
The role of electron-phonon interaction in a magnetically driven mechanism for superconductivity
(Dated: June 25, 2014)PACS numbers:
We use the renormalization group method to examine the effect of phonon-mediated interaction on d-wave superconductivity, as driven by spin fluctuations in a quasi-one-dimensional electron system. The influence of a tight-binding electron-phonon interaction on the spin-density-wave and d-wave superconducting instability lines is calculated for arbitrary temperature, phonon frequency and antinesting of the Fermi surface. The domain of electron-phonon coupling strength where spin-density-wave order becomes unstable against the formation of a bond-order-wave or Peierls state is determined at weak antinesting. We show the existence of a positive isotope effect for spin-density-wave and d-wave superconducting critical temperatures which scales with the antinesting distance from the quantum critical point where the two instabilities merge. We single out a low phonon frequency zone where the bond-order-wave ordering gives rise to triplet f-wave superconductivity under nesting alteration, with both orderings displaying a negative isotope effect. We also study the electron-phonon strengthening of spin fluctuations at the origin of extended quantum criticality in the metallic phase above superconductivity. The impact of our results on quasi-one-dimensional organic conductors like the Bechgaard salts, where a Peierls distortion is absent and superconductivity emerges near a spin-density-wave state under pressure, is emphasized.
I. INTRODUCTION
Since the discovery of superconductivity (SC) in the Bechgaard salts [(TMTSF)2X] series 1, attention paid to the mechanism of Cooper pairing has mostly focused on models of electrons with purely repulsive interactions 2-13. On empirical grounds, this has been amply supported by the ubiquity of spin-density-wave (SDW) correlations near the superconducting state when either pressure 14-17, temperature 18,19, or even magnetic field is varied 20,21. As one moves along the temperature axis, for example, and enters the metallic state, important SDW fluctuations are found to govern the properties of the normal phase, giving rise for instance to a huge enhancement of the NMR spin relaxation rate and to a linear-T resistivity term over a wide temperature interval above the critical temperature Tc for superconductivity 17,19,22.
Besides the nesting of the Fermi surface, repulsive interactions are an essential component of SDW correlations 3,23-25. They have become inescapable ingredients of the model description of superconductivity in these materials. In this regard, the quasi-one-dimensional electron gas model, with the aid of the renormalization group (RG) method, has played an important part in the description of these low-dimensional electron systems. In the repulsive sector, it proved particularly generic of the SDW to d-wave SC (SCd) sequence of instabilities when the amplitude of the next-to-nearest-neighbour interchain hopping, t'⊥, called the antinesting parameter, is tuned to simulate pressure effects on the spin fluctuations responsible for the superconducting pairing interaction 26,27. The approach has also shown how the constructive interference between spin fluctuations and Cooper pairing can explain the existence of a Curie-Weiss temperature dependence of the SDW correlation length, which is a key factor in the enhancement of the NMR relaxation rate and the linear-T component in resistivity over the whole pressure interval where superconductivity is present 17,28-30.
However, in view of the complex molecular structure of systems like the Bechgaard salts, the repulsive electron gas model must be regarded as an idealization. It ignores primarily the interaction of electrons with low-energy phonon modes of the lattice. Early X-ray diffuse scattering experiments in the (TMTSF)2PF6 and (TMTSF)2ClO4 compounds did reveal the existence of such a coupling, under the guise of lattice fluctuations at the 1D wave vector 2kF of the electron gas (kF being the longitudinal Fermi wave vector) 31,32. The lattice fluctuations remain regular in temperature for the Bechgaard salts, in contrast to so many molecular chain systems where they terminate in a Peierls, or bond-order-wave (BOW), distorted state. Although the reason for this remains to a large part unexplained 32,33, the presence of 2kF lattice fluctuations is direct evidence of a finite coupling between electrons and phonons, a consequence of the modulation of tight-binding electron band parameters by lattice vibrations.
This points to the impact a retarded, phonon-mediated (Ph-M), interaction can have on the properties of the electron gas when the mechanism for Cooper pairing is magnetically driven: whether it is detrimental to SDW and SCd correlations, as one would naturally expect if the electron-phonon interaction were taken in isolation 34, or whether, on the contrary, it becomes a factor of reinforcement when it is subordinate to repulsive interactions. The latter possibility can provide new insight into the conditions prevailing in weakly dimerized systems like the Bechgaard salts that make SDW come out on top of the Peierls phenomenon. It can further clarify how electron-phonon interaction can be actively involved in the occurrence of superconductivity near magnetism. It can also shed light on the possibility of a positive isotope effect for the temperature scale of instabilities against SDW/SCd orderings as a function of phonon frequency. Reinforcement could also extend relatively far into the metallic phase by enhancing spin fluctuations, as quantum critical effects due to interfering SCd and SDW instabilities 29.
These possibilities found a rather large echo in the context of other unconventional superconductors, in particular the high-Tc cuprates 35-38, where they framed a significant part of the debate surrounding the relative importance of Coulomb and electron-phonon interactions when superconductivity takes place in the proximity of antiferromagnetism 39-46 and charge-density-wave ordering 47,48. Its transposition to quasi-one-dimensional superconductors like the Bechgaard salts close to a SDW instability has remained essentially unexplored since the very first attempts to reconcile electron-electron and electron-phonon interactions in the framework of the mean-field theory of competing magnetism and superconductivity 24. In this work we address this problem in the weak-coupling framework of the RG approach to the quasi-1D electron gas model. The model is extended to include both direct and Ph-M electron-electron interactions in the study of interfering (electron-electron) Cooper and (electron-hole) density-wave pairings at arbitrary phonon frequency ωD. The RG calculations will be carried out at finite temperature T, which brings additional difficulties in the presence of retarded interactions. This turns out to be required when antinesting is present. Actually, a finite t'⊥ breaks the usual correspondence between T and the scaled cut-off energy Λ(ℓ) from the Fermi surface that generates the RG flow. The flow will then be conducted at arbitrary temperature for interactions with momentum dependence along the Fermi surface and a finite set of Matsubara frequencies. This finite-T RG procedure with momentum and frequency variables has been worked out recently for systems where Ph-M interactions are predominant, a situation relevant to competing charge-density-wave and s-wave SC instabilities away from half-filling 34. It is extended here to weakly dimerized chain systems like the Bechgaard salts, where repulsive interactions are dominant and half-filling Umklapp scattering is finite 3,28,29.
The results put forward below show that the modulation of the tight-binding electron band by acoustic lattice vibrations introduces effective Ph-M interactions with a very characteristic dependence on the longitudinal momentum transfer of scattered electrons. This dependence affects the RG flow and produces a low-energy downward screening of the repulsive backward scattering and an enhancement of the repulsive Umklapp term. Both effects are ωD-dependent and concur to boost the antiferromagnetic exchange between itinerant electrons and reinforce both the SDW and the magnetically driven SCd instability lines of the phase diagram. The impact of retardation generates a positive isotope effect whose amplitude peaks at the critical strength of antinesting where the SDW and SCd instability lines meet and their constructive interference is the strongest. Above a definite strength of electron-phonon interaction, the SDW becomes unstable against the formation of a BOW distorted state and triplet f-wave superconductivity if antinesting and retardation are sufficiently high. The latter states are both characterized by a negative isotope effect, as a result of antiadiabaticity.
The boost of interference by the electron-phonon interaction is not limited to the transition lines; it is also manifest in the metallic phase, where it feeds deviations from Fermi-liquid behaviour at the origin of the extended quantum criticality of the normal phase 17,28,29. The latter can be followed through the reinforcement of the Curie-Weiss behaviour of the SDW susceptibility, which is correlated with $\omega_D$ and antinesting $t_\perp$ in the whole range where superconductivity is present.
In Sec. II we introduce the quasi-1D electron gas model, extended to include the tight-binding electron-phonon interaction term. In Sec. III the one-loop RG flow equations for the different electron-electron vertices and relevant response functions are given. Their integration, carried out in Sec. IV, leads to the determination of the phase diagrams, the isotope effects and the spin fluctuations of the normal state at arbitrary antinesting and phonon frequency. In Sec. V, we discuss the implications of our results for the description of unconventional superconductors like the Bechgaard salts and conclude this work.
II. THE MODEL
For a linear array of N ⊥ chains of length L, the Hamiltonian of the quasi-1D electron gas with electron-phonon coupling is given by
$$
H = H^0_{\rm p} + H_{\rm ep} + \sum_{p,\mathbf k,\sigma} E_p(\mathbf k)\, c^{\dagger}_{p,\mathbf k,\sigma} c_{p,\mathbf k,\sigma} + \frac{\pi v_F}{L N_\perp} \sum_{\{\mathbf k,\sigma\}} \Big\{ g_1\, c^{\dagger}_{+,\mathbf k_4,\sigma_1} c^{\dagger}_{-,\mathbf k_3,\sigma_2} c_{+,\mathbf k_2,\sigma_2} c_{-,\mathbf k_1,\sigma_1} + g_2\, c^{\dagger}_{+,\mathbf k_4,\sigma_1} c^{\dagger}_{-,\mathbf k_3,\sigma_2} c_{-,\mathbf k_2,\sigma_2} c_{+,\mathbf k_1,\sigma_1} + \tfrac12\, g_3\, c^{\dagger}_{+,\mathbf k_4,\sigma_1} c^{\dagger}_{+,\mathbf k_3,\sigma_2} c_{-,\mathbf k_2,\sigma_2} c_{-,\mathbf k_1,\sigma_1} + {\rm H.c.} \Big\}\, \delta_{\mathbf k_1+\mathbf k_2 = \mathbf k_3+\mathbf k_4\,(\pm\mathbf G)}\,. \qquad (1)
$$
In the purely electronic part that has been made explicit, the operator c † p,k,σ (c p,k,σ ) creates (destroys) a right (p = +) and left (p = −) moving electron of wave vector k = (k, k ⊥ ) and spin σ. The free part is modeled by the anisotropic one-electron energy spectrum in two dimensions,
$$
E_p(\mathbf k) = v_F\,(p\,k - k_F) + \varepsilon(k_\perp), \qquad (2)
$$
where
$$
\varepsilon(k_\perp) = -2t_\perp \cos k_\perp - 2t'_\perp \cos 2k_\perp. \qquad (3)
$$
The longitudinal part has been linearized around the longitudinal Fermi wave vector $pk_F$; $v_F$ is the longitudinal Fermi velocity, $t_\perp$ is the nearest-neighbor hopping integral in the perpendicular direction, while $t'_\perp$ is a second-nearest-neighbor hopping that parametrizes deviations from perfect nesting at $\mathbf q_0 = (2k_F, \pi)$ and simulates the most important effect of pressure in our model. The quasi-1D anisotropy of the spectrum is $E_F \simeq 15\, t_\perp$, where $E_F = v_F k_F \simeq 3000\,$K is the longitudinal Fermi energy, congruent with the range found in the Bechgaard salts [49-51]; $E_F$ is taken as half the bandwidth cutoff $E_0 = 2E_F$ in the model. The interacting part of the Hamiltonian is described by the bare backward, $g_1 \equiv g_1(+k_F,-k_F;+k_F,-k_F)$, and forward, $g_2 \equiv g_2(+k_F,-k_F;-k_F,+k_F)$, scattering amplitudes between right- and left-moving electrons defined on the 1D Fermi surface. The half-filling character of the band (a consequence of a small dimerization of the chains) gives rise to Umklapp scattering of bare amplitude $g_3 \equiv g_3(\pm k_F,\pm k_F;\mp k_F,\mp k_F)$, for which momentum conservation involves the longitudinal reciprocal lattice vector $\mathbf G = (4k_F, 0)$. All couplings are normalized by $\pi v_F$ and are initially independent of the transverse momenta $k_{\perp i}$, but acquire such a dependence along the RG flow.
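The role of $t'_\perp$ as an antinesting term can be made concrete with a short numerical check (a sketch with illustrative hopping values, not fitted to any material): for $t'_\perp = 0$ the transverse dispersion obeys the perfect-nesting relation $\varepsilon(k_\perp + \pi) = -\varepsilon(k_\perp)$, while a finite $t'_\perp$ leaves a deviation $-4t'_\perp\cos 2k_\perp$ that frustrates electron-hole pairing at $\mathbf q_0 = (2k_F, \pi)$.

```python
import numpy as np

# Transverse dispersion of Eq. (3); t2 plays the role of the antinesting term
# t'_perp. Hopping values are illustrative only.
def eps(k_perp, t1=200.0, t2=0.0):
    return -2.0*t1*np.cos(k_perp) - 2.0*t2*np.cos(2.0*k_perp)

k = np.linspace(-np.pi, np.pi, 201)

# Perfect nesting at q0 = (2k_F, pi): eps(k + pi) = -eps(k) when t'_perp = 0
dev_perfect = eps(k + np.pi) + eps(k)

# A finite t'_perp leaves the deviation -4 t'_perp cos(2 k_perp)
t2 = 25.0
dev = eps(k + np.pi, t2=t2) + eps(k, t2=t2)
```

The deviation is largest near $k_\perp = 0, \pm\pi$, which is where the $(2k_F,\pi)$ electron-hole pairing is most strongly suppressed as pressure (i.e. $t'_\perp$) increases.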
Regarding the values taken by the interaction parameters throughout the present calculations, we shall take $g_1 = g_2/2 \simeq 0.32$ and $g_3 \simeq 0.025$, which follow from the phenomenological analysis of previous works that fixed their amplitudes from different experiments on weakly dimerized systems like the Bechgaard salts 28,29.
The electron-phonon part of the Hamiltonian (1) follows from the modulation of the longitudinal hopping integral by acoustic phonons in the tight-binding approximation 52. It reads
$$
H^0_{\rm p} + H_{\rm ep} = \sum_{\mathbf q,\nu} \omega_{\mathbf q,\nu}\Big( b^{\dagger}_{\mathbf q,\nu} b_{\mathbf q,\nu} + \tfrac12 \Big) + (L N_\perp)^{-\frac12} \sum_{p,\sigma,\nu}\, \sum_{\mathbf k,\mathbf q} g_\nu(k,q)\, c^{\dagger}_{p,\mathbf k+\mathbf q,\sigma}\, c_{-p,\mathbf k,\sigma}\, \big( b^{\dagger}_{\mathbf q,\nu} + b_{-\mathbf q,\nu} \big), \qquad (4)
$$
where $\nu$ labels the different polarizations of the acoustic phonons. For the phonons of interest, propagating parallel to the chain $a$-axis, we have
$$
\omega_{q,\nu} = \omega_\nu \sin\frac{q}{2}\,, \qquad (5)
$$
$$
g_\nu(k,q) = i\,\frac{4\lambda_\nu}{\sqrt{2M\omega_\nu}}\, \sin\frac{q}{2}\, \cos\Big(k + \frac{q}{2}\Big), \qquad (6)
$$
where the coupling amplitude $\lambda_\nu = \nabla t \cdot \mathbf e_\nu$ is expressed in terms of the longitudinal hopping integral $t$ and the unit vector $\mathbf e_\nu$ of the lattice displacement; $\omega_\nu = 2\sqrt{\kappa_\nu/M}$ is the Debye frequency of the acoustic branch $\nu$, and $M$ is the mass of the molecular unit. The bandwidth of the acoustic branches in molecular systems like the Bechgaard salts does not exceed $\omega_\nu \sim 100\,$K 53-55. We will consider in the following the interval of normalized phonon frequencies $0 < \omega_D/t_\perp \le 0.5$.
For the partition function $Z$, it is straightforward to trace out the harmonic phonon degrees of freedom and express the partition function,
$$
Z = \int \mathcal D\psi^*\, \mathcal D\psi\; e^{\,S_0 + S_I},
$$
as a functional integral over the anticommuting fermion fields $\psi^{(*)}$. The bare action in Matsubara-Fourier space is given by
$$
S_0[\psi^*,\psi] = \sum_{\bar k,p,\sigma} [G^0_p(\bar k)]^{-1}\, \psi^*_{p,\sigma}(\bar k)\, \psi_{p,\sigma}(\bar k), \qquad (7)
$$
where $\bar k = (\mathbf k, \omega_n = \pm\pi T, \pm 3\pi T, \ldots)$ and
$$
G^0_p(\bar k) = \big[\, i\omega_n - E_p(\mathbf k)\, \big]^{-1} \qquad (8)
$$
is the bare fermion propagator. The interacting part of the action is of the form
$$
S_I[\psi^*,\psi] = -\frac{\pi v_F\, T}{L N_\perp} \sum_{\{\bar k,\sigma\}} \Big\{ g_1(\bar k_1,\bar k_2,\bar k_3,\bar k_4)\, \psi^*_{+,\sigma_4}(\bar k_4)\, \psi^*_{-,\sigma_3}(\bar k_3)\, \psi_{+,\sigma_2}(\bar k_2)\, \psi_{-,\sigma_1}(\bar k_1) + g_2(\bar k_1,\bar k_2,\bar k_3,\bar k_4)\, \psi^*_{+,\sigma_4}(\bar k_4)\, \psi^*_{-,\sigma_3}(\bar k_3)\, \psi_{-,\sigma_2}(\bar k_2)\, \psi_{+,\sigma_1}(\bar k_1) + \tfrac12 \big[ g_3(\bar k_1,\bar k_2,\bar k_3,\bar k_4)\, \psi^*_{+,\sigma_4}(\bar k_4)\, \psi^*_{+,\sigma_3}(\bar k_3)\, \psi_{-,\sigma_2}(\bar k_2)\, \psi_{-,\sigma_1}(\bar k_1) + {\rm c.c.} \big] \Big\}\, \delta_{\bar k_1+\bar k_2,\,\bar k_3+\bar k_4\,(\pm\bar G)} \qquad (9)
$$
where $\bar k_i \equiv (k_{\perp i}, \omega_{n_i})$ and $\bar G = (4k_F, 0, 0)$ for Umklapp scattering. The amplitude of the bare effective backscattering is given by
$$
g_1(\bar k_1,\bar k_2,\bar k_3,\bar k_4) = g_1 - \sum_\nu \frac{2}{\pi v_F\, \omega_\nu}\, \frac{g_\nu(k_F,-2k_F)\, g_\nu(-k_F,2k_F)}{1 + (\omega_{n_3}-\omega_{n_1})^2/\omega_\nu^2} \equiv g_1 + \frac{g_{\rm ph}}{1 + (\omega_{n_3}-\omega_{n_1})^2/\omega_D^2}\,, \qquad (10)
$$
where the electron-phonon matrix element has been evaluated on the 1D Fermi points ±k F . This leads to an attractive contribution from all acoustic branches of normalized amplitude
$$
g_{\rm ph} = -4 \sum_\nu \lambda_\nu^2/(\pi v_F\, \kappa_\nu). \qquad (11)
$$
Here we have defined the Debye frequency $\omega_D = \langle \omega_{2k_F,\nu} \rangle_\nu$ as the average phonon frequency over the different branches at the zone edge. As for the amplitude of the effective forward scattering, we have
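As a consistency check of Eq. (11) (with arbitrary illustrative units), one can insert the vertex (6) evaluated at the 1D Fermi points $\pm k_F$ ($k_F = \pi/2$ at half filling) into the zero-frequency phonon-exchange amplitude of Eq. (10) for a single branch; the algebra collapses to $-4\lambda^2/(\pi v_F \kappa)$, and the same vertex vanishes at $q = 0$, which is why the forward scattering is untouched in Eq. (12).

```python
import numpy as np

# Illustrative parameters (arbitrary units): lam = grad(t).e_nu of the text,
# kappa the spring constant, M the molecular mass, vF the Fermi velocity.
lam, M, kappa, vF = 1.3, 2.0, 0.7, 1.0
omega = 2.0*np.sqrt(kappa/M)      # zone-edge (Debye) frequency of the branch
kF = np.pi/2.0                    # half-filled band

def g_ep(k, q):
    # tight-binding electron-phonon vertex of Eq. (6)
    return 1j*4.0*lam/np.sqrt(2.0*M*omega)*np.sin(q/2.0)*np.cos(k + q/2.0)

# zero-frequency phonon exchange in the backscattering channel, Eq. (10):
g1_ph = -(2.0/(np.pi*vF*omega))*(g_ep(kF, -2.0*kF)*g_ep(-kF, 2.0*kF)).real
g_ph_closed = -4.0*lam**2/(np.pi*vF*kappa)   # closed form of Eq. (11), one branch
```

The $\omega_\nu$ dependence drops out because $M\omega_\nu^2 = 4\kappa_\nu$ for the zone-edge acoustic mode, leaving only the spring constant $\kappa_\nu$ in the static amplitude.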
$$
g_2(\bar k_1,\bar k_2,\bar k_3,\bar k_4) = g_2 - \frac{2}{\pi v_F} \sum_\nu \frac{\omega_{0,\nu}\, g_\nu(k_F,0)\, g_\nu(-k_F,0)}{\omega_{0,\nu}^2 + (\omega_{n_3}-\omega_{n_1})^2} = g_2\,, \qquad (12)
$$
which remains unaffected by phonons at vanishing momentum transfer. Finally, for the bare Umklapp term in the presence of phonons, we have
$$
g_3(\bar k_1,\bar k_2,\bar k_3,\bar k_4) = g_3 - \sum_\nu \frac{2}{\pi v_F\, \omega_\nu}\, \frac{g_\nu(k_F,2k_F)\, g_\nu(k_F,-2k_F)}{1 + (\omega_{n_3}-\omega_{n_1})^2/\omega_\nu^2} \equiv g_3 + \frac{\eta\,|g_{\rm ph}|}{1 + (\omega_{n_3}-\omega_{n_1})^2/\omega_D^2}\,, \qquad (13)
$$
which, in contrast to normal backscattering, gives rise to a retarded repulsive contribution. Here $\eta$ is a reduction factor that takes into account the weak dimerization of the chains. For simplicity we shall take $\eta = g_3/g_1$ ($= \Delta_D/E_F \ll 1$) (see also Ref. 24). The dependence of the above bare retarded couplings on the longitudinal momentum transfer of the scattered electrons will play an important role in their RG flow at low energy.
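The frequency structure of Eqs. (10) and (13) can be visualized with a minimal sketch (coupling values from the text; the ratio $|g_{\rm ph}| = 0.3\,g_1$ is illustrative): the backscattering is screened downward and the Umklapp pushed upward at small Matsubara frequency transfer, while both revert to their unretarded values for $|\omega_{n_3}-\omega_{n_1}| \gg \omega_D$.

```python
import numpy as np

# Bare couplings quoted in the text (units of pi*v_F); |g_ph| = 0.3*g1 is illustrative.
g1, g2, g3 = 0.32, 0.64, 0.025
g_ph = -0.3*g1          # attractive phonon-mediated amplitude, Eq. (11)
eta = g3/g1             # dimerization reduction factor of Eq. (13)
omega_D = 1.0           # Debye frequency (sets the retardation scale)

def g1_eff(w):
    # Eq. (10): backscattering, screened downward at small frequency transfer w
    return g1 + g_ph/(1.0 + (w/omega_D)**2)

def g3_eff(w):
    # Eq. (13): Umklapp, pushed upward at small frequency transfer w
    return g3 + eta*abs(g_ph)/(1.0 + (w/omega_D)**2)
```

At temperatures $T \gg \omega_D$ only large frequency transfers matter and the phonon contribution is washed out; at $T < \omega_D$ the low-frequency screening of $g_1$ and enhancement of $g_3$ become effective, which is the seed of the antiferromagnetic boost discussed in Sec. IV.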
III. RENORMALIZATION GROUP EQUATIONS
We use the finite-temperature momentum-frequency RG scheme introduced in Ref. 34. In the partition function we proceed to the successive integration of electron states in the energy shell $\Lambda(\ell)\,d\ell$ at energy distance $\pm\Lambda(\ell) = \pm E_F\, e^{-\ell}$ from the Fermi surface, where $\ell \in [0, \infty)$.
For the $k_\perp$-momentum dependence of the scattering amplitudes on each Fermi sheet, a constant-energy surface in the Brillouin zone is divided into 12 patches, inside which the couplings are considered constant in the loop integration. This number of patches is sufficient to take into account the nonperturbative effect of the warping of the Fermi surface and of the antinesting term $t'_\perp$. Regarding the frequency dependence, we have retained a finite number $N_\omega = 14$ of Matsubara frequencies $\omega_n$ ($-7 \le n \le 6$), within a mean-field single-patch scheme for the loop frequency variable, as described below. The flow equations read
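A minimal sketch of the bookkeeping implied by this discretization (the patch count and $N_\omega$ follow the text; the uniform patching and the storage layout are our assumptions for illustration):

```python
import numpy as np

N_PATCH = 12   # k_perp patches on a constant-energy surface (value from the text)
N_W = 14       # retained Matsubara frequencies (value from the text)

def patch_index(k_perp):
    # fold k_perp into [-pi, pi) and map it to a patch label 0..N_PATCH-1
    k = np.mod(np.asarray(k_perp) + np.pi, 2.0*np.pi) - np.pi
    idx = np.floor((k + np.pi)/(2.0*np.pi)*N_PATCH).astype(int)
    return np.minimum(idx, N_PATCH - 1)

patch_centers = -np.pi + (np.arange(N_PATCH) + 0.5)*2.0*np.pi/N_PATCH

# couplings stored per patch triplet (the fourth k_perp is fixed by momentum
# conservation) and per external Matsubara frequency, e.g. a g2-like initial value:
g2_patch = 0.64*np.ones((N_PATCH, N_PATCH, N_PATCH, N_W))
```

With three independent transverse momenta and one frequency label per vertex, the coupling arrays stay small ($12^3 \times 14$ entries per $g_i$), which is what makes the finite-$T$ flow with momentum and frequency variables numerically tractable.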
$$
\partial_\ell\, g_1(\bar k_1,\bar k_2,\bar k_3,\bar k_4) = \frac{1}{2\pi}\int dk_\perp\, \Big\langle I_P(k_\perp,\bar q_P) \big[ \alpha_P\, g_1(\bar k_1,\bar k,\bar k_P,\bar k_4)\, g_1(\bar k_P,\bar k_2,\bar k_3,\bar k) + \alpha_{P,v}\, g_2(\bar k_1,\bar k,\bar k_4,\bar k_P)\, g_1(\bar k_P,\bar k_2,\bar k_3,\bar k) + \alpha_{P,v}\, g_1(\bar k_1,\bar k,\bar k_P,\bar k_4)\, g_2(\bar k_P,\bar k_2,\bar k,\bar k_3) + \alpha_P\, g_3(\bar k_1,\bar k,\bar k_3,\bar k'_P)\, g_3(\bar k'_P,\bar k_2,\bar k,\bar k_4) + \alpha_{P,v}\, g_3(\bar k,\bar k_1,\bar k_3,\bar k'_P)\, g_3(\bar k'_P,\bar k_2,\bar k,\bar k_4) + \alpha_{P,v}\, g_3(\bar k_1,\bar k,\bar k'_P,\bar k_3)\, g_3(\bar k'_P,\bar k_2,\bar k,\bar k_4) \big] \Big\rangle + \frac{1}{2\pi}\int dk_\perp\, \Big\langle I_C(k_\perp,\bar q_C) \big[ \alpha_C\, g_1(\bar k_1,\bar k_2,\bar k,\bar k_C)\, g_2(\bar k,\bar k_C,\bar k_4,\bar k_3) + \alpha_C\, g_2(\bar k_1,\bar k_2,\bar k_C,\bar k)\, g_1(\bar k,\bar k_C,\bar k_3,\bar k_4) \big] \Big\rangle\,, \qquad (14)
$$
$$
\partial_\ell\, g_2(\bar k_1,\bar k_2,\bar k_3,\bar k_4) = \frac{1}{2\pi}\int dk_\perp\, \Big\langle I_P(k_\perp,\bar q_P) \big[ \alpha_{P,l}\, g_2(\bar k_1,\bar k,\bar k_3,\bar k_P)\, g_2(\bar k_P,\bar k_2,\bar k,\bar k_4) + \alpha_{P,l}\, g_3(\bar k_1,\bar k,\bar k_P,\bar k_4)\, g_3(\bar k_P,\bar k_2,\bar k_3,\bar k) \big] \Big\rangle + \frac{1}{2\pi}\int dk_\perp\, \Big\langle I_C(k_\perp,\bar q_C) \big[ \alpha_C\, g_1(\bar k_1,\bar k_2,\bar k,\bar k_C)\, g_1(\bar k,\bar k_C,\bar k_4,\bar k_3) + \alpha_C\, g_2(\bar k_1,\bar k_2,\bar k_C,\bar k)\, g_2(\bar k,\bar k_C,\bar k_3,\bar k_4) \big] \Big\rangle\,, \qquad (15)
$$
and
$$
\partial_\ell\, g_3(\bar k_1,\bar k_2,\bar k_3,\bar k_4) = \frac{1}{2\pi}\int dk_\perp\, \Big\langle I_P(k_\perp,\bar q_P) \big[ 2\alpha_P\, g_1(\bar k_1,\bar k,\bar k_3,\bar k_P)\, g_3(\bar k_P,\bar k_2,\bar k,\bar k_4) + \alpha_{P,v}\, g_1(\bar k_1,\bar k,\bar k_3,\bar k_P)\, g_3(\bar k_P,\bar k_2,\bar k_4,\bar k) + \alpha_{P,v}\, g_2(\bar k,\bar k_1,\bar k_3,\bar k_P)\, g_3(\bar k_P,\bar k_2,\bar k,\bar k_4) \big] \Big\rangle + \frac{1}{2\pi}\int dk_\perp\, \Big\langle I_P(k_\perp,\bar q_P)\, 2\alpha_{P,l}\, g_2(\bar k,\bar k_1,\bar k_3,\bar k_P)\, g_3(\bar k_P,\bar k_4,\bar k_2,\bar k) \Big\rangle\,. \qquad (16)
$$
These consist of closed-loop ($\alpha_P = -2$), vertex-correction ($\alpha_{P,v} = 1$) and ladder ($\alpha_{P,l} = 1$) diagrams of the $\bar q_P$ electron-hole (Peierls) pairing, which combine with the ladder diagrams ($\alpha_C = -1$) of the electron-electron (Cooper) pairing.
Here $\bar k_P = \bar k + \bar q_P$, $\bar k'_P = \bar k + \bar q'_P$ and $\bar k_C = -\bar k + \bar q_C$, where $\bar q_{P,C} = (q_{\perp P,C}, \omega_{P,C})$ corresponds to the Peierls, $\bar q_P = \bar k_1 - \bar k_4$ and $\bar q'_P = \bar k_1 - \bar k_3$, and Cooper, $\bar q_C = \bar k_2 + \bar k_1$, variables. In the above equations, each diagram singles out a discrete frequency convolution of the form $D_{P,C} = \sum_{\omega_n} g_i \circ g_j \circ L_{P,C}$ between the coupling products and the Peierls (Cooper) loop derivative $L_{P,C} = T\, \partial_\ell\, G^0_+(\bar k + \bar q_{P,C})\, G^0_-(\pm\bar k)$. The exact frequency summation at arbitrary $T$ is computationally out of reach. It can be approximated, however, by a mean-field scheme in which $D_{P,C} \to \langle g_i \circ g_j \rangle \sum_{\omega_n} L_{P,C}$, where $\langle \cdots \rangle = N_\omega^{-1} \sum_n \cdots$ stands for an average of the couplings over $\omega_n$, the internal loop frequency variable. The product of couplings averaged over $\omega_n$ is thus treated as constant in the exact evaluation of the derivative of the Cooper and Peierls loops, $I_{P,C} = \sum_{n=-\infty}^{+\infty} L_{P,C}$, at temperature $T$, which reads
$$
I_{P,C}(k_\perp,\bar q_{P,C}) = \sum_{\nu=\pm1} \theta\Big[\big| E_0(\ell)/2 + \nu A_{P,C} \big| - E_0(\ell)/2\Big] \times \frac14 \Big[ \tanh\frac{E_0(\ell) + 2\nu A_{P,C}}{4T} + \tanh\frac{E_0(\ell)}{4T} \Big] \times \frac{\big[E_0(\ell) + \nu A_{P,C}\big]\, E_0(\ell)}{\big[E_0(\ell) + \nu A_{P,C}\big]^2 + \omega_{P,C}^2}\,, \qquad (17)
$$
where $\omega_P = \omega_{n_3} - \omega_{n_1}$, and
$$
A_P = -\varepsilon(k_\perp) - \varepsilon(k_\perp + q_{\perp P}), \qquad (18)
$$
for the Peierls channel; $\omega_C = \omega_{n_1} + \omega_{n_2}$, and
$$
A_C = -\varepsilon(k_\perp) + \varepsilon(k_\perp + q_{\perp C}), \qquad (19)
$$
for the Cooper channel. Here $\theta[x]$ is the step function ($\theta[0] \equiv \tfrac12$). At finite temperature, the above decoupling scheme with the retained number of frequencies represents a good compromise between computing time and accuracy, reproducing the known results for both the nonretarded case in quasi-1D [27-29] and the quantum corrections to BOW ordering in the pure electron-phonon problem in one dimension 34.
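Equation (17) is simple enough to transcribe directly; the sketch below (scalar inputs, arbitrary units) checks two limits: for $A_{P,C} = \omega_{P,C} = 0$ it reduces to $\frac12\tanh[E_0(\ell)/4T]$, the familiar logarithmic-derivative form, and a finite frequency transfer $\omega_{P,C}$ cuts the loop off, which is how retardation dampens the open diagrams discussed below.

```python
import numpy as np

def I_loop(A, w, E0, T):
    """Peierls/Cooper loop derivative of Eq. (17).

    A  : A_{P,C}(k_perp), the nesting-deviation energy of Eqs. (18)-(19)
    w  : frequency transfer omega_{P,C}
    E0 : running bandwidth cutoff E_0(l)
    T  : temperature (same arbitrary units)
    """
    out = 0.0
    for nu in (+1.0, -1.0):
        x = abs(E0/2.0 + nu*A) - E0/2.0
        theta = 1.0 if x > 0.0 else (0.5 if x == 0.0 else 0.0)  # theta[0] = 1/2
        out += theta*0.25*(np.tanh((E0 + 2.0*nu*A)/(4.0*T)) + np.tanh(E0/(4.0*T))) \
               * (E0 + nu*A)*E0/((E0 + nu*A)**2 + w**2)
    return out
```

The step function implements Pauli blocking of the shell states, the $\tanh$ factors the thermal occupation, and the Lorentzian-like last factor the suppression of the loop by the frequency transfer $\omega_{P,C}$.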
The nature of instabilities of the electron gas and their critical temperatures, T µ , are best studied from the susceptibilities χ µ . For the coupled electron-phonon model under consideration, only superconducting and staggered density-wave susceptibilities present a singularity as a function of antinesting and electron-phonon interaction strength. In the static limit, these are defined by
$$
\pi v_F\, \chi_\mu(\bar q^{\,0}_\mu) = \frac{1}{2\pi} \int dk_\perp \int z_\mu^2(\bar k + \bar q^{\,0}_\mu)\, I_{P,C}(k_\perp, \bar q^{\,0}_\mu)\, d\ell\,, \qquad (20)
$$
where the vertex parts z µ are governed by one-loop flow equations. In the density-wave channel, we shall consider
$$
\partial_\ell\, z_{\rm SDW}(\bar k + \bar q^{\,0}_P) = \frac{1}{2\pi}\int dk'_\perp\, \Big\langle I_P(k'_\perp,\bar q^{\,0}_P)\, z_{\rm SDW}(\bar k' + \bar q^{\,0}_P)\, \big[ \alpha_{P,l}\, g_3(\bar k,\bar k' + \bar q^{\,0}_P,\bar k',\bar k + \bar q^{\,0}_P) + \alpha_{P,l}\, g_2(\bar k' + \bar q^{\,0}_P,\bar k,\bar k',\bar k + \bar q^{\,0}_P) \big] \Big\rangle\,, \qquad (21)
$$
and
$$
\partial_\ell\, z_{\rm BOW}(\bar k + \bar q^{\,0}_P) = \frac{1}{2\pi}\int dk'_\perp\, \Big\langle I_P(k'_\perp,\bar q^{\,0}_P)\, z_{\rm BOW}(\bar k' + \bar q^{\,0}_P)\, \big[ \alpha_P\, g_1(\bar k' + \bar q^{\,0}_P,\bar k,\bar k',\bar k + \bar q^{\,0}_P) + \alpha_{P,l}\, g_2(\bar k' + \bar q^{\,0}_P,\bar k,\bar k',\bar k + \bar q^{\,0}_P) - \alpha_P\, g_3(\bar k',\bar k + \bar q^{\,0}_P,\bar k' + \bar q^{\,0}_P,\bar k) - \alpha_{P,l}\, g_3(\bar k,\bar k' + \bar q^{\,0}_P,\bar k',\bar k + \bar q^{\,0}_P) \big] \Big\rangle \qquad (22)
$$
for the static $\mu = {\rm SDW}$ and BOW susceptibilities, respectively, at $\bar q^{\,0}_P = (\pi, 0)$. In the superconducting channel, we shall examine
$$
\partial_\ell\, z_\mu(-\bar k + \bar q^{\,0}_C) = \frac{1}{2\pi}\int dk'_\perp\, \Big\langle I_C(k'_\perp,\bar q^{\,0}_C)\, z_\mu(-\bar k' + \bar q^{\,0}_C)\, \Delta_\mu(k'_\perp)\, \alpha_C \big[ g_1(-\bar k' + \bar q^{\,0}_C,\bar k',-\bar k + \bar q^{\,0}_C,\bar k) + g_2(-\bar k' + \bar q^{\,0}_C,\bar k',\bar k,-\bar k + \bar q^{\,0}_C) \big] \Big\rangle\,, \qquad (23)
$$
for the static SC susceptibility at $\bar q^{\,0}_C = 0$, where $\Delta_\mu(k_\perp)$ is the form factor of the SC order parameter. For SCd and triplet f-wave (SCf) correlations, we have $\Delta_{\rm SCd}(k_\perp) = \sqrt{2}\cos k_\perp$ and $\Delta_{\rm SCf}(\mathbf k) = ({\rm sgn}\,k)\,\sqrt{2}\cos k_\perp$, whereas for conventional singlet pairing (SS), we have $\Delta_{\rm SS}(k_\perp) = 1$.
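These form factors are normalized so that their square averages to one over the transverse Brillouin zone, and the d-wave channel is orthogonal to the conventional s-wave one; a short sketch (uniform $k_\perp$ grid, our discretization):

```python
import numpy as np

# uniform k_perp grid over one transverse Brillouin zone (endpoint dropped)
k = np.linspace(-np.pi, np.pi, 2001)[:-1]

d_SS  = np.ones_like(k)                  # conventional singlet s-wave
d_SCd = np.sqrt(2.0)*np.cos(k)           # singlet d-wave
# the triplet f-wave factor carries an extra sgn(k) between right/left sheets:
d_SCf = lambda sgn_k: sgn_k*np.sqrt(2.0)*np.cos(k)

norm_d  = np.mean(d_SCd**2)              # normalization on the Fermi surface
overlap = np.mean(d_SCd*d_SS)            # d-wave orthogonal to s-wave
# summing over the two Fermi sheets (sgn k = +1 and -1), f is orthogonal to d:
overlap_fd = 0.5*(np.mean(d_SCf(+1)*d_SCd) + np.mean(d_SCf(-1)*d_SCd))
```

The sheet-odd sign of $\Delta_{\rm SCf}$ is what makes it a spin-triplet channel: it is odd under $k \to -k$ even though its $k_\perp$ dependence coincides with that of the d-wave factor.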
Before embarking on the solution of the above equations, it is instructive at this stage to examine their basic features as a function of the different energy scales of the model. At high temperature, where $T \gg \omega_D$ and the phonons are classical, the contribution of the Ph-M interaction to all open diagrams (ladder and vertex corrections) is strongly damped for all $\Lambda(\ell)$, as a result of the retardation that reduces the summations over intermediate frequency transfer in such diagrams. In this temperature range, the Ph-M part contributes more appreciably to the closed-loop diagram of the Peierls channel, which has no intermediate sum over transfer frequency, and does so on an equal footing with the direct Coulomb part in Eqs. (14)-(16). On the other hand, on entering the low-temperature domain $T < \omega_D$, retardation effects are reduced, which progressively strengthens the contribution of the electron-phonon interaction to the open diagrams. This increases the mixing, or interference, between all diagrams of the Peierls and Cooper scattering channels.
For the range of parameters considered in the model, the temperature scales $T_\mu$ of the instabilities of the electron gas examined below all fall in the range $T_\mu < t_\perp$. This is where the transverse electron motion and the warping of the Fermi surface are coherent, making the electron gas effectively two-dimensional, albeit strongly anisotropic, in this temperature domain. This is known to affect the interference in a particular way, depending on the energy distance $\Lambda(\ell)$ from the Fermi surface in the RG flow. At high energy, when $\Lambda(\ell) \gg t_\perp$, the flow essentially coincides with the 1D limit, where the interference is maximal, although subject to the above conditions between $T$ and $\omega_D$. When $\Lambda(\ell) \lesssim t_\perp$, the interference between the Peierls and Cooper channels is affected by the coherent warping of the Fermi surface and ultimately by nesting alterations at $\Lambda(\ell) < t'_\perp$. Both generate a momentum dependence of the coupling constants (14)-(16), which reflects in the end the nature of the electron gas instability at $T_\mu$.
IV. RESULTS
A. Instabilities for weak phonon-mediated interaction
The integration of the RG equations up to $\ell \to \infty$ for the couplings (14)-(16) and pair vertices (21)-(23) leads to the temperature dependence of the selected susceptibilities as a function of the antinesting ratio $t'_\perp/t_\perp$ and the phonon frequency $\omega_D/t_\perp$, both normalized by the interchain hopping $t_\perp$, and of the weak Ph-M interaction parameterized by the ratio
$$
|\tilde g_{\rm ph}| \equiv |g_{\rm ph}|/g_1\,, \qquad (24)
$$
here normalized by the strength of the nonretarded repulsive interaction $g_1$. The main features of the influence of a weak Ph-M coupling on the temperature dependence of the relevant susceptibilities are summarized in Fig. 1 at small and intermediate antinesting parameter $t_\perp$ and different $\omega_D$. In Fig. 1-a, $t_\perp$ is taken sufficiently small that nesting promotes a singularity in $\chi_{\rm SDW}$, indicating an instability against the onset of SDW order at $T_{\rm SDW}$. As for the correlations in the BOW and SCd channels, the related susceptibilities are nonsingular and remain small. According to Fig. 1-a, the presence of even a small $|\tilde g_{\rm ph}|$ at sizeable $\omega_D$ is sufficient to cause a noticeable increase of $T_{\rm SDW}$ compared to the purely electronic limit. At the outset, the strengthening of the SDW instability takes its origin in the momentum-transfer dependence of the Ph-M interaction in Eqs. (10)-(13) at $\ell = 0$, resulting in a reduction of the backscattering and an increase of the Umklapp term. As discussed in more detail in Sec. IV C 1, both concur to an increase of the antiferromagnetic spin exchange between itinerant spins. These effects on the scattering amplitudes are magnified by the RG flow. Moreover, the reinforcement of SDW becomes most efficient in the temperature range $T < \omega_D$, owing to the reduction of retardation. It is where the Ph-M part progressively acts as a nonretarded contribution in all open diagrams, such as the ladder and vertex corrections of (14)-(16) and (21). The influence of the Ph-M coupling on SDW correlations will then naturally depend on the value of the phonon frequency $\omega_D$. Fig. 1-c shows indeed that lowering $\omega_D$ reduces the enhancement of $T_{\rm SDW}$ at low antinesting, an indication of a positive isotope effect on SDW (see Sec. IV C 1).
At large enough $t_\perp$, nesting becomes sufficiently poor to prevent the occurrence of SDW order. The instability of the metallic state no longer takes place in the density-wave channel, but rather shows up by interference in the Cooper channel with the onset of SCd order at $T_c$. As shown in Fig. 1-b, the presence of a small Ph-M coupling at the same $\omega_D$ gives rise to a substantial increase of the critical temperature $T_c$ compared to the purely electronic case. The SCd strengthening goes hand in hand with the boost of the SDW spin fluctuations responsible for the Cooper pairing in the metallic state. This is shown in Fig. 1-b, where at nonzero $|\tilde g_{\rm ph}|$ a more pronounced, though nonsingular, enhancement of $\chi_{\rm SDW}$ is found above $T_c$. This feature signals that the reinforcement of spin fluctuations persists relatively deep into the normal state.
In Fig. 1-d the effect of $\omega_D$ on both $T_c$ and the normal-state spin fluctuations is singled out. The growth of $T_c$ with $\omega_D$ is correlated with the increase of spin correlations above $T_c$. In this part of the figure, we note that the onset of spin-fluctuation reinforcement takes place at $T < \omega_D$, where $\chi_{\rm SDW}$ clearly separates from the static $\omega_D \to 0$ limit; it signals the growth of ladder and vertex corrections following the reduction of retardation. The enhancement of spin fluctuations in the normal phase will be analyzed in Sec. IV D, where it is found to follow a Curie-Weiss temperature dependence, comparatively more pronounced than the one occurring in the purely electronic limit 28,29.
Concerning BOW correlations, Fig. 1 shows that for weak $|\tilde g_{\rm ph}|$ they remain only weakly enhanced. However, as shown next, the situation changes qualitatively when $|\tilde g_{\rm ph}|$, though still small, reaches a critical value.
B. Phase diagrams
1. Spin-density-wave versus d-wave superconductivity
We now consider the sequence of instabilities of the metallic state as a function of $t_\perp$ in order to construct the phase diagram at weak Ph-M coupling. This is shown in Fig. 2-a. At small $|\tilde g_{\rm ph}|$ and for a sizeable $\omega_D$, the system remains unstable to the formation of a SDW state, with a $T_{\rm SDW}$ that displays the characteristic monotonic decrease with increasing $t_\perp$ 23,26-29,56,57. At the approach of a well-defined antinesting threshold $t^*_\perp$, however, $T_{\rm SDW}$ undergoes a critical drop that terminates at $t^*_\perp$, where SCd emerges at its peak value, denoted $T^*_c$. Above $t^*_\perp$, $T_c$ decreases continuously with $t_\perp$, which correlates with the reduction of the SDW fluctuations that are the source of Cooper pairing.
As stressed above, Fig. 2-a confirms that the Ph-M coupling, albeit small, reinforces both $T_{\rm SDW}$ and $T_c$ for all $t_\perp$, including the critical value $t^*_\perp$ at which superconductivity emerges. We also note from Fig. 2-a that this reinforcement reduces the sharpness of the critical drop of $T_{\rm SDW}$ at the approach of $t^*_\perp$, an effect that carries over into the superconducting sector, where the reduction of $T_c$ with $t_\perp$ turns out to be less rapid.
Also shown in the figure are the instability lines in the static, $\omega_D \to 0$, limit (continuous lines of Fig. 2-a). Retardation effects are found to be very important at the approach of the critical value $t^*_\perp|_{\omega_D\to0}$ and beyond, an indication that the isotope effect is clearly nonuniform as a function of $t_\perp$ (see Sec. IV C 1). It is also worth noticing from the figure that in the presence of dominant nonretarded repulsive interactions, the influence of the Ph-M terms on both the SDW and SCd instabilities remains finite in the static limit. This contrasts with the situation where only Ph-M interactions are present, in which case $T_c \to 0$ as $\omega_D \to 0$ for s-wave SC 34.
2. Bond-order-wave versus superconductivity
By further increasing the strength of the Ph-M coupling at the same $\omega_D$, Fig. 2-b shows that the SDW-SCd sequence of instabilities as a function of $t_\perp$ is only maintained up to a critical value $|\tilde g^{\,c}_{\rm ph}|$ ($\approx 0.52$ for the parameters used), above which SDW is no longer stable and is replaced by the onset of a nonmagnetic BOW state at $T_{\rm BOW}$. The typical variation of the relevant susceptibilities in the BOW sector of the phase diagram is given in Fig. 3-a. The BOW instability that takes place from the metallic state corresponds to the onset of a Peierls, though correlated, lattice-distorted state 58,59. A remarkable feature of the phase diagram of Fig. 2-b is that above $|\tilde g^{\,c}_{\rm ph}|$, and at not too small $\omega_D$, the BOW instability continues to be followed by SCd superconductivity at $t_\perp \ge t^*_\perp$. In these conditions, however, $T_c$ becomes a decreasing function of $|\tilde g_{\rm ph}|$. This is depicted in Fig. 4, where $T_c$ decreases after having reached its maximum at the boundary $|\tilde g^{\,c}_{\rm ph}|$, where SDW and BOW are found to be essentially degenerate and at their maximum strength. It is worth noticing that at the boundary $T_c$ has increased by a factor of four or so compared to the purely electronic case. Despite the presence of a Peierls lattice-distorted state, the essential role played by spin fluctuations in the emergence of SCd at $t_\perp \ge t^*_\perp$ remains. This is confirmed in Fig. 3-b, where $\chi_{\rm SDW} > \chi_{\rm BOW}$ over a large temperature interval at the approach of $T_c$ in the normal state.
Another surprising feature of the phase diagram in the $|\tilde g_{\rm ph}| > |\tilde g^{\,c}_{\rm ph}|$ region is found at low phonon frequency. Fig. 5 shows that in the small-$\omega_D$ range, the BOW ordering at $t_\perp \ge t^*_\perp$ is followed by a triplet SCf instability instead of a SCd one. Since a small phonon frequency increases retardation, it reinforces the closed-loop diagrams of the RG flow, which are related to density or charge fluctuations. Bond-charge correlations are then increased with respect to their spin counterpart and, for dominant repulsive interactions, this leads to the SCf type of superconductivity. The triplet-singlet competition is in a way reminiscent of the one found when a weak repulsive (nonretarded) interchain interaction is added to the quasi-1D electron gas model 27. The latter coupling is also known to boost exclusively charge fluctuations 60, much as the electron-phonon interaction does in the present case when strong retardation is present; the same interchain coupling is also known to promote a SDW to SCf sequence of instabilities. When $\omega_D$ is raised in Fig. 5, the BOW ordering is weakly affected, whereas a SCf $\to$ SCd crossover is indeed found to occur at small $\omega_D/t_\perp$ ($\sim 0.1$ for the parameters used).
C. Isotope effects
1. Spin-density-wave and d-wave superconductivity
In the preceding paragraphs we mentioned on several occasions the positive influence of raising $\omega_D$ on the strength of the SDW and SCd instabilities. This result can be traced back to the effective exchange part of the interaction, which in weak coupling takes the form
$$
S^{\rm ex}_I = \frac{\pi v_F\, T}{L N_\perp} \sum_{\{\bar k\},\bar q_P} \tfrac{1}{2}(g_2 + g_3) \circ\, \mathbf S_{\bar k_1,\bar q_P} \cdot \mathbf S_{\bar k_2,-\bar q_P}\,, \qquad (25)
$$
where $\mathbf S_{\bar k,\bar q_P} = \frac{1}{2}\, \psi^*_{+,\alpha}(\bar k + \bar q_P)\, \boldsymbol\sigma_{\alpha\beta}\, \psi_{-,\beta}(\bar k) + {\rm c.c.}$ is the Fourier-Matsubara component of the SDW spin density. Thus in weak coupling the combination $\frac12(g_2 + g_3)$ corresponds to a momentum- and frequency-dependent antiferromagnetic exchange interaction. It is the same exchange term that governs the enhancement of the vertex part $z_{\rm SDW}$ of the SDW susceptibility [see Eq. (21)]. Its growth with decreasing $\Lambda(\ell)$ results from the multiple exchange scattering of virtual $\bar q_P$ electron-hole pairs carried by the ladder and vertex corrections in the flow equations (15) and (16). As for the backscattering term $g_1$, its role is indirect. This coupling carries a large longitudinal momentum transfer corresponding to a repulsive short-range contribution along the chains, which tends to damp the amplitude of both $g_2$ and $g_3$, reducing the exchange scattering and in turn the SDW correlations.
Therefore the combined influence of a g 1 reduction and a g 3 increase by Ph-M interactions in (10) and (13) will boost g 2 and in turn g 3 and antiferromagnetic exchange. As mentioned earlier, however, this additional and positive input of Ph-M interaction reaches its maximum impact in the temperature domain T < ω D , namely where retardation effects on virtual electron-hole pair scattering processes become small, hence the isotope effect on SDW.
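The mechanism can be caricatured with the standard one-loop flow of the purely 1D half-filled electron gas, $\partial_\ell g_1 = -g_1^2$, $\partial_\ell(2g_2 - g_1) = g_3^2$, $\partial_\ell g_3 = g_3(2g_2 - g_1)$, i.e. the $\Lambda(\ell) \gg t_\perp$ limit of Eqs. (14)-(16) stripped of all momentum and frequency structure. Shifting the initial conditions as in the unretarded limit of Eqs. (10) and (13) (here with an illustrative $|g_{\rm ph}| = 0.1\,g_1$) makes both the Umklapp and the exchange combination $2g_2 - g_1$ grow faster:

```python
import numpy as np

def flow_1d(g1, g2, g3, l_max=3.0, dl=1e-3):
    # One-loop 1D g-ology flow at half filling (couplings in units of pi*v_F):
    #   dg1/dl = -g1^2,  dG/dl = g3^2 with G = 2*g2 - g1,  dg3/dl = g3*G
    G = 2.0*g2 - g1
    for _ in range(int(l_max/dl)):
        g1, G, g3 = g1 - dl*g1**2, G + dl*g3**2, g3 + dl*g3*G
    return g1, G, g3

g1, g2, g3 = 0.32, 0.64, 0.025        # bare values quoted in the text
g_ph, eta = -0.1*g1, g3/g1            # unretarded shifts of Eqs. (10) and (13)

bare    = flow_1d(g1, g2, g3)
shifted = flow_1d(g1 + g_ph, g2, g3 + eta*abs(g_ph))
```

This is only a caricature (no patches, no retardation, simple Euler stepping), but it displays the claimed trend: a small phonon-induced reduction of $g_1$ and increase of $g_3$ feed back into a faster growth of the antiferromagnetic exchange along the flow.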
The increase of $T_{\rm SDW}$ with $\omega_D$ is illustrated in Fig. 6-a for $|\tilde g_{\rm ph}| = 0.1$ and different values of $t_\perp$ in the SDW part of the phase diagram. At relatively small $t_\perp$, that is, well inside the SDW sector, $T_{\rm SDW}$ undergoes a monotonic but weak increase over the whole frequency range of phonons, a consequence of the ladder and vertex corrections to the antiferromagnetic exchange, which grow in importance as $\omega_D$ increases. It is worth noticing that in the adiabatic limit, $T_{\rm SDW}|_{\omega_D\to0}$ is found to be slightly larger than the $T_{\rm SDW}|_{g_{\rm ph}=0}$ obtained in the absence of Ph-M interaction [see Fig. 2-a]. This indicates that static phonons still have a positive influence on the exchange interaction (25) and the strength of SDW correlations. This adiabatic effect finds a certain echo in the strong-coupling (Hubbard interaction) case, where dynamical mean-field theory calculations do predict an enhancement of the antiferromagnetic exchange between localized spins by zero-frequency phonons 41. Here the static enhancement essentially results from the mixing of the Ph-M interaction with the nonretarded Coulomb terms $g_i$ in the RG flow; it vanishes on taking $g_i \to 0$ in Eqs. (10), (12) and (13), a result found in the pure electron-phonon coupling limit 34.
When $t_\perp$ increases and approaches the critical domain, the drop in $T_{\rm SDW}$ becomes, according to Fig. 2-a, essentially vertical, and the isotope effect becomes huge, as traced in Fig. 6-a. Close to $t^*_\perp$, the reinforcement of SDW correlations by even a small increase in $\omega_D$ gives rise to a large increase of $T_{\rm SDW}$. This is not the consequence of nesting improvement, but rather the result of the stronger nesting deviations needed to counteract the reinforcement of the SDW instability by Ph-M interactions. For $t_\perp$ slightly above $t^*_\perp$, Fig. 6-a features the interesting possibility of a SCd to SDW transition as a function of $\omega_D$.
The positive isotope effect carries over into the SCd side of the phase diagram, where $T_c$ is found to increase with $\omega_D$ at different $t_\perp$, as shown in Fig. 6-b. This is directly associated with the $\omega_D$-dependent reinforcement of spin correlations in the normal state already pointed out in Fig. 1-d, which strengthens the pairing interaction in the SCd channel. Although the isotope effect is slightly larger in amplitude near the critical $t^*_\perp$, it remains of comparable size at arbitrary antinesting, with a power law $T_c \sim \omega_D^\alpha$ at intermediate frequency and an exponent $\alpha \simeq 0.24$ ($\equiv d\ln T_c/d\ln\omega_D$) virtually independent of $t_\perp$ [see Fig. 6-b] and of $|\tilde g_{\rm ph}|$. At high phonon frequency, where the ratio $\omega_D/T_c$ becomes very large, retardation effects become negligible and $T_c$ tends to level off with frequency. This saturation probably reflects the limitation of using a finite number of Matsubara frequencies in the mean-field approximation of the loop convolution over frequency.
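The quoted exponent is just the logarithmic derivative $\alpha = d\ln T_c/d\ln\omega_D$; a short sketch on synthetic data (illustrative prefactor, with $\alpha = 0.24$ as in the text) shows how it would be extracted from a computed $T_c(\omega_D)$ curve:

```python
import numpy as np

# Synthetic T_c(omega_D) obeying the intermediate-frequency power law of the text,
# T_c ~ omega_D^alpha with alpha = 0.24; the prefactor 8.0 is illustrative.
alpha_true = 0.24
w = np.linspace(0.5, 2.0, 50)
Tc = 8.0*w**alpha_true

# isotope exponent as the logarithmic derivative alpha = d ln T_c / d ln omega_D
alpha_est = np.gradient(np.log(Tc), np.log(w))
```

On real RG output the estimator would be constant only in the intermediate-frequency window; the saturation of $T_c$ at large $\omega_D$ mentioned above would show up as $\alpha_{\rm est} \to 0$ there.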
2. Bond-order wave versus superconductivity
In the BOW regime above $|\tilde g^{\,c}_{\rm ph}|$, the isotope effect on $T_{\rm BOW}$ has the opposite sign. At low $t_\perp$, for instance, Fig. 7-a shows that $T_{\rm BOW}$ decreases monotonically with $\omega_D$, and the reduction becomes increasingly large with $t_\perp$, which also softens the lattice distortion through nesting alteration. A reduction of $T_{\rm BOW}$ with $\omega_D$ is a consequence of the growing nonadiabaticity of the phonon field, a factor well known to be at play in the reduction of the Peierls distortion gap in purely electron-phonon models in one dimension 58,59,61. From a diagrammatic point of view, nonadiabaticity is a quantum effect, again tied to the unlocking of the Ph-M interaction in open diagrams and thus to quantum interference between electron-hole and Cooper pairing at the one-loop level.
In contrast to the SCd-SDW mixing, the interference is in the present case destructive: the Cooper pairing contributions have opposite sign, and this reduces the temperature scale of BOW ordering 58. The onset of a quantum-to-classical crossover for the BOW state is perceptible at $\omega_D/2T_{\rm BOW}|_{\omega_D\to0} \sim 1$, as found in the pure electron-phonon limit 34,59.
Above $t^*_\perp$, but for small $\omega_D$, we still observe an inverse isotope effect for the $T_c$ of the triplet SCf superconductivity, as shown in Fig. 7-b. This confirms the role of BOW fluctuations in the existence of SCf ordering at repulsive coupling. This is further supported when $\omega_D$ increases and crosses the critical value at which SCd reappears in Fig. 5. The isotope effect then becomes once again positive, as a consequence of the growth of the antiferromagnetic exchange and of the spin fluctuations that govern the d-wave Cooper pairing.
D. Normal state
Now that the influence of electron-phonon interactions on the ordering temperature scales has been examined, we can turn our attention to the influence of a weak Ph-M interaction on the spin correlations of the normal phase above $T_c$. This is done for the SDW-SCd sequence of instabilities. In Fig. 8-a, we show the temperature dependence of the inverse SDW susceptibility at small $|\tilde g_{\rm ph}|$ and various strengths of antinesting. At sufficiently high $t_\perp > t^*_\perp$, $\chi^{-1}_{\rm SDW}$ decays essentially linearly from the high-temperature region and extrapolates towards a critical point at a finite $T_{\rm SDW}$. However, as the temperature is lowered to $T < t_\perp$, nesting deviations become coherent and the susceptibility undergoes a change of regime and ceases to be critical. Nevertheless, according to Fig. 8-a, $\chi^{-1}_{\rm SDW}$ keeps decreasing and extrapolates to a nonzero intercept at $T = 0$ with a finite slope at the end point $T_c$.
This nonsingular growth of spin correlations in the metallic state, which persists down to $T_c$, can be well described by a Curie-Weiss form (continuous lines in Fig. 8):
$$
\chi_{\rm SDW} = \frac{C}{T + \Theta}\,, \qquad (26)
$$
extending up to the temperature $T_{\rm CW}$ for the onset of the Curie-Weiss regime, which is about ten times $T_c$ at the frequency used in the figure ($T_{\rm CW}$ decreases when $\omega_D$ is lowered [see Fig. 1-d]). Here the Curie-Weiss scale $\Theta$ stands as a characteristic energy of SDW fluctuations, defined positive when $t_\perp > t^*_\perp$. The Curie-Weiss behaviour has already been found in the purely electronic case 28,29. It results from the positive feedback of SCd pairing on SDW correlations, a consequence of constructive interference between these channels of correlations. The presence of Ph-M interactions clearly reinforces this behavior. As shown in Fig. 8-b, cranking up $|\tilde g_{\rm ph}|$ leads to a decrease of the Curie-Weiss scale $\Theta$ and an increase of the constant $C$. This is consistent with an increase of the SDW correlation length $\xi \sim (T + \Theta)^{-1/2}$, in tune with the increase of $T_c$ discussed above. The softening of $\Theta$ in Fig. 8 carries on until $t_\perp$ reaches $t^*_\perp$, where $\Theta \to 0$. There the system would become quantum critical, with $\chi_{\rm SDW} \sim 1/T$ and $T_{\rm SDW} \to 0$, were it not for the presence of superconductivity at a finite $T_c$, which prevents the SDW quantum critical point from being reached. Below $t^*_\perp$, $\Theta < 0$ and the system enters the SDW sector with a finite
T SDW (≡ −Θ) > T c .
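Extracting $C$ and $\Theta$ from a computed $\chi_{\rm SDW}(T)$ amounts to a linear fit of $\chi^{-1}_{\rm SDW} = (T + \Theta)/C$; a sketch on synthetic Curie-Weiss data (the values of $C$ and $\Theta$ are illustrative):

```python
import numpy as np

# Synthetic chi_SDW obeying the Curie-Weiss form (26); C and Theta illustrative.
C_true, Theta_true = 2.5, 0.8
T = np.linspace(1.0, 10.0, 40)
chi = C_true/(T + Theta_true)

# 1/chi = (T + Theta)/C is linear in T: slope = 1/C, intercept = Theta/C
slope, intercept = np.polyfit(T, 1.0/chi, 1)
C_fit, Theta_fit = 1.0/slope, intercept/slope
```

A fitted $\Theta > 0$ places the system on the metallic side ($t_\perp > t^*_\perp$), $\Theta \to 0$ signals the quantum-critical point, and $\Theta < 0$ identifies $T_{\rm SDW} = -\Theta$ in the SDW sector.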
At the approach of t * ⊥ , Θ is well fitted by the quantum scaling form
$$
\Theta \approx A\,(t_\perp - t^*_\perp)^\eta\,, \qquad (27)
$$
with an exponent $\eta \simeq 1$, consistent with the product $\eta = \nu z$ of the correlation-length ($\nu = 1/2$) and dynamical ($z = 2$) exponents for SDW at the one-loop level. The linear profile of $\Theta$ near $t^*_\perp$ is illustrated in Fig. 2-a. From the latter figure and Fig. 8-b, the coefficient $A$ decreases relatively quickly with $|\tilde g_{\rm ph}|$.
V. DISCUSSION AND CONCLUSION
In this work we used a weak-coupling RG approach to examine the influence of the tight-binding electron-phonon interaction on the interplay between magnetism and superconductivity in quasi-one-dimensional conductors. When the phonon-mediated interaction remains weak and subordinate to the direct Coulomb terms of the electron gas, the RG flow of scattering amplitudes is found to be distorted for particular momentum transfers. This reinforces the antiferromagnetic exchange between itinerant spins and yields an increase of the temperature scale of SDW ordering. By introducing enough nesting deviations in the electron kinetics, SDW ordering is inhibited, but the magnetic reinforcement by the electron-phonon interaction persists and shifts by interference into the superconducting channel. D-wave Cooper pairing and T c are then enhanced compared to the purely electronic situation. These properties were found to be affected by retardation effects linked to the exchange of low-energy acoustic phonons, which modulate the strength of the virtual electron-hole scattering entering the antiferromagnetic exchange term of the electron gas. This gives rise to a positive isotope effect on the SDW ordering temperature, which carries over beyond the critical antinesting t * ⊥ where d-wave superconductivity is found.
Our results also revealed that such an increase for T c is preceded by the strengthening of spin fluctuations in the normal phase. This is manifest in a more pronounced Curie-Weiss SDW susceptibility compared to the purely electronic situation, a consequence of self-consistency between d-wave Cooper pairing and spin fluctuations, a positive interference effect whose amplitude scales with T c .
We have also established the range of electron-phonon interaction beyond which SDW ordering is no longer stable against the BOW or Peierls distorted state. In these conditions, the Peierls ordering was found to be followed, above critical antinesting, by either d-wave or, remarkably, triplet f-wave superconductivity, depending on whether retardation effects are weak or strong, respectively. The isotope effect, which is found to be negative in the triplet SCf sector and positive in SCd, reflects the origin of the pairing interaction in both situations, namely BOW fluctuations in the former case and SDW fluctuations in the latter.
The relevance of the above results for concrete materials like the Bechgaard salts is of interest. Superconductivity emerges in these systems where SDW state ends under pressure. Their normal state is characterized by important spin fluctuations over a large temperature interval above T c whose amplitude scales with the one of spin correlations under pressure, as made abundantly clear by NMR experiments 19,22,62,63 .
Our findings show that intrachain repulsive interactions are dominant in these materials. While repulsive interactions are known to be able to generate on their own the sequence of SDW-SCd instabilities as a function of t ⊥ in the quasi-1D electron gas model [26][27][28][29] , the present results show that the addition of a relatively small tight-binding electron-phonon interaction, which would be compatible with diffuse X-ray scattering experiments 31,32 , is far from being an obstacle for superconductivity. When subordinate to the purely electronic repulsion, the phonon-mediated interaction can indeed play a very active part in assisting antiferromagnetism in the emergence of d-wave superconductivity with a stronger T c .
Although the typical range of values taken by the electron-phonon matrix element has not been determined with great accuracy in materials like the Bechgaard salts [see for example Ref. 64 ], the results of the present paper suggest that it should be small in amplitude compared to the direct interactions. This is supported by the stability of the SDW state against the Peierls distortion, which from the above results is assured only within a finite interval of weak phonon-mediated interaction at essentially arbitrary retardation. The absence of the Peierls phenomenon in the Bechgaard salts may therefore be viewed as a mere consequence of the weakness of the electron-phonon coupling constant in these materials. This view would be consistent with previous estimations made from optics 64 , and also with the fact that the few materials showing a lattice-distorted phase belong to the more correlated isostructural compounds of the (TMTTF) 2 X series, the so-called Fabre salts. A compound like (TMTTF) 2 PF 6 , for instance, is well known to undergo a spin-Peierls transition within a strongly correlated Mott state 31,32,65 . Less than 10 kbar of pressure is sufficient to weaken the coupling of phonons to electrons and transform this state into one with antiferromagnetic Néel order 66,67 ; 30 kbar separate the latter from the sequence of SDW-SC instabilities found in the prototype compound (TMTSF) 2 PF 6 of the Bechgaard salts [68][69][70] , in line with a coupling to phonons that remains in the background of the direct Coulomb terms.
As to possible experiments able to disentangle the part played by the phonon-mediated interaction in the SDW-SC sequence of instabilities seen in molecular materials like the Bechgaard salts, isotope-effect measurements would certainly be of interest, especially near the quantum critical point, where the present results show that the effect becomes very large at the approach of t * ⊥ on the SDW side of the phase diagram. While the isotope effect in molecular materials proves difficult to realize in practice, owing to the complications of controlling all other parameters following a change in the mass M of the molecular units (volume of the unit cell, disorder, etc.), 13 C enrichment of the TMTSF molecular units probably stands as the best way to limit these side effects and to test some of the results obtained here. According to Fig. 2-a, for instance, a finite reduction in ω D would induce a decrease in the critical t * ⊥ at which superconductivity occurs. Practically, one should therefore expect a downward shift of the critical pressure for the emergence of superconductivity and a decrease in the maximum T * c at that point and beyond on the pressure axis.
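For the isotope-effect estimate, the relevant arithmetic is ω_D ∝ √(κ/M) at fixed elastic constant κ [the convention under which g_ph is kept constant, cf. Eq. (11)], so a mass change M → M′ rescales the phonon frequency by √(M/M′). A minimal sketch (function name and mass values are illustrative):

```python
from math import sqrt

def isotope_shifted_frequency(omega_D, M_old, M_new):
    """omega_D ~ sqrt(kappa / M) at fixed elastic constant kappa,
    so omega_D' = omega_D * sqrt(M_old / M_new)."""
    return omega_D * sqrt(M_old / M_new)

# Quadrupling the molecular mass halves the phonon frequency scale
print(isotope_shifted_frequency(1.0, 1.0, 4.0))  # -> 0.5
```

For a realistic 13 C enrichment the mass ratio is close to unity, so the induced shift of ω_D, and hence of t * ⊥ and T * c, is small but of definite sign.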
Another possible signature of the reinforcement of antiferromagnetism by the electron-phonon interaction in the Bechgaard salts may be found in its influence on the Curie-Weiss behaviour of the SDW susceptibility, which governs the enhancement of the NMR spin-lattice relaxation rate observed down to T c 19,22,62,71,72 . While the quasi-1D electron gas model with purely electronic interactions does predict a critical linear suppression of the Curie-Weiss scale Θ for spin fluctuations as t ⊥ → t * ⊥ 28,29 , its slope [coefficient A of Eq. (27)] proves to be significantly larger than the one seen in experiments 73 . In this regard, we have found that adding a small |g ph | is sufficient to reduce the downslope of Θ to values congruent with experiments 73 , and this over a large range of retardation. This supports the view of an active role played by the electron-phonon interaction in the properties of the metallic state, especially those associated with quantum criticality at t * ⊥ .

In this paper, we have dealt exclusively with the coupling of correlated electrons to low-energy acoustic phonons within the tight-binding scheme for the electronic structure, a coupling well known to be responsible for electronically driven structural instabilities in low-dimensional molecular materials 32,52 . We did not consider intramolecular (Holstein) phonon modes, which are also well known to be present. Their classification, alongside their small coupling to electrons in (TMTSF) 2 X, has been obtained from infrared optical studies 64 . These molecular phonons are characterized by relatively large energies and weak retardation effects compared to the acoustic branches considered above. Their influence can then, in a first approximation, be incorporated through a redefinition of the non-retarded terms, amounting to a small, uniform downward shift of the couplings g i of the electron gas model.
Since the latter couplings were taken as phenomenological constants whose ranges were fixed by experiments, the values taken in the present work should embody to some extent the influence of intramolecular phonons.
In conclusion, we have performed a finite-temperature renormalization group analysis of the quasi-1D electron gas model with non-retarded electron-electron couplings and phonon-mediated interactions of the tight-binding electronic structure. For a phonon-mediated interaction that is weak compared to the non-retarded terms, we found a reinforcement of antiferromagnetism and of its transition toward superconductivity under bad nesting conditions of the electron spectrum. The weakness of the phonon-mediated interactions acts as a decisive factor for the stability of antiferromagnetism against the Peierls phenomenon in low-dimensional conductors. It is likely that these retarded interactions also have a built-in positive impact on the observation of organic superconductivity on the verge of antiferromagnetism in the Bechgaard salts.
FIG. 1: (Color on line) Typical temperature variations of the SDW, BOW and SCd-wave susceptibilities at ωD/t⊥ = 0.4 for (a) weak and (b) intermediate antinesting t ⊥ at zero (open symbols) and non-zero (full symbols, |g ph | = 0.1) phonon-mediated interaction. Comparison of the susceptibilities at the same |g ph | for (c) weak and (d) intermediate antinesting values at lower phonon frequencies.

FIG. 2: (Color on line) Phase diagrams of the repulsive quasi-1D electron gas model as a function of the antinesting parameter t ⊥ and |g ph | for (a) the SDW/SCd and (b) BOW/SCd sequences of instabilities at ωD/t⊥ = 0.4. In (a), the continuous lines correspond to the instability lines in the adiabatic ωD → 0 limit and the dashed lines show the variation of the Curie-Weiss scale Θ of χSDW [Eq. (26)] as a function of t ⊥ in the superconducting region.

FIG. 3: (Color on line) Temperature variation of the SDW, BOW and SCd susceptibilities for |g ph | above the threshold |g c ph | for the occurrence of a BOW instability at (a) t ⊥ < t * ⊥ , and in the superconducting sector at t ⊥ > t * ⊥ for the (b) SCd (ωD/t⊥ = 0.4) and (c) triplet SCf (ωD/t⊥ = 10^-3) instabilities.

FIG. 4: (Color on line) SDW/BOW critical temperatures at t ⊥ = t * ⊥ /2 below the threshold antinesting (right) and the maximum SCd critical temperature [T * c = Tc(t * ⊥ )] (left) versus the normalized strength of phonon-mediated interaction |g ph | at ωD/t⊥ = 0.4.

FIG. 5: (Color on line) Phase diagram above the threshold |g c ph | for the BOW to SC sequence of instabilities as a function of antinesting. The Figure shows the crossover between triplet f-wave and singlet d-wave superconductivity in the small phonon frequency region.

… BOW crossover in the density-wave instabilities at low antinesting 27 . Cranking up ω D results in the progressive enhancement of open diagrams which are responsible for spin fluctuations and d-wave superconductivity. …

FIG. 6: (Color on line) Isotope effect at |g ph | = 0.1 for (a) TSDW at different antinesting t ⊥ < t * ⊥ and (b) Tc of the SCd channel for different t ⊥ > t * ⊥ .

… obtained by varying the molecular mass M at fixed elastic constant κ [g ph kept constant according to Eq. (11)], corresponds to a positive isotope effect. The reinforcement can be understood as a modification of the effective antiferromagnetic exchange by retardation. For itinerant electrons, the total scattering amplitudes g 2 and g 3 in the action S I contribute an exchange term of the form …

FIG. 7: (Color on line) Isotope effect at |g ph | > |g c ph | for (a) TBOW at different antinesting t ⊥ < t * ⊥ and (b) Tc in the SCf and SCd channels for different t ⊥ > t * ⊥ .

FIG. 8: The temperature dependence of the normal-phase inverse SDW susceptibility at different antinesting (a) and electron-phonon interaction strength (b). The straight lines correspond to the Curie-Weiss fit [Eq. (26)].
D. Jérome, A. Mazaud, M. Ribault, and K. Bechgaard, J. Phys. (Paris) Lett. 41, L95 (1980).
C. Bourbonnais and D. Jérome, in The Physics of Organic Superconductors and Conductors, edited by A. Lebed (Springer, Heidelberg, 2008), vol. 110, Springer Series in Materials Science, p. 357, arXiv:cond-mat/0904.0617.
V. J. Emery, R. Bruinsma, and S. Barisic, Phys. Rev. Lett. 48, 1039 (1982).
V. J. Emery, Synth. Met. 13, 21 (1986).
L. G. Caron and C. Bourbonnais, Physica 143B, 453 (1986); C. Bourbonnais and L. G. Caron, Europhys. Lett. 5, 209 (1988).
M. T. Béal-Monod, C. Bourbonnais, and V. J. Emery, Phys. Rev. B 34, 7716 (1986).
D. J. Scalapino, E. Loh, and J. E. Hirsch, Phys. Rev. B 34, R8190 (1986).
H. Shimahara, J. Phys. Soc. Jpn. 58, 1735 (1989).
S. Mazumdar, R. T. Clay, and D. K. Campbell, Phys. Rev. B 62, 13400 (2000).
K. Kuroki, R. Arita, and H. Aoki, Phys. Rev. B 63, 094509 (2001).
Y. Fuseya and Y. Suzumura, J. Phys. Soc. Jpn. 74, 1263 (2005).
Y. Fuseya, H. Kohno, and K. Miyake, J. Phys. Soc. Jpn. 74, 722 (2005).
J. Friedel, Eur. Phys. J. B 54, 83 (2006).
D. Jérome and H. J. Schulz, Adv. Phys. 31, 299 (1982).
L. J. Azevedo, J. E. Schirber, J. M. Williams, M. A. Beno, and D. R. Stephens, Phys. Rev. B 30, 1570 (1984).
T. Vuletic, P. Auban-Senzier, C. Pasquier, S. Tomic, D. Jerome, M. Heritier, and K. Bechgaard, Eur. Phys. J. B 25, 319 (2002).
N. Doiron-Leyraud, P. Auban-Senzier, S. René de Cotret, C. Bourbonnais, D. Jérome, K. Bechgaard, and L. Taillefer, Phys. Rev. B 80, 214531 (2009).
C. Bourbonnais, F. Creuzet, D. Jérome, K. Bechgaard, and A. Moradpour, J. Phys. (Paris) Lett. 45, L755 (1984).
F. Creuzet, C. Bourbonnais, L. G. Caron, D. Jérome, and A. Moradpour, Synth. Met. 19, 277 (1987).
W. Kang, S. T. Hannahs, and P. M. Chaikin, Phys. Rev. Lett. 70, 3091 (1993).
J. R. Cooper, W. Kang, P. Auban, G. Montambaux, D. Jerome, and K. Bechgaard, Phys. Rev. Lett. 63, 1984 (1989).
W. Wu, P. M. Chaikin, W. Kang, J. Shinagawa, W. Yu, and S. E. Brown, Phys. Rev. Lett. 94, 097004 (2005).
K. Yamaji, J. Phys. Soc. Jpn. 51, 2787 (1982).
H. Gutfreund, B. Horovitz, and M. Weger, J. Phys. (Paris) Coll. 44, 983 (1983); B. Horovitz, H. Gutfreund, and M. Weger, Sol. State Comm. 39, 541 (1981).
Y. Hasegawa and H. Fukuyama, J. Phys. Soc. Jpn. 55, 3978 (1986).
R. Duprat and C. Bourbonnais, Eur. Phys. J. B 21, 219 (2001).
J. C. Nickel, R. Duprat, C. Bourbonnais, and N. Dupuis, Phys. Rev. Lett. 95, 247001 (2005); Phys. Rev. B 73, 165126 (2006).
C. Bourbonnais and A. Sedeki, Phys. Rev. B 80, 085105 (2009).
A. Sedeki, D. Bergeron, and C. Bourbonnais, Phys. Rev. B 85, 165129 (2012).
H. Meier, P. Auban-Senzier, C. Pépin, and D. Jérome, Phys. Rev. B 87, 125128 (2013).
J. Pouget, R. Moret, R. Comes, K. Bechgaard, J.-M. Fabre, and L. Giral, Mol. Cryst. Liq. Cryst. 79, 129 (1982).
J. P. Pouget, Crystals 2, 466 (2012).
V. J. Emery, J. Phys. (Paris) Coll. 44, 977 (1983).
H. Bakrim and C. Bourbonnais, Eur. Phys. Lett. 90, 27001 (2010).
A. Lanzara, P. V. Bogdanov, X. J. Zhou, S. Kellar, D. L. Feng, E. D. Lu, T. Yoshida, H. Eisaki, A. Fujimori, K. Kishio, et al., Nature 412, 510 (2001).
G.-H. Gweon, T. Sasagawa, S. Y. Zhou, J. Graf, H. Takagi, D.-H. Lee, and A. Lanzara, Nature 430, 187 (2004).
H. Iwasawa, J. F. Douglas, K. Sato, T. Masui, Y. Yoshida, Z. Sun, H. Eisaki, H. Bando, A. Ino, M. Arita, et al., Phys. Rev. Lett. 101, 157005 (2008).
M. K. Crawford, M. N. Kunchur, W. E. Farneth, E. M. McCarron III, and S. J. Poon, Phys. Rev. B 41, 282 (1990).
J. Bauer and G. Sangiovanni, Phys. Rev. B 82, 184535 (2010).
F. D. Klironomos and S.-W. Tsai, Phys. Rev. B 74, 205109 (2006).
G. Sangiovanni, M. Capone, C. Castellani, and M. Grilli, Phys. Rev. Lett. 94, 026401 (2005).
O. Gunnarsson and O. Rösch, J. Phys.: Condens. Matter 20, 043201 (2008).
C. Honerkamp, H. C. Fu, and D.-H. Lee, Phys. Rev. B 75, 014503 (2007).
Z. B. Huang, W. Hanke, E. Arrigoni, and D. J. Scalapino, Phys. Rev. B 68, 220507 (2003).
S. Andergassen, S. Caprara, C. Di Castro, and M. Grilli, Phys. Rev. Lett. 87, 056401 (2001).
N. Bulut and D. J. Scalapino, Phys. Rev. B 54, 14971 (1996).
J. Chang, E. Blackburn, A. T. Holmes, N. B. Christensen, J. Larsen, J. Mesot, R. Liang, D. A. Bonn, W. N. Hardy, A. Watenphul, et al., Nature Physics 8, 871 (2012).
M. Le Tacon, A. Bosak, M. Souliou, G. Dellea, T. Loew, R. Heid, K.-P. Bohnen, G. Ghiringhelli, M. Krisch, and B. Keimer, Nature Physics 10, 52 (2014).
P. M. Grant, Phys. Rev. B 26, 6888 (1982).
L. Ducasse, A. Abderraba, J. Hoarau, M. Pesquer, B. Gallois, and J. Gaultier, J. Phys. C 19, 3805 (1986).
D. L. Pévelen, J. Gaultier, Y. Barrans, D. Chassau, F. Castet, and L. Ducasse, Eur. Phys. J. B 19, 363 (2001).
W. P. Su, J. R. Schrieffer, and A. J. Heeger, Phys. Rev. B 22, 2099 (1980); S. Barisic, Phys. Rev. B 5, 932 (1972).
M. Krauzman, H. Poulet, and R. M. Pick, Phys. Rev. B 33, 99 (1986).
C. Homes and J. E. Eldridge, Phys. Rev. B 40, 6138 (1989).
J. P. Pouget, S. K. Khanna, F. Denoyer, R. Comès, A. F. Garito, and A. J. Heeger, Phys. Rev. Lett. 37, 437 (1976).
Y. Hasegawa and H. Fukuyama, Physica B 143, 447 (1986).
G. Montambaux, Phys. Rev. B 38, 4788 (1988).
L. G. Caron and C. Bourbonnais, Phys. Rev. B 29, 4230 (1984).
H. Bakrim and C. Bourbonnais, Phys. Rev. B 76, 195115 (2007).
P. A. Lee, T. M. Rice, and R. A. Klemm, Phys. Rev. B 15, 2984 (1977).
E. Fradkin and J. E. Hirsch, Phys. Rev. B 27, 1680 (1983).
S. E. Brown, P. M. Chaikin, and M. J. Naughton, in The Physics of Organic Superconductors and Conductors, edited by A. Lebed (Springer, Heidelberg, 2008), vol. 110, Springer Series in Materials Science, p. 49.
Y. Kimura, M. Misawa, and A. Kawamoto, Phys. Rev. B 84, 045123 (2011).
D. Pedron, R. Bozio, M. Meneghetti, and C. Pecile, Phys. Rev. B 49, 10894 (1994).
F. Creuzet, C. Bourbonnais, L. G. Caron, D. Jérome, and K. Bechgaard, Synth. Met. 19, 289 (1987).
L. G. Caron, F. Creuzet, P. Butaud, C. Bourbonnais, D. Jérome, and K. Bechgaard, Synth. Met. 27B, 123 (1988).
D. Chow, P. Wzietek, D. Foglatti, B. Alavi, D. J. Tantillo, C. A. Merlic, and S. E. Brown, Phys. Rev. Lett. 81, 3984 (1998).
H. Wilhelm, D. Jaccard, R. Duprat, C. Bourbonnais, D. Jérome, J. Moser, C. Carcel, and J. M. Fabre, Eur. Phys. J. B 21, 175 (2001).
T. Adachi, E. Ojima, K. Kato, H. Kobayashi, T. Miyazaki, M. Tokumoto, and A. Kobayashi, J. Am. Chem. Soc. 122, 3238 (2000).
J. Moser, M. Gabay, P. Auban-Senzier, D. Jérome, K. Bechgaard, and J. M. Fabre, Eur. Phys. J. B 1, 39 (1998).
P. Wzietek, F. Creuzet, C. Bourbonnais, D. Jérome, K. Bechgaard, and P. Batail, J. Phys. I 3, 171 (1993).
J. Shinagawa, Y. Kurosaki, F. Zhang, C. Parker, S. E. Brown, D. Jérome, K. Bechgaard, and J. B. Christensen, Phys. Rev. Lett. 98, 147002 (2007).
C. Bourbonnais and A. Sedeki, C. R. Physique 12, 532 (2011).
Strategy to Extract Kitaev Interaction using Symmetry in Honeycomb Mott Insulators

Jiefu Cen, Department of Physics, University of Toronto, Toronto, Ontario M5S 1A7, Canada
Hae-Young Kee, Department of Physics, University of Toronto, Toronto, Ontario M5S 1A7, Canada; Canadian Institute for Advanced Research, CIFAR Program in Quantum Materials, Toronto, Ontario M5G 1M1, Canada

(Dated: May 20, 2022; arXiv:2203.06193; DOI: 10.1038/s42005-022-00893-4)

The Kitaev spin liquid, a ground state of the bond-dependent Kitaev model in a honeycomb lattice, has been a centre of attraction since a microscopic theory to realize such an interaction in solid-state materials was discovered. A challenge in real materials, though, is the presence of the Heisenberg and another bond-dependent Gamma interaction detrimental to the Kitaev spin liquid, and there have been many debates on their relative strengths. Here we offer a strategy to extract the Kitaev interaction out of a full microscopic model by utilizing the symmetries of the Hamiltonian. Two tilted magnetic field directions related by a two-fold rotational symmetry generate distinct spin excitations originating from a specific combination of the Kitaev and Gamma interactions. Together with the in- and out-of-plane magnetic anisotropy, one can determine the Kitaev and Gamma interactions separately. Dynamic spin structure factors are presented to motivate future experiments. The proposed setups will advance the search for Kitaev materials.

We thank J. Gordon, I. Lee, C. Hammel, S. Nagler, and A. Tennant for useful discussions.
INTRODUCTION
An electron's orbital motion in an atom generates a magnetic field which influences its spin moment, known as spin-orbit coupling. When the coupling is strong in heavy atoms, the effective Hamiltonian is described by the spin-orbit-entangled pseudospin wave-function, and the interactions among magnetic ions are highly anisotropic, different from the standard Heisenberg interaction [1][2][3][4][5][6] . A fascinating example is the Kitaev model with a bond-dependent interaction in a two-dimensional honeycomb lattice, whose ground state is a quantum spin liquid (QSL) with Majorana fermions and Z 2 vortex excitations 7 . There have been extensive studies on the model because in the Kitaev QSL non-Abelian excitations emerge under a magnetic field, and their braiding provides topological computation. Since a microscopic mechanism to generate such an interaction was uncovered 8 , intense efforts toward finding QSLs, including a variety of candidate materials from spin S = 1/2 9-18 to higher-spin S systems, have been made [19][20][21][22] . Despite such efforts, a confirmed Kitaev QSL is still missing.
One challenge in finding the Kitaev QSL in magnetic materials is the presence of other spin interactions which may generate magnetic orderings or other disordered phases [23][24][25][26][27][28][29] . A generic nearest-neighbour (n.n.) model in an ideal honeycomb lattice was derived, which revealed the isotropic Heisenberg interaction and another bond-dependent interaction named the Gamma (Γ) interaction 25 . Furthermore, there exist further-neighbour interactions, such as second and third n.n. Heisenberg interactions, which makes it difficult to single out the Kitaev interaction itself.
There have been many debates on the relative strengths, especially between the dominant Kitaev and Gamma interactions in Kitaev candidate materials 13,18,28,30 , and an experimental guide on how to extract the Kitaev interaction out of a full Hamiltonian is highly desirable.
In this work, we present a symmetry-based experimental strategy to determine the Kitaev interaction. Our proposal is based on the π-rotation around the a-axis perpendicular to one of the bonds in the honeycomb plane, denoted C 2a , a symmetry that is broken by a specific combination of the Kitaev and Γ interactions. This broken C 2a symmetry can be easily detected with the help of a magnetic field applied within the a−c plane, where the c-axis is perpendicular to the honeycomb plane; spin excitations under the two field angles θ and −θ, measured away from the honeycomb plane as shown in Fig. 1(a), are distinct due to this combination of the Kitaev and Gamma interactions. The two field angles are related by the π-rotation around the a-axis, i.e., the C 2a operation. Such differences are based on symmetry and signal the relative strengths of these interactions. A magnetic ordering that further enhances the broken C 2a symmetry does not alter the asymmetry, but quantifying the interaction strengths requires the size of the magnetic ordering. For this reason, a polarized state in the high-field region would be ideal for our purpose.
To determine each of the interactions, one needs to use the conventional in- vs. out-of-plane anisotropy in spin excitations. We note that the Gamma interaction affects the conventional anisotropy, but the Kitaev does not when the field is large enough to compensate the order-by-disorder effect 31 . Thus subtracting the Gamma contribution deduced from the conventional anisotropy allows us to estimate the Kitaev interaction from the measured spin excitations under the field angles θ and −θ. Both the conventional anisotropy and the π-rotation-related spin excitations can be measured by angle-dependent ferromagnetic resonance (FMR) or inelastic neutron scattering (INS) techniques while sweeping the magnetic field direction in the a − c plane containing the C 2a rotation axis.
Below we present the microscopic model and main results based on the π-rotation symmetry around the a-axis. To demonstrate our theory, we also show the FMR and dynamical spin structure factors (DSSF) obtained by exact diagonalization (ED). We analyze the distinct spin excitations under the two field angles at finite momenta using linear spin wave theory (LSWT), which further confirms our results based on the symmetry argument. Our results will guide future searches for Kitaev materials.
RESULTS
Model - The generic spin exchange Hamiltonian among magnetic sites with strong spin-orbit coupling in the ideal edge-sharing octahedra environment, written in the octahedral x − y − z axes shown in Fig. 1(a), contains the Kitaev (K), Gamma (Γ), and Heisenberg (J) interactions 25 :

H = \sum_{\langle ij \rangle \in \alpha\beta(\gamma)} \Big[ J\, \mathbf{S}_i \cdot \mathbf{S}_j + K S_i^{\gamma} S_j^{\gamma} + \Gamma \big( S_i^{\alpha} S_j^{\beta} + S_i^{\beta} S_j^{\alpha} \big) \Big],   (1)
where S = σ/2 (with ℏ ≡ 1) and σ is the vector of Pauli matrices, ⟨ij⟩ denotes nearest-neighbor (n.n.) magnetic sites, and αβ(γ) denotes the γ bond taking the α and β spin components (α, β, γ ∈ {x, y, z}). The x-, y-, and z-bonds are shown in red, blue, and green colours, respectively, in Fig. 1(a). Further-neighbour interactions, trigonal-distortion-allowed interactions, and their effects will be discussed later.
To analyze the symmetry of the Hamiltonian, we rewrite the model in the a − b − c axes 32-34 :
H = \sum_{\langle i,j \rangle} \Big\{ J_{XY} \big( S_i^a S_j^a + S_i^b S_j^b \big) + J_Z S_i^c S_j^c + J_{ab} \big[ \cos\phi_\gamma \big( S_i^a S_j^a - S_i^b S_j^b \big) - \sin\phi_\gamma \big( S_i^a S_j^b + S_i^b S_j^a \big) \big] - \sqrt{2} J_{ac} \big[ \cos\phi_\gamma \big( S_i^a S_j^c + S_i^c S_j^a \big) + \sin\phi_\gamma \big( S_i^b S_j^c + S_i^c S_j^b \big) \big] \Big\},   (2)
where φ_γ = 0, 2π/3, and 4π/3 for the z-, x-, and y-bonds, respectively, and the exchange interactions are given by

J_{XY} = J + J_{ac}, \quad J_Z = J + J_{ab}, \quad J_{ab} = \tfrac{1}{3} K + \tfrac{2}{3} \Gamma, \quad J_{ac} = \tfrac{1}{3} K - \tfrac{1}{3} \Gamma.   (3)
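Eq. (3) is a simple linear map, and it is handy to have it as code when scanning parameter sets. A minimal sketch (the function name is ours, not from the paper):

```python
def frame_couplings(J, K, Gamma):
    """Map the cubic-frame couplings (J, K, Gamma) of Eq. (1) to the
    crystallographic-frame couplings (J_XY, J_Z, J_ab, J_ac) of Eqs. (2)-(3)."""
    J_ab = K / 3.0 + 2.0 * Gamma / 3.0
    J_ac = K / 3.0 - Gamma / 3.0
    J_XY = J + J_ac
    J_Z = J + J_ab
    return J_XY, J_Z, J_ab, J_ac

# Fig. 2(a) parameter set: K = Gamma makes J_ac vanish, so C_2a stays unbroken
J_XY, J_Z, J_ab, J_ac = frame_couplings(-1.0, 0.5, 0.5)
```

Note that J_Z − J_XY = J_ab − J_ac = Γ, the combination that controls the conventional in- vs. out-of-plane anisotropy δω_A discussed below.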
The Hamiltonian H is invariant under the π-rotation around the b-axis, denoted C_2b, and the 2π/3-rotation around the c-axis, denoted C_3c, in addition to inversion and time-reversal symmetry.
Our proposed experimental design is based on the observation that H is not invariant under the π-rotation about the a-axis, C_2a, due solely to the presence of J_ac; i.e., if J_ac = 0, C_2a is also a symmetry of H. Since C_2a is broken by J_ac, detecting the broken C_2a signals the strength of J_ac. We note that a magnetic field swept from the c-axis to the a-axis within the a − c plane does exactly this. The fields with angles θ (blue line) and −θ (red line) for 0 < θ < π/2 shown in Fig. 1(b) and (c) are related by the C_2a rotation, and thus measuring the spin-excitation difference between these two field directions detects the strength of J_ac.
To prove our symmetry argument, we consider the full model in a magnetic field. The total Hamiltonian including the Zeeman term is given by

H_{tot} = H + H_B = H - g \mu_B \sum_i \mathbf{S}_i \cdot \mathbf{h},   (4)
where the external field h has the polar angle θ measured away from the a − b honeycomb plane and the azimuthal angle φ from the a-axis, as shown in Fig. 1(b). We focus on the lowest-energy excitation n = 1, which gives the dominant resonance at low temperatures, and drop the n in ω_n from now on for simplicity, even though our proposal works for all n. We define the excitation anisotropy between the magnetic fields with angles θ and −θ as δω_K(θ) ≡ ω(θ) − ω(−θ) for 0 < θ < π/2, and the conventional anisotropy between in- and out-of-plane fields as δω_A ≡ ω(θ = 0) − ω(θ = π/2). Below we first show, based on the symmetry, how δω_K arises from J_ac under a field in the a − c plane.
Symmetry Analysis - To understand the origin of a finite δω_K for φ = 0 under the magnetic field sweep, we first begin with the special case φ = π/2, i.e., when the external field is in the b − c plane. This is a special case where δω_K = 0, for the following reason.

FIG. 1 (caption, continued). (b) Direction of the external magnetic field h in the abc axes, where θ is measured from the a − b plane and φ from the a-axis. The blue arrow M represents the magnetic-moment direction with the angle θ_M. (c) δω_K(θ) in the a − c plane is the difference in the spin-excitation energies ω between two field directions: ω(θ) (blue) and ω(−θ) (red). C_2b maps ω(θ) to ω(π + θ), so δω_K(π − θ) = −δω_K(θ).
The Zeeman terms due to the field with the angle θ and with −θ are related by a π-rotation of the field about the b̂ axis, denoted by

C_{2b,\theta}: H_B \propto \big( \cos\theta\, S_i^b + \sin\theta\, S_i^c \big) \longrightarrow \big( \cos\theta\, S_i^b - \sin\theta\, S_i^c \big).   (5)
The same can be achieved by a π-rotation of the lattice,

C_{2b}: (S^a, S^b, S^c) \to (-S^a, S^b, -S^c) \quad \text{and} \quad \phi_x \leftrightarrow \phi_y,   (6)
which also indicates that H is invariant under C_2b. While H_B breaks the C_2b symmetry of H, the total Hamiltonians H + H_B(θ) and H + H_B(−θ) are related by C_2b and therefore share the same eigenenergies, i.e., δω_K = 0. The difference due to the field is simply removed by a π-rotation of the eigenstates about the b̂ axis. A magnetic field sweeping from θ to −θ in the other planes, equivalent to the b − c plane by C_3c symmetry, also gives δω_K = 0.

Now let us consider the magnetic field sweeping in the a − c plane. Similarly, the magnetic field directions θ and −θ are related by
C_{2a,\theta}: H_B \propto \big( \cos\theta\, S_i^a + \sin\theta\, S_i^c \big) \longrightarrow \big( \cos\theta\, S_i^a - \sin\theta\, S_i^c \big).   (7)
Considering a π-rotation of the lattice about the â axis,

C_{2a}: (S^a, S^b, S^c) \to (S^a, -S^b, -S^c) \quad \text{and} \quad \phi_x \leftrightarrow \phi_y,   (8)
we find that the J_XY, J_Z, and J_ab terms are invariant under C_2a, while the J_ac terms transform as

C_{2a}: J_{ac} \to -J_{ac}.   (9)

Here, for simplicity, we calculate the excitation energy probed by the RF field (details can be found in the Methods) at a fixed magnetic field strength for spin 1/2, using ED on a C_3-symmetric 24-site cluster.
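The sign flip in Eq. (9) can be checked numerically on a single spin-1/2 bond: conjugating the bond Hamiltonian of Eq. (2) by the π-rotation about the a-axis (which sends S^b → −S^b, S^c → −S^c) and swapping φ_x ↔ φ_y reproduces the same bond Hamiltonian with J_ac → −J_ac. A minimal sketch (ours, not the paper's code; the coupling values are arbitrary):

```python
import numpy as np

# spin-1/2 operators in the crystallographic (a, b, c) frame
Sa = np.array([[0, 1], [1, 0]], dtype=complex) / 2
Sb = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
Sc = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def h_bond(phi, J_XY, J_Z, J_ab, J_ac):
    """Two-site bond Hamiltonian of Eq. (2) for bond angle phi."""
    c, s = np.cos(phi), np.sin(phi)
    t = np.kron  # tensor product of the two sites
    return (J_XY * (t(Sa, Sa) + t(Sb, Sb)) + J_Z * t(Sc, Sc)
            + J_ab * (c * (t(Sa, Sa) - t(Sb, Sb)) - s * (t(Sa, Sb) + t(Sb, Sa)))
            - np.sqrt(2) * J_ac * (c * (t(Sa, Sc) + t(Sc, Sa))
                                   + s * (t(Sb, Sc) + t(Sc, Sb))))

# pi-rotation about the a-axis for spin-1/2: exp(-i*pi*Sa) = -i * (2*Sa)
U = np.kron(-1j * 2 * Sa, -1j * 2 * Sa)

params = (0.3, 0.7, 0.4, 0.25)    # arbitrary (J_XY, J_Z, J_ab, J_ac)
flipped = (0.3, 0.7, 0.4, -0.25)  # same couplings with J_ac -> -J_ac
phi_z, phi_x, phi_y = 0.0, 2 * np.pi / 3, 4 * np.pi / 3

# z-bond maps onto itself; the x- and y-bonds are exchanged (phi_x <-> phi_y)
ok_z = np.allclose(U @ h_bond(phi_z, *params) @ U.conj().T, h_bond(phi_z, *flipped))
ok_xy = np.allclose(U @ h_bond(phi_x, *params) @ U.conj().T, h_bond(phi_y, *flipped))
```

Both checks should return True, realizing Eq. (9) bond by bond.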
We set the magnetic field h = 1 and g = μ_B ≡ 1, so that the excitation energy of a free spin is ω_0 = gμ_B h = 1; all calculated excitation energies are therefore normalized by ω_0. A few sets of interaction parameters (in units of ω_0) are investigated. Figure 2(a) shows the J = −1 and K = Γ = 0.5 case with no δω_K(θ) between −π/2 < θ < 0 (red line) and 0 < θ < π/2 (blue line), since J_ac = 0. The conventional anisotropy δω_A is finite, because the Γ interaction generates a strong anisotropy between the plane θ = 0 and the c-axis θ = π/2, i.e., J_XY ≠ J_Z due to a finite Γ contribution. The black line is for J = −1 only, showing a uniform FMR independent of angle, which serves as a reference.

Clearly, δω_K(θ) is significant due to a finite J_ac, and δω_A is also large due to a finite Γ. While a magnetic field of strength h = 1 is used to polarize the ground state, where the finite-size effect is small as shown in Supplementary Note 2, our symmetry argument works for any finite field. However, we note that the finite-size effect of ED is minimal when the ground state is polarized.

The square boxes denote the excitation energies obtained by ED, and the colour bars indicate the intensity of the DSSF Σ_α S^αα(q, ω) (details can be found in the Methods). The structure factor is convolved with a Gaussian of finite width to emulate finite experimental resolution. We observe a clear difference between the two field directions, δω_K, at every momentum point. In particular, δω_K is largest at the M_2 point, while it is tiny at the K_1 point. Note that M_1 and M_3 are related by C_2b and inversion.
To gain more insight into δω_K(θ) at finite momenta obtained by ED, we also perform LSWT calculations with the magnetization making an angle θ_M, as indicated in Fig. 1(b). The spin excitations within the LSWT are shown as dashed lines together with the ED results in Fig. 3(a). The mismatch between LSWT and ED is visible at every momentum, which implies significant effects of nonlinear terms 47 .
However, when the field increases, the difference should decrease, since the magnetic polarization increases at a higher field. In Fig. 3(b), we show both ED and LSWT with h = 8 and θ_M ∼ 25.8°, where the two results match well as expected, and the nonlinear terms become less significant. In particular, the anisotropy δω_K at the K-point in the high-field limit, given by the leading terms in 1/h, simplifies to
\delta\omega_K(\theta) = \frac{3}{8} \cos\theta_M \Big( \big| 2\sqrt{2} J_{ac} \sin\theta_M - J_{ab} \cos\theta_M \big| - \big| 2\sqrt{2} J_{ac} \sin\theta_M + J_{ab} \cos\theta_M \big| \Big) + \frac{9\sqrt{2}\, J_{ac} J_{ab} \big( 2\sin 2\theta_M + \sin 4\theta_M \big)}{128\, h \cos(\theta - \theta_M)} + O\!\Big(\frac{1}{h^2}\Big),   (10)
where θ_M(θ) → θ when h → ∞. This shows that both J_ac and J_ab must be finite for a finite δω_K at the K-point, which explains the absence of splitting at the K-point in Fig. 3(b), as our choice of parameters gives J_ab = 0, i.e., Γ = −K/2. On the other hand, at the M_2 point there is no simple expression, but the leading terms of δω_K(θ) in δθ_{a/c} around the a- and c-axes (δθ_a = 0 − θ and δθ_c = θ − π/2) are given by
\delta\omega_K(\theta) \simeq \begin{cases} J_{ac}\, \delta\theta_a\, A + O(\delta\theta_a^3) & \text{(around the } a\text{-axis)} \\ J_{ac}\, \delta\theta_c\, C + O(\delta\theta_c^3) & \text{(around the } c\text{-axis),} \end{cases}   (11)
where A and C are functions of the other interactions, given in Supplementary Note 3. Clearly, δω_K(θ) appears only at odd powers of J_ac and δθ_{a/c}, consistent with the symmetry analysis presented above.
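As a numerical illustration of Eq. (10) (with the grouping of terms as written above), the leading K-point splitting can be tabulated directly; the function name below is ours:

```python
import numpy as np

def dwK_Kpoint(theta, theta_M, J_ac, J_ab, h):
    """Leading 1/h expression for the K-point anisotropy, Eq. (10)."""
    a = 2 * np.sqrt(2) * J_ac * np.sin(theta_M)
    b = J_ab * np.cos(theta_M)
    lead = (3.0 / 8.0) * np.cos(theta_M) * (abs(a - b) - abs(a + b))
    sub = (9 * np.sqrt(2) * J_ac * J_ab
           * (2 * np.sin(2 * theta_M) + np.sin(4 * theta_M))
           / (128 * h * np.cos(theta - theta_M)))
    return lead + sub

# J_ab = 0 (i.e. Gamma = -K/2): both terms vanish, hence no K-point splitting,
# as observed in Fig. 3(b)
no_split = dwK_Kpoint(np.pi / 6, np.pi / 7, J_ac=0.5, J_ab=0.0, h=8.0)
```

With both J_ac and J_ab finite, the same function returns a nonzero splitting, matching the statement above.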
So far, we have focused on the ideal octahedra environment. However, a trigonal distortion is often present, albeit small, which introduces extra exchange interactions. Below we discuss other contributions to δω_A that complicate the isolation of K from J_ac, and our resolution of this complication in order to estimate the Kitaev interaction from a full Hamiltonian.
Effects of trigonal distortion and further neighbour interactions -In principle,
there are other small but finite interactions; a few examples in δH include

\delta H = \sum_{\langle ij \rangle \in \alpha\beta(\gamma)} \Gamma' \big( S_i^{\alpha} S_j^{\gamma} + S_i^{\gamma} S_j^{\alpha} + S_i^{\beta} S_j^{\gamma} + S_i^{\gamma} S_j^{\beta} \big) + J_2 \sum_{\langle\langle i,j \rangle\rangle} \mathbf{S}_i \cdot \mathbf{S}_j + J_3 \sum_{\langle\langle\langle i,j \rangle\rangle\rangle} \mathbf{S}_i \cdot \mathbf{S}_j,   (12)
where Γ′ is introduced when a trigonal distortion is present 26 , and J_2 and J_3 are the second- and third-n.n. Heisenberg interactions, respectively. It is natural to expect that they are smaller than the n.n. Kitaev, Γ, and Heisenberg interactions 18,27,28 . Several types of interlayer exchange interactions are also present, but they are even smaller than the terms considered in Eq. (12) 18 .
Let us investigate how they affect the above analysis performed for the ideal n.n. Hamiltonian. First of all, the isotropic interactions, such as the further-neighbour J_2 and J_3 and the interlayer Heisenberg coupling, do not change our proposal, since they contribute to neither δω_A nor δω_K. On the other hand, Γ′ modifies the exchange parameters as follows:
J_{XY} = J + J_{ac} - \Gamma', \quad J_Z = J + J_{ab} + 2\Gamma', \quad J_{ab} = \tfrac{1}{3} K + \tfrac{2}{3} (\Gamma - \Gamma'), \quad J_{ac} = \tfrac{1}{3} K - \tfrac{1}{3} (\Gamma - \Gamma').   (13)
The conventional anisotropy δω_A is now due to Γ + 2Γ′, obtained from J_Z − J_XY. Thus, to single out the Kitaev interaction, one has to find both Γ and Γ′, as J_ac is a combination of K, Γ, and Γ′. Once the trigonal distortion is present, the g-factor also becomes anisotropic, i.e., the in-plane g_a differs from the c-axis g_c, which affects δω_A.
However, the g-factor anisotropy does not affect δω_K, since the field angles θ and −θ involve the same strengths of the in- and out-of-plane field components, i.e., h(θ) = h_a â + h_c ĉ and h(−θ) = h_a â − h_c ĉ. Thus we wish to extract the information on K and Γ − Γ′ from δω_K, as it is free from the g-factor anisotropy.
We note that δω_K at the K-point, Eq. (10), offers both J_ac and J_ab: the first term is independent of the field, and the next term is proportional to 1/h_eff, where h_eff = h (g_a^2 cos^2 θ + g_c^2 sin^2 θ)^{1/2}.
Once J_ac and J_ab are deduced, K and Γ − Γ′ can be estimated from Eq. (13). Measurements of δω_K at the K-point in a large magnetic field then determine K and Γ − Γ′ separately. The further-neighbour Heisenberg interactions J_2 and J_3 do not modify Eq. (10) in the high-field limit, so they do not affect our procedure.
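The last step, recovering K and Γ − Γ′ from the two deduced combinations, is a trivial inversion of Eq. (13): K = J_ab + 2 J_ac and Γ − Γ′ = J_ab − J_ac. Schematically (our notation):

```python
def extract_couplings(J_ab, J_ac):
    """Invert Eq. (13): J_ab = K/3 + 2(G - G')/3 and J_ac = K/3 - (G - G')/3."""
    K = J_ab + 2.0 * J_ac
    Gamma_eff = J_ab - J_ac  # this is Gamma - Gamma'
    return K, Gamma_eff

# round trip starting from (K, Gamma - Gamma') = (-1.0, 0.5)
J_ab = -1.0 / 3 + 2 * 0.5 / 3
J_ac = -1.0 / 3 - 0.5 / 3
K, Gamma_eff = extract_couplings(J_ab, J_ac)
```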
DISCUSSION
We propose an experimental setup to single out the Kitaev interaction for honeycomb Mott insulators with edge-sharing octahedra. In an ideal octahedral cage, the symmetry-allowed n.n. interactions contain the Kitaev, another bond-dependent Γ, and Heisenberg interactions. We prove that the magnetic anisotropy related by the π-rotation around the a-axis, denoted δω_K, occurs only when a combination of K and Γ, i.e., K − Γ, is finite.
This can be measured from the spin excitation energy differences under the magnetic field of angle sweeping from above to below the honeycomb plane using the FMR or INS techniques.
Since the in- and out-of-plane magnetic anisotropy δω_A is determined solely by Γ, one can first estimate the Γ strength from δω_A and then extract the Kitaev interaction from δω_K.
While the trigonal distortion introduces an additional interaction, the Kitaev interaction is unique as it is the only interaction that contributes to δω K without altering δω A . Our theory is applicable to all Kitaev candidate materials including an emerging candidate RuCl 3 .
In particular, since the two dominant interactions in RuCl_3 are a ferromagnetic Kitaev and a positive Γ interaction 3,5,18,27 , leading to a large J_ac and a small J_ab, we predict that δω_K, which is independent of the g-factor anisotropy, is significant except at the K-point. Supplementary Note 4 shows the FMR and INS for a set of parameters with a small negative Γ′ interaction that stabilizes a zero-field zigzag ground state, as in RuCl_3 18,27,28 . Another relevant perturbation in some materials is the effect of a monoclinic structure, which loses the C_3c symmetry of R3̄, making the z-bond different from the x- and y-bonds. The current theory of a finite δω_K due to a finite J_ac still works for the C2/m structure. However, since the z-bond J_ac^z (= K^z/3 − Γ^z/3) is no longer the same as that of the x- and y-bonds, J_ac^x (= J_ac^y), and the C_2a symmetry relates the x- and y-bonds, measuring the anisotropy δω_K at different momenta, which detects both J_ac^x = K^x/3 − Γ^x/3 and J_ac^z, is required to determine the different x- and z-bond strengths. The symmetry-based theory presented here is also valid for higher-spin models with the Kitaev interaction, such as S = 3/2 CrI_3, including a nonzero single-ion anisotropy 19,21,22,48 , which generates a further anisotropy in δω_A but does not affect δω_K. The next-nearest-neighbor Dzyaloshinskii-Moriya interaction with the d-vector along the c-axis 49 is also invariant under the C_2a symmetry. Further studies of higher-spin models remain to be carried out to identify higher-spin Kitaev spin liquids. We would like to emphasize that the proposed setup is suitable for other experimental techniques that probe spin excitations, such as low-energy terahertz optical and nuclear magnetic resonance spectroscopies, in addition to the angle-dependent FMR and INS spectroscopy shown in this work as examples.
METHODS
Exact Diagonalization Simulations
S^{\alpha\beta}(\mathbf{q}, \omega) = \frac{1}{N} \sum_{i,j}^{N} e^{-i\mathbf{q}\cdot(\mathbf{R}_i - \mathbf{R}_j)} \int_{-\infty}^{\infty} dt\, e^{i\omega t} \langle S_i^{\alpha}(t) S_j^{\beta}(0) \rangle = \frac{1}{N} \sum_{i,j}^{N} e^{-i\mathbf{q}\cdot(\mathbf{R}_i - \mathbf{R}_j)} \sum_{\lambda,\lambda'} p_\lambda \langle \lambda | S_i^{\alpha} | \lambda' \rangle \langle \lambda' | S_j^{\beta} | \lambda \rangle\, \delta(\hbar\omega + E_\lambda - E_{\lambda'}) = \sum_{\lambda,\lambda'} p_\lambda \langle \lambda | S_{-\mathbf{q}}^{\alpha} | \lambda' \rangle \langle \lambda' | S_{\mathbf{q}}^{\beta} | \lambda \rangle\, \delta(\hbar\omega + E_\lambda - E_{\lambda'}),   (14)
where the Lehmann representation is used; |λ⟩ and |λ′⟩ are the eigenstates with thermal population factor p_λ, and S^{α,β} are the spin operators. At low temperatures, we take |λ⟩ to be the ground state |0⟩ and are interested in the lowest-energy excitation to |1⟩ with nonzero probability. For optical spectroscopies such as FMR, α = β = the direction of the RF electromagnetic field and q = 0, so |0⟩ and |1⟩ belong to the same momentum sector. The structure factor simplifies to
S^{\alpha\alpha}(\omega) = \frac{1}{N} \Big| \langle 1 | \sum_i S_i^{\alpha} | 0 \rangle \Big|^2\, \delta(\hbar\omega + E_0 - E_1).
For INS, the finite q must match the difference between the momenta of |0⟩ and |1⟩. For simplicity, we calculate the DSSF for α = β, \sum_\alpha S^{\alpha\alpha}(\mathbf{q}, \omega) = \sum_\alpha | \langle 1 | S_{\mathbf{q}}^{\alpha} | 0 \rangle |^2\, \delta(\hbar\omega + E_0 - E_1).
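The Lehmann-representation recipe above can be reproduced on a toy cluster in a few lines. The sketch below (ours, not the paper's 24-site code) exactly diagonalizes a single z-bond of Eq. (1) plus a Zeeman term and evaluates the q = 0 weights |⟨λ|Σ_i S_i^x|0⟩|²; since the probe operator is Hermitian, the weights obey the sum rule Σ_λ |⟨λ|O|0⟩|² = ⟨0|O²|0⟩.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2)

def site(op, i):
    """Embed a single-site operator at site i of the 2-site Hilbert space."""
    return np.kron(op, I2) if i == 0 else np.kron(I2, op)

J, K, G, h = -1.0, 1.0, 0.5, 1.0
H = sum(J * site(s, 0) @ site(s, 1) for s in (sx, sy, sz))           # Heisenberg
H = H + K * site(sz, 0) @ site(sz, 1)                                # Kitaev (z-bond)
H = H + G * (site(sx, 0) @ site(sy, 1) + site(sy, 0) @ site(sx, 1))  # Gamma
H = H - h * (site(sz, 0) + site(sz, 1))                              # Zeeman

E, V = np.linalg.eigh(H)
O = site(sx, 0) + site(sx, 1)                  # q = 0 probe operator
omega = E - E[0]                               # excitation energies
weight = np.abs(V.conj().T @ O @ V[:, 0])**2   # |<lambda|O|0>|^2
```

The δ-functions of Eq. (14) then place spectral weight `weight[n]` at energy `omega[n]`.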
Linear Spin Wave Theory - The Hamiltonian in Eq. (2) is bosonized by the standard Holstein-Primakoff transformation 53 , expanded to linear order in the spin S:

S_j^+ = S_j^a + i S_j^b = \sqrt{2S} \Big( a_j - \frac{a_j^{\dagger} a_j a_j}{4S} + O\big(\tfrac{1}{S^2}\big) \Big) \simeq \sqrt{2S}\, a_j,
S_j^- = S_j^a - i S_j^b = \sqrt{2S} \Big( a_j^{\dagger} - \frac{a_j^{\dagger} a_j^{\dagger} a_j}{4S} + O\big(\tfrac{1}{S^2}\big) \Big) \simeq \sqrt{2S}\, a_j^{\dagger},
S_j^c = S - a_j^{\dagger} a_j,   (15)
where the quantization axis is parallel to the c-axis. The Fourier transforms are a_j = (1/\sqrt{N}) \sum_{\mathbf{k}} e^{i \mathbf{k} \cdot \mathbf{r}_j} a_{\mathbf{k}} for sublattice A and b_j = (1/\sqrt{N}) \sum_{\mathbf{k}} e^{i \mathbf{k} \cdot (\mathbf{r}_j + \boldsymbol{\delta})} b_{\mathbf{k}} for sublattice B, where δ is the vector pointing to the nearest neighbours. The resulting quadratic Hamiltonian has the form

H = \sum_{\mathbf{k}} X_{\mathbf{k}}^{\dagger} H(\mathbf{k}) X_{\mathbf{k}}, \qquad X_{\mathbf{k}}^{\dagger} = \big( a_{\mathbf{k}}^{\dagger},\, b_{\mathbf{k}}^{\dagger},\, a_{-\mathbf{k}},\, b_{-\mathbf{k}} \big).

Diagonalizing this BdG Hamiltonian following standard methods 54 gives two spin-wave excitation branches.
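The "standard methods" for diagonalizing such a bosonic BdG Hamiltonian amount to a para-unitary (Bogoliubov) transformation; a compact implementation is Colpa's Cholesky construction. A minimal sketch (ours), valid when H(k) is positive definite:

```python
import numpy as np

def magnon_energies(Hk):
    """Bogoliubov spectrum of a 2n x 2n bosonic BdG matrix via Colpa's method.

    With Hk = L L^dag (Cholesky), the Hermitian matrix L^dag sigma3 L is
    similar to sigma3 Hk, whose eigenvalues come in pairs +/- E_i.
    """
    n = Hk.shape[0] // 2
    sigma3 = np.diag([1.0] * n + [-1.0] * n)
    L = np.linalg.cholesky(Hk)                       # requires Hk > 0
    lam = np.linalg.eigvalsh(L.conj().T @ sigma3 @ L)
    return np.sort(lam)[n:]                          # the n positive branches

# one-mode check: H = [[A, B], [B, A]] gives the textbook sqrt(A^2 - B^2)
E = magnon_energies(np.array([[2.0, 1.0], [1.0, 2.0]]))
```

For the honeycomb ferromagnet, applying this to the 4 x 4 H(k) above yields the two spin-wave branches at each momentum.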
DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon reasonable request.
CODE AVAILABILITY
The code used to generate the data used in this study is available from the corresponding author upon reasonable request.
The magnetic anisotropy in the spin excitation energies is defined as ω_n(θ) = E_n(θ) − E_0(θ), where E_n and E_0 are the excited- and ground-state energies, respectively. This anisotropy is affected by all interactions away from the isotropic Heisenberg limit (J_XY = J_Z), making it difficult to quantify the effect of individual interactions. However, if we compare the two excitation anisotropies ω_n(θ) and ω_n(−θ) for a given strength h and φ = 0, as shown in Fig. 1(c), which are related by the C_2a symmetry transformation, we can eliminate the effects of all interactions except J_ac, thanks to the symmetries of the model. Since our theory relies on the symmetry of the Hamiltonian, the ground state should break the C_2a symmetry explicitly only through the J_ac term. The magnetic field also contributes to the C_2a breaking, but by comparing the two angles θ and −θ, the effect of J_ac is isolated.
FIG. 1. Crystal structure and direction of the magnetic field. (a) Schematic of the honeycomb lattice of transition-metal ions (light blue) in the edge-sharing octahedra environment of anions (above the honeycomb plane: gray; below the plane: light gray). Octahedral xyz axes, abc axes, and the Kitaev bonds x (red), y (green), z (blue) are indicated. C_2a and C_2b symmetries (orange) are highlighted. The octahedra environment breaks C_2a, while the C_2b symmetry is intact.
By the same argument, if J_ac = 0, H is invariant under C_2a, and the eigenenergies of the total Hamiltonian for θ and −θ are the same, i.e., δω_K = 0. If J_ac ≠ 0, the total Hamiltonians H + H_B(θ) and H + H_B(−θ) cannot be related by C_2a, and therefore δω_K ≠ 0. We need to change the sign of J_ac for the C_2a relation to hold; i.e., the transformation of the external field angle from θ to −θ is equivalent to the change of J_ac to −J_ac. Thus, the lack of C_2a symmetry allows us to single out the J_ac interaction through δω_K.

Since J_ac contains a combination of the Kitaev and Γ interactions, we need other means to subtract the Γ contribution. The in- and out-of-plane anisotropy δω_A offers precisely this other information. We note that δω_A is determined by J_Z − J_XY = Γ. Thus, for the ideal edge-sharing octahedral environment, we can first estimate Γ from the measured δω_A, and then extract the Kitaev strength by subtracting the Γ contribution from the measured δω_K(θ).

Below we show numerical results of spin excitations obtained by ED on a 24-site cluster, which can be measured by angle-dependent FMR and INS techniques under magnetic field angles θ and −θ with φ = 0.

Angle-Dependent Ferromagnetic Resonance - FMR is a powerful probe of ferromagnetic or spin-correlated materials. FMR spectrometers record the radio-frequency (RF) electromagnetic wave absorbed by the sample of interest placed under an external magnetic field. To observe the resonance signal, the resonant frequency of the sample is changed to match that of the RF wave under a scan of the external magnetic field, so the excitation anisotropy δω(θ) leads to an anisotropy in the resonant magnetic field. FMR provides highly resolved spectra over a large energy range and has been used to investigate exchange couplings 35-38 and anisotropies 39,40 due to its dependence on the magnetic field angle.

FIG. 2. Angle-dependent spin excitations in ferromagnetic resonance (FMR) using exact diagonalization on a C_3-symmetric 24-site cluster. Various sets of parameters with Zeeman energy gμ_B h = 1 are used. δω_A is the difference in the spin excitation energies ω between fields along the a-axis and the c-axis, and δω_K is the difference between ω(θ) (blue) and ω(−θ) (red), as highlighted by the arrows. J, K, and Γ are the Heisenberg, Kitaev, and off-diagonal interactions, respectively. (a) J = −1 and K = Γ = 0.5. (b) J = −1, K = 1, and Γ = 0. FMR in the b − c plane is shown in green: θ (up triangle) and −θ (down triangle). (c) J = −0.5, Γ = 0.5, and K = 0. (d) J = −0.1, K = −1, Γ = 0.5. See the FMR subsection for implications of the results.
Figure 2(b) shows the J = −1, K = 1, and Γ = 0 case, which exhibits a finite δω_K(θ) between −π/2 < θ < 0 and 0 < θ < π/2 in the a − c plane. On the other hand, no δω_K(θ) is observed when sweeping θ in the b − c plane (up and down triangles with the green line), consistent with the symmetry analysis presented above. Note that the conventional anisotropy δω_A in both the a − c and b − c planes is not exactly zero, because the Kitaev interaction selects the magnetic moment along the cubic axes in the ferromagnetic state via order by disorder 31,41 . This leads to a tiny anisotropy between the plane θ = 0 and the c-axis θ = π/2 when Γ = 0 and J_XY = J_Z. This anisotropy becomes weaker when the magnetic field increases, i.e., when the moment polarization overcomes the order-by-disorder effect. Supplementary Note 1 shows that the anisotropy is almost gone when the field is increased threefold with the same set of parameters, where the Heisenberg limit (black line) is added as a reference. When Γ becomes finite, favouring either the a − b plane or the c-axis depending on its sign, this conventional anisotropy is determined by the Γ interaction, as shown in Fig. 2(c) and (d), and the order-by-disorder effect becomes silent. Figure 2(c) shows the J = −0.5, Γ = 0.5, and K = 0 case. The Γ interaction alone can generate a finite δω_K due to the C_2a broken by J_ac. In addition, the Γ interaction generates a large δω_A, different from Fig. 2(b). Figure 2(d) presents the J = −0.1, K = −1, and Γ = 0.5 case, which is close to a set of parameters proposed for J_eff = 1/2 Kitaev candidate materials 28 .
Inelastic Neutron Scattering - Complementary to FMR, INS can measure excitations between different points in reciprocal space based on the momentum transfer of the scattered neutrons. The magnon dispersions of the ordered states of magnetic materials measured via INS have been used to determine the spin exchange Hamiltonian parameters 10,17,42-46 . Figure 3(a) and (b) show the spin excitations at accessible wavevectors on a C_3-symmetric 24-site cluster with the same exchange parameters as Fig. 2(d), with h = 1 and h = 8, respectively. The cluster and the accessible momenta are shown in Fig. 3(c) and (d), respectively. We set the magnetic field angles θ = 30° (blue) and θ = −30° (red) in the a − c plane.

FIG. 3. Dynamic spin structure factor (DSSF) of the spin excitations at accessible wavevectors using exact diagonalization (ED) on a C_3-symmetric 24-site cluster and linear spin wave theory (LSWT). (a) The boxes and the dashed lines are the DSSF obtained by ED and LSWT, respectively. The colour bars represent the intensity of the DSSF. The same parameters as in Fig. 2(d) are used, i.e., (J, K, Γ) = (−0.1, −1, 0.5) in units of ω_0 = gμ_B h = 1. The magnetic field angles in the a − c plane are 30° (blue) and −30° (red). (b) DSSF with the same parameters as (a) except a larger field gμ_B h = 8, showing a better match between the ED and LSWT results; see the Inelastic Neutron Scattering subsection for further discussion. (c) The C_3-symmetric 24-site cluster used for the ED. (d) Accessible momentum points labeled on the x-axis of (a) and (b).
θ_M is found via minimizing the classical ground-state energy (details can be found in the Methods); the LSWT with the set of parameters used for Fig. 3(a)'s ED results leads to θ_M ∼ 12.1°.
Numerical ED was used to compute spin excitations under a magnetic field. ED was performed on a 24-site honeycomb cluster with periodic boundary conditions, where the Lanczos method 50,51 was used to obtain the lowest-lying eigenvalues and eigenvectors of the Hamiltonian in Eq. (2). The 24-site honeycomb shape and the accessible momentum points in the Brillouin zone are shown in Fig. 3(c) and (d). The probability of a spin excitation of momentum q and energy ω is proportional to the dynamic spin structure factor 52 (DSSF) given in Eq. (14).
For a general field, the Hamiltonian in Eq. (2) is first written in new axes a′ − b′ − c′: a′ = (sin θ_M cos φ_M, sin θ_M sin φ_M, − cos θ_M), b′ = (− sin φ_M, cos φ_M, 0), and c′ = (cos θ_M cos φ_M, cos θ_M sin φ_M, sin θ_M). c′ is parallel to the magnetization S(θ_M, φ_M), which is not the same direction as the magnetic field unless the field is large enough to fully polarize the moment. The magnetization angles (θ_M, φ_M) are obtained by minimizing the classical ground-state energy, and LSWT is applied to the ground state 47 . Any a′ and b′ axes obtained by rotation around c′ are valid and do not affect the result.
Acknowledgements
William Witczak-Krempa, Gang Chen, Yong Baek Kim, and Leon Balents, "Correlated quantum phenomena in the strong spin-orbit regime," Annual Review of Condensed Matter Physics 5, 57-82 (2014).
Jeffrey G. Rau, Eric Kin-Ho Lee, and Hae-Young Kee, "Spin-orbit physics giving rise to novel phases in correlated systems: Iridates and related materials," Annual Review of Condensed Matter Physics 7, 195-221 (2016).
Stephen M. Winter, Alexander A. Tsirlin, Maria Daghofer, Jeroen van den Brink, Yogesh Singh, Philipp Gegenwart, and Roser Valentí, "Models and materials for generalized Kitaev magnetism," Journal of Physics: Condensed Matter 29, 493002 (2017).
M. Hermanns, I. Kimchi, and J. Knolle, "Physics of the Kitaev model: Fractionalization, dynamic correlations, and material connections," Annual Review of Condensed Matter Physics 9, 17-33 (2018).
Lukas Janssen and Matthias Vojta, "Heisenberg-Kitaev physics in magnetic fields," Journal of Physics: Condensed Matter 31, 423002 (2019).
Tomohiro Takayama, Jiří Chaloupka, Andrew Smerald, Giniyat Khaliullin, and Hidenori Takagi, "Spin-orbit-entangled electronic phases in 4d and 5d transition-metal compounds," Journal of the Physical Society of Japan 90, 062001 (2021).
Alexei Kitaev, "Anyons in an exactly solved model and beyond," Ann. Phys. (N. Y.) 321, 2-111 (2006), January Special Issue.
George Jackeli and Giniyat Khaliullin, "Mott insulators in the strong spin-orbit coupling limit: From Heisenberg to a quantum compass and Kitaev models," Phys. Rev. Lett. 102, 017205 (2009).
Yogesh Singh, S. Manni, J. Reuther, T. Berlijn, R. Thomale, W. Ku, S. Trebst, and P. Gegenwart, "Relevance of the Heisenberg-Kitaev model for the honeycomb lattice iridates A 2 IrO 3 ," Phys. Rev. Lett. 108, 127203 (2012).
S. K. Choi, R. Coldea, A. N. Kolmogorov, T. Lancaster, I. I. Mazin, S. J. Blundell, P. G. Radaelli, Yogesh Singh, P. Gegenwart, K. R. Choi, S.-W. Cheong, P. J. Baker, C. Stock, and J. Taylor, "Spin waves and revised crystal structure of honeycomb iridate Na 2 IrO 3 ," Phys. Rev. Lett. 108, 127204 (2012).
K. A. Modic, Tess E. Smidt, Itamar Kimchi, Nicholas P. Breznay, Alun Biffin, Sungkyun Choi, Roger D. Johnson, Radu Coldea, Pilanda Watkins-Curry, Gregory T. McCandless, Julia Y. Chan, Felipe Gandara, Z. Islam, Ashvin Vishwanath, Arkady Shekhter, Ross D. McDonald, and James G. Analytis, "Realization of a three-dimensional spin-anisotropic harmonic honeycomb iridate," Nat. Commun. 5, 4203 (2014).
Heung-Sik Kim, Vijay Shankar V., Andrei Catuneanu, and Hae-Young Kee, "Kitaev magnetism in honeycomb α-RuCl 3 with intermediate spin-orbit coupling," Phys. Rev. B 91, 241110(R) (2015).
J. A. Sears, M. Songvilay, K. W. Plumb, J. P. Clancy, Y. Qiu, Y. Zhao, D. Parshall, and Young-June Kim, "Magnetic order in α-RuCl 3 : A honeycomb-lattice quantum magnet with strong spin-orbit coupling," Phys. Rev. B 91, 144420 (2015).
Luke J. Sandilands, Yao Tian, Kemp W. Plumb, Young-June Kim, and Kenneth S. Burch, "Scattering continuum and possible fractionalized excitations in α-RuCl 3 ," Phys. Rev. Lett. 114, 147201 (2015).
R. D. Johnson, S. C. Williams, A. A. Haghighirad, J. Singleton, V. Zapf, P. Manuel, I. I. Mazin, Y. Li, H. O. Jeschke, R. Valentí, and R. Coldea, "Monoclinic crystal structure of α-RuCl 3 and the zigzag antiferromagnetic ground state," Phys. Rev. B 92, 235119 (2015).
A. Banerjee, C. A. Bridges, J.-Q. Yan, A. A. Aczel, L. Li, M. B. Stone, G. E. Granroth, M. D. Lumsden, Y. Yiu, J. Knolle, S. Bhattacharjee, D. L. Kovrizhin, R. Moessner, D. A. Tennant, D. G. Mandrus, and S. E. Nagler, "Proximate Kitaev quantum spin liquid behaviour in a honeycomb magnet," Nature Materials 15, 733 (2016).
Heung-Sik Kim and Hae-Young Kee, "Crystal structure and magnetism in α-RuCl 3 : An ab initio study," Phys. Rev. B 93, 155143 (2016).
P. Peter Stavropoulos, D. Pereira, and Hae-Young Kee, "Microscopic mechanism for a higher-spin Kitaev model," Phys. Rev. Lett. 123, 037203 (2019).
J. L. Lado and J. Fernández-Rossier, "On the origin of magnetic anisotropy in two dimensional CrI 3 ," 2D Mater. 4, 035002 (2017).
Changsong Xu, Junsheng Feng, Hongjun Xiang, and Laurent Bellaiche, "Interplay between Kitaev interaction and single ion anisotropy in ferromagnetic CrI 3 and CrGeTe 3 monolayers," npj Computational Materials 4, 57 (2018).
Inhee Lee, Franz G. Utermohlen, Daniel Weber, Kyusung Hwang, Chi Zhang, Johan van Tol, Joshua E. Goldberger, Nandini Trivedi, and P. Chris Hammel, "Fundamental spin interactions underlying the magnetic anisotropy in the Kitaev ferromagnet CrI 3 ," Phys. Rev. Lett. 124, 017201 (2020).
Jiří Chaloupka, George Jackeli, and Giniyat Khaliullin, "Kitaev-Heisenberg model on a honeycomb lattice: Possible exotic phases in iridium oxides A 2 IrO 3 ," Phys. Rev. Lett. 105, 027204 (2010).
Zigzag magnetic order in the iridium oxide Na 2 IrO 3. Jiří Chaloupka, George Jackeli, Giniyat Khaliullin, 10.1103/PhysRevLett.110.097204Phys. Rev. Lett. 11097204Jiří Chaloupka, George Jackeli, and Giniyat Khaliullin, "Zigzag magnetic order in the iridium oxide Na 2 IrO 3 ," Phys. Rev. Lett. 110, 097204 (2013).
Generic spin model for the honeycomb iridates beyond the Kitaev limit. Jeffrey G Rau, Eric Kin-Ho Lee, Hae-Young Kee, 10.1103/PhysRevLett.112.077204Phys. Rev. Lett. 11277204Jeffrey G. Rau, Eric Kin-Ho Lee, and Hae-Young Kee, "Generic spin model for the honeycomb iridates beyond the Kitaev limit," Phys. Rev. Lett. 112, 077204 (2014).
Trigonal distortion in the honeycomb iridates: Proximity of zigzag and spiral phases in Na 2 IrO 3. G Jeffrey, Hae-Young Rau, Kee, arXiv:1408.4811cond-mat.str-elJeffrey G. Rau and Hae-Young Kee, "Trigonal distortion in the honeycomb iridates: Proximity of zigzag and spiral phases in Na 2 IrO 3 ," arXiv:1408.4811 [cond-mat.str-el].
Challenges in design of Kitaev materials: Magnetic interactions from competing energy scales. M Stephen, Ying Winter, Harald O Li, Roser Jeschke, Valentí, 10.1103/PhysRevB.93.214431Phys. Rev. B. 93214431Stephen M. Winter, Ying Li, Harald O. Jeschke, and Roser Valentí, "Challenges in design of Kitaev materials: Magnetic interactions from competing energy scales," Phys. Rev. B 93, 214431 (2016).
Magnetization processes of zigzag states on the honeycomb lattice: Identifying spin models for α-RuCl 3 and Na 2 IrO 3. Lukas Janssen, Eric C Andrade, Matthias Vojta, 10.1103/PhysRevB.96.064430Phys. Rev. B. 9664430Lukas Janssen, Eric C. Andrade, and Matthias Vojta, "Magnetization processes of zigzag states on the honeycomb lattice: Identifying spin models for α-RuCl 3 and Na 2 IrO 3 ," Phys. Rev. B 96, 064430 (2017).
Gapless quantum spin liquid in a honeycomb γ magnet. Qiang Luo, Jize Zhao, Hae-Young Kee, Xiaoqun Wang, 10.1038/s41535-021-00356-znpj Quantum Materials. 657Qiang Luo, Jize Zhao, Hae-Young Kee, and Xiaoqun Wang, "Gapless quantum spin liquid in a honeycomb γ magnet," npj Quantum Materials 6, 57 (2021).
Rethinking α−rucl 3. P A Maksimov, A L Chernyshev, 10.1103/PhysRevResearch.2.033011Phys. Rev. Research. 233011P. A. Maksimov and A. L. Chernyshev, "Rethinking α−rucl 3 ," Phys. Rev. Research 2, 033011 (2020).
Magnetic anisotropy in the kitaev model systems na 2 iro 3 and rucl 3. Jiří Chaloupka, Giniyat Khaliullin, 10.1103/PhysRevB.94.064435Phys. Rev. B. 9464435Jiří Chaloupka and Giniyat Khaliullin, "Magnetic anisotropy in the kitaev model systems na 2 iro 3 and rucl 3 ," Phys. Rev. B 94, 064435 (2016).
Effective quantum pseudospin-1/2 model for yb pyrochlore oxides. Shigeki Onoda, 10.1088/1742-6596/320/1/012065Journal of Physics: Conference Series. 32012065Shigeki Onoda, "Effective quantum pseudospin-1/2 model for yb pyrochlore oxides," Journal of Physics: Conference Series 320, 012065 (2011).
Quantum Spin Ice. 10.1103/PHYSREVX.1.021002/FIGURES/6/MEDIUMarXiv:1107.0761Physical Review X. 1Quantum Spin Ice," Physical Review X 1, 1-10 (2011), arXiv:1107.0761.
Hidden symmetries of the extended kitaev-heisenberg model: Implications for the honeycomb-lattice iridates A 2 iro 3. Jiří Chaloupka, Giniyat Khaliullin, 10.1103/PhysRevB.92.024413Phys. Rev. B. 9224413Jiří Chaloupka and Giniyat Khaliullin, "Hidden symmetries of the extended kitaev-heisenberg model: Implications for the honeycomb-lattice iridates A 2 iro 3 ," Phys. Rev. B 92, 024413 (2015).
Angular dependence of ferromagnetic resonance in exchange-coupled Co/Ru/Co trilayer structures. Z Zhang, L Zhou, P E Wigen, K Ounadjela, 10.1103/PHYSREVB.50.6094Physical review. B, Condensed matter. 50Zhang Z, Zhou L, Wigen PE, and Ounadjela K, "Angular dependence of ferromagnetic res- onance in exchange-coupled Co/Ru/Co trilayer structures," Physical review. B, Condensed matter 50, 6094-6112 (1994).
Dynamic exchange coupling in magnetic bilayers. B Heinrich, Y Tserkovnyak, G Woltersdorf, A Brataas, R Urban, G E Bauer, 10.1103/PHYSREVLETT.90.187601Physical review letters. 904Heinrich B, Tserkovnyak Y, Woltersdorf G, Brataas A, Urban R, and Bauer GE, "Dynamic exchange coupling in magnetic bilayers," Physical review letters 90, 4 (2003).
Ferromagnetic resonance study of the exchange bias field in NiFe,FeMn,NiFe trilayers. V P Nascimento, E Saitovitch, F Pelegrini, L C Figueiredo, A Biondo, E C Passamani, 10.1063/1.2176334Journal of Applied Physics. 99V. P. Nascimento, E. Baggio Saitovitch, F. Pelegrini, L. C. Figueiredo, A. Biondo, and E. C. Passamani, "Ferromagnetic resonance study of the exchange bias field in NiFe,FeMn,NiFe tri- layers," Journal of Applied Physics 99, 08C108 (2006).
In situ ferromagnetic resonance in coupled ultrathin trilayers with perpendicularly oriented easy axes. K Lenz, Kosubek, Toliński, K Lindner, Baberschke, 10.1088/0953-8984/15/43/003Journal of Physics: Condensed Matter. 157175K Lenz, E Kosubek, T Toliński, J Lindner, and K Baberschke, "In situ ferromagnetic resonance in coupled ultrathin trilayers with perpendicularly oriented easy axes," Journal of Physics: Condensed Matter 15, 7175 (2003).
A ferromagnetic resonance study of NiFe alloy thin films. M Díaz De Sihues, C A Durante-Rincón, J R Fermin, 10.1016/J.JMMM.2007.02.181Journal of Magnetism and Magnetic Materials. 316M. Díaz de Sihues, C. A. Durante-Rincón, and J. R. Fermin, "A ferromagnetic resonance study of NiFe alloy thin films," Journal of Magnetism and Magnetic Materials 316 (2007), 10.1016/J.JMMM.2007.02.181.
A study of the magnetic resonance in a single-crystal Ni(50.47)Mn(28.17)Ga(21.36) alloy. V G Gavriljuk, A Dobrinsky, B D Shanina, S P Kolesnik, 10.1088/0953-8984/18/32/010Journal of physics. Condensed matter : an Institute of Physics journal. 18Gavriljuk VG, Dobrinsky A, Shanina BD, and Kolesnik SP, "A study of the magnetic resonance in a single-crystal Ni(50.47)Mn(28.17)Ga(21.36) alloy," Journal of physics. Condensed matter : an Institute of Physics journal 18, 7613-7627 (2006).
Counter-rotating spiral order in three-dimensional iridates: Signature of hidden symmetry in the kitaev-Γ model. P , Peter Stavropoulos, Andrei Catuneanu, Hae-Young Kee, 10.1103/PhysRevB.98.104401Phys. Rev. B. 98104401P. Peter Stavropoulos, Andrei Catuneanu, and Hae-Young Kee, "Counter-rotating spiral order in three-dimensional iridates: Signature of hidden symmetry in the kitaev-Γ model," Phys. Rev. B 98, 104401 (2018).
Spin Waves in 3d Metals. G Shirane, V J Minkiewicz, R Nathans, 10.1063/1.2163453Journal of Applied Physics. 39383G. Shirane, V. J. Minkiewicz, and R. Nathans, "Spin Waves in 3d Metals," Journal of Applied Physics 39, 383 (1968).
Dynamics of an ¡span class. Y Endoh, G Shirane, R J Birgeneau, Peter M Richards, S L Holt, 10.1103/PhysRevLett.32.170Physical Review Letters. 32170Y. Endoh, G. Shirane, R. J. Birgeneau, Peter M. Richards, and S. L. Holt, "Dynamics of an ¡span class," Physical Review Letters 32, 170 (1974).
Spin-Wave Excitations Evidencing the Kitaev Interaction in Single Crystalline α-RuCl3. Jinghui Kejing Ran, Wei Wang, Wang, Yang Zhao, Xiao Dong, Song Ren, Shichao Bao, Zhen Li, Yuan Ma, Youtian Gan, J T Zhang, Park, S Deng, Danilkin, Li Shun, Jian Xin Yu, Jinsheng Li, Wen, 10.1103/PHYSREVLETT.118.107203/FIGURES/4/MEDIUMarXiv:1702.04920Physical Review Letters. 118107203Kejing Ran, Jinghui Wang, Wei Wang, Zhao Yang Dong, Xiao Ren, Song Bao, Shichao Li, Zhen Ma, Yuan Gan, Youtian Zhang, J. T. Park, Guochu Deng, S. Danilkin, Shun Li Yu, Jian Xin Li, and Jinsheng Wen, "Spin-Wave Excitations Evidencing the Kitaev Interaction in Single Crystalline α-RuCl3," Physical Review Letters 118, 107203 (2017), arXiv:1702.04920.
Majorana fermions in the Kitaev quantum spin system α-RuCl3. Sang Youn Seung Hwan Do, Junki Park, Joji Yoshitake, Yukitoshi Nasu, Yong Motome, D T Seung Kwon, D J Adroja, Kyoo Voneshen, T H Kim, J H Jang, Yong Park, Sungdae Choi, Ji, 10.1038/nphys4264Nature Physics. 1311Seung Hwan Do, Sang Youn Park, Junki Yoshitake, Joji Nasu, Yukitoshi Motome, Yong Seung Kwon, D. T. Adroja, D. J. Voneshen, Kyoo Kim, T. H. Jang, J. H. Park, Kwang Yong Choi, and Sungdae Ji, "Majorana fermions in the Kitaev quantum spin system α-RuCl3," Nature Physics 2017 13:11 13, 1079-1084 (2017).
. Arnab Banerjee, Paula Lampen-Kelley, Johannes Knolle, Christian Balz, Adam Anthony Aczel, Barry Winn, Yaohua Liu, Daniel Pajerowski, Jiaqiang Yan, Craig A Bridges, Andrei T Savici, Bryan C Chakoumakos, Mark D Lumsden, David Alan Tennant, Roderich Moessner, David GArnab Banerjee, Paula Lampen-Kelley, Johannes Knolle, Christian Balz, Adam Anthony Aczel, Barry Winn, Yaohua Liu, Daniel Pajerowski, Jiaqiang Yan, Craig A. Bridges, Andrei T. Savici, Bryan C. Chakoumakos, Mark D. Lumsden, David Alan Tennant, Roderich Moessner, David G.
Excitations in the field-induced quantum spin liquid state of α-rucl3. Stephen E Mandrus, Nagler, 10.1038/s41535-018-0079-2npj Quantum Materials. 3Mandrus, and Stephen E. Nagler, "Excitations in the field-induced quantum spin liquid state of α-rucl3," npj Quantum Materials 3, 8 (2018).
Heisenberg-kitaev model in a magnetic field: 1/s expansion. Pedro M Cônsoli, Lukas Janssen, Matthias Vojta, Eric C Andrade, Phys. Rev. B. 102155134Pedro M. Cônsoli, Lukas Janssen, Matthias Vojta, and Eric C. Andrade, "Heisenberg-kitaev model in a magnetic field: 1/s expansion," Phys. Rev. B 102, 155134 (2020).
Magnetic anisotropy in spin-3/2 with heavy ligand in honeycomb mott insulators: Application to cri 3. P Peter Stavropoulos, Xiaoyu Liu, Hae-Young Kee, Phys. Rev. Research. 313216P. Peter Stavropoulos, Xiaoyu Liu, and Hae-Young Kee, "Magnetic anisotropy in spin-3/2 with heavy ligand in honeycomb mott insulators: Application to cri 3 ," Phys. Rev. Research 3, 013216 (2021).
Magnetic field effect on topological spin excitations in cri 3. Lebing Chen, Jae-Ho Chung, Matthew B Stone, Alexander I Kolesnikov, Barry Winn, V Ovidiu Garlea, Douglas L Abernathy, Bin Gao, Mathias Augustin, Elton J G Santos, Pengcheng Dai, Phys. Rev. X. 1131047Lebing Chen, Jae-Ho Chung, Matthew B. Stone, Alexander I. Kolesnikov, Barry Winn, V. Ovidiu Garlea, Douglas L. Abernathy, Bin Gao, Mathias Augustin, Elton J. G. Santos, and Pengcheng Dai, "Magnetic field effect on topological spin excitations in cri 3 ," Phys. Rev. X 11, 031047 (2021).
An iteration method for the solution of the eigenvalue problem of linear differential and integral operators. C Lanczos, 10.6028/JRES.045.026Journal of research of the National Bureau of Standards. 45282C. Lanczos, "An iteration method for the solution of the eigenvalue problem of linear differential and integral operators," Journal of research of the National Bureau of Standards 45, 282 (1950).
Exact diagonalization techniques. Alexander Weiße, Holger Fehske, 10.1007/978-3-540-74686-7_18Computational Many-Particle Physics. H. Fehske, R. Schneider, and A. WeißeBerlin Heidelberg; Berlin, HeidelbergSpringerAlexander Weiße and Holger Fehske, "Exact diagonalization techniques," in Computational Many-Particle Physics, edited by H. Fehske, R. Schneider, and A. Weiße (Springer Berlin Heidelberg, Berlin, Heidelberg, 2008) pp. 529-544.
S W , W.) Lovesey, Theory of neutron scattering from condensed matter. StephenClarendon PressS. W. (Stephen W.) Lovesey, Theory of neutron scattering from condensed matter (Clarendon Press, 1984).
Field Dependence of the Intrinsic Domain Magnetization of a Ferromagnet. T Holstein, H Primakoff, 10.1103/PhysRev.58.1098Physical Review. 581098T. Holstein and H. Primakoff, "Field Dependence of the Intrinsic Domain Magnetization of a Ferromagnet," Physical Review 58, 1098 (1940).
Diagonalization of the quadratic boson hamiltonian. J H P Colpa, 10.1016/0378-4371(78)90160-7Physica A: Statistical Mechanics and its Applications. 93J. H.P. Colpa, "Diagonalization of the quadratic boson hamiltonian," Physica A: Statistical Mechanics and its Applications 93, 327-353 (1978).
Ab initio spin-strain coupling parameters of divacancy qubits in silicon carbide

Péter Udvarhelyi
Department of Biological Physics, Loránd Eötvös University, Pázmány Péter sétány 1/A, H-1117 Budapest, Hungary
Wigner Research Centre for Physics, Hungarian Academy of Sciences, P.O. Box 49, H-1525 Budapest, Hungary

Adam Gali
Wigner Research Centre for Physics, Hungarian Academy of Sciences, P.O. Box 49, H-1525 Budapest, Hungary
Department of Atomic Physics, Budapest University of Technology and Economics, Budafoki út 8, H-1111 Budapest, Hungary

(Dated: May 15, 2018)
DOI: 10.1103/PhysRevApplied.10.054010 · arXiv: 1805.04706

Abstract: Cubic silicon carbide is an excellent platform for the integration of defect qubits into established wafer-scale device architectures for quantum information and sensing applications; its divacancy qubit, which is similar to the negatively charged nitrogen-vacancy (NV) center in diamond, has favorable coherence properties. We demonstrate by means of density functional theory calculations that the divacancy in 3C SiC has superior spin-stress coupling parameters and stress sensitivity for nanoscale, quantum-enhanced photonic, optoelectronic, and optomechanical devices.
I. INTRODUCTION
Silicon carbide (SiC) is an emerging host for qubit defects [1-6]. The main advantages of SiC as a host material are its industrial-scale availability, high-quality single-crystal growth at substrate scale, and epitaxial thin-layer growth on silicon wafers [7]. Furthermore, advanced microfabrication techniques are already available for the potential integration of spin-qubit sensors into semiconductor devices. In particular, divacancy spins exhibit optical addressability and long coherence times in the most common polytypes of SiC: cubic 3C and hexagonal 4H and 6H [8]. In this paper we focus on the 3C polytype and its neutral divacancy defect, consisting of neighboring silicon and carbon vacancies (see Fig. 1). The divacancy qubit in 3C SiC is especially interesting because nanoelectromechanical sensors (NEMS) can be produced from thin films of 3C SiC grown on silicon wafers [9], and it has been shown [10] that divacancy qubits can be engineered into these 3C SiC thin films. The fingerprints of the 3C divacancy are the Ky5 electron paramagnetic resonance (EPR) center [11] with S = 1 spin and the L3 optically detected magnetic resonance (ODMR) center [12] with a near-infrared (NIR) photoluminescence line at 1.12 eV. The defect has a remarkable Hahn-echo coherence time of 0.9 ms in 3C SiC [13], similar to the 1.2 ms coherence time of divacancies in 4H SiC [14], observed in samples with natural isotope abundance at 20 K. We conclude that the divacancy NIR color center has spin and optical properties similar to those of the negatively charged nitrogen-vacancy (NV) center in diamond [1, 13], even surpassing its coherence time of 0.6 ms [13, 15]. These 3C divacancy qubits in NEMS can be harnessed to measure strain at the nanoscale.
Although the NV center is presently the most studied nanoscale strain sensor [16-25], the diamond host suffers from difficulties in large-scale crystal growth and fabrication. Thus, finding an alternative defect-qubit nanoscale sensor with similar or superior sensitivity in a technologically mature material, such as SiC, is of high importance. The straightforward production of 3C SiC thin films hosting divacancy qubits with favorable coherence properties makes the 3C divacancy a very attractive candidate for realizing strain sensors at the nanoscale. However, the strengths of the spin-strain couplings of the divacancy qubit in 3C SiC, which are critical parameters for the sensitivity of future pressure and electromechanical sensors, have not been determined so far. We note that, parallel to our study, qubit control by alternating pressure and electric fields has been demonstrated for 4H SiC divacancies, a phenomenon that should rely on considerable spin-strain coupling in these qubits [26]; this further strengthens the need to study the spin-strain coupling parameters of divacancies in SiC.
In this paper, we calculate the spin-strain coupling parameters of the divacancy qubit in 3C SiC by means of first-principles calculations and estimate the stress sensitivity using realistic key parameters of the divacancy qubit and the SiC host. We show that divacancy qubits in SiC generally have larger spin-stress coupling parameters than the NV center in diamond, a consequence of the lower stiffness of SiC. The sensitivity of SiC divacancy qubits can therefore be harnessed to realize nanoscale, quantum-enhanced photonic, optoelectronic, and optomechanical devices on a platform that is compatible with semiconductor technology and electronics.
II. METHODS
A. Spin-strain Hamiltonian
Recent works have described the spin-strain coupling parameters of the NV center in diamond [16-25]. Since the NV center and the divacancy in 3C SiC share the same symmetry, the spin-strain Hamiltonian developed for the NV center in diamond can be directly applied to the divacancy in 3C SiC. Very recently, we advanced and completed the theory of the spin-strain Hamiltonian for the NV center [27], which has the form
H_ε = H_{ε0} + H_{ε1} + H_{ε2},  (1a)

H_{ε0}/h = [ h_{41} (ε_{xx} + ε_{yy}) + h_{43} ε_{zz} ] S_z^2,  (1b)

H_{ε1}/h = [ (1/2) h_{26} ε_{zx} − (1/2) h_{25} (ε_{xx} − ε_{yy}) ] {S_x, S_z} + (1/2) ( h_{26} ε_{yz} + h_{25} ε_{xy} ) {S_y, S_z},  (1c)

H_{ε2}/h = [ (1/2) h_{16} ε_{zx} − (1/2) h_{15} (ε_{xx} − ε_{yy}) ] (S_y^2 − S_x^2) + (1/2) ( h_{16} ε_{yz} + h_{15} ε_{xy} ) {S_x, S_y},  (1d)
where ε_{ij} = (∂u_i/∂x_j + ∂u_j/∂x_i)/2 are the Cartesian elements of the strain tensor and u(r) is the displacement field. The spin-strain coupling parameters are labeled by h. The spin-stress Hamiltonian has the same symmetry-allowed form, with coupling parameters labeled by g.
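As an illustration of Eq. (1) — not part of the original work — the sketch below assembles H_ε/h as an explicit 3×3 matrix for the S = 1 ground state from the standard spin-1 operators, using the 3C coupling values of Table I. A pure axial strain ε_zz then shifts the m = ±1 sublevels by h_43 ε_zz, as Eq. (1b) predicts.

```python
import numpy as np

# Spin-1 operators (units of hbar), in the |+1>, |0>, |-1> basis.
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

def anti(A, B):
    """Anticommutator {A, B}."""
    return A @ B + B @ A

def H_strain(eps, h41, h43, h25, h26, h15, h16):
    """Spin-strain Hamiltonian H_eps/h of Eq. (1) for a symmetric 3x3 strain
    tensor eps; couplings in MHz/strain give the result in MHz."""
    exx, eyy, ezz = eps[0, 0], eps[1, 1], eps[2, 2]
    eyz, ezx, exy = eps[1, 2], eps[2, 0], eps[0, 1]
    H0 = (h41 * (exx + eyy) + h43 * ezz) * (Sz @ Sz)                   # Eq. (1b)
    H1 = ((0.5 * h26 * ezx - 0.5 * h25 * (exx - eyy)) * anti(Sx, Sz)   # Eq. (1c)
          + 0.5 * (h26 * eyz + h25 * exy) * anti(Sy, Sz))
    H2 = ((0.5 * h16 * ezx - 0.5 * h15 * (exx - eyy)) * (Sy @ Sy - Sx @ Sx)
          + 0.5 * (h16 * eyz + h15 * exy) * anti(Sx, Sy))              # Eq. (1d)
    return H0 + H1 + H2

# Pure axial strain eps_zz = 1e-3 with the Table I couplings:
eps = np.zeros((3, 3))
eps[2, 2] = 1e-3
H = H_strain(eps, h41=-4700, h43=2530, h25=-900, h26=-1760, h15=3200, h16=1320)
shift = np.linalg.eigvalsh(H).max()   # m = ±1 shift, = h43 * eps_zz = 2.53 MHz
```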
To calculate the spin-stress coupling parameters from the spin-strain coupling parameters, we use the stiffness tensor C of bulk 3C SiC, with elements in the cubic reference frame C_11 = 390 GPa, C_12 = 142 GPa, C_44 = 256 GPa (experimental data derived by Lambrecht et al. [28] from the measurements of Feldman et al. [29]). First, we transform the stiffness tensor to the defect frame; we denote the resulting 6 × 6 stiffness matrix in Voigt notation as C. To convert the spin-strain Hamiltonian, Eq. (1), to the spin-stress Hamiltonian, we express the strain components in Eq. (1) in terms of stress components via ε = C⁻¹σ, where ε = (ε_xx, ε_yy, ε_zz, 2ε_yz, 2ε_zx, 2ε_xy) and σ = (σ_xx, σ_yy, σ_zz, σ_yz, σ_zx, σ_xy) in Voigt notation.
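The ε = C⁻¹σ step can be sketched as follows for the cubic stiffness constants quoted above. For brevity this sketch stays in the cubic crystal frame, whereas the paper first rotates C into the defect frame; the unit stress along x is an arbitrary illustrative load.

```python
import numpy as np

# Cubic 3C SiC stiffness constants (GPa), 6x6 Voigt matrix, cubic reference frame.
C11, C12, C44 = 390.0, 142.0, 256.0
C = np.array([
    [C11, C12, C12, 0.0, 0.0, 0.0],
    [C12, C11, C12, 0.0, 0.0, 0.0],
    [C12, C12, C11, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, C44, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, C44, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, C44],
])
S = np.linalg.inv(C)   # compliance matrix (1/GPa)

# Strain produced by a 1 GPa uniaxial stress along x:
sigma = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0])   # (sxx, syy, szz, syz, szx, sxy)
eps = S @ sigma                                     # (exx, eyy, ezz, 2eyz, 2ezx, 2exy)
# eps[0] equals the textbook cubic compliance S11 = (C11+C12)/((C11-C12)(C11+2C12)).
```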
B. Ab initio spin-strain coupling parameters
We determined the spin-strain coupling parameters using density functional theory (DFT). We applied DFT for the electronic-structure calculations and geometry optimization, using the PBE functional [30] in the plane-wave-based Vienna Ab initio Simulation Package (VASP) [31-34]. The core electrons were treated in the projector augmented-wave (PAW) formalism [35]. The calculations were performed with a 600 eV plane-wave cutoff energy. The divacancy in bulk 3C SiC was modeled in a 512-atom simple cubic supercell within the Γ-point approximation. We use a negative sign convention for compressive strain. Mechanical strain, described by the strain tensor ε, was modeled by deforming the cubic supercell: the edge vectors were obtained by transforming the nondeformed edge vectors with the matrix 1 + ε in the cubic reference frame, and the atomic positions were then allowed to relax. For each strain configuration, the elements of the 3 × 3 zero-field splitting matrix D, which defines the ground-state spin Hamiltonian via H = Sᵀ·D·S, were calculated using the VASP implementation by Martijn Marsman within the PAW formalism [36]. We calculated the deformed supercells at several strain values and applied a linear regression to extract the coupling-strength parameters, as explained in Ref. 27.
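The final regression step — reading a coupling constant off as a slope — can be illustrated with synthetic data standing in for the DFT outputs. The D-matrix values below are hypothetical (the true inputs are the VASP zero-field splitting tensors); the slope is seeded with the h_43 value of Table I plus small noise, and the fit recovers it.

```python
import numpy as np

# Hypothetical stand-in for DFT output: a D-matrix element responding linearly
# to the applied strain eps_zz with slope h43, plus small numerical noise (MHz).
rng = np.random.default_rng(0)
h43_true = 2530.0                                 # MHz/strain (Table I)
strains = np.linspace(-2e-3, 2e-3, 9)             # applied eps_zz values
Dzz = h43_true * strains + rng.normal(0.0, 0.05, strains.size)

# Linear regression: the slope is the coupling parameter,
# the intercept is the zero-strain offset.
h43_fit, offset = np.polyfit(strains, Dzz, 1)
```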
In order to test the accuracy of our method, we studied the hh divacancy qubit in 4H SiC, for which experimental data were available [37]. Our results show good agreement with the observed spin-strain couplings (see Appendix A).
III. RESULTS AND DISCUSSION
The ab initio spin-strain and the derived spin-stress coupling parameters of the neutral divacancy in 3C SiC are summarized in Table I. The h_25 and h_26 couplings, which are responsible for flipping the electron spin, are comparable to the other coupling parameters. The same holds for the hh divacancy in 4H SiC (see Appendix A), which explains the recently observed electromechanical driving of these spins [26]. Comparing these results with our recent data on the NV center [27], we find that the 3C divacancy exhibits slightly smaller spin-strain coupling parameters but larger spin-stress coupling parameters for most types of distortion than the diamond NV center. This is caused by the smaller stiffness parameters of 3C SiC compared to those of diamond. Our results demonstrate that the mechanical properties of the host material can strongly affect the response of embedded qubits to external stress.
A. Stress sensitivity of 3C divacancy based on ODMR readout
We discuss here how the spin-stress coupling of the 3C divacancy can be harnessed in nanoscale sensing applications. The most common readout mechanism of defect spins is the ODMR method. In this case, the shot-noise-limited sensitivity for sensing magnetic fields, electric fields, temperature, and strain in a Hahn echo measurement is generally written in the form [38]
η = 1 / ( 4 g C √(β T_2) ),  (2)
where g is the coupling parameter to spin, T 2 is the homogeneous spin coherence time, and C is the fluorescence readout contrast. Here we approximated the measurement time and free precession time by T 2 . The contrast C is defined as
C = (p_0 − p_1) / (p_0 + p_1),  (3)
with p_0 and p_1 the detected photon counts in the bright and dark states, respectively. β is the average fluorescence intensity, which is approximated by p_0. The truly intrinsic qubit parameter entering η is the coupling parameter g; the photon counts and the T_2 time depend on the quality and shape of the host material and on other experimental conditions. The off-resonant readout contrast of an isolated 3C divacancy is about C = 7.5%, the saturation photon count rate is 26 kcts/s, and the spin coherence time is T_2 = 0.9 ms in a nearly dopant-free crystal at 20 K [13]. We note that the quality of 3C SiC samples has not yet reached that of 4H SiC samples, because 4H SiC is the polytype employed in SiC semiconductor devices, which has driven its improvement. Further reduction of the nitrogen donor concentration in 3C SiC to the typical values of high-quality 4H SiC would bring the coherence times of divacancies in the two polytypes close together, and the ODMR contrast of the divacancy in 3C SiC can be further improved by optimizing the excitation wavelength and lowering the background from other defects, as for divacancy qubits in 4H SiC. We therefore assume that such high-quality 3C SiC samples are within reach, and we estimated the sensitivity of the 3C divacancy using the 4H SiC values C = 15% and T_2 = 1.2 ms [14]. We assumed the same ODMR readout time for the 3C divacancy as reported for the diamond NV center, about 350 ns [39], from which one can estimate the total photon count during a single readout event. With all the parameters entering Eq. (2), we find η ∼ 10⁻⁵ GPa Hz⁻¹ᐟ² for 3C divacancy qubits.
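Under one plausible reading of Eq. (2) — β taken as the mean photon count per readout window (count rate × readout time), T_2 in seconds — the quoted numbers reproduce the stated order of magnitude. This is a back-of-the-envelope sketch with that assumption made explicit, not the authors' exact evaluation.

```python
import math

# Assumed inputs from the text (3C divacancy with projected 4H-quality samples):
g = 8.1e6        # |g41| in Hz/GPa (8.1 MHz/GPa, Table I)
contrast = 0.15  # ODMR readout contrast C
rate = 26e3      # saturation photon count rate (counts/s)
t_ro = 350e-9    # readout window (s), NV-center value assumed to carry over
T2 = 1.2e-3      # Hahn-echo coherence time (s)

beta = rate * t_ro                                   # photons per readout (assumption)
eta = 1.0 / (4 * g * contrast * math.sqrt(beta * T2))  # GPa / sqrt(Hz)
# eta comes out at a few 1e-5 GPa/sqrt(Hz), matching eta ~ 1e-5 in the text.

# Isotopic purification (T2 -> 3.6 ms) improves eta by sqrt(3):
eta_iso = eta / math.sqrt(3.6e-3 / T2)
```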
We illustrate the results as blue columns in Figs. 2(a) and 2(c), where we plot the inverse of η; the larger the value (the height of the column), the better the sensitivity. In particular, we plot the inverse sensitivities for the g_43 and g_41 coupling parameters, which correspond to pressure along the symmetry axis and in the plane perpendicular to it, respectively. We note that the former corresponds to the c axis of the hexagonal 4H SiC lattice in Appendix A. We find greater sensitivity for g_41 than for g_43, but of the same order of magnitude.
We note that the T_2 time can be greatly improved by isotope engineering of the host material, i.e., by removing the nuclear-spin noise, as demonstrated for the NV center in diamond [40], where T_2 = 0.6 ms [41, 42] was increased to T_2 = 1.8 ms. Tight control of isotope engineering has already been demonstrated for 4H SiC [43] and can readily be transferred to 3C SiC. We estimate a similar improvement of the 3C divacancy T_2 time in going from naturally abundant to isotopically purified SiC samples, resulting in T_2 = 3.6 ms. The corresponding results are shown as blue columns in Figs. 2(b) and 2(d), which show an improvement in sensitivity of almost a factor of two.
We finally compare the sensitivities of the 3C divacancy to those of the diamond NV center. For the NV center in diamond, the off-resonant contrast is about 30% and the photon count rate is about 28 kcts/s in a mechanical-resonator experimental setup, so we estimated the sensitivity with these and the previously mentioned parameters. We find similar but slightly lower sensitivities for the diamond NV center (red columns in Fig. 2) than for the 3C divacancy. In particular, the sensitivity for the g_43 coupling parameter of the divacancy in isotopically purified 3C SiC is clearly superior to that of the isotopically purified diamond NV center.
IV. CONCLUSION
We have calculated the spin-strain coupling parameters of the divacancy center in 3C SiC. In comparison to the NV center, the most promising center in the field of nanoscale sensing, the intrinsic spin-stress coupling parameters of the divacancy are superior. The actual sensitivity of the 3C divacancy depends on the quality of the 3C SiC crystal, and we estimated that improving the crystal quality leads to favorable sensitivities for 3C divacancy nanosensors. Non-optical spin-readout techniques such as photocurrent detection of magnetic resonance (PDMR) [44] may further improve the sensitivity for both centers by substituting the low photon collection efficiency with a high photocurrent efficiency. In particular, by realizing PDMR on the 3C divacancy on a Si substrate, an all-silicon-based electronic chip sensor could be constructed for nanoscale measurements of pressure and electric fields.
FIG. 2. Inverse stress sensitivity (1/η) comparison of the negatively charged nitrogen-vacancy center in diamond (red columns) and the divacancy in silicon carbide (blue columns). Estimated values for naturally abundant crystals (nat) and isotopically purified samples (iso) are shown in panels (a) and (c) and in panels (b) and (d), respectively, with the corresponding T_2 times indicated. The other parameters used in the calculation of 1/η are discussed in the text.
ACKNOWLEDGEMENT
We thank for the support of NKFIH within the Quantum Technology National Excellence Program (Project No. 2017-1.2.1-NKP-2017-00001).
Appendix A: 4H-SiC divacancy spin-strain coupling parameters

The DFT-calculated spin-strain coupling parameters of the hh divacancy (PL1) in 4H-SiC are summarized in Table II; the defect was modeled in a 576-atom supercell with Γ-point sampling. The experimental ODMR shift for perpendicular strain was reported as (2-4) GHz/strain [37], corresponding to our calculated h_41 = 5 GHz/strain coupling parameter. The good agreement validates our DFT method for calculating the divacancy's spin-strain coupling parameters in 3C-SiC as well. For the stress conversion, we used the stiffness tensor elements of 4H SiC in the cubic reference frame: C_11 = 507 GPa, C_12 = 108 GPa, C_13 = 52 GPa, C_33 = 547 GPa, C_44 = 159 GPa [45].
FIG. 1. Divacancy in silicon carbide (cubic Bravais cell shown in black). {XYZ} denotes the crystal reference frame and {xyz} the local reference frame of the center. The deformed cell is visualized in red for ε_xx = 0.1 strain.
TABLE I. Spin-strain (h) and spin-stress (g) coupling parameters of the divacancy in 3C SiC as obtained from density functional theory. Results are rounded to significant digits.

parameter   h (MHz/strain)   g (MHz/GPa)
h43, g43     2530 ± 30        6.01 ± 0.07
h41, g41    −4700 ± 200      −8.1 ± 0.3
h25, g25     −900 ± 100      −0.7 ± 0.3
h26, g26    −1760 ± 20       −5.00 ± 0.1
h15, g15     3200 ± 200       7.1 ± 0.5
h16, g16     1320 ± 50        1.3 ± 0.3
...tric fields, temperature and strain in a Hahn echo measurement is generally written in the form of Ref. 38.
TABLE II. Spin-strain (h) and spin-stress (g) coupling-strength parameters of the hh divacancy in 4H-SiC calculated from density functional theory. Results are rounded to significant digits.

parameter    h (MHz/strain)    g (MHz/GPa)
h43, g43     3110 ± 30         7.33 ± 0.06
h41, g41     -4940 ± 60        -8.65 ± 0.09
h25, g25     1130 ± 40         2.8 ± 0.1
h26, g26     -1580 ± 30        -5.0 ± 0.1
h15, g15     7600 ± 300        18.9 ± 0.6
h16, g16     1600 ± 60         5.0 ± 0.2
1. A. Gali, Phys. Status Solidi B 248, 1337 (2011).
2. J. R. Weber, W. F. Koehl, J. B. Varley, A. Janotti, B. B. Buckley, C. G. Van de Walle, and D. D. Awschalom, Proc. Natl. Acad. Sci. U.S.A. 107, 8513 (2010).
3. W. F. Koehl, B. B. Buckley, F. J. Heremans, G. Calusine, and D. D. Awschalom, Nature 479, 84 (2011).
4. P. G. Baranov, A. P. Bundakova, A. A. Soltamova, S. B. Orlinskii, I. V. Borovykh, R. Zondervan, R. Verberk, and J. Schmidt, Phys. Rev. B 83, 125203 (2011).
5. D. Riedel, F. Fuchs, H. Kraus, S. Väth, A. Sperlich, V. Dyakonov, A. A. Soltamova, P. G. Baranov, V. A. Ilyin, and G. V. Astakhov, Phys. Rev. Lett. 109, 226402 (2012).
6. H. Kraus, V. A. Soltamov, D. Riedel, S. Väth, F. Fuchs, A. Sperlich, P. G. Baranov, V. Dyakonov, and G. V. Astakhov, Nat. Phys. 10, 157 (2013).
7. C. A. Zorman, A. J. Fleischman, A. S. Dewa, M. Mehregany, C. Jacob, S. Nishino, and P. Pirouz, J. Appl. Phys. 78, 5136 (1995).
8. A. L. Falk, B. B. Buckley, G. Calusine, W. F. Koehl, V. V. Dobrovitski, A. Politi, C. A. Zorman, P. X. L. Feng, and D. D. Awschalom, Nat. Commun. 4, 1819 (2013).
9. Y. T. Yang, K. L. Ekinci, X. M. H. Huang, L. M. Schiavone, M. L. Roukes, C. A. Zorman, and M. Mehregany, Appl. Phys. Lett. 78, 162 (2001).
10. G. Calusine, A. Politi, and D. D. Awschalom, Appl. Phys. Lett. 105, 011123 (2014).
11. V. Bratus', R. Melnik, S. Okulov, V. Rodionov, B. Shanina, and M. Smoliy, Physica B 404, 4739 (2009).
12. N. T. Son, E. Sörman, W. M. Chen, C. Hallin, O. Kordina, B. Monemar, E. Janzén, and J. L. Lindström, Phys. Rev. B 55, 2863 (1997).
13. D. J. Christle, P. V. Klimov, C. F. de las Casas, K. Szász, V. Ivády, V. Jokubavicius, J. Ul Hassan, M. Syväjärvi, W. F. Koehl, T. Ohshima, N. T. Son, E. Janzén, A. Gali, and D. D. Awschalom, Phys. Rev. X 7, 021046 (2017).
14. D. J. Christle, A. L. Falk, P. Andrich, P. V. Klimov, J. U. Hassan, N. Son, E. Janzén, T. Ohshima, and D. D. Awschalom, Nat. Mater. 14, 160 (2014).
15. P. L. Stanwix, L. M. Pham, J. R. Maze, D. Le Sage, T. K. Yeung, P. Cappellaro, P. R. Hemmer, A. Yacoby, M. D. Lukin, and R. L. Walsworth, Phys. Rev. B 82, 201201 (2010).
16. J. Teissier, A. Barfuss, P. Appel, E. Neu, and P. Maletinsky, Phys. Rev. Lett. 113, 020503 (2014).
17. A. Barfuss, J. Teissier, E. Neu, A. Nunnenkamp, and P. Maletinsky, Nat. Phys. 11, 820 (2015).
18. P. Ovartchaiyapong, K. W. Lee, B. A. Myers, and A. C. B. Jayich, Nat. Commun. 5, 4429 (2014).
19. M. S. J. Barson, P. Peddibhotla, P. Ovartchaiyapong, K. Ganesan, R. L. Taylor, M. Gebert, Z. Mielens, B. Koslowski, D. A. Simpson, L. P. McGuinness, J. McCallum, S. Prawer, S. Onoda, T. Ohshima, A. C. Bleszynski Jayich, F. Jelezko, N. B. Manson, and M. W. Doherty, Nano Lett. 17, 1496 (2017).
20. E. R. MacQuarrie, T. A. Gosavi, S. A. Bhave, and G. D. Fuchs, Phys. Rev. B 92, 224419 (2015).
21. E. R. MacQuarrie, M. Otten, S. K. Gray, and G. D. Fuchs, Nat. Commun. 8, 14358 (2017).
22. E. R. MacQuarrie, T. A. Gosavi, A. M. Moehle, N. R. Jungwirth, S. A. Bhave, and G. D. Fuchs, Optica 2, 233 (2015).
23. D. A. Golter, T. Oo, M. Amezcua, K. A. Stewart, and H. Wang, Phys. Rev. Lett. 116, 143602 (2016).
24. D. A. Golter, T. Oo, M. Amezcua, I. Lekavicius, K. A. Stewart, and H. Wang, Phys. Rev. X 6, 041060 (2016).
25. S. Meesala, Y.-I. Sohn, H. A. Atikian, S. Kim, M. J. Burek, J. T. Choy, and M. Lončar, Phys. Rev. Applied 5, 034010 (2016).
26. S. J. Whiteley, G. Wolfowicz, C. P. Anderson, A. Bourassa, H. Ma, M. Ye, G. Koolstra, K. J. Satzinger, M. V. Holt, F. J. Heremans, A. N. Cleland, D. I. Schuster, G. Galli, and D. D. Awschalom, arXiv:1804.10996 [quant-ph] (2018).
27. P. Udvarhelyi, V. O. Shkolnikov, A. Gali, G. Burkard, and A. Pályi, arXiv:1712.02684 [cond-mat.mes-hall] (2017).
28. W. R. L. Lambrecht, B. Segall, M. Methfessel, and M. van Schilfgaarde, Phys. Rev. B 44, 3685 (1991).
29. D. W. Feldman, J. H. Parker, W. J. Choyke, and L. Patrick, Phys. Rev. 173, 787 (1968).
30. J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996).
31. G. Kresse and J. Hafner, Phys. Rev. B 47, 558 (1993).
32. G. Kresse and J. Furthmüller, Phys. Rev. B 54, 11169 (1996).
33. G. Kresse and J. Furthmüller, Comput. Mater. Sci. 6, 15 (1996).
34. J. Paier, M. Marsman, K. Hummer, G. Kresse, I. C. Gerber, and J. G. Ángyán, J. Chem. Phys. 124, 154709 (2006).
35. P. E. Blöchl, Phys. Rev. B 50, 17953 (1994).
36. Z. Bodrog and A. Gali, J. Phys.: Condens. Matter 26, 015305 (2014).
37. A. L. Falk, P. V. Klimov, B. B. Buckley, V. Ivády, I. A. Abrikosov, G. Calusine, W. F. Koehl, A. Gali, and D. D. Awschalom, Phys. Rev. Lett. 112, 187601 (2014).
38. L. M. Pham, D. L. Sage, P. L. Stanwix, T. K. Yeung, D. Glenn, A. Trifonov, P. Cappellaro, P. R. Hemmer, M. D. Lukin, H. Park, A. Yacoby, and R. L. Walsworth, New J. Phys. 13, 045021 (2011).
39. D. M. Toyli, D. J. Christle, A. Alkauskas, B. B. Buckley, C. G. Van de Walle, and D. D. Awschalom, Phys. Rev. X 2, 031001 (2012).
40. G. Balasubramanian, P. Neumann, D. Twitchen, M. Markham, R. Kolesov, N. Mizuochi, J. Isoya, J. Achard, J. Beck, J. Tissler, V. Jacques, P. R. Hemmer, F. Jelezko, and J. Wrachtrup, Nat. Mater. 8, 383 (2009).
41. J. R. Maze, P. L. Stanwix, J. S. Hodges, S. Hong, J. M. Taylor, P. Cappellaro, L. Jiang, M. V. G. Dutt, E. Togan, A. S. Zibrov, A. Yacoby, R. L. Walsworth, and M. D. Lukin, Nature 455, 644 (2008).
42. N. Mizuochi, P. Neumann, F. Rempp, J. Beck, V. Jacques, P. Siyushev, K. Nakamura, D. J. Twitchen, H. Watanabe, S. Yamasaki, F. Jelezko, and J. Wrachtrup, Phys. Rev. B 80, 041201 (2009).
43. I. G. Ivanov, M. Yazdanfar, B. Lundqvist, J. T. Chen, J. ul Hassan, P. Stenberg, R. Liljedahl, N. T. Son, J. W. Ager, O. Kordina, and E. Janzén, in Silicon Carbide and Related Materials 2013, Materials Science Forum Vol. 778 (Trans Tech Publications, 2014), pp. 471-474.
44. E. Bourgeois, E. Londero, K. Buczak, J. Hruby, M. Gulka, Y. Balasubramaniam, G. Wachter, J. Stursa, K. Dobes, F. Aumayr, M. Trupke, A. Gali, and M. Nesladek, Phys. Rev. B 95, 041402 (2017).
45. K. Kamitani, M. Grimsditch, J. C. Nipko, C.-K. Loong, M. Okada, and I. Kimura, J. Appl. Phys. 82, 3152 (1997).
doi: 10.1016/j.aop.2006.07.003
Isoperiodic classical systems and their quantum counterparts
30 Jul 2007
M Asorey
Departamento de Física Teórica
Facultad de Ciencias
Universidad de Zaragoza
50009ZaragozaSpain ‡
J F Cariñena
Departamento de Física Teórica
Facultad de Ciencias
Universidad de Zaragoza
50009ZaragozaSpain ‡
G Marmo
Dipartimento di Scienze Fisiche
Universitá Federico II di Napoli c
INFN
Sezione di Napoli
Complesso Univ. di Monte Sant'Angelo
Via Cintia80125NapoliItaly
A Perelomov
Institute for Theoretical and Experimental Physics
117259MoscowRussia
Preprint submitted to Elsevier, February 1, 2008. arXiv:0707.4465v1 [hep-th]

Keywords: Isoperiodicity; Shear equivalence; Isospectral potentials; Quantum anomalies; Darboux transformation; Joukowski transformations
One-dimensional isoperiodic classical systems were first analyzed by Abel. Abel's characterization can be extended to singular potentials and to potentials which are not defined on the whole real line. The standard shear equivalence of isoperiodic potentials can also be extended by using reflection and inversion transformations. We provide a full characterization of isoperiodic rational potentials, showing that they are connected by translations, reflections or Joukowski transformations. Upon quantization many of these isoperiodic systems fail to exhibit identical quantum energy spectra. This anomaly occurs at order O(ℏ²) because semiclassical corrections of energy levels of order O(ℏ) are identical for all isoperiodic systems. We analyze families of systems where this quantum anomaly occurs and some special systems where the spectral identity is preserved by quantization. Conversely, we point out the existence of isospectral quantum systems which do not correspond to isoperiodic classical systems.
Introduction
The connection between classical and quantum physics has always been tantalizing and elusive. The establishment of quantization rules for classical systems has been the algorithmic method which dominated the construction of quantum systems. This pathway has been plagued with surprises: the existence of quantum anomalies, operator ordering problems, quantum divergences, spontaneous symmetry breaking, renormalization of couplings and observables, etc. The way back to classical mechanics from quantum dynamics has also proved problematic, due to the failure of the semiclassical expansion and the existence of quantum states without a natural classical analogue. One of the most explicit realizations of the genuine differences between classical and quantum systems is provided by the analysis of boundary conditions in systems evolving in constrained spaces [1,2]. However, this mismatch has been very useful to introduce new quantum-inspired classical structures: quantum groups, non-commutative geometry, etc.
In this note we explore the analogies and differences between the equivalences of classical and quantum systems from a spectral point of view. There is a natural equivalence relation between classical mechanical systems based on the analysis of the periods of closed orbits and their dependence on the orbit energy. Two bounded mechanical systems may be considered equivalent if their closed orbits with the same energy have the same period; they are then said to be isoperiodic. This equivalence relation was introduced by Abel in 1826 [3]. It can be shown that the equivalence classes include potentials related by shear transformations, but these do not exhaust all possibilities, as we will show below. This fact is related to the concept of Steiner symmetrization, which was used in [4] to establish that all potentials with a single minimum having the same Steiner-symmetrized potential have the same dependence T(E) of the period on the energy.
Another open problem is the characterization of all potentials which give rise to isochronous motions, i.e. for which the period does not depend on the energy (see e.g. [5,6] and references therein). The origin of the problem is even older; it goes back to Huygens in 1673 [7]. In the one-dimensional case with rational potentials it can be shown that the only symmetric isochronous potentials with a constant period T = 2π/ω correspond, up to a translation, either to the harmonic oscillator U(x) = (1/2)ω²x² or to the isotonic potential of the form U(x) = (1/8)ω²x² + α/x² [8].
There is a similar equivalence relation for quantum systems. Two quantum systems with bounded classical analogues are said to be spectrally equivalent if their energy levels are identical. It is well known that the isoperiodicity equivalence is the classical version of the quantum isospectrality condition (see e.g. [9] and [10] for a recent discussion). In the same way it is obvious that the quantum counterpart of isochronicity is a harmonic spectrum (for regular potentials). However, there is no theorem characterizing the potentials with equally spaced energy levels analogous to that for isochronous systems. In particular, we shall show the existence of many isochronous classical systems which do not have equally spaced quantum energy spectra.
More generally, the classical equivalence associated to isoperiodicity is not always preserved by the quantization process, i.e. given two isoperiodic classical systems the corresponding quantum systems might not be isospectral. The exploration of the anomalies of this correspondence is one of the goals of this paper. In particular, we will exhibit many isochronous classical systems which do not have equally spaced quantum energy spectra. Conversely, we will also show that there are spectrally equivalent quantum systems which are not classically isoperiodic.
In the path integral approach to quantum mechanics the anomaly can be understood from the simple fact that paths which do not correspond to classical solutions of the equations of motion give different contributions for some isoperiodic potentials. However, the equivalence between isoperiodic classical systems is not broken in the semiclassical approximation O(ℏ). Thus, the quantum anomalies, when they exist, can only appear in higher order corrections O(ℏ²).
Isoperiodic deformations of a potential are of two types: shear transformations and space-time scale transformations. The difference between the two deformations is that in the first case the energy of the orbits is preserved, whereas in the second case the energy levels are scaled. The quantum anomaly in the first case can be interpreted as an obstruction to the shear transformation, which requires an additional amount of energy to be performed, unlike for the classical systems. On the contrary, the scale transformation always involves energy transfer in both cases. One of the main results of the paper is the proof that the full characterization of isoperiodic rational potentials can be achieved in terms of translations, reflections and Joukowski transformations.
On the other hand, there are quantum mechanical systems whose potentials are related by a Darboux transformation, which implies that they have (almost) identical energy spectra. Some of those systems turn out to be classically isoperiodic, but others do not. These facts illuminate the relations existing between quantum isospectrality and classical isoperiodicity, two similar but not identical dynamical concepts. The analysis of these equivalences at the classical and quantum levels provides a very illuminating picture of the quantum/classical transition.
The paper is organized as follows. In Section 2 we analyze the notion of isoperiodicity and provide a characterization of polynomial isochronous potentials. The generalization of Abel's theory to singular potentials is approached in Section 3, where we prove the main results of the paper concerning the characterization of isoperiodic rational potentials, illustrated with some illuminating examples. In Section 4 we analyze the role of scale invariance in the analysis of isoperiodicity. The quantum analogue of isoperiodicity is isospectrality; the appearance of anomalies in the quantization of isoperiodic potentials prevents the isospectrality of some isoperiodic potentials, although their first order semiclassical corrections are identical, as shown in Section 5. In Section 6 we analyze the opposite case: isospectral potentials related by Darboux transformations which are not classically isoperiodic.
Isoperiodic Potentials
The complete identification of all potentials of one-dimensional mechanical systems giving rise to the same dependence T(E) of the period T of recurrent trajectories on the energy E was provided by Abel [3] (see also [11]). This classification can also be obtained by identifying the deformations of the potential that do not change the T(E) dependence. For one-dimensional problems this period/energy dependence is given by
T(E) = \sqrt{2m} \int_{x_m(E)}^{x_M(E)} \frac{dx}{\sqrt{E - U(x)}} .    (1)
where m is the mass of the particle and x_m(E) and x_M(E) denote the two turning points, which are the roots of the equation U(x) = E.
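As a numerical illustration (our own sketch, not part of the paper), Eq. (1) can be evaluated directly once the turning points are known. The code below, in plain Python with m = 1, uses the substitution x = x_m + (x_M - x_m) sin²θ to tame the inverse-square-root singularities at the turning points, and confirms that the harmonic oscillator yields T(E) = 2π/ω for every energy.

```python
import math

def period(U, E, xm, xM, m=1.0, n=4000):
    """Numerical evaluation of Eq. (1): T(E) = sqrt(2m) * int dx/sqrt(E - U(x)).
    The substitution x = xm + (xM - xm)*sin(th)^2 removes the endpoint
    singularities of the integrand at both turning points."""
    h = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        th = (i + 0.5) * h                       # midpoint rule
        x = xm + (xM - xm) * math.sin(th) ** 2
        w = 2.0 * (xM - xm) * math.sin(th) * math.cos(th)
        total += w / math.sqrt(E - U(x))
    return math.sqrt(2.0 * m) * total * h

# Harmonic oscillator U(x) = (1/2) w^2 x^2 (m = 1): turning points are +-sqrt(2E)/w
w = 1.7
U = lambda x: 0.5 * w * w * x * x
T = [period(U, E, -math.sqrt(2 * E) / w, math.sqrt(2 * E) / w) for E in (0.3, 1.0, 5.0)]
# every entry of T equals 2*pi/w, independently of the energy
```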
Here U(x) will be assumed to be a convex potential of the form displayed in Figure 1.
Having in mind the invariance under translations, we can assume in the simplest case the asymptotic behavior lim_{x→±∞} U(x) = ∞ and that U has two branches,

U(x) = \begin{cases} U_1(x) & \text{if } x < 0 \\ U_2(x) & \text{if } x > 0 \end{cases}

where U_1(x) and U_2(x) are monotone decreasing and increasing functions, respectively, i.e. such that x U'(x) > 0. The inverse maps of U_1(x) and U_2(x) will be denoted x_1(U) and x_2(U), respectively. Note that their values for U = E are those of the turning points.
For more general non-convex potentials, like those of Figure 2, the period does not depend only on the energy E but also on the periodic branch specified by x_m(E) and x_M(E), i.e. T(E, x_m, x_M). Note that such potentials cannot be isochronous [8].
We shall restrict ourselves in this section to the convex case and postpone the discussion of other interesting cases, for instance those in which the potential presents poles, to later sections. The existence of poles splits the one-dimensional space into isolated domains, bounded by the poles, where the dynamics of the system is confined.
It was shown by Abel [3] that the relation between energy and period given by (1) does not uniquely determine the potential U, but only the difference x_2(U) - x_1(U), which is given by

x_2(U) - x_1(U) = \frac{1}{\pi\sqrt{2m}} \int_0^U \frac{T(E)}{\sqrt{U - E}}\, dE .    (2)
For a proof using the Laplace transformation see e.g. [17]. This expression shows that the general potential U(x) having a given period/energy dependence can be expressed by means of a particular solution x_i^0, i = 1, 2, and an arbitrary function g : R → R; in terms of g the general solution of the Abel equation is

x_2(U) = x_2^0(U) + g(U) ,    x_1(U) = x_1^0(U) + g(U) .    (3)
Obviously g should not alter the assumed convex character of the potential.
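Abel's relation (2) is easy to test numerically. The following sketch (plain Python, m = 1, our own illustration) evaluates the integral in (2) for a constant period T(E) = 2π/ω and recovers the width 2√(2U/m)/ω of the harmonic well of frequency ω, as it must.

```python
import math

def abel_width(T_of_E, U, m=1.0, n=4000):
    """Numerical evaluation of Eq. (2):
    x2(U) - x1(U) = (1/(pi*sqrt(2m))) * int_0^U T(E)/sqrt(U - E) dE.
    The substitution E = U*sin(phi)^2, so that sqrt(U - E) = sqrt(U)*cos(phi),
    removes the integrable singularity at E = U."""
    h = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        phi = (i + 0.5) * h                      # midpoint rule
        E = U * math.sin(phi) ** 2
        total += T_of_E(E) * 2.0 * math.sqrt(U) * math.sin(phi)
    return total * h / (math.pi * math.sqrt(2.0 * m))

# Constant period T(E) = 2*pi/w: the recovered width must be that of the
# harmonic well of frequency w, namely x2 - x1 = 2*sqrt(2U/m)/w.
w = 1.3
width = abel_width(lambda E: 2 * math.pi / w, 2.0)
```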
In particular, if we choose g(U) = -(1/2)(x_1^0(U) + x_2^0(U)) we find a solution (x_1^s, x_2^s) such that

x_2^s(U) = \frac{1}{2}\left(x_2(U) - x_1(U)\right) ,    x_1^s(U) = -x_2^s(U) ,

and therefore corresponding to a potential U^s which is symmetric under reflection with respect to the origin, i.e. U^s(-x) = U^s(x). Such a potential is nothing but the Steiner symmetrization [4] of the starting potential U. Using such a particular solution, the general solution is given by

x_2(U) = x_2^s(U) + g(U) ,    x_1(U) = -x_2^s(U) + g(U) .    (4)
Fig. 2. Non-convex potential.

Under the additional assumption that the potential U is symmetric under reflection with respect to the origin, there is a unique potential U with a given period/energy dependence satisfying (1), which for U > 0 is given by [12,13,14]

x_2^s(U) = \frac{1}{2\pi\sqrt{2m}} \int_0^U \frac{T(E)}{\sqrt{U - E}}\, dE .    (5)
Now, because of the convex character of U, the relation (5) can be inverted, giving U^s as a function of x_2^s for x_2^s ≥ 0. Note that relation (3) can be inverted, giving rise to a relation

U(x) = U_0(x + f(U(x))) ,    (6)

and more particularly, when U_0 is the symmetric potential U^s in its equivalence class of potentials, we obtain from (4) the relation

U(x) = U^s(x + g(U(x))) .
There is a very fundamental identity which characterizes all potentials (related by a shear transformation) having the dependence T_U(E) of the period on the energy corresponding to the potential U:

U(x) = U(x + W_U(U(x)))    where    W_U(V) = \frac{1}{\pi\sqrt{2m}} \int_0^V \frac{T_U(E)}{\sqrt{V - E}}\, dE ,    (7)

which is easily derived from the Abel relation (2).
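For the harmonic oscillator the identity (7) can be verified in closed form: with T_U(E) = 2π/ω constant, the integral gives W_U(V) = 2√(2V/m)/ω, exactly the width x_2(V) - x_1(V) of the well at height V, and the shift maps each point of the falling branch to its partner on the rising branch. A minimal numerical check (our own sketch):

```python
import math

m, w = 1.0, 1.3
U = lambda x: 0.5 * m * w * w * x * x     # harmonic oscillator, T_U(E) = 2*pi/w

def W(V):
    # For constant T_U = 2*pi/w, Eq. (7) evaluates to
    # W_U(V) = (2T/(pi*sqrt(2m))) * sqrt(V) = 2*sqrt(2V/m)/w,
    # i.e. the width x2(V) - x1(V) of the well at height V.
    return 2.0 * math.sqrt(2.0 * V / m) / w

# On the falling branch (x < 0) the shift sends x to its turning-point
# partner -x, so U(x + W_U(U(x))) = U(x), as the identity (7) states.
vals = [abs(U(x + W(U(x))) - U(x)) for x in (-2.0, -0.7, -0.1)]
```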
The potentials in the same class of shear equivalence as a given potential U can be characterized as the fixed points of the following transformation:

Ũ(x) = Ū(x + W_U(Ū(x))) .    (8)
This transformation can be considered as a classical analog of a renormalization group transformation. When compared with (6), it shows that it is completely characterized by the choice of the basic potential U which determines the nature of the fixed point potentials which are shear equivalent to U.
This renormalization group transformation is very useful to further characterize the isoperiodic potentials within a certain class of potentials; in particular, it is useful to prove some theorems concerning isochronous rational potentials.

It is commonly believed that the harmonic oscillator is the only polynomial potential which is isochronous. This guess can be substantiated in rigorous terms [15].
Theorem 1. A convex polynomial potential U(x) is isochronous iff U(x) = ax² + bx + c.

Proof: If the potential U(x) is an isochronous potential with period T, we can use (2) with T(E) constant and obtain

x_2(U) - x_1(U) = \frac{1}{\pi\sqrt{2m}} \int_0^U \frac{T}{\sqrt{U - E}}\, dE = \frac{2T}{\pi\sqrt{2m}} \sqrt{U} ,

which implies that [8]

U(x) = U\left(x + \frac{2T}{\pi\sqrt{2m}} \sqrt{U(x)}\right) .    (9)
From the analysis of the leading term of (9) we see that a solution U of (9) can be polynomial only if U is the square of a linear polynomial, i.e. U(x) = (αx + β)², which proves the theorem. The condition U(0) = 0 fixes β = 0 and yields the standard harmonic oscillator.
A generalization to the case of rational functions is also possible and will be considered in the next section.
Let us examine some examples of the isochronous case, for which T(E) takes a constant value T. In that case, using (5), we see that x_2^s(U) = (T/π)√(U/(2m)), and the general solutions x_1(U) and x_2(U) are, respectively [3],

x_1(U) = -\frac{T}{\pi}\sqrt{\frac{U}{2m}} + g(U) ,    x_2(U) = \frac{T}{\pi}\sqrt{\frac{U}{2m}} + g(U) .    (10)
Case A. If we choose g(U) = a we find

x_1(U) = -\frac{T}{\pi}\sqrt{\frac{U}{2m}} + a ,    x_2(U) = \frac{T}{\pi}\sqrt{\frac{U}{2m}} + a ,    (11)

from which we obtain the harmonic oscillator potential centered at x = a,

U(x) = \frac{m\omega^2}{2}(x - a)^2 ,    \text{with}    \omega = \frac{2\pi}{T} .    (12)
Case B. If, instead, the function g is chosen as g(U) = α(T/π)√(U/(2m)), then

x_1(U) = (-1 + \alpha)\frac{T}{\pi}\sqrt{\frac{U}{2m}} ,    x_2(U) = (1 + \alpha)\frac{T}{\pi}\sqrt{\frac{U}{2m}} ,    (13)

which for |α| < 1 corresponds to the potential of two half-oscillators [17,18,19], sometimes called the split-harmonic oscillator [16,10]:

U(x) = \begin{cases} \frac{1}{2} m\omega_1^2 x^2 & \text{if } x \le 0 \\ \frac{1}{2} m\omega_2^2 x^2 & \text{if } x \ge 0 \end{cases}    (14)

with different angular frequencies

\omega_1 = \frac{2\pi}{(1 - \alpha)T} = \frac{\omega_0}{1 - \alpha} ,    \omega_2 = \frac{2\pi}{(1 + \alpha)T} = \frac{\omega_0}{1 + \alpha} ,    (15)

glued together at the origin of coordinates [18]. Note that

\frac{1}{\omega_1} + \frac{1}{\omega_2} = \frac{2}{\omega_0} ,    (16)
where ω_0 = 2π/T. The harmonic oscillator with ω = ω_0 is the Steiner symmetrization of this split-harmonic oscillator [4].
Conversely, given a potential like (14), we can reduce it to Case B with the choice [17]

\alpha = \frac{\omega_1 - \omega_2}{\omega_1 + \omega_2}    \text{and}    \omega_0 = \frac{2\omega_1\omega_2}{\omega_1 + \omega_2} .
Note that this potential (14) is not analytic at x = 0, since U''(0+) - U''(0-) = m(ω_2² - ω_1²). The limit cases α = ±1 correspond to the half harmonic oscillator and its reflected counterpart, to be studied later.
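The isochronicity of the split-harmonic oscillator (14) and the harmonic-mean relation (16) can be verified numerically with the same quadrature trick used for Eq. (1). The sketch below (plain Python, m = 1, illustrative frequencies of our own choosing) checks that the period equals 2π/ω_0 independently of the energy.

```python
import math

m, w1, w2 = 1.0, 2.0, 0.5
# split-harmonic oscillator of Eq. (14)
U = lambda x: 0.5 * m * (w1 * x) ** 2 if x <= 0 else 0.5 * m * (w2 * x) ** 2

def period(E, n=4000):
    # Eq. (1) with the x = xm + (xM - xm)*sin(th)^2 substitution;
    # the turning points of the split well are -sqrt(2E/m)/w1 and sqrt(2E/m)/w2
    xm = -math.sqrt(2 * E / m) / w1
    xM = math.sqrt(2 * E / m) / w2
    h = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        th = (i + 0.5) * h
        x = xm + (xM - xm) * math.sin(th) ** 2
        total += 2 * (xM - xm) * math.sin(th) * math.cos(th) / math.sqrt(E - U(x))
    return math.sqrt(2 * m) * total * h

w0 = 2 * w1 * w2 / (w1 + w2)   # harmonic mean of the two frequencies, Eq. (16)
# period(E) equals 2*pi/w0 = pi/w1 + pi/w2 for every E
```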
Singular potentials and shear equivalence
There are two slight generalizations of the Abel theorem, for non-convex and for singular potentials. The first one arises as a consequence of the Euclidean symmetry of the kinetic term of mechanical systems; in particular, it is invariant under space translations and reflections. The space translation symmetry is the cause of the ambiguity in isoperiodic systems associated to the choice of the shear function g(U) = a. Now, the reflection symmetry interchanges the order of the turning points of closed trajectories and establishes the mechanical equivalence (isoperiodicity) of a potential U and its space-reflected pair U^R(x) = U(-x), for which x_1^R(U) = -x_2(U) and x_2^R(U) = -x_1(U).
It is obvious that this operation preserves the period/energy relation for any potential. In the case of convex potentials the equivalence is included in the Abel family of isoperiodic solutions. However, for non-convex potentials the reflection transformation introduces a new type of solution not included in Abel's family of isoperiodic potentials. The most general solution for any smooth potential is thus given by a particular solution U_*, its reflected pair U_*^R and their shear equivalents

U_g(x) = U_*(x - g(U_g(x))) ,    U_g^R(x) = U_*(-x + g(U_g(-x))) .    (17)
The reflection symmetry could in principle be defined with respect to any point of the real line, but the isoperiodic potentials obtained by this transformation are included in those of (17), because the most general reflection can be expressed as a composition of reflections with respect to the origin and translations, both already considered in (17).
The second generalization of Abel's solution (3) concerns the case of singular potentials or potentials which are not defined on the whole real line. In that case one has to look for new types of isoperiodic potentials. Let us analyze once again the isochronous case.
In that case we have a generalization of Theorem 1 for the case of rational potentials.
Theorem 2. A rational potential U(x) which does not reduce to a polynomial is isochronous iff¹

U(x) = ( (a x^2 + b x + c)/(x + d) )^2 .

Proof: Any rational potential U(x) solving (9) must be the square of the irreducible quotient of two polynomials P(x) and Q(x),

U(x) = ( P(x)/Q(x) )^2 .

The stability of the leading term under the non-linear constraint (9) requires that the degree of P cannot exceed that of Q by more than one unit. As U(x) was assumed to be rational we can consider the analytic continuation of such a function to the complex plane. If Q is not constant the potential develops at least one pole in the complex plane which is not a zero of P, because P and Q cannot have common zeros. We will show that Q(x) cannot have two different zeros, and therefore that in such a case the zero must be real. Indeed, if w is a zero of Q let us consider the function R_w(z) given by

R_w(z) = Q(z)(z − w) − (2T/(π sqrt(2m))) P(z) .   (18)

Such a function cannot have a zero. Indeed, if we assume that R_w(ζ) = 0 and Q(ζ) ≠ 0, then ζ is the partner of w because

ζ = w + (2T/(π sqrt(2m))) P(ζ)/Q(ζ) ,

and therefore ζ is a pole of U(z), which is not possible because we assumed that Q(ζ) ≠ 0. On the other hand, had we assumed that ζ is a zero of R_w for which Q(ζ) = 0, then (18) shows that also P(ζ) = 0, which is again against our hypothesis that P and Q have no common zeros.

As the polynomial function R_w(z) has no zeros, it must be a constant. Now, if we had two different zeros of Q, w_1 and w_2, the preceding argument would show the existence of two constants c_1 and c_2 such that

Q(z)(z − w_1) − (2T/(π sqrt(2m))) P(z) = c_1 ,
Q(z)(z − w_2) − (2T/(π sqrt(2m))) P(z) = c_2 ,

from where we find that

w_1 − w_2 = (c_2 − c_1)/Q(z) ,

which implies that Q must be a constant, reducing the problem to the previously considered case of U being polynomial. Therefore the only possible non-polynomial solution is given by a polynomial P of degree two and a polynomial Q of degree one with a single real zero, which completes the proof of the claim.

Note that using translational symmetry we can fix the real pole at x = 0 (i.e. d = 0), and the classical motion can then be restricted to the open interval (0, ∞).
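As a numerical illustration (our sketch, not part of the original text), the period integral T(E) = sqrt(2m) ∫ dx / sqrt(E − U(x)) can be evaluated directly between the turning points; both the harmonic oscillator U(x) = x^2 and the rational representative U(x) = ((x^2 − 1)/x)^2 = (x − 1/x)^2 (the a = 1, b = 0, c = −1, d = 0 instance of the family above) then show an energy-independent period:

```python
import math

def period(U, E, x_lo, x_hi, m=1.0, n=20000):
    """T(E) = sqrt(2m) * integral_{x-}^{x+} dx / sqrt(E - U(x)).
    Assumes a single well inside [x_lo, x_hi]; turning points are found by
    bisection around the sampled minimum, and the substitution
    x = mid + half*sin(theta) removes the inverse-sqrt singularity."""
    def bisect(a, b):               # root of U(x) = E with U(a) > E > U(b)
        for _ in range(200):
            c = 0.5 * (a + b)
            if U(c) > E:
                a = c
            else:
                b = c
        return 0.5 * (a + b)
    xs = [x_lo + i * (x_hi - x_lo) / 1000 for i in range(1001)]
    xm = min(xs, key=U)             # approximate minimum of the well
    x1, x2 = bisect(x_lo, xm), bisect(x_hi, xm)
    mid, half = 0.5 * (x1 + x2), 0.5 * (x2 - x1)
    acc = 0.0
    for k in range(n):
        th = -math.pi / 2 + (k + 0.5) * math.pi / n
        v = E - U(mid + half * math.sin(th))
        if v > 0.0:
            acc += half * math.cos(th) / math.sqrt(v)
    return math.sqrt(2.0 * m) * acc * math.pi / n

harmonic = lambda x: x * x                 # (1/2) m w^2 x^2 with m = 1, w = sqrt(2)
rational = lambda x: (x - 1.0 / x) ** 2    # ((a x^2 + c)/x)^2 with a = 1, c = -1

for E in (1.0, 5.0, 20.0):
    assert abs(period(harmonic, E, -10.0, 10.0) - math.pi * math.sqrt(2.0)) < 1e-3
    assert abs(period(rational, E, 1e-3, 20.0) - math.pi / math.sqrt(2.0)) < 1e-3
```

For m = 1 the two wells have periods π√2 and π/√2 respectively, independent of E, which is what the assertions check.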
Some other examples with a non-analytic behavior are the following.
Case C. The half harmonic oscillator whose potential is
U(x) = ∞ for x ≤ 0 ,   U(x) = (1/2) m ω^2 x^2 for x ≥ 0 ,   (19)

is only defined on half a line. However it does not define a new family of isochronous potentials because it can be included in the Abel family of the regular harmonic oscillator U(x) = 2mω^2 x^2. In fact, it is related to the oscillator by the shear transformation defined by [17]

g(U) = − sqrt( U/(2mω^2) ) .   (20)
Note that this half-harmonic system can be considered as the limit when ω 1 tends to infinity of the two half-oscillators system (14). In fact, using the relation (16) with ω 2 = ω and taking the limit when ω 1 tends to ∞ we obtain ω 0 = 2 ω and therefore the potential (19) is in the same equivalence class as the harmonic oscillator given by U(x) = 2mω 2 x 2 .
Finally, as indicated before, this potential is obtained in Case B for α = 1.
Case D. To the same family belongs the potential [8,20,21,22]
U(x) = 2α^2/(m ω^2 x^2) + (1/2) m ω^2 x^2 − 2α = (1/2) m ω^2 ( 2α/(m ω^2 x) − x )^2 .   (21)

It is obvious that this potential is isochronous because in fact it is related to the half harmonic oscillator (19) by means of a shear transformation

g(U) = sqrt( U/(2mω^2) ) − sqrt( (4α + U)/(2mω^2) )   (22)

and to the symmetric oscillator by the shear transformation [17]

g(U) = − sqrt( (4α + U)/(2mω^2) ) .   (23)

Fig. 4. Isochronous potential U(x) = (1/2) m ω^2 ( 2α/(m ω^2 x) − x )^2

Note that according to the Chalykh–Veselov theorem [8] (Theorem 2), this potential and the harmonic oscillator potential are the only rational isochronous potentials.
Another characteristic case of the same family is the following isochronous one:
Case E. For g(U) = αU we have

x_1(U) = − sqrt( 2U/(mω^2) ) + 2αU/(mω^2) ,   x_2(U) = sqrt( 2U/(mω^2) ) + 2αU/(mω^2) ,   (24)

and therefore, for both values of x we have

( x − 2αU/(mω^2) )^2 = 2U/(mω^2)   (25)

or, in other form,

α^2 U^2/(m^2 ω^4) − (αx + 1/2) U/(mω^2) + x^2/4 = 0 ,   (26)

from which we obtain [18], if x ≥ −1/(4α),

U(x) = (m ω^2/2) [ x/α + 1/(2α^2) − (1/α) sqrt( x/α + 1/(4α^2) ) ] .   (27)

Note that U(0) = 0 and, for small values of x,

U(x) ≈ (1/2) m ω^2 x^2 − m α ω^2 x^3 + · · · .   (28)

Fig. 5. Isochronous potential U(x) = (m ω^2/2) [ x/α + 1/(2α^2) − (1/α) sqrt( x/α + 1/(4α^2) ) ]

In this case, although the shear transformation of the oscillator is smooth, the final system is only defined on half a line.
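As a small numerical cross-check (ours), the closed form (27) reproduces the expansion (28) up to the next order, which is O(α^2 x^4):

```python
import math

m, w, alpha = 1.0, 1.0, 0.3   # sample values; any alpha > 0 works the same way

def U27(x):
    """Closed form (27), valid for x >= -1/(4*alpha)."""
    return 0.5 * m * w * w * (x / alpha + 1.0 / (2.0 * alpha ** 2)
                              - (1.0 / alpha) * math.sqrt(x / alpha + 1.0 / (4.0 * alpha ** 2)))

for x in (0.1, 0.01, 0.001):
    taylor = 0.5 * m * w * w * x * x - m * alpha * w * w * x ** 3   # expansion (28)
    # the first neglected term of the series is (5/2) alpha^2 x^4
    assert abs(U27(x) - taylor) < 4.0 * alpha ** 2 * x ** 4
```

The bound 4 α^2 x^4 is just a safe envelope for the first neglected Taylor term; U27(0) = 0 also confirms the statement preceding (28).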
Case F. An archetypal case is the reduced Kepler problem (see [11], Chapter III)
U(x) = − e^2/x + l^2/(2 m x^2)   for x > 0 ,   (29)

whose period function for negative energies is well known,

T(E) = π e^2 sqrt( m/(2|E|^3) ) ,   (30)

and is a particular case of a more general family of potentials (see also [11], Chapter III)

U(x) = A |x|^n   (31)

with periods

T(E) = (2/n) sqrt( 2π m/E ) (E/A)^{1/n} Γ(1/n)/Γ(1/2 + 1/n) .   (32)
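Both closed forms can be cross-checked numerically (our sketch, not from the original): (32) against a direct quadrature of the period integral, which becomes regular after the substitutions x = x_+ u and u = 1 − v^2, and (30) against the equivalent statement of Kepler's third law, T = 2π sqrt(m a^3/e^2) with semi-major axis a = e^2/(2|E|):

```python
import math

def period_quadrature(A, n, E, m=1.0, N=20000):
    """T(E) for U = A|x|^n: reduce to 2*sqrt(2m/E)*(E/A)^(1/n) * I with
    I = integral_0^1 du / sqrt(1 - u^n), regularized by u = 1 - v^2."""
    s = 0.0
    for k in range(N):
        v = (k + 0.5) / N
        s += 2.0 * v / math.sqrt(1.0 - (1.0 - v * v) ** n)
    return 2.0 * math.sqrt(2.0 * m / E) * (E / A) ** (1.0 / n) * s / N

def period_formula(A, n, E, m=1.0):
    """Gamma-function expression (32)."""
    return (2.0 / n) * math.sqrt(2.0 * math.pi * m / E) * (E / A) ** (1.0 / n) \
        * math.gamma(1.0 / n) / math.gamma(0.5 + 1.0 / n)

for (A, n, E) in ((0.5, 2, 1.0), (1.0, 4, 2.5), (2.0, 6, 10.0)):
    assert abs(period_quadrature(A, n, E) / period_formula(A, n, E) - 1.0) < 1e-4

# (30) vs Kepler's third law with a = e^2/(2|E|)
for (m, e2, E) in ((1.0, 1.0, -0.3), (2.0, 1.7, -0.05)):
    a = e2 / (2.0 * abs(E))
    lhs = 2.0 * math.pi * math.sqrt(m * a ** 3 / e2)
    rhs = math.pi * e2 * math.sqrt(m / (2.0 * abs(E) ** 3))
    assert abs(lhs / rhs - 1.0) < 1e-12
```

For n = 2 and A = mω^2/2 the formula collapses to the familiar 2π/ω, which is a convenient spot check.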
Case G. A very peculiar different example is the infinite wall

U(x) = 0 if x ∈ [0, π] ,   U(x) = ∞ if x ∉ [0, π] ,   (33)

which is only shear equivalent to itself up to space translations. In this case, the Abel inverse of the period function

T(E) = π sqrt( 2m/E )   (34)

is uniquely defined up to a shift by a real constant a.
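Formula (34) is just the bounce time of a free particle in the box (a trivial check of ours): inside the well the speed is v = sqrt(2E/m), and a round trip over the width L = π takes T = 2L/v = π sqrt(2m/E):

```python
import math

m, L = 1.0, math.pi                 # particle bouncing in a box of width pi
for E in (0.5, 2.0, 9.0):
    v = math.sqrt(2.0 * E / m)      # constant speed inside the well
    assert abs(2.0 * L / v - math.pi * math.sqrt(2.0 * m / E)) < 1e-12
```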
Case H. A similar potential with the same quantum energy spectrum,

U(x) = (1/m) ( 1/sin^2(x) − 1/2 ) ,   (35)

has a much larger degeneracy [23].
The last two cases show that the orbits of Abel's shear transformations are not of the same type.
For even potentials there is a special case of shear transformation which preserves the periods. It is given by a composition of an inversion with two translation transformations.
The transformation of the complex plane called the Joukowski transformation, defined by J_λ(z) = z + λ/z with λ ∈ R, plays a relevant role in aerodynamic applications. We consider here an analogous map of the real line R̄ completed with the two points at infinity:

J_g(x) = x/2 − 2g^2/x .

We also consider the involution of R̄, i_g : R̄ → R̄, given by

i_g(x) = − 4g^2/x .   (36)

Note that J_g(0+) = −∞ and J_g(±∞) = ±∞, and the important property J_g ∘ i_g = J_g. Consequently, the points x and −4g^2/x have the same image. Moreover, only these two points have the same image, because if x/2 − 2g^2/x = y, then x^2 − 2xy − 4g^2 = 0, and therefore the two roots are given by

x_±(y) = y ± sqrt( y^2 + 4g^2 ) ,

i.e. x_+(y) > 0, x_−(y) < 0 and x_+(y) x_−(y) = −4g^2.
We can use the properties of these transformations J g and i g to prove:
Theorem 3. If U(x) is a bounded below even convex potential with lim_{x→∞} U(x) = ∞, then for any real number g the potential U_g given by

U_g(x) = U(J_g(x)) = U( x/2 − 2g^2/x )

is isoperiodic with U(x).
Proof: First notice that U_g is invariant under the transformation i_g, because U_g(i_g(x)) = U(J_g(i_g(x))) = U(J_g(x)) = U_g(x). The parity symmetry of the function U implies that U_g(x) = U_g(4g^2/x).

On the other hand, as the function U is a bounded below even convex potential, the minimum of the potential is at the origin and we can assume without any restriction that the minimum value is U(0) = 0.

If U_g(x_1) = U_g(x_2), then U(J_g(x_1)) = U(J_g(x_2)). Given an arbitrary positive energy value E > 0 there will be two real numbers x_−(E) < 0 and x_+(E) > 0 such that −x_−(E) = x_+(E) and U(x_±(E)) = E. Consequently, using the definition of the new potential function U_g, there will exist four points, to be denoted x^−_{g1}, x^−_{g2}, x^+_{g1}, x^+_{g2}, such that U_g(x^±_{gi}) = E. They are respectively given by

x^−_{g1} = −x_+(E) − sqrt( x_+(E)^2 + 4g^2 ) ,   x^−_{g2} = x_+(E) − sqrt( x_+(E)^2 + 4g^2 ) ,
x^+_{g1} = −x_+(E) + sqrt( x_+(E)^2 + 4g^2 ) ,   x^+_{g2} = x_+(E) + sqrt( x_+(E)^2 + 4g^2 ) .

The span between the two U_g-equipotential values x^−_{g1} and x^−_{g2}, and likewise between x^+_{g1} and x^+_{g2}, is 2x_+(E), and it coincides with the span between the corresponding U-equipotential values of the parity symmetric potential U. Consequently, the potentials U_g and U are shear related and isoperiodic².
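Theorem 3 can be spot-checked numerically (our sketch): equal spans between equipotential points at every level E imply equal periods, so it is enough to compare turning-point widths of an even convex U and of U_g = U ∘ J_g (here U(x) = x^4, and the x > 0 branch of U_g):

```python
def turning_points(U, E, outer, x_min, outer2):
    """Solve U(x) = E on each side of the minimum x_min by bisection."""
    def bisect(a, b):                # U(a) > E > U(b)
        for _ in range(200):
            c = 0.5 * (a + b)
            if U(c) > E:
                a = c
            else:
                b = c
        return 0.5 * (a + b)
    return bisect(outer, x_min), bisect(outer2, x_min)

g = 0.7
U = lambda x: x ** 4                             # even convex, U(0) = 0
Ug = lambda x: (0.5 * x - 2.0 * g * g / x) ** 4  # U(J_g(x)), branch x > 0

for E in (0.5, 1.0, 3.0, 7.0):
    a, b = turning_points(U, E, -10.0, 0.0, 10.0)
    c, d = turning_points(Ug, E, 1e-9, 2.0 * g, 50.0)
    # equal widths at every level E => equal periods
    assert abs((b - a) - (d - c)) < 1e-9
```

For this example both widths are exactly 2 E^{1/4}; the minimum of U_g on x > 0 sits at x = 2g, where J_g vanishes.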
In the case of isochronous potentials this connection between pairs of potentials provides us with the only solutions to isochronous rational potentials in terms of the harmonic oscillator and the isotonic potential (see Theorem 2). This result can be further generalized. Indeed, it can be shown that in the rational case these two families of potentials are the only ones which are isoperiodic, not only for isochronous periods but for any frequency-energy spectral distribution associated with a rational potential.

Theorem 4. Any non-trivial rational potential U_* which is isoperiodic to a given even convex polynomial potential U is either of the form

U_c(x) = U(x + c)   or   U^g_c(x) = U( (x − c)/2 − 2g^2/(x − c) ) ,

for any value of g.
Proof:
If there is a rational potential solution of

U_*(x) = U_*( x + W_U(U_*(x)) ) ,   (37)

the function W_U(U_*(x)) has to be the irreducible ratio of two polynomials P(x) and Q(x), i.e.

W_U(U_*(x)) = P(x)/Q(x) .

The stability of the leading term under the non-linear constraint (9) requires that the degree of P has to be one unit larger than that of Q. On the other hand, by construction, the transformation K defined by

K(x) = x + P(x)/Q(x)

has to be invertible and involutive, i.e. K ∘ K = Id. It is easy to show that the only rational solutions satisfying this requirement are

K_c(x) = −x + c ,   K^g_c(x) = c + 4g^2/(x − c) ,   (38)

which correspond to the kind of transformations, translations, reflections and inversions, described previously in this section.

Indeed, if K_c is polynomial the asymptotic analysis at x ∼ ∞ requires that the leading term K_c ∼ a_n x^n satisfies K_c(K_c(x)) ∼ a_n^{n+1} x^{n^2} ≃ x, and thus n = 1 and a_n^2 = 1. The only non-trivial solutions of these requirements are the translations/reflections of (38). This regular type of solution keeps the polynomial character of the potential and simply involves a reflection and a translation of the polynomial.

If K_c is rational it can have poles in the complex plane. Notice that because of the rational character of the transformation K_c the involutive property can be extended to the whole complex plane. If K_c is not a pure polynomial it has to have a pole at a point c ≠ ∞ which is the image of x = ∞, and it can be rewritten in the form

K_c(z) = c − P_0(z)/( (z − c) Q_0(z) ) .   (39)

Since

K_c ∘ K_c(z) = c + P_0(K_c(z)) (z − c) Q_0(z) / ( P_0(z) Q_0(K_c(z)) ) = z ,   (40)

we have that

1 = P_0(K_c(z)) Q_0(z) / ( Q_0(K_c(z)) P_0(z) ) .   (41)

It is easy to show that the only solution is Q_0(z) = P_0(z) = const., and the pole has to be real, i.e. c* = c. The existence of other poles is excluded by the involutive character of the transformation, i.e. only one point in the complex plane can be involutively mapped into z = ∞.

The second kind of solution in (38) is more subtle and implies that P = 4g^2 − (x − c)^2 and Q = x − c, which means that W_U(U_*(x)), and therefore U_*, develops a single pole singularity at x = c. In this case U_* is symmetric under inversion transformations,

U_*(x) = U_*( c + 4g^2/(x − c) ) ,   (42)

the same symmetry that the potential U^g_c satisfies. Now, since U is a convex even potential, its minimum is attained at x = 0. The minimum of U_* for x > c is at x = c + 2g and, since U_* is isoperiodic to U, the values of the two potentials at their minima have to be identical, i.e. U_*(c + 2g) = U(0). By Theorem 3 the potential U^g_c is also isoperiodic to U, has the same symmetry as U_* under inversion transformations (42), and verifies U^g_c(c + 2g) = U(0). Now, since U^g_c and U_* are isoperiodic, both must attain the same values at x and c + 4g^2/(x − c), which implies that U_* = U^g_c and proves the theorem.

In particular, the only singular rational potentials which are isoperiodic to U(x) = x^2 and are singular at x = 0 are those of the form U(x) = ( x/2 − 2g^2/x )^2.
Scale transformations and Isoperiodicity
There is another kind of transformations which also preserves isoperiodicity. They are connected with space-time scale transformations.
The time evolution of one-dimensional systems is described in terms of the potential function U(x), i.e. ẍ = −∂U/∂x. If we introduce a change of space-time coordinates defined by

x = β x̃ ,   t = γ t̃ ,   (43)

where β and γ are positive real numbers, then the equation of motion becomes

(β/γ^2) d^2 x̃/dt̃^2 = − (1/β) ∂U(β x̃)/∂x̃ ,

and therefore, if we define

Ũ(x̃) = (γ^2/β^2) U(β x̃) ,   (44)

we find that the equation of motion reads

d^2 x̃/dt̃^2 = − ∂Ũ(x̃)/∂x̃ .

This invariance of the equation of motion is a consequence of the transformation of the action:

S(x) = ∫ dt [ (1/2) ẋ^2 − U(x) ] ,   S̃ = (γ/β^2) S .
This suggests studying the relation between systems described by potentials U and Ũ related as in equation (44).

Next we introduce a generalization of a property studied by Dorignac [10]. Let ϕ(ζ) be an arbitrary function and define

I_ϕ(E) = ∫_{x_−(E)}^{x_+(E)} ϕ( E − U(x) ) dx .
If for any pair of real numbers β, γ we define a new potential given by

Ũ(x) = (γ^2/β^2) U(β x) ,   then   x̃_±(E) = (1/β) x_±( β^2 E/γ^2 ) ,

and consequently

Ĩ_ϕ(E) = ∫_{x̃_−(E)}^{x̃_+(E)} ϕ( E − Ũ(x) ) dx .

Therefore,

Ĩ_ϕ(E) = ∫_{(1/β) x_−(β^2E/γ^2)}^{(1/β) x_+(β^2E/γ^2)} ϕ( E − (γ^2/β^2) U(β x) ) dx = (1/β) ∫_{x_−(β^2E/γ^2)}^{x_+(β^2E/γ^2)} ϕ( E − (γ^2/β^2) U(y) ) dy ,

and, defining

Ẽ = β^2 E/γ^2 ,

the equation can be rewritten as

Ĩ_ϕ(E) = (1/β) ∫_{x_−(Ẽ)}^{x_+(Ẽ)} ϕ( (γ^2/β^2)(Ẽ − U(y)) ) dy .

If the function ϕ is homogeneous of degree p,

Ĩ_ϕ(E) = (γ^{2p}/β^{2p+1}) I_ϕ(Ẽ) .

When computing the period of an oscillating motion we find a function of the form I_ϕ with ϕ proportional to ϕ_P(ζ) = ζ^{−1/2}, and when computing the action we arrive at ϕ_a(ζ) = ζ^{1/2}. Therefore,

Ĩ_{ϕ_P}(E) = (1/γ) I_{ϕ_P}(Ẽ) ,   Ĩ_{ϕ_a}(E) = (γ/β^2) I_{ϕ_a}(Ẽ) .
As a consequence, if U(x) is an isochronous potential with period P, then Ũ is isochronous too and its period is P̃ = P/γ. In particular, for γ = 1 we see that if U(x) is an isochronous potential, then Ũ(x) = β^{−2} U(βx) is isochronous with the same period [10]. On the contrary, the action for the potential Ũ is not the same as for U, and if the spectrum of the first one is equispaced, this will not be true for the new potential.
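Since a rescaled pure power potential is again a pure power, Ũ(x) = (γ^2/β^2) A |βx|^n = (γ^2 β^{n−2} A) |x|^n, formula (32) allows a direct check (ours, using the conventions above) of the scaling law P̃(E) = P(Ẽ)/γ with Ẽ = β^2 E/γ^2:

```python
import math

def T(A, n, E, m=1.0):
    """Period of U(x) = A|x|^n at energy E, formula (32)."""
    return (2.0 / n) * math.sqrt(2.0 * math.pi * m / E) * (E / A) ** (1.0 / n) \
        * math.gamma(1.0 / n) / math.gamma(0.5 + 1.0 / n)

A, n = 0.8, 4
for beta, gamma in ((0.5, 2.0), (1.7, 0.9), (3.0, 3.0)):
    A_scaled = (gamma ** 2 / beta ** 2) * A * beta ** n    # coefficient of U~
    for E in (0.7, 2.0, 11.0):
        lhs = T(A_scaled, n, E)                            # period of U~ at energy E
        rhs = T(A, n, beta ** 2 * E / gamma ** 2) / gamma  # P~ = P(E~)/gamma
        assert abs(lhs / rhs - 1.0) < 1e-12
```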
In many cases the above scale transformation can be shown to be equivalent to a shear transformation; e.g. some of the examples of the previous section are related by scale transformations. However, the scale transformation has a different nature. It establishes an equivalence relation among isoperiodic potentials but, unlike the shear transformation, it does not preserve the energy levels. For such a reason one does not expect quantization to preserve the equivalence at the spectral level. In physical terms the scale transformation has an energy cost whereas the classical shear transformation is energy preserving. An interesting question is whether or not the quantization prescription preserves this classical property. This will be the subject of the next section.
Isoperiodicity and the quantum isospectrality
It is clear from the analysis of Section 2 that two potentials U_1 and U_2 related by a shear transformation g : R → R^+ define the same period functions for periodic orbits. In fact, not only the periods given by (1) are identical for U_1 and U_2, but also any integral between the same limits of the form

T_f(E) = ∫_{x_m(E)}^{x_M(E)} f( E − U(x) ) dx   (45)
is the same for both potentials. The proof is simple because the integral (45) can be split as

T_f(E) = Σ_i T^i_f(E)

in terms of the integrals

T^i_f(E) = ∫_{x_i(E)}^{x_{i+1}(E)} f( E − U(x) ) dx ,

where x_i(E), i = 0, 1, . . . , N, is the monotone sequence of points whose initial and final points are x_0 = x_m and x_N = x_M, i.e. they coincide with the turning points of the classical trajectory, and the remaining points x_i, for i = 1, 2, . . . , N − 1, are defined by the values x_i ∈ [x_m, x_M] for which there is a stationary point x*_i ∈ [x_m, x_M] of the potential, U′(x*_i) = 0, with the same potential level U(x*_i) = U(x_i). In each interval [x_i, x_{i+1}] the potential function U(x) is invertible and the inverse function x_i(U) is uniquely defined. Thus,

T^i_f(E) = ∫_{U(x_i)}^{U(x_{i+1})} f(E − U) x′_i(U) dU .

Now, by construction, for each interval [x_i, x_{i+1}] there is another one [x_{i′}, x_{i′+1}] such that the sum of the contributions to the integral (45),

T^i_f(E) + T^{i′}_f(E) = ∫_{U(x_i)}^{U(x_{i+1})} f(E − U) | x′_i(U) − x′_{i′}(U) | dU ,   (46)

becomes the same for the two potentials U_1 and U_2. In fact, the integrand and the integration limits in (46) are identical for any couple U_1 and U_2 of shear equivalent potentials.

In particular, this shows that T_f is also the same for all potentials related by a shear transformation and that their first semiclassical quantum corrections to the energy levels, given by T_f(E) with f(x) = √x, are also the same.
From the discussion of the previous section it follows that scale transformations (43) also preserve semiclassical corrections to energy levels if p = 1/2 and γ = β. However in such a case the classical system is not isochronous.
However the higher order corrections might break the equivalence at the quantum level. The case B considered in the previous section is the simplest counterexample. The energy levels E B n [16] differ from those of the isoperiodic harmonic oscillator
E^A_n = ω_0 ( n + 1/2 )   (47)

by terms which start at first order in perturbation theory for small values of the anharmonicity parameter α ≪ 1:

E^B_n − E^A_n = ω_0 [ α^2 (3 − α^2) / (4 (1 − α^2)^2) ] ( n + 1/2 )
+ ω_0 [ α^2/(1 − α^2)^2 ] ( (2n + 1)/8 ) [ (2n + 1) ψ( −(−1)^n n/2 + 1/2 ) − (1 + 2n) ψ( (−1)^n (1 + n)/2 + 1/2 ) − 1 ] + O( α^4/(1 − α^2)^4 )

= ω_0 α^2 ( (2n + 1)/8 ) [ 2 − (1 + 2n) ψ( (−1)^n (1 + n)/2 + 1/2 ) + (2n + 1) ψ( −(−1)^n n/2 + 1/2 ) ] + O(α^4) ,

where ψ(x) is the logarithmic derivative of the Euler gamma function Γ(x). The above perturbative expression for E^B_n agrees to order α^2 with the asymptotic behavior derived from the exact spectral equation [16]

Γ( 3/4 − (1/2)(1 + α) E^B_n ) / Γ( 3/4 − (1/2)(1 − α) E^B_n ) + [ √(1 + α) Γ( 1/4 − (1/2)(1 + α) E^B_n ) ] / [ √(1 − α) Γ( 1/4 − (1/2)(1 − α) E^B_n ) ] = 0 .   (48)
One particular case where the quantum energy levels remain equal is when the shearing function g is constant, g = const. = g_0. In such a case both potentials are related by a simple translation U_2(x) = U_1(x − g_0), which obviously does not change the quantum spectrum of the Hamiltonian. Further non-trivial examples can be obtained by means of the Darboux method.
Shear deformation and Darboux transform
The quantum spectrum is also the same, up to a shift, for two potentials related by a shear transformation when they can be written in the form

U_1(x) = (ħ^2/2m) ( W(x)^2 − W′(x) ) − a_1 ;   U_2(x) = (ħ^2/2m) ( W(x)^2 + W′(x) ) − a_2 ,   (49)

in terms of a common superpotential W(x) with lim_{x→±∞} W(x) = +∞ and two constants a_1 and a_2. Potentials of such a type are not only related by a classical shear transformation but are also related by a quantum Darboux transformation [24], which guarantees that the corresponding spectra of the Hamiltonians

H_i = p^2/(2m) + U_i ,   i = 1, 2 ,

are almost identical³.
The Case D of Section 3 is also an example of such a type. In fact, choosing

W(x) = 1/x + x   (50)

and a_1 = 1 + 2√2, a_2 = 3, ħ^2 = 2m, we have the two potentials of Case D

U_1(x) = 2/x^2 + x^2 − 2√2 ;   U_2(x) = x^2 ,   (51)
provided we fix m ω^2 = 2 and α = √2 for simplicity. It is also clear from the discussion of the previous section that both potentials are related by the shear transformation

g(U) = sqrt(U)/2 − sqrt(4√2 + U)/2 .   (52)
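A direct check (ours) that the superpotential (50), with the stated constants and the convention ħ^2/(2m) = 1, reproduces the pair (51) through the Darboux construction (49):

```python
import math

# Superpotential W(x) = 1/x + x of (50); hbar^2/(2m) = 1 is assumed.
def W(x):  return 1.0 / x + x
def dW(x): return -1.0 / (x * x) + 1.0

a1, a2 = 1.0 + 2.0 * math.sqrt(2.0), 3.0

def U1(x): return W(x) ** 2 - dW(x) - a1    # should equal 2/x^2 + x^2 - 2*sqrt(2)
def U2(x): return W(x) ** 2 + dW(x) - a2    # should equal x^2

for x in (0.2, 0.9, 1.5, 4.0):
    assert abs(U1(x) - (2.0 / x ** 2 + x ** 2 - 2.0 * math.sqrt(2.0))) < 1e-12
    assert abs(U2(x) - x ** 2) < 1e-12
```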
More generally, for any choice of the superpotential W with parity invariance W(x) = W(−x), i.e. W of the form W(x) = K(x^2), it can be shown that the corresponding potentials U_1, U_2 are related by the parity symmetry U_1(x) = U_2(−x). If U_1 and U_2 are convex functions they are obviously related by a shear transformation.

However, it should also be emphasized that not every pair of potentials of the form (49) related by a Darboux transform is necessarily related by a classical shear transformation. A simple counterexample is given by W(x) = x^4 − x. In that case one gets the potentials

U_1(x) = x^8 − 2x^5 − 4x^3 + x^2 + 1   (53)

and

U_2(x) = x^8 − 2x^5 + 4x^3 + x^2 − 1 ,   (54)

respectively. It is obvious from Figure 7 that both potentials are not shear related. This illustrates that the generalization of Theorems 1 and 2 to the quantum case is more sophisticated.
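The counterexample pair can be checked in the same way (our snippet; again ħ^2/(2m) = 1 and a_1 = a_2 = 0 are assumed):

```python
# Superpotential W(x) = x^4 - x reproduces the pair (53)-(54).
def W(x):  return x ** 4 - x
def dW(x): return 4.0 * x ** 3 - 1.0

def U1(x): return W(x) ** 2 - dW(x)
def U2(x): return W(x) ** 2 + dW(x)

for x in (-2.0, -0.4, 0.0, 1.3, 2.5):
    assert abs(U1(x) - (x ** 8 - 2 * x ** 5 - 4 * x ** 3 + x * x + 1)) < 1e-9
    assert abs(U2(x) - (x ** 8 - 2 * x ** 5 + 4 * x ** 3 + x * x - 1)) < 1e-9
```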
There are further examples of quantum isospectral systems which are not classically isoperiodic. A very interesting case is the following [25,26,27,28,29]. Let us consider a standard quantum oscillator (m = ω = 1) with Hamiltonian
H_0 = (1/2)( p^2 + x^2 ) .   (55)
Fig. 7. Pair of isospectral potentials (53) and (54) which are not isoperiodic.
We know that the eigenvalues and eigenstates of such an operator are given by

H_0 ϕ_n(x) = E_n ϕ_n(x) ,   (56)

with

E_n = n + 1/2 ,   n = 0, 1, 2, . . . ,   (57)

and

ϕ_n(x) = ( √π 2^n n! )^{−1/2} H_n(x) exp( −x^2/2 ) ,   (58)
where H n (x) denotes the Hermite polynomial.
Let us now consider the system with the same spectrum, except the lowest eigenvalue E_0 = 1/2:

Ẽ_n = n + 1/2 ,   n = 1, 2, . . . .   (59)
For this we perform a similarity transformation which maps ϕ_0(x) into ϕ̃_0(x) by the formula⁴

ϕ̃_0(x) = ϕ_0(x)/Φ(x) ,   (60)

where

Φ(x) = ∫_x^∞ ϕ_0^2(ξ) dξ = (1/√π) ∫_x^∞ e^{−ξ^2} dξ ,   (61)

for which

Φ′(x) = −ϕ_0^2(x) = −(1/√π) e^{−x^2} .   (62)

Note that

Φ(x) ≈ 1 at x → −∞ ,   Φ(x) ≈ ϕ_0^2(x)/(2x) at x → ∞ ,   (63)

and so,

ϕ̃_0(x) ≈ ϕ_0(x) at x → −∞ ,   ϕ̃_0(x) ≈ 2x/ϕ_0(x) at x → ∞ ,   (64)

and then ∫_{−∞}^{∞} |ϕ̃_0(x)|^2 dx = ∞. Hence,

H_0 ϕ̃_0(x) = (1/2) ϕ̃_0(x)   at x → ±∞ .   (65)

Fig. 8. Potential U(x) = (1/2) x^2 + 4 χ(x)(χ(x) − x) with equally spaced spectrum
The function ϕ̃_0(x) is the solution of the equation

H̃ ϕ̃_0 = (1/2) ϕ̃_0 ,   (66)

where

H̃ = (1/2) p^2 + U(x)   (67)

and

U(x) = U_0(x) + U_1(x)   (68)

with

U_0(x) = (1/2) x^2 ,   U_1(x) = − d^2/dx^2 log[ erfc(x) ] = 4 χ(x)( χ(x) − x ) ,   (69)

where the functions erfc(x) and χ(x) are

erfc(x) = (2/√π) ∫_x^∞ exp(−ξ^2) dξ ,   (70)

which satisfies

erfc(x) ≈ 2 at x → −∞ ,   erfc(0) = 1 ,   erfc(x) ≈ (1/(√π x)) e^{−x^2} at x → ∞ ,   (71)

and

χ(x) = ( √π erfc(x) )^{−1} exp(−x^2) ;   χ(x) ≈ x at x → ∞ .   (72)
Notice that the new potential U(x) = U_0(x) + U_1(x) is neither shear equivalent to the harmonic oscillator (1/2)x^2 nor isochronous. It can be shown that H_0 and H̃ − 1l have the same spectra. Indeed,

H̃ ϕ̃_n(x) = Ẽ_n ϕ̃_n(x) ,   n = 1, 2, . . . ,

where

ϕ̃_n(x) = ϕ_n(x) − sqrt(2/n) χ(x) ϕ_{n−1}(x) ,

i.e. the Hamiltonians H_0 and H̃ − 1l are isospectral. The peculiarity of this case is that the two isospectral potentials are neither classically isoperiodic nor related by a Darboux transformation.
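The construction can be verified numerically (our sketch; we take the sign of U_1 as in the caption of Fig. 8, U = (1/2)x^2 + 4χ(χ − x), and the coefficient sqrt(2/n) in ϕ̃_n as read above). A finite-difference residual of the eigenvalue equation is evaluated for ϕ̃_0 (energy 1/2) and ϕ̃_1 (energy 3/2):

```python
import math

SQPI = math.sqrt(math.pi)
phi0 = lambda x: math.pi ** -0.25 * math.exp(-x * x / 2.0)     # ground state (58)
phi1 = lambda x: math.sqrt(2.0) * x * phi0(x)                  # first excited state
chi  = lambda x: math.exp(-x * x) / (SQPI * math.erfc(x))      # (72)
U    = lambda x: 0.5 * x * x + 4.0 * chi(x) * (chi(x) - x)     # U0 + U1 (Fig. 8 sign)

tphi0 = lambda x: phi0(x) / (0.5 * math.erfc(x))               # phi0/Phi, energy 1/2
tphi1 = lambda x: phi1(x) - math.sqrt(2.0) * chi(x) * phi0(x)  # n = 1, energy 3/2

def residual(psi, E, x, h=1e-4):
    """|(-1/2) psi'' + (U - E) psi| via central differences."""
    d2 = (psi(x + h) - 2.0 * psi(x) + psi(x - h)) / (h * h)
    return abs(-0.5 * d2 + (U(x) - E) * psi(x))

for x in (-1.5, 0.3, 1.0, 2.0):
    assert residual(tphi0, 0.5, x) < 1e-3 * (1.0 + abs(tphi0(x)))
    assert residual(tphi1, 1.5, x) < 1e-3 * (1.0 + abs(tphi1(x)))
```

Note that ϕ̃_0 solves the eigenvalue equation exactly but is not normalizable, which is why the level 1/2 is absent from the spectrum of H̃.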
Finally, it is also remarkable that the two families of isoperiodic rational potentials connected by Joukowski transformations as in Theorem 4 are in general not isospectral. Only in the isochronous case do the half harmonic oscillator and the potentials (51) present the same quantum spectrum.
Fig. 1. Generic convex potential

Fig. 3. Split-harmonic oscillator with two different frequencies ω_1, ω_2

Fig. 6. Smooth well potential U(x) = 1/sin^2(x) with the same energy spectrum as the infinite square well.
¹ This theorem was first proved by Chalykh and Veselov [8]. The proof given below is different.

² Strictly speaking U_g has two branches, one in each half-line of positive/negative values of x ∈ R. Therefore, there is a degeneracy of trajectories which is not present in the convex potential U, which has only one branch.

³ The asymptotic behavior of W guarantees that the ground states of the two systems are in one-to-one correspondence and that none has zero energy for a_1 = a_2 = 0.

⁴ A similar transformation based on modding out by any eigenstate ϕ_n can be formally achieved, but because of the existence of nodes in the wave function ϕ_n the induced potential is not defined on the whole real line R [29].

Acknowledgments. We thank F.
References

[1] M. Asorey, A. Ibort and G. Marmo, "Global Theory of Quantum Boundary Conditions and Topology Change", Int. J. Mod. Phys. A 20 (2005) 1001-1025.
[2] M. Asorey, A. Ibort and G. Marmo, "Boundary Conditions and Path Integral", Proceedings of A. Galindo Festschrift, eds. Alvarez-Estrada et al., Madrid (2004) 165-173.
[3] N.H. Abel, "Auflösung einer mechanischen Aufgabe", J. Reine Angew. Math. 1 (1826) 153-157.
[4] R. Subramanian and K.V. Bhagwat, "A lower bound for ground-state energy by Steiner symmetrisation of the potential", J. Phys. A: Math. Gen. 20 (1987) 69-78.
[5] S. Bolotin and R.S. MacKay, "Isochronous potentials", in: Localization and Energy Transfer in Nonlinear Systems, pp. 217-224, eds. L. Vázquez, R.S. MacKay and M.P. Zorzano, World Scientific (2003).
[6] F. Calogero, "Two new classes of isochronous Hamiltonian systems", J. Nonlin. Math. Phys. 11 (2004) 208-222.
[7] Ch. Huygens, "Horologium Oscillatorium", Paris (1673).
[8] O.A. Chalykh and A.P. Veselov, "A remark on rational isochronous potentials", J. Nonlin. Math. Phys. 12 Suppl. 1 (2005) 179-183.
[9] V.M. Eleonskii, V.G. Korolev and N.E. Kulagin, "On a classical analog of the isospectral Schrödinger problem", JETP Lett. 65 (1997) 889-893.
[10] J. Dorignac, "On the quantum spectrum of isochronous potentials", J. Phys. A: Math. Gen. 38 (2005) 6183-6210.
[11] L.D. Landau and E.M. Lifshitz, "Mechanics", Pergamon Press, London (1981).
[12] B.F. Kimball, "Three theorems applicable to vibration theory", Bull. Amer. Math. Soc. 38 (1933) 718-723.
[13] B.F. Kimball, "Note on a previous paper", Bull. Amer. Math. Soc. 39 (1933) 386.
[14] A.H. Carter, "A class of inverse problem in physics", Amer. J. Phys. 68 (2000) 698-703.
[15] P. Appell, "Traité de mécanique rationnelle", Vol. 1, Gauthier-Villars, Paris (1902).
[16] F.H. Stillinger and D.K. Stillinger, "Pseudoharmonic oscillators and inadequacy of semiclassical quantization", J. Phys. Chem. 93 (1989) 6890-6892.
[17] E.T. Osypowski and M.G. Olsson, "Isynchronous motion in classical mechanics", Amer. J. Phys. 55 (1987) 720-725.
[18] P. Mohazzabi, "On classical and quantum harmonic potentials", Can. J. Phys. 78 (2000) 937-946.
[19] G. Ghosh and R.W. Hasse, "Inequivalence of the classes of quantum and classical harmonic potentials: Proof by example", Phys. Rev. D 24 (1981) 1027-1029.
[20] M.M. Nieto and L.M. Simmons, "Coherent states for general potentials. I. Formalism", Phys. Rev. D 20 (1979) 1321-1331.
[21] M.M. Nieto and L.M. Simmons, "Coherent states for general potentials. II. Confining one-dimensional examples", Phys. Rev. D 20 (1979) 1332-1341.
[22] M.M. Nieto and V.P. Gutschick, "Inequivalence of the classes of classical and quantum harmonic potentials: Proof by example", Phys. Rev. D 23 (1981) 922-926.
[23] R.W. Robinett, "Quantum Mechanics", Oxford University Press (1997).
[24] G. Darboux, "Sur une proposition relative aux équations linéaires", Comptes Rendus 94 (1882) 1456-1459.
[25] P.B. Abraham and H.E. Moses, "Changes in potentials due to changes in the point spectrum: Anharmonic oscillators with exact solutions", Phys. Rev. A 22 (1980) 1333-1340.
[26] B.M. Levitan, "Sturm-Liouville operators on the entire real axis with the same discrete spectrum", Math. USSR-Sb. 60 (1988) 77-106.
[27] H.P. McKean and E. Trubowitz, "The isospectral class of the quantum mechanical harmonic oscillator", Commun. Math. Phys. 82 (1981) 471-495.
[28] A.M. Perelomov and Ya.B. Zel'dovich, "Quantum Mechanics: Selected Topics", World Scientific (1998).
[29] R. Jost and W. Kohn, "Equivalent potentials", Phys. Rev. 88 (1952) 382-385.
|
[] |
[
"Modelling depth for nonparametric foreground segmentation using RGBD devices",
"Modelling depth for nonparametric foreground segmentation using RGBD devices"
] |
[
"Gabriel Moyà-Alcover ",
"Ahmed Elgammal ",
"Antoni Jaume-I-Capó ",
"Javier Varona "
] |
[] |
[] |
The problem of detecting changes in a scene and segmenting the foreground from background is still challenging, despite previous work. Moreover, new RGBD capturing devices include depth cues, which could be incorporated to improve foreground segmentation. In this work, we present a new nonparametric approach where a unified model mixes the device multiple information cues. In order to unify all the device channel cues, a new probabilistic depth data model is also proposed where we show how handle the inaccurate data to improve foreground segmentation. A new RGBD video dataset is presented in order to introduce a new standard for comparison purposes of this kind of algorithms. Results show that the proposed approach can handle several practical situations and obtain good results in all cases.
|
10.1016/j.patrec.2016.09.004
|
[
"https://arxiv.org/pdf/1609.09240v1.pdf"
] | 18,418,435 |
1609.09240
|
b5e0676cd40e6fdddf4abc6183268e126242652b
|
Modelling depth for nonparametric foreground segmentation using RGBD devices
Gabriel Moyà-Alcover
Ahmed Elgammal
Antoni Jaume-I-Capó
Javier Varona
Modelling depth for nonparametric foreground segmentation using RGBD devices
1
The problem of detecting changes in a scene and segmenting the foreground from background is still challenging, despite previous work. Moreover, new RGBD capturing devices include depth cues, which could be incorporated to improve foreground segmentation. In this work, we present a new nonparametric approach where a unified model mixes the device multiple information cues. In order to unify all the device channel cues, a new probabilistic depth data model is also proposed where we show how handle the inaccurate data to improve foreground segmentation. A new RGBD video dataset is presented in order to introduce a new standard for comparison purposes of this kind of algorithms. Results show that the proposed approach can handle several practical situations and obtain good results in all cases.
Introduction
Background subtraction is a widely used technique for detecting moving foreground objects in image sequences. It is considered the first step in many computer vision algorithms. Foreground segmentation, provides an important cue for numerous applications in computer vision, such as surveillance, tracking, recognition and human pose estimation. The main objective is to detect objects that do not belong to the scene by comparing the current observation with previous references. This reference can be a single image or a more complex model of the real scene, called scene model. A scene model is a statistical representation of the scene, and it is updated to adapt to variations of its conditions. This problem has been widely addressed in the literature. Reviews can be found in [? ? ]. Despite this previous research, there is no universal technique covering all requirements of applications for which the foreground of a scene must be detected [? ]. In [? ] several important challenges of background subtraction were described. Some of them are strongly related to the nature of color information, such as: shadows, changes in scene illumination, camouflage and foreground aperture. These problems continue to be challenging for modern approaches, as described in [? ], where 29 different algorithms were evaluated and compared.
A feasible solution to overcome these problems consists of adding physical information to the background model. For example, geometrical descriptions of buildings may be added to help to predict shadows [? ].
Different approaches for obtaining 3D information of the scene were proposed using stereo devices or camera networks [? ]. Depth measures provide geometrical information about the scene where each pixel value represents the distance from the device to the point in the real world. To obtain an accurate dense map of correlations between two stereo images, time-consuming stereo algorithms are required. Without specialized hardware, most of these algorithms are too slow for real-time background subtraction. In addition, multi-camera networks introduce other problems, such as camera installation, calibration and data fusion.
Currently, low-cost RGBD devices that are able to capture depth and color images simultaneously at frame rates up to 30 fps are available off the shelf. These devices have certain limitations such as lower sensitivity at long distances, the production of depth camouflage and absent observations due to scene characteristics.
Our aim is to use this type of noisy depth information in a unified model that mixes multiple information cues from the devices. We present a new per-pixel scene modeling approach which uses both depth and color information. We propose a model that keeps a sample for each pixel of the scene and estimates the probability that a newly observed pixel value belongs to the background. The model estimates these probabilities independently for each new frame. The model is updated in each iteration of the algorithm, depending on partial results. The model adapts itself to changes in the background process and detects targets with high sensitivity. We construct our model using a Kernel Density Estimation (KDE) process. KDE has already been used in other state-of-the-art techniques. In particular, in [? ], KDE has been applied using only color information with good results. When using a Gaussian kernel, the probability density function can be thought of as a generalization of the Gaussian mixture model, in which each single sample is considered to be a Gaussian distribution by itself. However, unlike the Gaussian mixture model, KDE requires no mixture parameters to be estimated. This allows us to estimate the density function more accurately, without assumptions about the density model, depending only on recent information from the sequence.
Adding a depth channel to the KDE background model is not an obvious process because the depth channel differs in its characteristics from color channels. In particular, the depth channel has a significant amount of missing information from instances in which the sensor is unable to estimate the depth at certain pixels. In this paper, we show how to handle the inaccurate depth data in the proposed nonparametric scene model. For this purpose, we properly define the absent depth observations to include them in the scene model. The key idea is that pixels that cannot be classified as background or foreground are classified in a new undefined class. Therefore, absent observations can be handled in a unified manner. In addition, after the introduction of depth data, the proposed scene model is capable of instantly detecting the changes in the background objects.
To properly evaluate the proposed method, we built a new dataset inspired by one of the most widely used color-based datasets [? ]. Each of the proposed sequences is focused on one of the main challenges when both color and depth information are used.
The paper is organized as follows. In Section 2, we describe the related work. In Section 3, we describe the challenges of depth data. In Section 4, we define the proposed scene model, and we explain how depth information is used to construct a unified model. Adaptation to scene changes is discussed in Section 5. In Section 6, we describe an experimental configuration of the proposed algorithm. The results of the evaluation are described in Section 7. Finally, we present the conclusions.
Related work
There is a large body of literature on the subject of background subtraction. We refer to some comprehensive surveys about this subject [? ? ? ? ? ]. We focus here on approaches that fuse color and depth information. Most of these techniques modify traditional background subtraction approaches by adding one extra channel for depth (in addition to the color channels) and suggesting some heuristics to address the heterogeneous characteristics of these different cues.
In [? ], the authors proposed an approximation to Gaussian mixture modeling to describe the recent history of color and depth scene observations at each pixel. A multidimensional Gaussian mixture distribution is constructed, with three components in a luminance-normalized color space and one depth channel. Special processing is performed to address absent depth pixels. This enables foreground decisions to be made when the depth model for a pixel is invalid but its latest depth observation is valid and it is connected to regions where foreground decisions have been made in the presence of valid background data. No update phase is described; therefore, this algorithm can only be used in static scenes.
In [? ], a new Mixture of Gaussians approach is proposed, where depth and infrared data are combined to detect foreground objects. Two independent background models are built using depth and infrared information. Each pixel is classified by binary combinations of foreground masks. The performance of this approach is limited because a failure of one of the models affects the final pixel classification.
Camplani et al. [? ] proposed a per-pixel background modeling approach that fuses different statistical classifiers based on depth and color data by means of a weighted average combination that takes into account the characteristics of depth and color data. A mixture of Gaussian distributions is used to model the background pixels, and a uniform distribution is used for modeling the foreground. The same authors presented another approach in [? ] based on the fusion of multiple region-based classifiers. Foreground objects are detected by combining a region-based foreground depth data prediction with different background models, providing color and depth descriptions of the scene at the pixel and region levels. The information given by these modules is fused in a mixture-of-experts fashion to improve the foreground detection accuracy.
ViBe is a per-pixel algorithm based on a Parzen-windows-like process [? ]. The update is performed by a random process that substitutes old pixel values with new ones and then samples the spatial neighbourhoods to refine the per-pixel estimates. ViBe gives acceptable detection results in many scenarios, but it has problems with challenging scenarios such as darker backgrounds, shadows and frequent background changes [? ]. In [? ], a new ViBe approach is presented using RGB and ToF (Time-of-Flight) cameras. Each model is processed independently; the foreground masks are then combined using logical operations and post-processed with morphological operators.
An adaptation of the Codebook [? ] background subtraction algorithm was proposed by [? ] fusing depth and color information to segment foreground regions. A four-channel codebook was used. Depth information is also used to bias the distance in chromaticity space associated with a pixel according to the depth measurements. Therefore, when the depth value is invalid, the detection depends entirely on color information. Their results were tested on a public access database with four different challenging sequences. For each frame in the dataset, depth information was normalized from 0 to 255, where 255 is the maximum depth value in that frame, with the resulting loss of information.
In [? ] the authors presented a background subtraction technique in which a four-dimensional Gaussian distribution was used as the first step of the user identification and object recognition surveillance system. No special processing was performed to address absent depth observation pixels. As they used a single Gaussian approximation, the algorithm was not able to manage multi-modal backgrounds. A similar problem can be observed in other approaches, such as [? ] and [? ].
In the related work there is no general purpose RGBD dataset that covers all of the desirable types of sequences, with which to properly evaluate a scene modeling algorithm. Each algorithm is evaluated using its own dataset and different metrics. That makes it impossible to perform a unified comparison between the different methods. For that purpose, we propose a comprehensive dataset that covers the challenges that occur when combining depth and color information.
Challenges of depth data
Depth sensors provide partial geometrical information about the scene, where each pixel depth value is proportional to the estimated distance from the device to the point in the real world. Among several technologies, recently, two types of consumer depth sensors have become widely popular and accessible: sensors based on structured light and on time-of-flight.
Structured light sensors consist of an infrared (IR) emitter and an IR camera. They estimate depth by structured light coding technology: the IR emitter projects an IR speckle pattern onto the scene, and the IR camera captures the reflected pattern and correlates it against a stored reference pattern on a plane. These sensors lack sensitivity and are not able to estimate depth at all pixels in the scene. The noise in depth measurements increases quadratically with increasing distance from the sensor [? ].
Time-of-flight sensors resolve the distance based on the known speed of light. Depth is proportional to the time needed by the active illumination source to travel from emitter to target; typically, IR light is used for this purpose. This technology provides better accuracy than structured light sensors and is less susceptible to generating shadows in the scene. Noise can be well approximated by means of a normal distribution [? ].
Independently of which technology is used, depth data estimated by these devices suffer from several problems, which we describe here. Fig. 1 illustrates examples of these problems.
1. Depth camouflage (Fig. 1-a): Due to sensor sensitivity, when the foreground and background are close in depth, the sensor gives the same depth data values. This makes it hard to segment the foreground from the background based on depth.
2. Specular materials (Fig. 1-b): Rays from a single incoming direction are reflected back in a single outgoing direction without causing the diffusion needed to obtain depth information.
3. Near objects (Fig. 1-c): Sensors have minimum depth specifications. Due to the proximity of the foreground objects, the sensor is unable to measure depth. Typically, both structured light sensors and time-of-flight sensors have a depth limit of 0.5 meters.
4. Remote parts of the scene (Fig. 1-d): Sensors have maximum distances at which they can detect depth. Parts of the scene farther than this distance appear as gaps in depth images.
5. Non-reachable areas (Fig. 1-e): Depending on the imaging geometry and the sensor position, parts of the background may be occluded. This makes the sensor unable to estimate the depths at these locations.
6. Shadows (Fig. 1-f): Foreground objects block the active light emitted by the sensor from reaching the background, which causes shadows to be cast on the background. Thus the sensors are unable to estimate the depth at these blocked regions. Therefore, RGBD sensors exhibit two different types of shadows: visible-light shadows in the RGB channels, and IR shadows in the depth channel. These two types of shadows differ in their geometries and in their spatial extents in the image.
When depth cannot be measured at a given pixel, as in cases 2 to 6 above, the sensor returns a special non-value code to indicate its inability to measure depth. Such pixels appear as holes in the images with absent depth value. In this paper we denote these pixels as Absent Depth Observations (ADO).
In the next section, we define a scene model that manages these data issues using both color and depth information in a unified way.
Non-parametric Scene Model
Statistical background model
Our model is based on recent scene information. Given the last n observations of a pixel, denoted by x_i, i = 1, ..., n, in the d-dimensional observation space R^d that encloses the sensor data values, it is possible to estimate the probability density function (pdf) of the pixel with respect to all previously observed values [? ? ]
$$P(x) = \frac{1}{n}\,|H|^{-\frac{1}{2}} \sum_{i=1}^{n} K\!\left(H^{-\frac{1}{2}}(x - x_i)\right), \qquad (1)$$
where K is a multivariate kernel satisfying $\int K(x)\,dx = 1$ and $K(u) \geq 0$, and H is the bandwidth matrix, a symmetric positive definite d×d matrix.
The choice of the bandwidth matrix H is the single most important factor affecting the estimation accuracy, because it controls the amount and orientation of the induced smoothing [? ]. Diagonal bandwidth matrices allow different amounts of smoothing in each of the dimensions and are the most widespread for computational reasons [? ]. The most commonly used kernel density function is the Normal function; in our approach N(0, H) is selected, with

$$H = \begin{pmatrix} \sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_d^2 \end{pmatrix},$$

where σ_j^2 is the bandwidth of the kernel in the j-th dimension; i.e., independence between the different channels is assumed. The final probability density function can be written as
$$P(x) = \frac{1}{n} \sum_{i=1}^{n} \prod_{j=1}^{d} \frac{1}{\sqrt{2\pi\sigma_j^2}}\; e^{-\frac{1}{2}\frac{(x_j - x_{i,j})^2}{\sigma_j^2}}. \qquad (2)$$
Given this estimate at each pixel, a pixel is considered foreground if its probability is under a certain threshold.
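As a concrete sketch, Eq. (2) can be evaluated for a single pixel with a few lines of NumPy. The variable names, bandwidth values and sample history below are illustrative, not taken from the paper:

```python
import numpy as np

def kde_probability(x, samples, sigma2):
    """Eq. (2): product-of-Gaussians KDE over the n stored samples.

    x       : (d,) current observation at the pixel
    samples : (n, d) recent history x_1..x_n at the pixel
    sigma2  : (d,) per-channel kernel bandwidths sigma_j^2
    """
    diff2 = (samples - x) ** 2                      # (n, d)
    norm = 1.0 / np.sqrt(2.0 * np.pi * sigma2)      # (d,)
    kernels = norm * np.exp(-0.5 * diff2 / sigma2)  # (n, d)
    return np.mean(np.prod(kernels, axis=1))        # average of per-sample products

# a pixel whose history is tightly clustered -> high density for matching input
history = np.array([[0.30, 0.33, 900.0]] * 10)      # (r, g, D) samples
bw = np.array([1e-4, 1e-4, 25.0])
p_bg = kde_probability(np.array([0.30, 0.33, 900.0]), history, bw)
p_fg = kde_probability(np.array([0.60, 0.10, 700.0]), history, bw)
assert p_bg > p_fg  # the foreground-like observation gets far lower density
```

An observation is then labelled foreground when its density falls below the chosen threshold.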
To estimate the kernel bandwidth σ_j^2 for the j-th dimension of a given pixel, similarly to [? ], we compute the median absolute deviation over consecutive values of the pixel. That is, the median, m_j, of the absolute differences of each consecutive pair in the data is calculated independently for each dimension. Because we are measuring deviations between two consecutive values, each pair usually comes from the same local-in-time distribution, and only a few pairs are expected to come from cross distributions. Assuming that this local-in-time distribution is Normal, N(µ, σ_j^2), the deviation between consecutive values is Normal, N(0, 2σ_j^2). Therefore, the standard deviation of the first distribution can be estimated as
$$\sigma_j = \frac{m_j}{0.68\sqrt{2}}. \qquad (3)$$
To create a fast implementation of the algorithm, the probability is estimated given the pixel value difference and the Kernel function bandwidth using a precalculated lookup table.
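The bandwidth estimate of Eq. (3) can be sketched directly from a pixel's history. The synthetic sanity check below is our own, not from the paper; it verifies that the estimator recovers the standard deviation of a stationary Normal stream:

```python
import numpy as np

def estimate_bandwidth(samples):
    """Eq. (3): sigma_j = m_j / (0.68 * sqrt(2)), where m_j is the median
    absolute difference of consecutive samples in channel j.

    samples : (n, d) recent pixel history; returns (d,) sigma_j
    """
    m = np.median(np.abs(np.diff(samples, axis=0)), axis=0)  # (d,)
    return m / (0.68 * np.sqrt(2.0))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 2.0, size=(5000, 1))  # N(0, sigma=2) stream
sigma = estimate_bandwidth(x)[0]
assert abs(sigma - 2.0) < 0.2             # recovers the true std. dev.
```

Because the estimator looks only at consecutive differences, slow drifts in the pixel's mean do not inflate the bandwidth, which is the point of the local-in-time assumption.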
Prior to the use of this scene model, it is necessary to perform a training stage in which models of color and depth information are learned and the bandwidth of each channel used is calculated at each pixel.
Depth data modeling
The scene model cannot be applied in the standard way, in which depth would simply be treated as a fourth channel in addition to RGB, because the sensor's ADO require special treatment. These ADO can introduce errors into our model, as into any typical background model. A pixel can be ADO throughout the sequence or switch randomly between ADO and a valid value.
We distinguish two categories of ADO:
• ADOs provoked by the scene's physical configuration. They belong to the background, even in the absence of foreground objects. These include specular background materials, remote parts of the scenes and non-reachable areas.
• ADOs caused by the foreground objects. These include nearby objects, specular foreground objects and shadows.
We want to differentiate these two classes of ADO pixels (see Fig. 1). We propose a probabilistic model, which we call the ADO model. The probability of a pixel being ADO and belonging to the background model is denoted by P A . This probability is calculated for the depth component of each pixel D. The ADO model is updated for each pixel during the training stage. P A is calculated recursively as follows:
$$P_A(D_0) = 0, \qquad P_A(D_t) = \alpha \cdot \text{mask}_t + (1 - \alpha) \cdot P_A(D_{t-1}), \qquad (4)$$
where α ∈ [0, 1] is the update rate, and mask_t is a binary value from an ADO mask in which each pixel has a value of 1 if D_t is ADO and 0 otherwise. We try to avoid adding an ADO pixel to the background model, so we selected a strategy based on overwriting the pixel with the previous value. Let D_t, t = 1, ..., n, be the n most recently sampled depth values at a given pixel; D_t is computed as follows:
$$D_t = \begin{cases} D_{t-1}, & \text{if } D_t = \text{ADO} \\ D_t, & \text{otherwise,} \end{cases} \qquad (5)$$
where for D 0 , we use the inpainting strategy suggested by [? ], in which the initial image reconstruction algorithm of [? ] tries to estimate the correct values for ADO regions (see Fig. 2). The ADO model is calculated for each pixel during the training phase. Fig. 3 depicts the ADO model's computation process. Pixels with P A higher than a threshold θ are overwritten with a previous value and incorporated into the background model. The other pixels remain undefined and are then classified as the undefined class.
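A minimal sketch of the ADO model of Eqs. (4)-(5). The update rate α = 0.05 is an illustrative assumption (the paper does not fix it here), and the ADO code 650 is the Kinect value given in Section 6:

```python
def update_ado_model(p_a, is_ado, alpha=0.05):
    """Eq. (4): running estimate of the probability that a pixel is ADO.
    alpha is an assumed illustrative update rate."""
    return alpha * (1.0 if is_ado else 0.0) + (1.0 - alpha) * p_a

def overwrite_ado(d_t, d_prev, ado_code=650):
    """Eq. (5): replace an absent depth observation with the previous value."""
    return d_prev if d_t == ado_code else d_t

# a pixel that is ADO in every frame converges towards P_A = 1,
# so it would exceed the threshold theta and join the background model
p_a = 0.0
for _ in range(200):
    p_a = update_ado_model(p_a, is_ado=True)
assert p_a > 0.99

# a valid reading passes through; an ADO reading is overwritten
assert overwrite_ado(870, 900) == 870
assert overwrite_ado(650, 900) == 900
```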
Background moving object detection
In real scenes, a background object can be moved. Such an area should not be considered part of the background forever; the scene model has to adapt and understand that the scene layout may be physically updated [? ]. Typically, in background subtraction algorithms, the new background is incorporated into the model at a speed that depends on the corresponding update rate. By introducing depth data values, we present a new approach which permits instantaneous pixel classification.
The idea is based on the fact that if a new depth observation is located farther than the modelled values, this is probably because it became part of the background when some object was removed from the scene. To detect these changes, we compare the difference between this new observation and the previous observation with the background model, and we check whether that difference is larger than the modelled one for each pixel. The cumulative density function (cdf) over the absolute difference of two consecutive observations of a pixel allows us to formalize this idea.
Given the absolute differences between consecutive depth observations of a pixel, $V_i = |D_i - D_{i-1}|$, $i = 1, \ldots, n$, we define $P(k) = \frac{1}{n}\,\#\{V_i : V_i = k\}$ and

$$F_x(k) = \sum_{j=1}^{k} P(j). \qquad (6)$$
Finally, given a new observation D_t and the stored observations D_1, ..., D_n, we define the i-th component of the evaluation set C_{D_t} as the difference thresholded at zero. That is,
$$C_{D_t,i} = \begin{cases} D_t - D_i, & \text{if } D_t - D_i > 0 \\ 0, & \text{otherwise,} \end{cases} \qquad (7)$$
for all i = 1, ..., n. The evaluation function for background moving-object detection is
$$U(D_t) = \frac{1}{n} \sum_{k \in C_{D_t}} F_x(k). \qquad (8)$$
If U(D_t) is higher than a predefined threshold, ξ, the pixel is considered part of the background. This detection is especially relevant because physical changes in the scene are recognized in the very frame in which they occur.
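The rule in Eqs. (6)-(8) amounts to averaging an empirical cdf of consecutive depth differences over the thresholded set C_{D_t}. A sketch, with a made-up pixel history and an illustrative threshold of 0.6 standing in for ξ:

```python
import numpy as np

def moving_background_score(d_t, history):
    """Eqs. (6)-(8): score U(D_t) from the empirical cdf of consecutive
    absolute depth differences V_i = |D_i - D_{i-1}| at the pixel."""
    v = np.abs(np.diff(history))             # consecutive differences V_i
    c = np.maximum(d_t - history, 0)         # Eq. (7): differences clipped at 0
    f = np.array([np.mean(v <= k) for k in c])  # Eq. (6): empirical cdf F_x
    return f.mean()                          # Eq. (8)

history = np.array([900.0, 902.0, 899.0, 901.0, 900.0, 903.0])
# observation far *behind* the model -> the object was removed, high score
assert moving_background_score(1400.0, history) > 0.6
# observation in front of the model -> all differences clip to 0, low score
assert moving_background_score(700.0, history) < 0.6
```

Only observations farther than the modelled surface raise the score, which is exactly the "object removed" case the text describes.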
Model update
In previous sections, we detailed how to detect foreground regions of a scene given a recent history of samples for each pixel. This model needs to be updated to properly respond to changes in the scene. Because the kernel bandwidth estimation requires all of the samples to be close in time, the update is performed using a first-in first-out queue: the oldest sample is discarded and a new sample is added to the model. Different updating strategies are used to keep the model updated. On one hand, color information tends to have quick variations due to shadows and varying luminance; therefore, we consider it an unstable model. On the other hand, depth information tends to be more stable.
Color update
The intensity distribution of the color information can change dramatically over very short periods of time [? ]. For each frame, the color model is updated so the model can adapt very quickly to changes in the background process. A new observation is added to the model only if it is classified as a background sample. If a pixel is updated with the foreground color value, the error will be propagated over time and misclassification problems will appear. To avoid the adaptation of the model to the foreground object characteristics, a higher threshold is proposed to relax the condition and avoid updating pixels that are very close to belonging to the foreground.
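The selective color update described above can be sketched as a first-in first-out queue with a stricter acceptance condition. The threshold value and margin factor below are illustrative assumptions, not the paper's parameters:

```python
from collections import deque

def update_color_model(model, new_obs, prob, th_fg=1e-6, margin=10.0):
    """Selective FIFO update of the per-pixel colour history.

    The observation is appended (and the oldest sample dropped) only when
    its probability is safely above the foreground threshold; the `margin`
    factor implements the stricter update condition described in the text.
    th_fg and margin are illustrative values, not taken from the paper.
    """
    if prob > th_fg * margin:          # clearly background -> update
        model.append(new_obs)          # deque(maxlen=n) discards the oldest
    return model

model = deque([(0.30, 0.33)] * 5, maxlen=5)
update_color_model(model, (0.31, 0.32), prob=1e-3)   # background: accepted
assert (0.31, 0.32) in model
update_color_model(model, (0.90, 0.05), prob=1e-8)   # near-foreground: rejected
assert (0.90, 0.05) not in model
```

Rejecting borderline observations keeps foreground colours from leaking into the model and being propagated over time.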
Depth update
Unlike color, depth information represents a stable long-term description of the scene. Therefore, it is not necessary to update the model for each frame as pixel values do not change as fast as color values. Pixels detected as a part of a background moving object are automatically classified as background and their models are updated. In fact, updates to the depth model are highly related to physical changes in the real-world scene; therefore, pixels detected as background moving objects (see Section 4.3) are selected to be updated. In addition, the ADO model is updated for these pixels during this update phase.
Generic Scene Modeling (GSM)
In this Section, we describe an experimental configuration of the scene modeling algorithm for evaluation purposes. Specifically, we explain the sensor color and depth inputs, including the algorithm parameters used. Finally, we define the generic scene model. In Fig. 4 the complete algorithm details are given.
Depth input
To evaluate the previously defined scene model algorithm, a Microsoft Kinect 1 sensor is used as the RGBD device. The device's technology is based on structured light: the image processor of the sensor uses the relative positions of the dots in the pattern to calculate the depth displacement at each pixel position in the image [? ]. We use the sensor's continuous raw depth information, with D ∈ [650, 1500], which corresponds to a valid depth range between 0.5 and 4.5 meters. In addition, for this sensor, all ADO have the same value of 650.
Color input
Usually color information is useful for suppressing shadows from detection by separating color information from lightness information. To construct a robust algorithm that is independent of illumination variations, we separated color information from luminance information using a non-luminance dependent color space. Then, color is defined as a combination of luminance, hue and saturation. Chromaticity is the description of a color ignoring its luminance, and it can be described as a combination of hue and saturation. Given the device's three color channels R, G, B, the chromaticity coordinates r, g and b are:
r = R/(R + G + B), g = G/(R + G + B), b = B/(R + G + B), where r + g + b = 1 [? ]. In our model we use two dimensions: r and g.
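The conversion to normalized chromaticity can be sketched as follows; the zero-sum guard for black pixels is our own addition:

```python
def chromaticity(r, g, b):
    """Normalized chromaticity coordinates (r, g); luminance is factored out."""
    s = r + g + b
    if s == 0:                 # avoid division by zero on pure black pixels
        return 0.0, 0.0
    return r / s, g / s

# the same surface under half the illumination keeps its chromaticity,
# which is what makes the representation robust to lighting changes
assert chromaticity(120, 80, 40) == chromaticity(60, 40, 20)
```

Since r + g + b = 1, keeping only r and g loses no chromatic information.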
Evaluation
The evaluation is performed using two implementations of the proposed GSM algorithm: GSM_UB is used if undefined pixels are considered background, and GSM_UF is used if undefined pixels are considered foreground. We used two datasets to perform the tests. First, we perform the comparison using a dataset that emphasizes camouflage and shadow problems. We selected this dataset because it facilitates comparison with other background subtraction algorithms that use both color and depth information [? ].
Second, a new RGBD sequence dataset was built, inspired by the Wallflower [? ] dataset, one of the most widely used color-based background subtraction datasets. This dataset is built to test all the background subtraction issues described in Wallflower, in addition to the new depth challenges described in Section 3.
Different metrics are used to measure the algorithm's performance in each test. All are based on true positives (TP), which count the number of correctly detected foreground pixels; false positives (FP), which count the number of background pixels incorrectly classified as foreground; true negatives (TN), which count the number of correctly classified background pixels; and false negatives (FN), which count the number of foreground pixels incorrectly classified as background.
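All of the measures used later in the evaluation derive from these four counts; a sketch of the standard definitions matching the CDnet-style metric names used in Section 7:

```python
def detection_metrics(tp, fp, tn, fn):
    """Standard change-detection measures derived from the four counts."""
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    fpr = fp / (fp + tn)                            # false positive ratio
    fnr = fn / (tp + fn)                            # false negative ratio
    pwc = 100.0 * (fp + fn) / (tp + fp + tn + fn)   # % wrong classifications
    f_measure = 2 * precision * recall / (precision + recall)
    return {"recall": recall, "specificity": specificity,
            "precision": precision, "fpr": fpr, "fnr": fnr,
            "pwc": pwc, "f-measure": f_measure}

m = detection_metrics(tp=80, fp=20, tn=880, fn=20)
assert abs(m["recall"] - 0.8) < 1e-9
assert abs(m["f-measure"] - 0.8) < 1e-9
assert abs(m["pwc"] - 4.0) < 1e-9
```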
Camplani Dataset
In [? ], the authors presented a six-video RGBD dataset with hand-labelled ground truth (see Fig. 5 and Table 1). The authors compared eight different background subtraction algorithms: the Camplani algorithm, CL_W; two weak classifiers, CL_C and CL_D, defined in their paper; four different implementations of mixture of Gaussians; and ViBe. To perform the evaluation, they used the following measures: FN; FP; total error (TE), the total number of misclassified pixels normalized with respect to the image size; a similarity measure (S), a non-linear measure that fuses FP and FN; and a similarity measure in object boundaries (S_B). S is close to 1 if the detected foreground regions correspond to the real ones; otherwise its value is close to 0. S_B is calculated similarly to S, but considers only a 10-pixel-wide band around the ground-truth object boundaries. Finally, two different metrics are used to rank the accuracy of the analyzed algorithms: RM ranks each method for each performance metric on one sequence, and RC, a global ranking of the algorithms across different sequences, is the mean of RM for each method across all of the sequences. In Fig. 6, global results are depicted: for each algorithm the ranking of each sequence is shown (RM), and the RC classification is then performed to establish a global result. Both GSM and CL_W obtain the best results according to the RC ranking. To understand the global results, we analyze the performances of both algorithms for each sequence.
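The RM/RC scheme can be sketched as ranking each method per sequence and averaging the ranks. The error matrix below is made-up illustrative data, not results from the paper:

```python
import numpy as np

def rank_methods(errors):
    """RM/RC-style ranking: rank methods (rows) per sequence (columns) by
    ascending error (rank 1 = best), then average ranks across sequences."""
    ranks = np.argsort(np.argsort(errors, axis=0), axis=0) + 1  # per-column RM
    return ranks.mean(axis=1)                                   # RC per method

# three methods evaluated on two sequences (lower error is better)
errors = np.array([[0.10, 0.30],
                   [0.20, 0.10],
                   [0.30, 0.20]])
rc = rank_methods(errors)
assert list(rc) == [2.0, 1.5, 2.5]  # the second method is best overall
```

Averaging ranks rather than raw errors keeps any single sequence from dominating the global comparison.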
The results in Table 2 show that GSM_UF has higher FN due to classifying the entire undefined pixel class as foreground. GSM_UB achieves the best results. In addition, both of our proposed solutions achieve better values in contours (S_B) than CL_W.
The sequence ColCamSeq (see Table 4) gives results opposite to those of GenSeq: GSM_UB has higher FN due to the misclassification of all undefined pixels as background. Again, our method gives better results than the CL_W algorithm for both sequences in contours (S_B) and in the similarity measure (S).
Our algorithm also achieves the best results in depth camouflage situations (see Table 3), due to the combination of two types of information, color and depth, in the same model.
The results in Table 5 show that, for shadows, our method obtains higher values of FP than the CL_W algorithm; in the other measurements the proposed method gives better results.
GSM dataset
The Camplani dataset does not permit the proper evaluation of scene modeling algorithms due to the impossibility of evaluating over illumination changes, bootstrapping or waking objects, as there are no specific sequences with which to evaluate these issues. We built a new RGBD dataset to enable algorithm comparison and for generalization purposes. This dataset includes 7 different sequences (see Table 6) designed to test each of the main problems in scene modeling when both color and depth information are used. Each sequence starts with 100 training frames and has a hand-labelled foreground ground truth. In Fig. 7, examples of each sequence of the dataset can be found. Dataset and algorithm details can be found on gsm.uib.es.
As for performance measures, we computed them using the framework of the CVPR 2014 CDnet challenge [? ], which implements the following seven measures: recall, specificity, false positive ratio (FPR), false negative ratio (FNR), percentage of wrong classifications (PWC), precision and f-measure. Tables 7 and 8 show the specific results of GSM for each metric and sequence.
A ranking of the tested algorithms is also computed, starting from the partial ranks on these measures (see Table 9). We use the RM and RC metrics, as in the previous evaluation. We evaluate our proposed algorithm against three different fusion algorithms: ViBe [? ], the mixture of Gaussians (MoG) implementation in the OpenCV library by Zivkovic [? ], and a background subtraction algorithm [? ] that uses a Gaussian kernel (KDE). Following the CDnet rules, each algorithm uses a single set of parameters.
It is important to notice that adding depth information leads to more robust scene modeling algorithms, due to the invariance of depth information to different types of illumination changes and to the greater sensitivity of color information in cases of depth camouflage or ADO situations.
To understand the global results, we analyze the performance of these algorithms for each sequence (see Fig. 8). The proposed algorithm has good results when we test different color situations. GSM_UB and GSM_UF prove to be the most stable under sudden illumination changes (Ls ds), with a significant difference from the other algorithms. In depth camouflage (Despatx ds) situations, it is important to notice that the results of the KDE approach are very similar to those of the proposed algorithm; this is because, in this sequence, the important information is the color information, and we model it in the same way.
The addition of one geometric dimension to our model permits us to obtain a small advantage in the evaluation of the Time of Day situations and in the Color Camouflage situations.
The Sleeping ds sequence allows us to test whether background object movement detection obtains the expected results. In this sequence, results from GSM_UF and MoG are better than those from GSM_UB, as shown in Table 9. This occurs because, in the last part of the sequence, the user is near the sensor, causing the appearance of a large region of ADO pixels.
In the case of shadow evaluation (Shadows ds), the KDE algorithm performs well due to its special treatment of color information to avoid shadows. GSM_UF and GSM_UB obtain the best results, proving that adding depth information can help to avoid some color problems (see Table 9).
Our proposed algorithm obtains the best results in bootstrapping situations (see Table 9). This sequence is very challenging because, in the training stage, we assume that depth information is constant over all frames; therefore, it is possible to model incorrect distributions, which leads to misclassification.
Conclusion
We presented a new scene modeling approach, GSM, that uses both depth and color information in a unified way. We constructed a background model for each pixel of the scene and estimated the probability that a newly observed pixel value belongs to that model. These probabilities are estimated independently for each new frame. We constructed our model using a Kernel Density Estimation (KDE) process with a Gaussian kernel. To construct only one model, we used a three-dimensional kernel, with one dimension modeling depth information and two modeling the normalized chromaticity coordinates. We modelled the sensor's Absent Depth Observations (ADO) using a probabilistic strategy to distinguish the pixels belonging to the background model from those provoked by foreground objects, and detected each of these types of pixels. Pixels that cannot be classified as background or foreground were placed in a third class, which we called undefined. We developed an algorithm, based on the cdf of the pixel model, that detects changed background objects in the same frame in which they move. Two updating strategies are used to adapt the update phase to the different natures of the color and depth information.
We provided all the technical details needed to replicate the algorithm, including a description and the thresholds used. We also constructed a new dataset (available at gsm.uib.es) that covers the background subtraction issues discussed in related work and adds new depth-specific challenges.
The results show that the proposed algorithm is the most regular one, obtaining good results in a wide range of situations and coping with the problems of depth data sensors. We can conclude that combining the two types of information in a single 3D kernel helps to build better scene modeling algorithms.

The proposed algorithm has three different classes: background, foreground and undefined. To enable direct comparisons with the state-of-the-art algorithms, we developed two implementations: GSM UF and GSM UB.

The choice between the GSM UF and GSM UB implementations depends on the final application in which the scene modeling is used. Basically, we can distinguish two situations. In applications where the changes occur at a certain distance from the camera, or when the scene tends to be static, as in surveillance applications, we recommend the GSM UB implementation, since ADO tend to be provoked by remote parts of the scene, specular materials in the background, and shadows reflected in the background. If, instead, the method is used in human-interaction applications, such as tracking or human pose estimation, where the action occurs near the camera, we recommend the GSM UF implementation; in this case, ADO are normally provoked by nearby objects that appear during the sequence.
Our algorithm follows a per-pixel approach and is easily parallelizable, because each pixel has its own model, independent from the others. During the experimentation process, new RGBD sensors with higher depth resolution have appeared, and it could be interesting to test our algorithm with these different devices. It is worth remarking that our background subtraction algorithm is not designed only for the Microsoft Kinect: it can be adapted to different types of information cues, such as thermal imagery.

In the first frame we apply an inpainting process. In the following depth frames, each ADO pixel is overwritten by its previous value. Once this process is done, the pixels are added to the background model. Let us define an observation x = {r, g, D}, so that d = 3. θ, γ and ξ are constant values over the whole algorithm: γ = 10 −8 , θ = 0.0050 and ξ = 0.6.
Training stage

Initialization step:
• Apply an image inpainting algorithm to compute D 0 .
• Set P A (D 0 ) = 0.

Let (x 1 , . . . , x i , . . . , x n ) be the observations used for modelling the scene. For each D i ∈ x i :

Depth treatment:
• Compute the ADO model: P A (D i ) = α · mask + (1 − α) · P A (D i−1 ).
• If D i is an ADO pixel, then D̂ i = D̂ i−1 ; else D̂ i = D i .
• Substitute D̂ i for D i in x i .
• Calculate the kernel bandwidth for each dimension d.

For each new observation x t (test stage):
• If D t is an ADO pixel, then D̂ t = D̂ t−1 ; else D̂ t = D t .
• Measure the probability of the pixel being part of a moving background object:
  U(D t ) = (1/n) Σ ∀k∈C D̂ t F x (k).
• If U(D t ) > ξ, then x t ∈ background.
• Calculate the probability of the pixel being background over all d dimensions.
• If P(x t ) < γ, then x t ∈ foreground; else x t ∈ background.
• Classify x t as undefined if D t is an ADO pixel and P A (D t ) < θ.

Update background model:
• If x t ∈ background, then update the color model.
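The classification and update rules above can be sketched as plain decision logic. The thresholds are the constants given in the text (γ = 10^-8, θ = 0.0050, ξ = 0.6); the function names and the precise precedence among the rules are our assumptions, not the authors' implementation.

```python
# Thresholds from the text
GAMMA = 1e-8    # background-probability threshold
THETA = 0.0050  # ADO-model threshold
XI = 0.6        # moving-background / depth-update threshold

def classify_pixel(p_x, is_ado, p_ado, u_depth):
    """Decide the class of one pixel, following the listing above.

    p_x     : KDE probability that observation x_t belongs to the background
    is_ado  : True if D_t is an absent depth observation
    p_ado   : P_A(D_t), probability that the ADO belongs to the background
    u_depth : U(D_t), cdf-based score for moving background objects
    """
    if is_ado and p_ado < THETA:
        return "undefined"
    if u_depth > XI:
        return "background"
    return "foreground" if p_x < GAMMA else "background"

def models_to_update(label, u_depth):
    """Return which per-pixel models are updated for this frame."""
    updates = []
    if label == "background":
        updates.append("color")
    if u_depth > XI:
        updates.append("depth")
    return updates
```

This separation mirrors the two updating strategies: the color model follows the background label, while the depth model is refreshed only when the cdf score exceeds ξ.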
Note: given the observations D 1 , . . . , D n , the ith component of the vector V is defined as V i = | D i − D i−1 | , ∀ i = [2 . . . n]. The cdf F x (k) is evaluated for all possible sensor values k ∈ {0 . . . L}, where L is the maximum number of depth levels.
Figure 1: Challenges of depth data. Each row illustrates a different problem. The second column corresponds to the depth channel of the structured-light sensor and the third column to the depth channel of the time-of-flight sensor. Black regions in the depth images correspond to Absent Depth Observations.

Figure 2: The inpainting process is applied to the first training frame in order to overwrite undefined values. In the following frames these values are propagated under the undefined-values model.

Figure 3: Training step: for each dimension j ∈ [1..d], compute the median absolute deviation of consecutive observations, m j = | x ji − x ji+1 | ∀ i ∈ [1 . . . t], and calculate the bandwidth σ j from m j .

Figure 4: The generic scene modelling algorithm for RGBD devices (update step: if U(D t ) > ξ, then update the depth model).

Figure 5: Color and depth frame examples for each sequence of Test I.

Figure 6: Camplani dataset simulation results (RM). Results of four different sequences for all tested algorithms, with final comparisons. Although GSM UF and GSM UB are not the best for every sequence, they are the most regular ones, as shown by the RC line (the lower, the better).

Figure 7: Sequences of our new RGBD dataset. Each row shows the depth configuration of each scene and two different color frames.

Figure 8: GSM dataset simulation results (RM).
Table 1: Characteristics of evaluated sequences from the [? ] dataset.

                               GenSeq          DCamSeq           ColCamSeq         ShSeq
Number of frames               300             670               360               250
Number of ground-truth frames  39              102               45                25
Test objective                 General scenes  Depth camouflage  Color camouflage  Shadows
Table 2: Results for GenSeq. FP: false positives. FN: false negatives. TE: total error. S: similarity measure. S B: similarity measure in object boundaries.

          TE           FN            FP           S            S B          RM
          Avg   Std    Avg   Std     Avg   Std    Avg   Std    Avg   Std
CL W      1.30  0.42   1.49  0.002   1.27  0.01   0.83  0.21   0.53  0.14   3.2
GSM UB    1.38  0.56   1.04  0.78    1.44  0.66   0.83  0.2    0.78  0.11   2.6
GSM UF    1.3   0.52   4.08  15.38   1.3   0.6    0.83  0.2    0.78  0.14   3.2
Table 3: Results for DCamSeq. FP: false positives. FN: false negatives. TE: total error. S: similarity measure. S B: similarity measure in object boundaries.

          TE           FN             FP           S            S B           RM
          Avg   Std    Avg    Std     Avg   Std    Avg   Std    Avg   Std
CL W      2.46  1.82   32.21  0.26    0.66  0.01   0.55  0.14   0.5   10.12   6.2
GSM UB    1.74  1.7    20.45  10.73   0.46  1.57   0.64  0.17   0.54  0.14    3.8
GSM UF    1.65  1.49   22.06  11.6    0.61  1.73   0.65  0.18   0.55  0.14    3.6
Table 4: Results for ColCamSeq. FP: false positives. FN: false negatives. TE: total error. S: similarity measure. S B: similarity measure in object boundaries.

          TE           FN            FP           S            S B          RM
          Avg   Std    Avg   Std     Avg   Std    Avg   Std    Avg   Std
CL W      3.20  2.77   3.52  0.09    2.92  0.10   0.89  0.15   0.77  0.16   4.8
GSM UB    2.3   2.26   7.1   14.5    3.21  6.3    0.9   0.15   0.52  0.11   5.2
GSM UF    2.2   2.27   2.94  5.53    4.36  6.42   0.92  0.08   0.53  0.09   4
Table 5: Results for ShSeq. FP: false positives. FN: false negatives. TE: total error. S: similarity measure. S B: similarity measure in object boundaries.

          TE           FN            FP           S            S B          RM
          Avg   Std    Avg   Std     Avg   Std    Avg   Std    Avg   Std
CL W      0.81  0.35   1.60  0.05    0.68  0.02   0.94  0.04   0.71  0.07   2.80
GSM UB    0.87  0.33   0.98  0.88    0.88  0.42   0.93  0.03   0.76  0.06   3
GSM UF    1.66  0.38   0.14  0.19    1.92  0.44   0.89  0.04   0.65  0.05   4.2
Table 6: Characteristics of evaluated sequences from our dataset.

                     Sleeping ds  TimeOfDay ds  Cespatx ds        Despatx ds        Shadows ds  Ls ds         Bootstraping ds
Number of frames     200          1231          428               465               330         407           300
Ground-truth frames  10           23            11                12                11          9             11
Test objective       Waking       Time of day   Color camouflage  Depth camouflage  Shadows     Light switch  Bootstraping
                     object
Table 7: Complete results for our proposed method, GSM UF, for each category of the evaluation dataset.

GSM UF             Recall  Specificity  FPR    FNR    PWC    F-Measure  Precision
Sleeping           0.959   0.961        0.039  0.041  3.981  3.74       0.953
TimeOfDay          0       0.997        0.003  0      0.307  0          0
Color Camouflage   0.981   0.99         0.01   0.019  1.489  3.922      0.993
Depth Camouflage   0.971   0.989        0.011  0.029  2.008  3.878      0.988
Shadows            0.983   0.995        0.005  0.017  1.043  3.931      0.994
LightSwitch        0       0.997        0.003  0      0.343  0          0
BootStraping       0.85    0.995        0.005  0.15   3.907  3.493      0.979
Average            0.630   0.99         0.01   0.08   3.67   2.58       0.71
Table 8: Complete results for our proposed method, GSM UB, for each category of the evaluation dataset.

GSM UB             Recall  Specificity  FPR    FNR    PWC     F-Measure  Precision
Sleeping           0.808   0.984        0.016  0.192  10.389  3.373      0.98
TimeOfDay          0       0.998        0.002  0      0.187   0          0
Color Camouflage   0.956   0.993        0.007  0.044  2.888   3.851      0.995
Depth Camouflage   0.941   0.992        0.008  0.059  3.388   3.796      0.991
Shadows            0.964   0.997        0.003  0.036  1.813   3.881      0.997
LightSwitch        0       0.999        0.001  0      0.114   0          0
BootStraping       0.743   0.996        0.004  0.257  6.941   3.19       0.984
Average            0.68    0.99         0.01   0.04   1.58    2.71       0.70
Table 9: Evaluation results for all algorithms averaged over all sequences (RM). The last column shows the final average ranking (RC); the lower, the better. Bold entries indicate the best result and italics the second one. Cespatx ds tests color camouflage and Despatx ds tests depth camouflage.

           Sleeping ds  TimeOfDay ds  Cespatx ds  Despatx ds  Shadows ds  Ls ds   Bootstraping ds  RC
GSM-UB     3            1             2           2.857       2           1       2.429            2.457
GSM-UF     1.857        1.429         2.714       2.714       2.571       1.429   2                2.429
MOG [? ]   2.571        2.286         4.571       3.286       3.286       1.857   3.571            3.629
ViBe [? ]  4.286        2.714         2.714       3.143       4.143       2.714   3.857            3.886
KDE [? ]   3.286        1.857         3           3           3           2.286   3.143            3.314
AcknowledgmentThis work was partially funded by the Project TIN2012-35427 of the Spanish Government, with FEDER support.
arXiv: math/0207311 · DOI: 10.1081/agb-120027853
SIMPLE PROOFS OF CLASSICAL EXPLICIT RECIPROCITY LAWS ON CURVES USING DETERMINANT GROUPOIDS OVER AN ARTINIAN LOCAL RING
31 Jul 2002
Greg W Anderson [email protected]
School of Mathematics
Departamento de Matemáticas
University of Minnesota Minneapolis
55455MNUSA
Fernando Pablos Romo
Universidad
de Salamanca Plaza de la Merced 1-437008SalamancaEspaña
The notion of determinant groupoid is a natural outgrowth of the theory of the Sato Grassmannian and thus well-known in mathematical physics. We briefly sketch here a version of the theory of determinant groupoids over an artinian local ring, taking pains to put the theory in a simple concrete form suited to number-theoretical applications. We then use the theory to give a simple proof of a reciprocity law for the Contou-Carrère symbol. Finally, we explain how from the latter to recover various classical explicit reciprocity laws on nonsingular complete curves over an algebraically closed field, namely sum-of-residues-equals-zero, Weil reciprocity, and an explicit reciprocity law due to Witt. Needless to say, we have been much influenced by the work of Tate on sum-of-residues-equals-zero and the work of Arbarello-DeConcini-Kac on Weil reciprocity. We also build in an essential way on a previous work of the second-named author.
Introduction
In 1968 J. Tate [8] gave a definition of the residues of differentials on a curve in terms of traces of certain linear operators on infinite-dimensional vector spaces. Further, Tate deduced the residue theorem ("sum-of-residues-equals-zero") on a nonsingular complete curve X from the finite-dimensionality of the cohomology groups H 0 (X, O X ) and H 1 (X, O X ). This work of Tate has been enormously influential.
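Tate's construction is abstract, but its simplest consequence can be checked by hand: for a differential f(t) dt with f a Laurent polynomial over ℚ, the only possible poles on the projective line are at t = 0 and t = ∞, and the two residues cancel. The sketch below is our illustration, not Tate's trace construction; the encoding of Laurent polynomials as {exponent: coefficient} dicts is an assumption. It also checks the integration-by-parts identity Res(f dg) + Res(g df) = 0.

```python
def residue_at_zero(f):
    """Res_{t=0} f(t) dt is the coefficient of t^{-1}; f is {exponent: coeff}."""
    return f.get(-1, 0)

def residue_at_infinity(f):
    """Res_{t=infinity} f(t) dt via t = 1/s, dt = -s^{-2} ds:
    the residue at s = 0 of -f(1/s) s^{-2} ds."""
    g = {-e - 2: -c for e, c in f.items()}  # coefficients of -f(1/s) * s^{-2}
    return g.get(-1, 0)

def d(f):
    """Formal derivative of a Laurent polynomial."""
    return {e - 1: e * c for e, c in f.items() if e != 0}

def mul(f, g):
    """Product of two Laurent polynomials (dict convolution)."""
    h = {}
    for e1, c1 in f.items():
        for e2, c2 in g.items():
            h[e1 + e2] = h.get(e1 + e2, 0) + c1 * c2
    return h
```

Since d(fg) never has a t^{-1} term in characteristic 0, Res(f dg) = -Res(g df), the identity underlying the residue formulas for the symbol below.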
In 1989 E. Arbarello, C. De Concini and V. G. Kac [1] interpreted the tame symbol at a point of a complete nonsingular algebraic curve X over an algebraically closed field as a commutator in a certain central extension of groups and then, in the style of Tate, deduced a reciprocity law on X for the tame symbol ("Weil reciprocity") from the finite-dimensionality of H 0 (X, O X ) and H 1 (X, O X ). Recently the second-named author of this paper has provided an interpretation [6] of the central extension of [1] in terms of determinants associated to infinite-dimensional vector subspaces valid for curves over a perfect field. The logical organization of this paper to a significant extent parallels that of [6].
In 1994 C. Contou-Carrère [3] defined a natural transformation greatly generalizing the tame symbol. In the case of an artinian local base ring k with maximal ideal m, the natural transformation takes the following form. Let f, g ∈ k((t)) × be given, where t is a variable. (Here and below A × denotes the multiplicative group of a ring A with unit.) It is possible in exactly one way to write
f = a 0 · t w(f ) · ∞ i=1 (1 − a i t i ) · ∞ i=1 (1 − a −i t −i ) g = b 0 · t w(g) · ∞ i=1 (1 − b i t i ) · ∞ i=1 (1 − b −i t −i )
with w(f ), w(g) ∈ Z, a i , b i ∈ k for i > 0, a 0 , b 0 ∈ k × , a −i , b −i ∈ m for i > 0, and a −i = b −i = 0 for i ≫ 0. By definition the value of the Contou-Carrère symbol is
f, g := (−1) w(f )w(g) a w(g) 0 ∞ i=1 ∞ j=1 1 − a j/(i,j) i b i/(i,j) −j (i,j) b w(f ) 0 ∞ i=1 ∞ j=1 1 − a j/(i,j) −i b i/(i,j) j (i,j) ∈ k × .
The definition makes sense because only finitely many of the terms appearing in the infinite products differ from 1. The symbol ·, · is clearly antisymmetric and, although it is not immediately obvious from the definition, also bimultiplicative.
If k is a field, and hence m = 0, then the infinite products go away and the Contou-Carrère symbol reduces to the tame symbol. If k = k 0 [ǫ]/(ǫ 3 ), where k 0 is a field, then 1 − ǫf, 1 − ǫg ≡ 1 − ǫ 2 Res t=0 (g df ) mod ǫ 3 for all f, g ∈ k 0 ((t)), and so the Contou-Carrère symbol also contains the residue as a special case. If k is a Q-algebra and f ∈ 1 + m((t)), then f, g = exp(Res t=0 log f · d log g).
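When k is a field the symbol depends only on the valuations and leading coefficients, so it is easy to compute. The sketch below is our illustration, not code from the paper; the encoding of Laurent series as {exponent: coefficient} dicts over ℚ is an assumption. It implements the field case ⟨f, g⟩ = (−1)^{w(f)w(g)} a₀^{w(g)} / b₀^{w(f)} and checks antisymmetry and bimultiplicativity on small examples.

```python
from fractions import Fraction

def val_and_lead(f):
    """Valuation w(f) and leading coefficient of a nonzero Laurent series,
    given as a dict {exponent: coefficient} over a field."""
    n = min(e for e, c in f.items() if c != 0)
    return n, f[n]

def tame_symbol(f, g):
    """Tame symbol <f,g> = (-1)^{w(f)w(g)} a0^{w(g)} / b0^{w(f)},
    the field case (m = 0) of the Contou-Carrere symbol.
    Use Fraction coefficients for exact arithmetic."""
    wf, a0 = val_and_lead(f)
    wg, b0 = val_and_lead(g)
    return Fraction(-1) ** (wf * wg) * a0 ** wg / b0 ** wf

def mul(f, g):
    """Product of truncated Laurent series (dict convolution)."""
    h = {}
    for e1, c1 in f.items():
        for e2, c2 in g.items():
            h[e1 + e2] = h.get(e1 + e2, 0) + c1 * c2
    return h
```

Antisymmetry here takes the form ⟨f, g⟩⟨g, f⟩ = 1, and bimultiplicativity ⟨f₁f₂, g⟩ = ⟨f₁, g⟩⟨f₂, g⟩, matching the properties asserted for the full symbol.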
This last formula renders the bimultiplicativity of the Contou-Carrère symbol at least plausible and motivates the definition. The main aims of this paper are (i) to interpret the Contou-Carrère symbol f, g -up to signs-as a commutator of liftings of f and g to a certain central extension of a group containing k((t)) × (see Thm. 3.4.3) and then (ii) to exploit the commutator interpretation to prove in the style of Tate a reciprocity law for the Contou-Carrère symbol on a nonsingular complete curve defined over an algebraically closed field (see Thm. 4.2.1). The commutator interpretation of the Contou-Carrère symbol provided here formally resembles the commutator formula [7,Prop. 3.6] stated in Segal-Wilson. In more detail, the general reciprocity law proved here takes the following form. Let F be an algebraically closed field. Let X/F be a complete nonsingular curve.
Let S be a finite nonempty set of (closed) points of X. For any ring or group A, put A S := {(a s ) s∈S |a s ∈ A}. By choosing uniformizers at each point belonging to S, identify R 0 := H 0 (X \ S, O X ) with an F -subalgebra of F ((t)) S . Further, suppose now that the artinian local ring k considered above is a finite F -algebra. Put R := R 0 ⊗ F k and make the evident identification of R × with a subgroup of k((t)) ×S . We prove that s∈S f s , g s = 1 for all f, g ∈ R × . With k = F we get back Weil reciprocity. With k = F [ǫ]/(ǫ 3 ) we get back sum-of-residues-equals-zero. With F of characteristic p > 0 and k = F [ǫ]/(ǫ p n−1 +1 ), we get back an explicit reciprocity law due to Witt [9].
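A worked instance of the reciprocity law (in the case k = F, i.e. Weil reciprocity) can be checked by hand for X = P¹, f = t, g = 1 − t, S = {0, 1, ∞}, using the field-case formula ⟨f, g⟩ = (−1)^{w(f)w(g)} a₀^{w(g)} b₀^{−w(f)} in the local uniformizers t, u = t − 1 and s = t^{−1}. This example is our illustration and does not appear in the paper.

```latex
\begin{aligned}
&\text{at } t=0: && f=t,\; g=1-t
  && \Rightarrow\ \langle f,g\rangle_0=(-1)^{1\cdot 0}\,1^{0}\,1^{-1}=1,\\
&\text{at } t=1,\ u=t-1: && f=1+u,\; g=-u
  && \Rightarrow\ \langle f,g\rangle_1=(-1)^{0\cdot 1}\,1^{1}\,(-1)^{0}=1,\\
&\text{at } t=\infty,\ s=t^{-1}: && f=s^{-1},\; g=-s^{-1}(1-s)
  && \Rightarrow\ \langle f,g\rangle_\infty=(-1)^{(-1)(-1)}\,1^{-1}\,(-1)^{1}=1,
\end{aligned}
```

so that the product over S of the local symbols equals 1, as the reciprocity law predicts.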
The general reciprocity law proved here seems in principle to be known in the mathematical physics world, albeit no reference convenient for a number theorist can be cited. We claim novelty only for the simplicity and directness of our approach to the reciprocity law. We hope to popularize the reciprocity law among number theorists, and expect it to have many applications.
The backdrop for our constructions is provided by the theory of determinant groupoids over k. The notion of determinant groupoid is a natural outgrowth of the theory of the Sato Grassmannian and hence is quite familiar in mathematical physics. But inconveniently, in its native habitat, this notion is packaged with a lot of extra structure unneeded for studying reciprocity laws. We sketch here a "minimalist" version of the theory just adequate for the purposes we have in mind. We have taken pains to make the theory concrete and easy to apply, and also suitable for study by beginning graduate students in number theory and algebraic geometry. The theory very likely has applications beyond those discussed in this paper. We hope that our approach can be generalized to yield an "integrated version" of Beilinson's multi-dimensional generalization [2] of Tate's theory.
Determinant groupoids over an artinian local ring
We fix an artinian local ring k throughout the paper. We denote the maximal ideal of k by m and the multiplicative group of k by k × . Given a free k-module V of finite rank, we denote the rank of V over k by rk V and the maximal exterior power of V over k by det V .
Background on k-modules.
Lemma 2.1.1. A k-module V is flat if and only if for all integers r > 0, row vectors x ∈ Mat 1×r (k) and column vectors v ∈ Mat r×1 (V ) such that xv = 0, there exists an integer s > 0, a matrix y ∈ Mat r×s (k), and a column vector w ∈ Mat s×1 (V ) such that v = yw and xy = 0.
Proof. This is a standard flatness criterion holding over any commutative ring with unit. See [5, (3
Lemma 2.1.2. A family {v i } i∈I of elements of a k-module V generates V if and only if the family {v i mod mV } i∈I generates V /mV .

Proof. (⇒) Trivial. (⇐) Let V ′ be the k-span of {v i } i∈I . We have V = V ′ + mV = V ′ + m 2 V = · · · = V ′ because the ideal m is nilpotent.

Lemma 2.1.3. A family {v i } i∈I of elements of a flat k-module V is k-linearly independent if and only if the family {v i mod mV } i∈I of elements of V /mV is (k/m)-linearly independent.
Proof. We may assume that I = {1, . . . , r}. For convenience assemble the vectors v i into a column vector v ∈ Mat r×1 (V ). (⇒) Suppose that there exists x ∈ Mat 1×r (k) such that x ≢ 0 mod m but xv ≡ 0 mod mV . Let T be a minimal ideal of k and select 0 ≠ t ∈ T . Then tm = 0, hence txv = 0 but tx ≠ 0, a contradiction. (Flatness of V was not needed to prove this implication.) (⇐) Suppose there exists x ∈ Mat 1×r (k) such that xv = 0. By Lemma 2.1.1 there exists an integer s > 0, a matrix y ∈ Mat r×s (k) and a column vector w ∈ Mat s×1 (V ) such that v = yw and xy = 0. By hypothesis the matrix y mod m ∈ Mat r×s (k/m) must be of maximal rank. Therefore some maximal square submatrix of y is invertible and hence x = 0.

Lemma 2.1.5. Let V be a free k-module and let M be a finitely generated k-module. For every k-linear map f : V → M there exists a k-submodule V ′ ⊂ V such that the quotient k-module V /V ′ is free of finite rank and V ′ ⊂ ker f .

Proof. By induction on the length of M as a k-module, we may assume that M = k/m and f ≠ 0. Since V is free, f is the reduction modulo m of a surjective k-linear functional f̃ : V → k. Put V ′ := ker f̃ . Then V ′ has the desired properties.
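Over the artinian local ring k = Z/4Z (with m = (2)), Lemmas 2.1.2 and 2.1.3 say that a family of r elements of the free module k^r is a basis if and only if its reduction mod 2 is a basis of (Z/2Z)^r; equivalently, a square matrix over Z/4Z is invertible if and only if its reduction mod 2 is. The sketch below checks this determinant criterion on small matrices; it is our illustration, not part of the paper.

```python
def det_mod(mat, n):
    """Determinant of a square integer matrix, reduced mod n
    (cofactor expansion along the first row; fine for small matrices)."""
    size = len(mat)
    if size == 1:
        return mat[0][0] % n
    total = 0
    for j in range(size):
        minor = [row[:j] + row[j + 1:] for row in mat[1:]]
        total += (-1) ** j * mat[0][j] * det_mod(minor, n)
    return total % n

def invertible_over_Z4(mat):
    """Over k = Z/4Z a matrix is invertible iff its determinant is a unit,
    i.e. odd."""
    return det_mod(mat, 4) % 2 == 1

def invertible_over_F2(mat):
    """Over the residue field k/m = Z/2Z the same criterion reads det = 1."""
    return det_mod(mat, 2) == 1
```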
Proposition 2.1.6. Let V be a free k-module. Let A, B ⊂ V be free k-submodules. The quotient (A + B)/(A ∩ B)
is finitely generated if and only if there exists a free k-submodule P ⊂ A ∩ B such that the quotient k-modules A/P and B/P are free of finite rank. (We frequently apply this proposition below but rarely cite it explicitly.)
Proof. (⇒) By Lemma 2.1.5 applied to the natural map B → B/(A ∩ B) there exists a k-submodule P ⊂ A ∩ B such that the quotient k-module B/P is free of finite rank. It is then easily verified that P has all the desired properties. (⇐) Trivial.
2.2. Commensurability and related notions.

2.2.1. Commensurability. Let V be a free k-module. Given free k-submodules A, B ⊂ V we write A ∼ B and say that A and B are commensurable if the quotient (A + B)/(A ∩ B) is finitely generated over k. It is easily verified that commensurability is an equivalence relation.
2.2.2. Relative rank. Let V be a free k-module. Let A, B ⊂ V be free k-submodules such that A ∼ B. Put Rk V (A, B) := rk A/P − rk B/P
where P ⊂ A ∩ B is any free k-submodule such that A/P and B/P are free of finite rank, thereby defining the relative rank of A and B. It is easily verified that Rk V (A, B) is independent of the choice of P and hence well-defined. It follows that
Rk V (A, C) = Rk V (A, B) + Rk V (B, C) for all free k-submodules A, B, C ⊂ V such that A ∼ B ∼ C.

2.2.3. The restricted general linear group. Given a free k-module V and a free k-submodule V + ⊂ V , let G V V+ denote the set of k-linear automorphisms σ of V such that V + ∼ σV + . It is easily verified that G V V+ is a subgroup of the group of k-linear automorphisms of V depending only on the commensurability class of V + . We call G V V+ the restricted general linear group associated to V and V + .
2.2.4. The index. Let V be a free k-module and let V + ⊂ V be a free k-submodule. Given σ ∈ G V V+ , put ind V V+ σ := Rk V (V + , σV + ),
thereby defining the index of σ. It is easily verified that ind V V+ σ depends only on the commensurability class of V + . It follows that the function
ind V V+ : G V V+ → Z is a group homomorphism.

Lemma 2.2.5. Let V be a free k-module equipped with a direct sum decomposition V = V + ⊕ V − . For all σ ∈ G V V− ∩ G V V+ we have ind V V+ σ + ind V V− σ = 0.

Proof. We have
ind V V+ σ + ind V V− σ
= rk V + /P + − rk σV + /P + + rk V − /P − − rk σV − /P −
= rk(V + + V − )/(P + + P − ) − rk(σV + + σV − )/(P + + P − )
= rk V /(P + + P − ) − rk V /(P + + P − ) = 0
for any free k-submodules P ± ⊂ V ± ∩ σV ± such that V ± /P ± and σV ± /P ± are free of finite rank.
2.3. The construction Det V (A, B; P ). Fix a free k-module V .
2.3.1. Definition. Given commensurable free k-submodules A, B ⊂ V and a free k-submodule P ⊂ A ∩ B such that the quotient k-modules A/P and B/P are free of finite rank, put
Det V (A, B; P ) := { k-linear isomorphisms det(A/P ) ∼→ det(B/P ) }.
Note that we have at our disposal an operation of scalar multiplication
((x, α) → x · α) : k × × Det V (A, B; P ) → Det V (A, B; P )
with respect to which Det V (A, B; P ) becomes a k × -torsor.
2.3.2. The composition law. Given free k-submodules A, B, C ⊂ V belonging to the same commensurability class and a free k-submodule P ⊂ A ∩ B ∩ C such that the k-modules A/P , B/P and C/P are free of finite rank, we have a composition law
((α, β) → β • α) : Det V (A, B; P ) × Det V (B, C; P ) → Det V (A, C; P )
at our disposal. The composition law is compatible with scalar multiplication in the sense that
x · (β • α) = (x · β) • α = β • (x · α).
Clearly the composition law is associative.
2.3.3. The cancellation rule. Given commensurable free k-submodules A, B ⊂ V and free k-submodules Q ⊂ P ⊂ A ∩ B such that the quotient k-modules A/Q and B/Q are free of finite rank, we have at our disposal a canonical isomorphism
( (∧p ℓ ) ∧ (∧ã i ) → (∧p ℓ ) ∧ (∧b̃ j ) ) → ( (∧a i ) → (∧b j ) ) : Det V (A, B; Q) ∼→ Det V (A, B; P )
where {a i }, {b j } and {p ℓ } are any ordered k-bases for A/P , B/P and P/Q, respectively, and {ã i } and {b j } are liftings to A/Q and B/Q, respectively, of {a i } and {b j }, respectively. We refer to this isomorphism as the cancellation rule. It is easily verified that the cancellation rule commutes with the composition law and with scalar multiplication.
2.3.4. Inverse system structure. Let V be a free k-module and let A, B ⊂ V be commensurable free k-submodules. Consider the family P of free k-submodules P ⊂ A ∩ B such that A/P and B/P are free of finite rank. Partially order the family P by reverse inclusion. Then P is a directed set and it is easily verified that the cancellation rule gives the family of sets Det V (A, B; P ) indexed by P ∈ P the structure of inverse system.
2.4. The connected groupoid Det V V+ . Fix a free k-module V and a free k-submodule V + ⊂ V .

2.4.1. Definition. For all free k-submodules A, B ⊂ V belonging to the commensurability class of V + put Det V V+ (A, B) := lim ← Det V (A, B; P ),
the limit extended over the inverse system defined in §2.3.4. Since scalar multiplication commutes with the cancellation rule, we have at our disposal an operation of scalar multiplication
((x, α) → x · α) : k × × Det V V+ (A, B) → Det V V+ (A, B)
endowing the set Det V V+ (A, B) with the structure of k × -torsor.
2.4.2. The composition law. Since the cancellation rule and the composition law commute, we obtain in the limit a composition law
((α, β) → β • α) : Det V V+ (A, B) × Det V V+ (B, C) → Det V V+ (A, C)
for all free k-submodules A, B, C ⊂ V belonging to the same commensurability class. The composition law is associative and moreover is compatible with scalar multiplication in the evident sense.
2.4.3. Definition of Det V V+ . The rule sending each pair A, B ⊂ V of free k-submodules commensurable to V + to the set Det V V+ (A, B) makes the commensurability class of V + into a category. We denote this category by Det V V+ . It is easily verified that every morphism in Det V V+ is an isomorphism and that all the objects of Det V V+ are isomorphic. Thus Det V V+ is a connected groupoid. Note that Det V V+ depends only on the commensurability class of V + .
2.4.4. Abstract nonsense. Fix σ ∈ G V V+ and let σ * be the functor from Det V V+ to itself induced in evident fashion by σ. A natural transformation Φ from the identity functor of Det V V+ to σ * is a rule associating to each object A of Det V V+ a morphism Φ(A) ∈ Det V V+ (A, σA) such that for any two objects A and B of Det V V+ the diagram

A --Φ(A)--> σA
α ↓          ↓ σ * α
B --Φ(B)--> σB

commutes for all α ∈ Det V V+ (A, B). It is easily verified that the natural transformations from the identity functor of Det V V+ to σ * are in bijective correspondence with Det V V+ (V + , σV + ) via the map Φ → Φ(V + ).

2.4.5. The central extension G̃ V V+ . We define G̃ V V+ to be the set consisting of pairs (σ, Φ) where σ ∈ G V V+ and Φ is a natural transformation from the identity functor of Det V V+ to σ * . We compose elements of G̃ V V+ by the rule
(σ 1 , Φ 1 )(σ 2 , Φ 2 ) := (σ 1 σ 2 , A → (σ 1 * (Φ 2 (A))) • Φ 1 (A)).
It is easily verified that this composition law is a group law. The group G̃ V V+ thus defined depends only on the commensurability class of V + . The group G̃ V V+ fits into a canonical exact sequence

1 → k × → G̃ V V+ → G V V+ → 1,

where the first map is x → (1, A → x · 1 A ) and the second is (σ, Φ) → σ, and where 1 A ∈ Det V V+ (A, A) denotes the identity map. The exact sequence identifies k × with a subgroup of the center of G̃ V V+ .

2.4.6. Remark. When k is a field, the central extension of the restricted general linear group G V V+ constructed here has cohomology class in H 2 (G V V+ , k × ) equal to the cohomology class associated to the central extension studied in the paper [1] and equal to the opposite of the cohomology class associated to the central extension studied in [6]. So what we study here should be regarded as the deformation theory of the central extensions of [1] and [6].
3.1.1. Definition. Given commuting elements σ, τ ∈ G V V+ , put

{σ, τ } V V+ := σ̃ τ̃ σ̃ −1 τ̃ −1 ∈ ker( G̃ V V+ → G V V+ ) = k × ,

where σ̃, τ̃ ∈ G̃ V V+ are any liftings of σ and τ , respectively. It is easily verified that {σ, τ } V V+ is independent of the choice of liftings and hence well-defined. By the definitions we have (σ * β) • α = {σ, τ } V V+ · ((τ * α) • β) for all α ∈ Det V V+ (V + , σV + ) and β ∈ Det V V+ (V + , τ V + ). The latter formula is the one we rely upon in practice to calculate {σ, τ } V V+ .

3.1.2. Lemma. Let α, β, γ be elements of a group such that the commutators [α, γ] and [β, γ] are central. Then [αβ, γ] = [α, γ][β, γ].

Proof.
[α, γ][β, γ][αβ, γ] −1 = αγα −1 γ −1 [β, γ]γαβγ −1 β −1 α −1 = αγ[β, γ]βγ −1 β −1 α −1 = α[β, γ][β, γ] −1 α −1 = 1.
3.1.3. Basic properties. Fix elements σ, σ ′ , τ, τ ′ ∈ G V V+ such that the σ's commute with the τ 's. (But we need assume neither that σσ ′ = σ ′ σ nor that τ τ ′ = τ ′ τ .) The following relations hold:
• {σ, σ} V V+ = 1.
• {σ, τ } V V+ = ({τ, σ} V V+ ) −1 .
• {σσ ′ , τ } V V+ = {σ, τ } V V+ {σ ′ , τ } V V+ .
• {σ, τ τ ′ } V V+ = {σ, τ } V V+ {σ, τ ′ } V V+ .
• If σV + = V + = τ V + , then {σ, τ } V V+ = 1.
• If V + = {0} or V + = V , then {σ, τ } V V+ = 1.
• {σ, τ } V V+ depends only on the commensurability class of V + .
For the most part these facts are straightforwardly deduced from the definitions.
Only the proof of bimultiplicativity offers any difficulty and the essential point of that proof is contained in the preceding lemma.
3.2. The four square identity. Fix a free k-module V , a free k-submodule V + ⊂ V and commuting elements σ, τ ∈ G V V+ . We work out an explicit formula for the symbol {σ, τ } V V+ in terms of determinants.
3.2.1. Choices. Let
P ⊂ V + ∩ σV + , Q ⊂ V + ∩ τ V +
be free k-submodules such that the quotient k-modules V + /P, σV + /P, V + /Q, τ V + /Q are free of finite rank. Let
R ⊂ P ∩ Q ∩ τ P ∩ σQ be a free k-submodule such that the families {a i mod P }, {b i mod P }, {c i mod Q}, {d i mod Q}, {e i mod R}, {f i mod R}, {g i mod R}, {h i mod R} are k-bases of V + /P, σV + /P, V + /Q, τ V + /Q, P/R, Q/R, τ P/R, σQ/R,
respectively. Fix morphisms
α ∈ Det V V + (V + , σV + ), β ∈ Det V V+ (V + , τ V + ).
3.2.2. Construction of representations of the morphisms α, β. Since our purpose is to calculate {σ, τ } V V+ by means of the last formula of §3.1.1, we may simply assume that α is represented by (∧(a i mod P ) → ∧(b i mod P )) ∈ Hom k (det(V + /P ), det(σV + /P )) and that β is represented by
(∧(c i mod Q) → ∧(d i mod Q)) ∈ Hom k (det(V + /Q), det(τ V + /Q)) .
By the cancellation rule α is also represented by
(∧ē i ) ∧ (∧ā i ) → (∧ē i ) ∧ ∧b i ∈ Hom k (det(V + /R), det(σV + /R))
and β is also represented by
∧f i ∧ (∧c i ) → ∧f i ∧ ∧d i ∈ Hom k (det(V + /R), det(τ V + /R)) ,
where here and below v →v denotes reduction modulo R.
3.2.3. Construction of representations of the morphisms τ * α and σ * β. By definition τ * α is represented by
(∧(τ a i mod τ P ) → ∧(τ b i mod τ P ))
∈ Hom k (det(τ V + /τ P ), det(τ σV + /τ P )) and σ * β is represented by (∧(σc i mod σQ) → ∧(σd i mod σQ)) ∈ Hom k (det(σV + /σQ), det(στ V + /σQ)) .
By the cancellation rule, τ * α is also represented by
(∧ḡ i ) ∧ (∧τ a i ) → (∧ḡ i ) ∧ ∧τ b i ∈ Hom k (det(τ V + /R), det(τ σV + /R))
and σ * β is also represented by ∧h i ∧ (∧σc i ) → ∧h i ∧ ∧σd i ∈ Hom k (det(σV + /R), det(στ V + /R)) .
3.2.4. Conclusion of the calculation.
We have now represented all the morphisms α, β, τ * α and σ * β in such a way that we can obtain representations for the compositions (σ * β) • α and (τ * α) • β in the same rank one free k-module, namely Hom k (det(V + /R), det(στ V + /R)) = Hom k (det(V + /R), det(τ σV + /R)) .
It is a straightforward matter to calculate the ratio. We find that
{σ, τ } V V+ = [ (∧h̄ i ) ∧ (∧σd i ) / (∧ḡ i ) ∧ (∧τ b i ) ] · [ (∧ē i ) ∧ (∧b̄ i ) / (∧h̄ i ) ∧ (∧σc i ) ] · [ (∧f̄ i ) ∧ (∧c̄ i ) / (∧ē i ) ∧ (∧ā i ) ] · [ (∧ḡ i ) ∧ (∧τ a i ) / (∧f̄ i ) ∧ (∧d̄ i ) ].
The right side of the formula makes sense because in each of the fractions, both numerator and denominator generate the same rank one k-submodule of the exterior algebra over k of V /R. We call this result the four square identity because in an obvious way the diagram
V_+ --{a_i}-- P --{b_i}-- σV_+
 |{c_i}       |{e_i}      |{σc_i}
 Q --{f_i}--  R --{h_i}-- σQ
 |{d_i}       |{g_i}      |{σd_i}
τV_+ --{τa_i}-- τP --{τb_i}-- στV_+ = τσV_+
serves as a mnemonic. We call the diagram above a template for the calculation of {σ, τ } V V+ and we say that the right side of the four square identity is the value of the template.
3.3. General rules of calculation. Fix a free k-module V and a free k-submodule V_+ ⊂ V.

Proposition 3.3.1. Fix commuting elements σ, τ ∈ G^V_{V_+}. (i) Suppose there exists a free k-module W containing V as a k-submodule, and suppose further that σ, τ admit commuting extensions σ̃, τ̃ ∈ G^W_{V_+}, respectively. Then we have {σ, τ}^V_{V_+} = {σ̃, τ̃}^W_{V_+}. (ii) Suppose there exists a free k-submodule U ⊂ V such that σU = U = τU and U ∩ V_+ = 0. Put V̄ := V/U and V̄_+ := (V_+ + U)/U. Let σ̄ and τ̄ be the k-linear automorphisms of V̄ induced by σ and τ, respectively, and assume that σ̄, τ̄ ∈ G^{V̄}_{V̄_+}. Then we have {σ, τ}^V_{V_+} = {σ̄, τ̄}^{V̄}_{V̄_+}.

Proof. We return to the setting of §3.2. Let T be the template above for calculating {σ, τ}^V_{V_+}. The very same template T serves also to calculate {σ̃, τ̃}^W_{V_+}; therefore (i) holds. Let T̄ be the projection of the template T into V̄. Then T̄ is a template for the calculation of {σ̄, τ̄}^{V̄}_{V̄_+}, and moreover it is easily verified that the values of the templates T and T̄ are equal; therefore (ii) holds.
Proposition 3.3.2. Suppose V is equipped with a direct sum decomposition V = V_0 ⊕ V_1. Put V_{i+} := V_i ∩ V_+ for i = 0, 1 and assume that V_+ = V_{0+} ⊕ V_{1+}. Let commuting elements σ_0, σ_1 ∈ G^V_{V_+} be given such that σ_i|_{V_0} ∈ G^{V_0}_{V_{0+}} and σ_i|_{V_1} = 1 for i = 0, 1. Then we have

{σ_0|_{V_0}, σ_1|_{V_0}}^{V_0}_{V_{0+}} = {σ_0, σ_1}^V_{V_+}.

Proof. The proof is a straightforward application of the four square identity similar to that made in the proof of Proposition 3.3.1 and therefore omitted.
Proposition 3.3.3. Again suppose V is equipped with a direct sum decomposition V = V_0 ⊕ V_1, put V_{i+} := V_i ∩ V_+ for i = 0, 1, and assume that V_+ = V_{0+} ⊕ V_{1+}. Let σ_0, σ_1 ∈ G^V_{V_+} be given such that σ_i|_{V_i} ∈ G^{V_i}_{V_{i+}} and σ_i|_{V_{1−i}} = 1 for i = 0, 1. (Necessarily σ_0 and σ_1 commute.) Then we have

{σ_0, σ_1}^V_{V_+} = (−1)^{ν_0 ν_1}, where ν_i := ind^{V_i}_{V_{i+}} σ_i|_{V_i} = ind^V_{V_+} σ_i for i = 0, 1.
Proof. For i = 0, 1, choose a free k-submodule P_i ⊂ V_{i+} ∩ σ_i V_{i+} such that the quotient k-modules V_{i+}/P_i and σ_i V_{i+}/P_i are free of finite rank, and also choose finite sequences {e_{ij}} and {f_{ij}} in V_i (not necessarily of the same length) reducing modulo P_i to k-bases for V_{i+}/P_i and σ_i V_{i+}/P_i, respectively. Then the diagram

V_{0+} ⊕ V_{1+} --{e_{0j}}-- P_0 ⊕ V_{1+} --{f_{0j}}-- σ_0 V_{0+} ⊕ V_{1+}
 |{e_{1j}}                    |{e_{1j}}                  |{e_{1j}}
V_{0+} ⊕ P_1 --{e_{0j}}-- P_0 ⊕ P_1 --{f_{0j}}-- σ_0 V_{0+} ⊕ P_1
 |{f_{1j}}                    |{f_{1j}}                  |{f_{1j}}
V_{0+} ⊕ σ_1 V_{1+} --{e_{0j}}-- P_0 ⊕ σ_1 V_{1+} --{f_{0j}}-- σ_0 V_{0+} ⊕ σ_1 V_{1+}

is a template for the calculation of {σ_0, σ_1}^V_{V_+}. The desired result now follows by the four square identity and the definitions.
Proposition 3.3.4. Let V_− ⊂ V be a free k-submodule such that V = V_+ ⊕ V_−. Let commuting elements σ, τ ∈ G^V_{V_+} ∩ G^V_{V_−} be given. Then we have {σ, τ}^V_{V_+} · {σ, τ}^V_{V_−} = 1.

Proof. We define σ_0, σ_1, τ_0, τ_1 ∈ G^{V⊕V}_{V_+⊕V_−} by the block decompositions

σ_0 = [σ 0; 0 1],  σ_1 = [1 0; 0 σ],  τ_0 = [τ 0; 0 1],  τ_1 = [1 0; 0 τ].
Then we have

{σ_0σ_1, τ_0τ_1}^{V⊕V}_{V_+⊕V_−} = ∏_{i=0}^{1} ∏_{j=0}^{1} {σ_i, τ_j}^{V⊕V}_{V_+⊕V_−} = {σ, τ}^V_{V_+} · {σ, τ}^V_{V_−} · (−1)^{μ_−ν_+ + μ_+ν_−},

where μ_± := ind^V_{V_±} σ and ν_± := ind^V_{V_±} τ. Put U := ker((v_0 ⊕ v_1 ↦ v_0 + v_1) : V ⊕ V → V).

3.4. The commutator interpretation of the Contou-Carrère symbol.

3.4.1. Preliminary discussion of the ring k((t)). Let t be a variable. Let k((t)) be the ring obtained from the power series ring k[[t]] by inverting t. It is easily verified that k((t)) is an artinian local ring with maximal ideal m((t)) and residue field (k/m)((t)). We have an additive direct sum decomposition

k((t)) = t^{−1}k[t^{−1}] ⊕ k[[t]]

and a multiplicative direct sum decomposition

k((t))^× = t^ℤ · (1 + m[t^{−1}]) · k^× · (1 + t k[[t]])
at our disposal. The latter decomposition can be refined as follows. Each f ∈ k((t))^× has a unique presentation

f = t^{w(f)} · a_0 · ∏_{i=1}^{∞} (1 − a_{−i} t^{−i}) · ∏_{i=1}^{∞} (1 − a_i t^i),

where w(f) ∈ ℤ, a_0 ∈ k^×, a_i = 0 if i ≪ 0, a_i ∈ m if i < 0, a_i ∈ k^× if i = 0, and a_i ∈ k if i > 0. We call w(f) the winding number of f and we call {a_i}_{i=−∞}^{∞} the family of Witt parameters of f.

3.4.2. Definition of the Contou-Carrère symbol. Let f, g ∈ k((t))^× be given, and let {a_i} and {b_j} be the systems of Witt parameters associated to f and g, respectively. Put

⟨f, g⟩ := (−1)^{w(f)w(g)} · (a_0^{w(g)} / b_0^{w(f)}) · ∏_{i=1}^{∞} ∏_{j=1}^{∞} (1 − a_i^{j/(i,j)} b_{−j}^{i/(i,j)})^{(i,j)} / ∏_{i=1}^{∞} ∏_{j=1}^{∞} (1 − a_{−i}^{j/(i,j)} b_j^{i/(i,j)})^{(i,j)}.

The right side of the definition makes sense because only finitely many factors in the infinite products differ from 1. This definition is due to Contou-Carrère [3]. We call the map ⟨·, ·⟩ : k((t))^× × k((t))^× → k^× defined by the formula above the Contou-Carrère symbol. The symbol is clearly anti-symmetric:

⟨f, g⟩ = ⟨g, f⟩^{−1}.

Although it is not immediately evident from the definition, the symbol is also bimultiplicative:

⟨ff′, g⟩ = ⟨f, g⟩ · ⟨f′, g⟩,  ⟨f, gg′⟩ = ⟨f, g⟩ · ⟨f, g′⟩.
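The refined decomposition can be made algorithmic. The following Python sketch (our own illustration, not part of the paper) extracts the winding number w(f), the leading unit a_0, and the Witt parameters a_i (i > 0) of a truncated f ∈ ℚ((t))^×; over the field k = ℚ we have m = 0, so the parameters a_{−i} all vanish.

```python
# Illustration (ours): extract the winding number and Witt parameters of a
# truncated f in Q((t))^x.  Over k = Q we have m = 0, so a_{-i} = 0 and
#   f = t^w * a0 * prod_{i>0} (1 - a_i t^i).
from fractions import Fraction

M = 8  # truncation order (our choice)

def pmul(p, q):
    # product of truncated power series given as lists of length M + 1
    r = [Fraction(0)] * (M + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= M:
                r[i + j] += a * b
    return r

def witt_parameters(f):
    """f: dict {exponent: coefficient}.  Returns (w, a0, [a_1, ..., a_M])."""
    w = min(e for e, c in f.items() if c != 0)   # winding number
    u = [Fraction(0)] * (M + 1)                  # unit part f / t^w
    for e, c in f.items():
        if 0 <= e - w <= M:
            u[e - w] = Fraction(c)
    a0 = u[0]
    v = [c / a0 for c in u]
    params = []
    for i in range(1, M + 1):
        ai = -v[i]                               # v = 1 - a_i t^i + O(t^{i+1})
        params.append(ai)
        # divide v by (1 - a_i t^i) via the geometric series expansion
        inv = [Fraction(0)] * (M + 1)
        for n in range(M // i + 1):
            inv[n * i] = ai ** n
        v = pmul(v, inv)
    return w, a0, params

w, a0, params = witt_parameters({1: 2, 2: 2})    # f = 2t + 2t^2 = 2t(1 + t)
# reconstruct the unit part a0 * prod(1 - a_i t^i) and compare with f / t^w
recon = [Fraction(1)] + [Fraction(0)] * M
for i, ai in enumerate(params, start=1):
    factor = [Fraction(0)] * (M + 1)
    factor[0], factor[i] = Fraction(1), -ai
    recon = pmul(recon, factor)
recon = [a0 * c for c in recon]
assert w == 1 and a0 == 2 and params[0] == -1    # f = t^1 * 2 * (1 + t)
```

The same peeling loop, run on the negative side against the maximal ideal m, would be needed to produce the parameters a_{−i} over a general artinian local base.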
The following result establishes bimultiplicativity of the Contou-Carrère symbol as a byproduct.
Theorem 3.4.3. For all f, g ∈ k((t))^× we have

⟨f, g⟩^{−1} = (−1)^{w(f)w(g)} {f, g}^{k((t))}_{k[[t]]},

where on the right side we view f and g as elements of the restricted general linear group G^{k((t))}_{k[[t]]}.
Before beginning the proof proper we prove a couple of lemmas. We say that a polynomial f ∈ k[t] is distinguished if f is monic in t and f ≡ t^{deg f} mod m, in which case necessarily w(f) = deg f.

Lemma 3.4.4. Fix a distinguished polynomial f ∈ k[t] of degree n and g ∈ k[[t]]^×. (i) The quotient of the power series ring k[[t]] by its principal ideal (f) is free over k, and the monomials 1, t, ..., t^{n−1} form a k-basis. (ii) We have {f, g}^{k((t))}_{k[[t]]} = det(g | k[[t]]/(f)).

Proof. Statement (i) is a special case of the Weierstrass Division Theorem. From statement (i) it follows that the diagram

k[[t]] --∅-- k[[t]] --∅-- k[[t]]
 |∅                     |∅                     |∅
k[[t]] --∅-- k[[t]] --∅-- k[[t]]
 |{f^{−1}t^i}_{i=0}^{n−1}  |{f^{−1}t^i}_{i=0}^{n−1}  |{gf^{−1}t^i}_{i=0}^{n−1}
f^{−1}·k[[t]] --∅-- f^{−1}·k[[t]] --∅-- f^{−1}·k[[t]]

is a template for the calculation of {g, f^{−1}}^{k((t))}_{k[[t]]}. Statement (ii) now follows from the four square identity.

Lemma 3.4.5. Let f, g ∈ k[t] be distinguished polynomials. We have {f, g}^{k((t))}_{k[[t]]} = 1.

Proof. Since f, g ∈ G^{k((t))}_{k[[t]]} ∩ G^{k((t))}_{t^{−1}k[t^{−1}]}, we have

{f, g}^{k((t))}_{k[[t]]} · {f, g}^{k((t))}_{t^{−1}k[t^{−1}]} = 1
by Proposition 3.3.4. Let p and q be the degrees of f and g, respectively. Now consider the template

t^{−1}k[t^{−1}] --∅-- t^{−1}k[t^{−1}] --{t^i}_{i=0}^{p−1}-- t^{p−1}k[t^{−1}]
 |∅                    |∅                    |∅
t^{−1}k[t^{−1}] --∅-- t^{−1}k[t^{−1}] --{t^i}_{i=0}^{p−1}-- t^{p−1}k[t^{−1}]
 |{t^i}_{i=0}^{q−1}    |{t^i}_{i=0}^{q−1}    |{ft^i}_{i=0}^{q−1}
t^{q−1}k[t^{−1}] --∅-- t^{q−1}k[t^{−1}] --{gt^i}_{i=0}^{p−1}-- t^{p+q−1}k[t^{−1}]

for the calculation of {f, g}^{k((t))}_{t^{−1}k[t^{−1}]}. By the four square identity we conclude that the latter symbol equals unity, and we are done.

Lemma 3.4.6. For all f, g ∈ k[[t]]^× we have {f, g}^{k((t))}_{k[[t]]} = 1.

Proof. In view of the formal properties noted in §3.1.3, this is clear.

3.4.7. Proof of Theorem 3.4.3. Every element of k((t))^× factors as a power of t times a distinguished polynomial times a unit of k[[t]]. So after making the evident reductions based upon Lemmas 3.4.5 and 3.4.6, we may assume without loss of generality that f is a distinguished polynomial and that g ∈ k[[t]]^×. Moreover, we may assume without loss of generality that f takes the special form t^p − a for some positive integer p and a ∈ m. By Lemma 3.4.4 we then have {f, g}^{k((t))}_{k[[t]]} = det(g | k[[t]]/(t^p − a)). This justifies the further assumption without loss of generality that g(0) = 1. Now t operates nilpotently on the quotient k[[t]]/(t^p − a), hence t^N ≡ 0 mod (t^p − a) for some positive integer N, and hence det(1 + t^N h | k[[t]]/(t^p − a)) = 1 for all h ∈ k[[t]]. This justifies the further assumption without loss of generality that g = 1 − bt^q for some positive integer q and b ∈ k. Finally, we have
det(1 − bt^q | k[[t]]/(t^p − a)) = (1 − a^{q/(p,q)} b^{p/(p,q)})^{(p,q)},

as can be verified by a straightforward calculation that we omit, and we are done.
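The omitted calculation can be spot-checked numerically: the asserted formula is a polynomial identity in a and b, so verifying it for sample rational values is meaningful. A Python sketch (ours; the values p = 4, q = 6, a = 3, b = 5 are arbitrary choices, not from the paper):

```python
# Numerical spot-check (ours) of
#   det(1 - b t^q | k[[t]]/(t^p - a)) = (1 - a^{q/d} b^{p/d})^d,  d = gcd(p, q),
# with the hypothetical sample values p = 4, q = 6, a = 3, b = 5 over Q.
from fractions import Fraction
from math import gcd

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def det(A):
    # determinant over Q by Gaussian elimination
    A = [[Fraction(x) for x in row] for row in A]
    n, d = len(A), Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if A[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            d = -d
        d *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for j in range(c, n):
                A[r][j] -= f * A[c][j]
    return d

p, q, a, b = 4, 6, 3, 5
# matrix of multiplication by t on the basis 1, t, ..., t^{p-1} of k[[t]]/(t^p - a)
T = [[0] * p for _ in range(p)]
for j in range(p - 1):
    T[j + 1][j] = 1
T[0][p - 1] = a
Tq = [[int(i == j) for j in range(p)] for i in range(p)]  # T^q
for _ in range(q):
    Tq = mat_mul(Tq, T)
M = [[int(i == j) - b * Tq[i][j] for j in range(p)] for i in range(p)]
d_ = gcd(p, q)
lhs = det(M)
rhs = Fraction((1 - a ** (q // d_) * b ** (p // d_)) ** d_)
assert lhs == rhs   # both 454276
```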
3.4.8. Reparameterization invariance. It is easily verified that for any τ ∈ k((t)) of winding number 1, the operation

(f(t) ↦ f(τ)) : k((t)) → k((t))  ("substitution of τ for t")

is a k-linear automorphism of k((t)) belonging to the restricted general linear group G^{k((t))}_{k[[t]]}. Via the commutator interpretation provided by Theorem 3.4.3, it follows that the Contou-Carrère symbol is invariant under reparameterization of k((t)).
3.4.9. Recovery of the tame symbol and the residue. If k is a field, the Contou-Carrère symbol obviously reduces to the tame symbol. It is possible also to recover the residue from the Contou-Carrère symbol, as follows. Take k = F[ǫ]/(ǫ^3), where F is any field. Then we have

⟨1 − ǫf, 1 − ǫg⟩ ≡ 1 − ǫ^2 Res_{t=0}(g df) mod ǫ^3

for all f, g ∈ F((t)), as can be verified by a straightforward calculation. This last observation suggests an interpretation of our work as the "integrated version" of Tate's Lie-theoretic theory [8]. We wonder how Beilinson's multidimensional generalization [2] of Tate's theory might analogously be integrated.
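This congruence can be checked mechanically for sample inputs. The Python sketch below (our own illustration) takes F = ℚ, computes ⟨1 − ǫf, 1 − ǫg⟩ in ℚ[ǫ]/(ǫ³) via the exponential formula of §3.4.10 below (applicable since ℚ[ǫ]/(ǫ³) is a ℚ-algebra and 1 − ǫf ∈ 1 + m((t))), and compares with 1 − ǫ² Res_{t=0}(g df). The sample Laurent polynomials are our choice.

```python
# Sanity check (ours): with k = Q[e]/(e^3), verify
#   <1 - e f, 1 - e g>  =  1 - e^2 Res_{t=0}(g df)   mod e^3
# using the exponential formula  <u, v> = exp(Res_{t=0}(log u . dlog v)).
from fractions import Fraction

def lmul(p, q):  # Laurent polynomials over Q as dicts {exponent: coeff}
    r = {}
    for i, a in p.items():
        for j, b in q.items():
            r[i + j] = r.get(i + j, Fraction(0)) + a * b
    return r

def lderiv(p):   # formal derivative d/dt
    return {i - 1: i * a for i, a in p.items() if i != 0}

def lres(p):     # residue: coefficient of t^{-1}
    return p.get(-1, Fraction(0))

def lscale(c, p):
    return {i: c * a for i, a in p.items()}

def symbol(F, G):
    """<1 - e F, 1 - e G> in Q[e]/(e^3), as a coefficient triple (c0, c1, c2)."""
    # log(1 - e F) = -e F - e^2 F^2 / 2          (exact, since e^3 = 0)
    logf = [{}, lscale(Fraction(-1), F), lscale(Fraction(-1, 2), lmul(F, F))]
    # dlog(1 - e G) = -e G' (1 + e G)            (mod e^3)
    dG = lderiv(G)
    dlogg = [{}, lscale(Fraction(-1), dG), lscale(Fraction(-1), lmul(dG, G))]
    res = [Fraction(0)] * 3
    for i in range(3):
        for j in range(3 - i):
            res[i + j] += lres(lmul(logf[i], dlogg[j]))
    c1, c2 = res[1], res[2]
    return (Fraction(1), c1, c2 + c1 * c1 / 2)   # exp of a nilpotent scalar

F = {-2: Fraction(1), 1: Fraction(3)}   # f = t^{-2} + 3t   (our sample)
G = {-1: Fraction(2), 2: Fraction(1)}   # g = 2/t + t^2     (our sample)
lhs = symbol(F, G)
rhs = (Fraction(1), Fraction(0), -lres(lmul(G, lderiv(F))))  # 1 - e^2 Res(g df)
assert lhs == rhs == (1, 0, -4)
```

By anti-symmetry, swapping the arguments should invert the symbol, which the code also confirms: symbol(G, F) has ǫ²-coefficient +4.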
3.4.10. The case in which k is a Q-algebra. Suppose k is a Q-algebra. Let f ∈ 1 + m((t)) and g ∈ k((t))^× be given. We have

⟨f, g⟩ = exp(Res_{t=0}(log f · d log g)),

as can be verified by a straightforward calculation. This is quite similar in form to the commutator formula given by Segal-Wilson [7, Prop. 3.6].

4. Reciprocity laws on curves

4.1. The common setting for the reciprocity laws.

4.1.1. Basic data. Let F be an algebraically closed field. Let X/F be a nonsingular complete algebraic curve. Let S be a finite nonempty set of closed points of X. For any ring or group A, put

A^S := {(a_s)_{s∈S} | a_s ∈ A} = (product of copies of A indexed by S).

For each s ∈ S select a uniformizer π_s at s.

4.1.2. Construction of R_0. For each meromorphic function f on X put

f^{(s)} := Σ_i a_i t^i ∈ F((t)), where f = Σ_i a_i π_s^i (a_i ∈ F)

is the Laurent expansion of f in powers of π_s. Put

R_0 := {(f^{(s)})_{s∈S} ∈ F((t))^S | f ∈ H^0(X \ S, O_X)}.
The F-algebra R_0 is a copy of the affine coordinate ring of X \ S. We take for granted that

dim_F (F[[t]]^S ∩ R_0) = dim_F H^0(X, O_X) = 1 < ∞

and

dim_F (F((t))^S / (R_0 + F[[t]]^S)) = dim_F H^1(X, O_X) = genus of X < ∞.
As in the papers [1], [6], [8], it is these finiteness statements from algebraic geometry that lead ineluctably to reciprocity laws.

4.1.3. Extension of scalars from F to k. We assume now that the artinian local ring k taken as base for the theory of determinant groupoids is a finite F-algebra. We put

R := R_0 ⊗_F k = (k-span of R_0) ⊂ k((t))^S, V := k((t))^S, and V_+ := k[[t]]^S.

Note that the k-modules V_+ ∩ R and V/(R + V_+) are free of finite rank. Note that R^×, acting in natural k-linear fashion on V, is contained in the restricted general linear group G^V_{V_+}.
4.2. A reciprocity law for the Contou-Carrère symbol.
Theorem 4.2.1. In the setting above, for all f, g ∈ R^× we have ∏_{s∈S} ⟨f_s, g_s⟩ = 1.

Proof. Clearly there exists some F-subspace M_0 ⊂ F((t))^S commensurable to F[[t]]^S such that F((t))^S = M_0 ⊕ R_0. Put M := M_0 ⊗_F k. Then we have V = M ⊕ R and M ∼ V_+, and hence we have

{f, g}^V_{V_+} = {f, g}^V_M = {f, g}^V_M · {f, g}^V_R = 1.

4.3. Recovery of Witt's explicit reciprocity law.

4.3.1. Quick review of Witt vectors. Witt vectors were introduced in Witt's paper [9]. The basic theory (proofs omitted) takes the following form. For proofs, we recommend to the reader the exercises on this topic in Lang's algebra text [4]. Let

{ǫ} ∪ {x_i, y_i}_{i=1}^{∞}
be a family of independent variables. Write

∏_{i=1}^{∞} (1 − x_i ǫ^i)(1 − y_i ǫ^i) = ∏_{i=1}^{∞} (1 − A_i ǫ^i),

∏_{i=1}^{∞} ∏_{j=1}^{∞} (1 − x_i^{j/(i,j)} y_j^{i/(i,j)} ǫ^{ij/(i,j)})^{(i,j)} = ∏_{i=1}^{∞} (1 − M_i ǫ^i),

thereby defining families of polynomials A_n, M_n ∈ ℤ[{x_i, y_i}_{i|n}], n = 1, 2, 3, ....
For any commutative ring A with unit and finite subset Δ of the set of positive integers closed under passage to divisors, let W_Δ(A) denote the set of vectors with entries in A indexed by Δ. It can be shown that the A's and M's define addition and multiplication laws with respect to which W_Δ(A) becomes a commutative ring with unit, functorially in commutative rings A with unit. Below we do not actually need to use the multiplication law in W_Δ(A), but we mention its definition because of its close relationship to the definition of the Contou-Carrère symbol. The ghost polynomials x̃_ν are characterized by the power series identity

− log ∏_{i=1}^{∞} (1 − x_i ǫ^i) = Σ_{ν=1}^{∞} (x̃_ν / ν) ǫ^ν.
Given a ring A, a finite subset Δ of the set of positive integers closed under passage to divisors, a vector x = (x_i ∈ A)_{i∈Δ} ∈ W_Δ(A) and an integer i ∈ Δ, we write x̃_i for the result of substituting the entry x_d for the variable x_d in the polynomial x̃_i for all d ∈ Δ, and we call x̃_i the ghost coordinate of x indexed by i; in this context, for emphasis, we say that x_i is the live coordinate of x indexed by i. Addition and multiplication have a very simple expression in ghost coordinates:

Ã_n = x̃_n + ỹ_n,  M̃_n = x̃_n · ỹ_n.
Clearly each variable x_n has a unique expansion as a polynomial in the x̃'s with coefficients in Q. It follows that for any Q-algebra A the ring W_Δ(A) decomposes in ghost coordinates as a product of copies of A indexed by Δ. But in general it is not possible to write x_n as a polynomial in the x̃'s with integral coefficients, and hence in general W_Δ(A) depends in a complicated way on A. But it is simpler to deal with the ring schemes of the form

W_{≤N} := W_{{1,...,N}}.

The additive group scheme underlying the ring scheme W_{≤N} is fairly easy to handle because the map

x = (x_i)_{i=1}^{N} ↦ ∏_{i=1}^{N} (1 − x_i ǫ^i) mod ǫ^{N+1}

identifies the additive group underlying W_{≤N}(A) with the group of units in A[ǫ]/(ǫ^{N+1}) congruent to 1 modulo (ǫ), functorially in commutative rings A with unit. Since the ring W_{{1,p,...,p^{n−1}}}(A) is a quotient of the ring W_{≤p^{n−1}}(A) functorially in A, it turns out that we really lose no generality by thus restricting our focus.

4.3.4. Definition of the symbol Res_{W≤N}. Let us turn our attention back to the setting of Theorem 4.2.1. Fix a positive integer N. We now take k := F[ǫ]/(ǫ^{N+1}). We define a pairing

Res_{W≤N}(·, ·) : F((t))^× × W_{≤N}(F((t))) → W_{≤N}(F)

by the rule

⟨f, ∏_{i=1}^{N} (1 − x_i ǫ^i)⟩ ≡ ∏_{i=1}^{N} (1 − ǫ^i Res_{W≤N}(f, x)_i) mod (ǫ^{N+1}),

where ⟨·, ·⟩ is the Contou-Carrère symbol.
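The identification above makes addition in W_{≤N}(A) computable: multiply the associated units of A[ǫ]/(ǫ^{N+1}) and peel the live coordinates back off the product. A Python sketch over A = ℚ (our own illustration; the choice N = 6 and the sample vectors are ours), which also checks additivity in ghost coordinates:

```python
# Illustration (ours): addition in W_{<=N}(Q) through the identification
#   x |-> prod_i (1 - x_i eps^i)  mod eps^{N+1}.
from fractions import Fraction

N = 6

def mul(u, v):
    # truncated product in Q[eps]/(eps^{N+1}); u, v are lists of length N + 1
    r = [Fraction(0)] * (N + 1)
    for i, a in enumerate(u):
        for j, b in enumerate(v):
            if i + j <= N:
                r[i + j] += a * b
    return r

def to_unit(x):
    # x = (x_1, ..., x_N)  ->  prod_i (1 - x_i eps^i)
    u = [Fraction(1)] + [Fraction(0)] * N
    for i, xi in enumerate(x, start=1):
        factor = [Fraction(0)] * (N + 1)
        factor[0], factor[i] = Fraction(1), Fraction(-xi)
        u = mul(u, factor)
    return u

def from_unit(u):
    # inverse of to_unit: peel off one coordinate at a time
    x = []
    for i in range(1, N + 1):
        xi = -u[i]
        x.append(xi)
        inv = [Fraction(0)] * (N + 1)   # (1 - xi eps^i)^{-1} = sum (xi eps^i)^n
        for n in range(N // i + 1):
            inv[n * i] = xi ** n
        u = mul(u, inv)
    return x

def witt_add(x, y):
    return from_unit(mul(to_unit(x), to_unit(y)))

def ghost(x):
    # ghost coordinates: -log prod(1 - x_i eps^i) = sum_nu x~_nu eps^nu / nu
    u = to_unit(x)
    z = list(u)
    z[0] -= 1                                # u = 1 + z
    log_u = [Fraction(0)] * (N + 1)          # log(1+z) = sum (-1)^{n+1} z^n / n
    zn = [Fraction(1)] + [Fraction(0)] * N
    for n in range(1, N + 1):
        zn = mul(zn, z)
        for k in range(N + 1):
            log_u[k] += Fraction((-1) ** (n + 1), n) * zn[k]
    return [-nu * log_u[nu] for nu in range(1, N + 1)]

x = [Fraction(1), Fraction(2)] + [Fraction(0)] * (N - 2)
y = [Fraction(3)] + [Fraction(0)] * (N - 1)
s = witt_add(x, y)
# additivity in ghost coordinates:  (x + y)~_nu = x~_nu + y~_nu
assert ghost(s) == [gx + gy for gx, gy in zip(ghost(x), ghost(y))]
```

Note that only exact rational arithmetic is used, so the ghost-coordinate check is an identity, not an approximation.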
4.3.5. Comparison with Witt's original definition. The pairing Res_{W≤N} is essentially the pairing introduced in Witt's paper [9]. Without giving full details, we briefly explain this point as follows. Assume for the moment that F is of characteristic zero so that we can talk about ghost coordinates, and recall the remark of §3.4.10. We have

log⟨f, ∏_{i=1}^{N} (1 − ǫ^i x_i)⟩ ≡ Res_{t=0}(− log ∏_{i=1}^{N} (1 − ǫ^i x_i) · d log f)
≡ Res_{t=0}((Σ_{i=1}^{N} x̃_i ǫ^i / i) · d log f)
≡ Σ_{i=1}^{N} Res_{t=0}(x̃_i · df/f) · (ǫ^i / i) mod (ǫ^{N+1}).

In other words, in ghost coordinates we have

Res_{W≤N}(f, x)_i = Res_{t=0}(x̃_i · df/f)

for i = 1, ..., N. This last formula for i = 1, p, ..., p^{n−1} and F of characteristic p > 0 is exactly the expression in ghost coordinates of the rule used by Witt [9, p. 130] to define his pairing. But a priori Witt's definition is no good in characteristic p because live coordinates cannot in general be expressed as polynomials in the ghost coordinates with p-integral coefficients. Nevertheless, Witt succeeds in making the definition rigorous (see [9, Satz 4, p. 130]) by proving that the corresponding expression in live coordinates of his pairing is "denominator-free" and hence does make sense in arbitrary characteristic. Of course, with the hindsight afforded by Theorem 4.2.1, Witt's Satz 4 is not very difficult to check.
4.3.6. Reciprocity for the symbol Res_{W≤N}. We return to the situation in which the algebraically closed field F may be of any characteristic. As above, we fix a positive integer N. We then have

Σ_{s∈S} Res_{W≤N}(f_s, x_s) = 0

for all f ∈ R_0^× ⊂ F((t))^{×S} and x ∈ W_{≤N}(R_0) ⊂ W_{≤N}(F((t))^S) = W_{≤N}(F((t)))^S, by Theorem 4.2.1 and the definitions. We emphasize that the addition is to be performed in the group W_{≤N}(F). From this last formula in the case that F is of characteristic p and N = p^{n−1}, it is not difficult to deduce the reciprocity law stated without proof on the last page of Witt's paper [9] in the case of an algebraically closed ground field of characteristic p. We omit further details.
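As a closing illustration (ours, not the paper's): for k = F the local symbol reduces to the tame symbol (§3.4.9), and Theorem 4.2.1 then specializes to Weil reciprocity. The sketch below checks this for f = t and g = 1 − t on X = P^1 over F = ℚ, with S = {0, 1, ∞}; the formulas used for the tame symbol at finite points and at infinity are the standard ones, not taken from this paper.

```python
# Closing sanity check (ours): over k = F = Q the local symbol is the tame
# symbol, and Theorem 4.2.1 becomes Weil reciprocity.  We verify
#   prod_{s in S} (f, g)_s = 1  for f = t, g = 1 - t on P^1, S = {0, 1, inf}.
from fractions import Fraction

def val_and_unit(p, a):
    """Valuation of the polynomial p (coeff list, low degree first) at t = a,
    and the value u(a) of the unit part, where p = (t - a)^m u."""
    m = 0
    while True:
        if all(c == 0 for c in p):
            raise ValueError("zero polynomial")
        value = sum(c * a ** i for i, c in enumerate(p))
        if value != 0:
            return m, value
        # deflate: divide p by (t - a) by synthetic division
        q = [Fraction(0)] * (len(p) - 1)
        carry = Fraction(0)
        for i in range(len(p) - 1, 0, -1):
            q[i - 1] = p[i] + carry
            carry = a * q[i - 1]
        p = q
        m += 1

def tame_finite(f, g, a):
    # (f, g)_a = (-1)^{v(f)v(g)} u_f(a)^{v(g)} / u_g(a)^{v(f)}
    vf, uf = val_and_unit(list(f), a)
    vg, ug = val_and_unit(list(g), a)
    return Fraction(-1) ** (vf * vg) * uf ** vg / ug ** vf

def tame_infinity(f, g):
    # at infinity v(f) = -deg f, and the unit value is the leading coefficient
    df, dg = len(f) - 1, len(g) - 1
    lf, lg = f[-1], g[-1]
    return Fraction(-1) ** (df * dg) * lg ** df / lf ** dg

f = [Fraction(0), Fraction(1)]    # f = t
g = [Fraction(1), Fraction(-1)]   # g = 1 - t
prod = (tame_finite(f, g, Fraction(0))
        * tame_finite(f, g, Fraction(1))
        * tame_infinity(f, g))
assert prod == 1                   # Weil reciprocity for this pair
```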
Acknowledgements
The first-named author wishes to thank the mathematics department at the University of Salamanca for its hospitality during a two week period in May 2002 when much of the work on this paper was done. The first-named author became acquainted with determinant groupoids from the perspective of mathematical physics through some lectures on algebraic models of loop spaces by M. Kapranov in Spring 2002 at the University of Minnesota, and gratefully acknowledges that influence. The first-named author also wishes to thank the mathematics department at the University of Arizona for the opportunity to lecture on preliminary versions of some of these ideas at the Winter School of March 2000.
(This argument comes from the proof of [5, (3.G), Proposition, p. 21].)

Proposition 2.1.4. (i) A k-module is free if and only if projective if and only if flat. (ii) If two of the k-modules in a short exact sequence of such are free, then so is the third. (iii) The k-linear dual of a free k-module is again free. (We frequently apply this proposition below but rarely cite it explicitly.)

Proof. (i) It is necessary only to prove that flatness implies freeness, and for this purpose Lemmas 2.1.2 and 2.1.3 clearly suffice. (ii) Let 0 → U → V → W → 0 be a short exact sequence of k-modules. If U and W are free, then V is clearly free. If V and W are free, then U is a direct summand of V, hence U is projective, and hence U is free. If U and V are free, then any k-basis {u_i} of U remains (k/m)-linearly independent in V/mV by Lemma 2.1.3; hence there exists a family of elements {v_j} of V such that {u_i mod mV} ∪ {v_j mod mV} is a (k/m)-basis of V/mV; hence {u_i} ∪ {v_j} is a k-basis of V by Lemmas 2.1.2 and 2.1.3; and hence {v_j} projects to a k-basis for W. (iii) This can be proved by straightforwardly applying Lemma 2.1.1 to verify flatness. We omit the details.
3. Study of the symbol {σ, τ}^V_{V_+}

3.1. Definition and basic properties of the symbol. Fix a free k-module V and a free k-submodule V_+ ⊂ V.

3.1.1. Definition. Given commuting elements
be a free k-submodule such that the quotient k-modules P/R, Q/R, τ P/R, σQ/R are free of finite rank. Fix finite sequences {a i }, . . . , {h i } in V (not necessarily all of the same length) such that the corresponding finite sequences
Fix commuting elements σ, τ ∈ G V V+ . (i) Suppose there exists a free k-module W containing V as a k-submodule. Suppose further that σ, τ admit commuting extensionsσ,τ ∈ G W V+ , respectively. Then we have {σ, τ } V V+ = {σ,τ } W V+ . (ii) Suppose there exists a free k-submodule U ⊂ V such that σU = U = τ U and U ∩ V + = 0. PutV := V /U andV + := (V + + U )/U . Letσ andτ be the k-linear automorphisms ofV induced by σ and τ , respectively, and assume thatσ,τ ∈ GV V+ .
.
Further, we have{σ 0 σ 1 , τ 0 τ 1 } V ⊕V V+⊕V− = {σ 0 σ 1 mod U, τ 0 τ 1 mod U } (V ⊕V )/U ((V+⊕V−)+U)/U = {σ, τ } V VThe commutator interpretation of the Contou-Carrère symbol.
the winding number of f and we call {a i } ∞ i=−∞ the family of Witt parameters of f . Now view f as a k-linear automorphism of the free k-module k((t)). It is easily verified that f ∈ G m)[[t]] f = −w(f mod m) = −w(f ). 3.4.2. Definition of the Contou-Carrère symbol. Let f, g ∈ k((t)) × be given. Let {a i } and {b j } be the systems of Witt parameters associated to f and g, respectively. Put
.
Fix a distinguished polynomial f ∈ k[t] of degree n and g ∈ k[[t]] × . (i) The quotient of the power series ring k[[t]] by its principal ideal (f ) is free over k and the monomials 1, t, . . . , t n−1 form a k-basis. (ii) We have {f, g} k((t)) k[[t]] = det(g|k[[t]]/(f )).
Lemma 3.4. 5 .
5Let f, g ∈ k[t] be distinguished polynomials. We have
3. 4 . 7 .
47Proof of Theorem 3.4.3. Every element of k((t)) × factors as a power of t times a distinguished polynomial times a unit of k[[t]]. So after making the evident reductions based upon Lemmas 3.4.5 and 3.4.6, we may assume without loss of generality that f is a distinguished polynomial and that g ∈ k[[t]] × . Moreover, we may assume without loss of generality that f takes the special form t p − a for some positive integer p and a ∈ m. By Lemma 3.(g|k[[t]]/(t p − a)). This justifies the further assumption without loss of generality that g(0) = 1. Now t operates nilpotently on the quotient k[[t]]/(t p − a), hence t N ≡ 0 mod (t p − a) for some positive integer N , and hence det(1 + t N h|k[[t]]/(t p − a)) = 1 for all h ∈ k[[t]
as can be verified by a straightforward calculation. This is quite similar in form to the commutator formula given by Segal-Wilson [7, Prop. 3.6]. 4. Reciprocity laws on curves 4.1. The common setting for the reciprocity laws. 4.1.1. Basic data. Let F be an algebraically closed field. Let X/F be a nonsingular complete algebraic curve. Let S be a finite nonempty set of closed points of X. For any ring or group A, put A S := {(a s ) s∈S |a s ∈ A} = (product of copies of A indexed by S).
The first two equalities are justified by the basic properties enumerated in §3.1.3 and the last by Proposition 3.3.4. We also have{f, g} V V+ = s∈S s ′ ∈S {f s , g s ′ } k((t)) k[[t]] = (−1) ( s w(fs))( s w(gs)) · s∈S f s , g s −1 .The first equality is justified the basic properties enumerated in §3.1.3 and Propositions 3.3.2. The second equality is justified by Proposition 3.3.3 and Theorem 3.4.3. Finally, we have s∈S w(f s ) = s∈S w(f s mod m) = 0 because the second sum is the degree of a principal divisor on X. The result follows.
identifies the additive group underlying W ≤N (A) with the group of units in A[ǫ]/(ǫ N +1 ) congruent to 1 modulo (ǫ) functorially in commutative rings A with unit. Since the ring W {1,p,...,p n−1 } (A) is a quotient of the ring W ≤p n−1 (A) functorially in A, it turns out that we really lose no generality by thus restricting our focus. 4.3.4. Definition of the symbol Res W ≤N . Let us turn our attention back to the setting of Theorem 4.2.1. Fix a positive integer N . We now take k := F [ǫ]/(ǫ N +1 ).
≤N (f s , x s ) = 0
.A), Thm. 1, part 6, pp. 17-18].Lemma 2.1.2. A family {v i } i∈I of elements of a k-module V generates V over k if and only if the family {v i mod mV } i∈I generates V /mV over k/m. (This is a version of Nakayama's Lemma. Note that V need not be finitely generated over k.)
Lemma 3.1.2. Let G be a group. Write [x, y] := xyx −1 y −1 for all x, y ∈ G. Now let α, β, γ ∈ G be given. Assume that [β, γ] is central in G. Then we have [α, γ][β, γ] = [αβ, γ].
Contou-Carrère symbol is reparameterization invariant. In the present context, it follows that the value of the symbol f s , g s is independent of the choice π s of uniformizer at s, and thus is coordinate-independent.4.2.3. Recovery of Weil reciprocity. Take k = F . In this case the Contou-Carrère symbol reduces to the tame symbol, and hence Theorem 4.2.1 reduces to Weil reciprocity. 4.2.4. Recovery of sum-of-residues-equals-zero. Take k = F [ǫ]/(ǫ 3 ). In this case, as explained in §3.4.9, the residue can be recovered from the Contou-Carrère symbol, and hence sum-of-residues-equals-zero can be recovered from Theorem 4.2.1. 4.3. Recovery of Witt's explicit reciprocity law. 4.3.1. Quick review of Witt vectors. Witt vectors were introduced in Witt's paper4.2.2. Coordinate-independence of the local symbol f s , g s . In §3.4.8 we explained
that the
4.3.3.Remark. The focus in arithmetical applications of Witt vectors is usually on the case ∆ ⊂ {1, p, p 2 , . . . } for some rational prime p. In this case, for example, we have the striking fact that W {1,p,...,p n−1 } (Z/pZ) = Z/p n Z.
t −1 k[t −1 ] ∅ t −1 k[t −1 ] {t i } p−1 i=0 t p−1 k[t −1 ] {t i } q−1 i=0 {t i } q−1 i=0 {f t i } q−1 i=0 t q−1 k[t −1 ] ∅ t q−1 k[t −1 ] {gt i } p−1 i=0 t p+q−1 k[t −1 ]for the calculation of {f, g} k((t)) t −1 k[t−1 ] . By the four square identity we conclude that the latter symbol equals unity, and we are done.
References

[1] Arbarello, E.; De Concini, C.; Kac, V. G.: The infinite wedge representation and the reciprocity law for algebraic curves. Theta functions-Bowdoin 1987, Part 1 (Brunswick, ME, 1987), 171-190, Proc. Sympos. Pure Math., 49, Part 1, Amer. Math. Soc., Providence, RI, 1989.

[2] Beilinson, A. A.: Residues and adèles. (Russian) Funktsional. Anal. i Prilozhen. 14 (1980), no. 1, 44-45. English translation: Functional Anal. Appl. 14 (1980), no. 1, 34-35.

[3] Contou-Carrère, C.: Jacobienne locale, groupe de bivecteurs de Witt universel, et symbole modéré. [Local Jacobian, universal Witt bivector group, and tame symbol] C. R. Acad. Sci. Paris Sér. I Math. 318 (1994), no. 8, 743-746.

[4] Lang, S.: Algebra. Revised third edition. Graduate Texts in Mathematics, 211. Springer-Verlag, New York, 2002.

[5] Matsumura, H.: Commutative algebra. Second edition. Mathematics Lecture Note Series, 56. Benjamin/Cummings Publishing Co., Inc., Reading, Mass., 1980.

[6] Pablos Romo, F.: On the tame symbol of an algebraic curve. Comm. Algebra 30 (2002), no. 9, 4349-4368.

[7] Segal, G.; Wilson, G.: Loop groups and equations of KdV type. Inst. Hautes Études Sci. Publ. Math. 61 (1985), 5-65.

[8] Tate, J.: Residues of differentials on curves. Ann. Sci. École Norm. Sup. (4) 1 (1968), 149-159.

[9] Witt, E.: Zyklische Körper und Algebren der Charakteristik p vom Grad p^n. J. Reine Angew. Math. 176 (1937), 126-140.
|
[] |
[
"Visual Depth Mapping from Monocular Images using Recurrent Convolutional Neural Networks",
"Visual Depth Mapping from Monocular Images using Recurrent Convolutional Neural Networks"
] |
[
"John Mern \nDepartment of Aeronautics and Astronautics\nStanford University\n94305StanfordCA\n",
"Kyle Julian \nDepartment of Aeronautics and Astronautics\nStanford University\n94305StanfordCA\n",
"Rachael E Tompa \nDepartment of Aeronautics and Astronautics\nStanford University\n94305StanfordCA\n",
"Mykel J Kochenderfer \nDepartment of Aeronautics and Astronautics\nStanford University\n94305StanfordCA\n"
] |
[
"Department of Aeronautics and Astronautics\nStanford University\n94305StanfordCA",
"Department of Aeronautics and Astronautics\nStanford University\n94305StanfordCA",
"Department of Aeronautics and Astronautics\nStanford University\n94305StanfordCA",
"Department of Aeronautics and Astronautics\nStanford University\n94305StanfordCA"
] |
[] |
A reliable sense-and-avoid system is critical to enabling safe autonomous operation of unmanned aircraft. Existing sense-and-avoid methods often require specialized sensors that are too large or power intensive for use on small unmanned vehicles. This paper presents a method to estimate object distances based on visual image sequences, allowing for the use of low-cost, on-board monocular cameras as simple collision avoidance sensors. We present a deep recurrent convolutional neural network and training method to generate depth maps from video sequences. Our network is trained using simulated camera and depth data generated with Microsoft's AirSim simulator. Empirically, we show that our model achieves superior performance compared to models generated using prior methods. We further demonstrate that the method can be used for sense-and-avoid of obstacles in simulation.I. IntroductionEffective sense-and-avoid systems are necessary to safely integrate unmanned aircraft into the airspace [1]. Many systems require specialized sensors and extensive computational resources creating the challenge of adhering to aircraft size, weight, and power (SWaP) constraints[2]. Embedded digital cameras, such as those commonly installed in cell phones, are common low-SWaP sensors that can be easily accommodated on-board most small unmanned aircraft.Camera images cannot be used directly for sense-and-avoid because they do not provide the three-dimensional location of potential obstacles. In this paper, we present a method to estimate three-dimensional locations using image sequences from simple monocular cameras. Our method generates a relative depth map of each pixel in a camera field-of-view (FoV) in the direction normal to the image plane. The resulting depth maps can then be used in a variety of applications, such as Simultaneous Localization and Mapping (SLAM) or sense-and-avoid. 
This method does not require specialized sensors allowing it to be used on SWaP-constrained vehicles where other systems are infeasible.The proposed method uses a deep neural network to map visual image sequences to corresponding relative depth maps. In order to account for the correlation between sequential input frames, we propose a recurrent convolutional neural network (R-CNN) architecture[3]. We present this general architecture and recommend an auto-encoder design based on convolutional Gated Recurrent Units (C-GRUs). In addition, we present a method to effectively train the network over image sequences using stochastic mini-batches.We demonstrate the effectiveness of the depth extraction approach in Microsoft's AirSim simulator[4]. Using AirSim, we generate matched-pair sets of images from a simulated on-board camera and the depth map of the scene in the field of view. We provide qualitative examples of the depth maps generated by our method and quantitative evaluations using conventional metrics from the field of computer vision. Our method outperforms three previously proposed deep learning-based methods. We also show that the accuracy of the depth maps generated by our approach is sufficient for sense-and-avoid of stationary obstacles without additional sensor or telemetry data.II. Prior WorkPrior attempts to extract depth maps from images have employed a variety of approaches. Early approaches were based on geometric inferencing using stereo images[5]. Such methods used a pair of cameras fixed at a known displacement from one-another that collect a set of stereo images. Distance could then be estimated by extracting features from objects in the scene and calculating the depth from the disparity of the location of the feature in each camera view. These methods are limited in how well they can resolve depth at different distances[6].Recent developments have improved the performance of stereoscopic techniques[7]. 
These methods continue to be limited in the resolution they can accurately provide in many environments where feature-extraction is challenging and * Graduate Student Researcher, Stanford Intelligent Systems Laboratory, and AIAA Student Member † Assistant Professor, Stanford Intelligent Systems Laboratory, and AIAA Associate Fellow arXiv:1812.04082v1 [cs.CV] 10 Dec 2018 Recent techniques have used deep convolutional neural networks (CNNs) to build depth maps from single images[16][17][18]. These techniques often require extensive pre-or post-processing[19]. While the performance of the different methods vary, these methods tend to produce accurate resolution of low-frequency information.Additional methods have been introduced to extract relative scene information from video sequences. Geometric methods based on feature-displacement and optical flow have been used on videos for depth mapping[20]. Like the geometric methods used on single images, these methods are dependent on scene features in order to provide accurate estimates and are therefore limited in the environments for which they can provide accurate resolution.Our work draws several insights from within the field of deep computer vision. In particular, we draw from the work done with deep auto-encoders. Deep auto-encoders use CNNs to generate compressed representations of an input image and the mapping of that compressed representation back to the original image[21]. Unlike CNNs, which can be used to map input images to symbolic outputs such as classification labels, auto-encoders train on mappings from images to images. The design and training methods for these networks have been successfully applied to various image-to-image translation tasks, such as those in the field of medical imaging[22]. Our work casts the depth-mapping problem as an image-to-image translation problem and builds on on these methods.
|
10.2514/6.2019-1189
|
[
"https://arxiv.org/pdf/1812.04082v1.pdf"
] | 54,564,575 |
1812.04082
|
c91456755f1d0258dae1cd060077bcd1a369a631
|
Visual Depth Mapping from Monocular Images using Recurrent Convolutional Neural Networks
John Mern
Department of Aeronautics and Astronautics
Stanford University
94305StanfordCA
Kyle Julian
Department of Aeronautics and Astronautics
Stanford University
94305StanfordCA
Rachael E Tompa
Department of Aeronautics and Astronautics
Stanford University
94305StanfordCA
Mykel J Kochenderfer
Department of Aeronautics and Astronautics
Stanford University
94305StanfordCA
Visual Depth Mapping from Monocular Images using Recurrent Convolutional Neural Networks
A reliable sense-and-avoid system is critical to enabling safe autonomous operation of unmanned aircraft. Existing sense-and-avoid methods often require specialized sensors that are too large or power intensive for use on small unmanned vehicles. This paper presents a method to estimate object distances based on visual image sequences, allowing for the use of low-cost, on-board monocular cameras as simple collision avoidance sensors. We present a deep recurrent convolutional neural network and training method to generate depth maps from video sequences. Our network is trained using simulated camera and depth data generated with Microsoft's AirSim simulator. Empirically, we show that our model achieves superior performance compared to models generated using prior methods. We further demonstrate that the method can be used for sense-and-avoid of obstacles in simulation.I. IntroductionEffective sense-and-avoid systems are necessary to safely integrate unmanned aircraft into the airspace [1]. Many systems require specialized sensors and extensive computational resources creating the challenge of adhering to aircraft size, weight, and power (SWaP) constraints[2]. Embedded digital cameras, such as those commonly installed in cell phones, are common low-SWaP sensors that can be easily accommodated on-board most small unmanned aircraft.Camera images cannot be used directly for sense-and-avoid because they do not provide the three-dimensional location of potential obstacles. In this paper, we present a method to estimate three-dimensional locations using image sequences from simple monocular cameras. Our method generates a relative depth map of each pixel in a camera field-of-view (FoV) in the direction normal to the image plane. The resulting depth maps can then be used in a variety of applications, such as Simultaneous Localization and Mapping (SLAM) or sense-and-avoid. 
This method does not require specialized sensors, allowing it to be used on SWaP-constrained vehicles where other systems are infeasible.

The proposed method uses a deep neural network to map visual image sequences to corresponding relative depth maps. In order to account for the correlation between sequential input frames, we propose a recurrent convolutional neural network (R-CNN) architecture [3]. We present this general architecture and recommend an auto-encoder design based on convolutional Gated Recurrent Units (C-GRUs). In addition, we present a method to effectively train the network over image sequences using stochastic mini-batches.

We demonstrate the effectiveness of the depth extraction approach in Microsoft's AirSim simulator [4]. Using AirSim, we generate matched-pair sets of images from a simulated on-board camera and the depth map of the scene in the field of view. We provide qualitative examples of the depth maps generated by our method and quantitative evaluations using conventional metrics from the field of computer vision. Our method outperforms three previously proposed deep learning-based methods. We also show that the accuracy of the depth maps generated by our approach is sufficient for sense-and-avoid of stationary obstacles without additional sensor or telemetry data.

II. Prior Work

Prior attempts to extract depth maps from images have employed a variety of approaches. Early approaches were based on geometric inferencing using stereo images [5]. Such methods used a pair of cameras fixed at a known displacement from one another to collect a set of stereo images. Distance could then be estimated by extracting features from objects in the scene and calculating the depth from the disparity of each feature's location in the two camera views. These methods are limited in how well they can resolve depth at different distances [6]. Recent developments have improved the performance of stereoscopic techniques [7].
These methods continue to be limited in the resolution they can accurately provide in many environments where feature extraction is challenging, and they are often sensitive to camera performance [8]. Application of deep neural networks to stereoscopic pairs has been shown to alleviate some of these challenges [9, 10].

Conventional computer vision methods have been proposed to extract depth from single images using geometric features. These methods generally rely on heuristic techniques that require the presence of strong scene features such as vanishing perspective lines and a clear horizon [11-13]. Such methods have achieved reasonable success in indoor environments [14], where such features are commonly present; however, this success has not generalized to outdoor environments. Some researchers have used multi-scale Markov random fields to model the relationship between features and depth [15].

Recent techniques have used deep convolutional neural networks (CNNs) to build depth maps from single images [16-18]. These techniques often require extensive pre- or post-processing [19]. While the performance of the different methods varies, these methods tend to produce accurate resolution of low-frequency information.

Additional methods have been introduced to extract relative scene information from video sequences. Geometric methods based on feature displacement and optical flow have been used on videos for depth mapping [20]. Like the geometric methods used on single images, these methods depend on scene features in order to provide accurate estimates and are therefore limited in the environments for which they can provide accurate resolution.

Our work draws several insights from the field of deep computer vision. In particular, we draw from the work done with deep auto-encoders. Deep auto-encoders use CNNs to generate a compressed representation of an input image and a mapping of that compressed representation back to the original image [21]. Unlike CNNs used to map input images to symbolic outputs such as classification labels, auto-encoders train on mappings from images to images. The design and training methods for these networks have been successfully applied to various image-to-image translation tasks, such as those in the field of medical imaging [22]. Our work casts the depth-mapping problem as an image-to-image translation problem and builds on these methods.

* Graduate Student Researcher, Stanford Intelligent Systems Laboratory, and AIAA Student Member
† Assistant Professor, Stanford Intelligent Systems Laboratory, and AIAA Associate Fellow
arXiv:1812.04082v1 [cs.CV] 10 Dec 2018
III. Technical Approach
The main contribution of this work is the introduction of a recurrent convolutional neural network for the generation of depth maps from visual image sequences. Convolutional neural networks (CNNs) are a special class of neural network, commonly used in machine learning because of their ability to approximate a wide variety of functions [23]. The network is composed of layers with weights and biases that define a mapping from input to output. The outputs of each layer pass through a non-linear activation function before being used as the input to the following layer. The weights and biases are optimized to minimize the empirical loss over a training data set.
Images are often represented by a three-dimensional array with dimensions height, width, and number of color channels. CNNs efficiently map image inputs to outputs by using weight-sharing filters, reducing the number of network parameters. A simple example of convolution is shown in Figure 1. As the filter slides over the input array, a linear transform is applied to the group of array elements to produce the output. The stride of the filter, how far it is translated each step, dictates the size of the output array. The example in Figure 1 uses a stride of 1.
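The sliding-filter computation in Figure 1 can be sketched directly. The function below is an illustrative single-channel "valid" convolution (cross-correlation, as CNN libraries implement it); the function name, input, and averaging filter are ours, not the paper's:

```python
import numpy as np

def conv2d_valid(x, w, stride=1):
    """2-D 'valid' convolution (cross-correlation, as in CNNs) of a
    single-channel input x with filter w, using the given stride."""
    kh, kw = w.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # slide the filter over the input and take a weighted sum
            patch = x[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * w)
    return out

x = np.arange(16.0).reshape(4, 4)    # 4x4 input array
w = np.ones((2, 2)) / 4.0            # 2x2 averaging filter
print(conv2d_valid(x, w, stride=1))  # 3x3 output, as in the stride-1 example
print(conv2d_valid(x, w, stride=2))  # 2x2 output: a larger stride shrinks the output
```

Increasing the stride from 1 to 2 halves each output dimension, which is how the encoder layers below reduce spatial size.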
CNNs have achieved state-of-the-art performance on many visual processing tasks due to their ability to efficiently learn filters over varying spatial scales [24]. One class of tasks is image-to-image translation, in which an input image is transformed so that some of its features match those of a target class of images (e.g., photograph-to-painting style transfer). We cast the problem of depth map generation as an image-to-image translation task, with the visual camera images as the input and depth maps as the output. In this context, the intensity of each pixel of the depth map indicates the relative distance of the object in the scene.
A challenge to this approach is that monocular images contain no explicit information about the distance of objects in the scene. While some relative distance information may be inferred from relational cues (e.g. object obfuscation), actual distance remains ambiguous in the 2D representation. Our approach seeks to resolve that ambiguity by allowing comparisons between sequential frames. This type of approach is known as depth-from-motion [25]. To allow this comparison in a deep neural network, we introduce recurrent convolutional cells to the network.
Recurrent cells are neural network activation functions that maintain a latent state during the course of network execution. The input and the latent state are used to generate the recurrent cell output and update the latent state. We use the convolutional Gated Recurrent Unit (GRU) as our recurrent cell [26]. The convolutional GRU performs non-linear convolutional operations on inputs to generate outputs and update its latent state. Prior work implementing a similar architecture has shown that the convolutional GRU cell performance meets or exceeds the performance of the similar convolutional Long Short-Term Memory (LSTM) cell while being simpler to implement and train [27].
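A convolutional GRU replaces the matrix multiplications of a standard GRU with convolutions, so the gates are themselves feature maps. The toy single-channel cell below sketches the standard GRU update equations with 3x3 convolutions; the paper does not specify gate layout, filter counts, or initialization for its C-GRU layers, so treat this as an assumption-laden illustration rather than the authors' implementation:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def conv_same(x, w):
    """3x3 'same' convolution of a single-channel map (zero padding)."""
    xp = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + 3, j:j + 3] * w)
    return out

class ConvGRUCell:
    """Single-channel convolutional GRU cell (toy sketch)."""
    def __init__(self, rng):
        # one 3x3 filter per gate, for the input and for the hidden state
        self.Wz, self.Uz, self.Wr, self.Ur, self.Wh, self.Uh = (
            rng.standard_normal((3, 3)) * 0.1 for _ in range(6))

    def step(self, x, h):
        z = sigmoid(conv_same(x, self.Wz) + conv_same(h, self.Uz))  # update gate
        r = sigmoid(conv_same(x, self.Wr) + conv_same(h, self.Ur))  # reset gate
        h_cand = np.tanh(conv_same(x, self.Wh) + conv_same(r * h, self.Uh))
        return (1.0 - z) * h + z * h_cand  # blend old state with candidate

rng = np.random.default_rng(0)
cell = ConvGRUCell(rng)
h = np.zeros((8, 8))                          # latent state starts at zero
for frame in rng.standard_normal((5, 8, 8)):  # a 5-frame input sequence
    h = cell.step(frame, h)                   # state carries across frames
print(h.shape)
```

The key property for depth-from-motion is visible in the loop: the latent state `h` persists between frames, letting each output depend on the whole sequence seen so far.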
Using GRU cells, we propose an autoencoder architecture as shown in Figure 2. An autoencoder is composed of an encoder, a bottleneck layer, and a decoder. The encoder reduces the size of the layer outputs through recurrent strided convolutions until reaching a minimum-size bottleneck layer (E3 in Figure 2), and the decoder then increases the output size by reshaping array elements from the depth dimension into the spatial dimensions. Unlike transpose convolution or interpolation, this reshaping method does not create artifacts in the output images because the network can directly learn the proper mapping across the reshaping.
Fig. 2 Proposed neural network architecture
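The decoder's depth-to-spatial reshaping can be illustrated in a few lines. This is a generic pixel-shuffle-style rearrangement assuming an (H, W, C) layout; the paper does not spell out its exact index ordering, so the layout below is an assumption:

```python
import numpy as np

def depth_to_space(x, r):
    """Rearrange an (H, W, C*r*r) array into (H*r, W*r, C) by moving
    groups of r*r channels into r x r spatial blocks."""
    h, w, c = x.shape
    assert c % (r * r) == 0
    x = x.reshape(h, w, r, r, c // (r * r))
    x = x.transpose(0, 2, 1, 3, 4)  # interleave block rows and columns
    return x.reshape(h * r, w * r, c // (r * r))

x = np.arange(2 * 2 * 4).reshape(2, 2, 4)  # 2x2 map with 4 channels
y = depth_to_space(x, 2)                   # -> 4x4 map with 1 channel
print(y.shape)  # (4, 4, 1)
```

Each spatial dimension doubles while the channel count drops by a factor of four, matching the "upscales by a factor of two in both height and width" behavior described for the decoder.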
Our network architecture is described in Table 1, where Layer corresponds to the matching labels in Fig. 2, Depth defines the number of filters applied at each layer, and Activation defines the non-linear function used at each layer. GRU cells were used for all recurrent activation functions. For layers E0 and D0, the leaky Rectified Linear Unit (LReLU) was used, which is defined as
f(x) = max(0, x) − α max(0, −x)    (1)
where α is the leak parameter, set to α = 0.1 for all layers. Before each decoder layer, a reshaping upscales the spatial dimensions by a factor of two in both height and width.

The network was trained using supervised learning, which requires matched pairs of camera images and true depth maps. The loss function was the L1 norm of the difference between the generated depth map and the true depth map, defined as

j(y, ŷ) = ‖y − ŷ‖₁    (2)

where y is the true depth map and ŷ is the depth map output by the network. The network parameters were optimized with the gradient-based Adam optimizer [28]. Our network was built and trained using the TensorFlow framework (tensorflow.org). At each optimization step, data was provided in mini-batches comprised of subsets of the complete training set. The full data set was composed of videos showing episodes of different trajectories in the simulated environment. The complete episode videos were not used directly for training because recurrent neural networks are sensitive to the vanishing gradient problem [29], and full episodes were typically longer sequences than would be viable. Training sequences were instead generated as 32-frame sub-sequences from the full episodes.
During training, these mini-batches were constructed by uniformly sampling a starting frame from the complete set and constructing the sequence from the following frames. The loss used for the optimization step was the mean loss of the mini-batch, which is defined as
J(Y, Ŷ) = (1/m) Σᵢ₌₁ᵐ ‖yᵢ − ŷᵢ‖₁    (3)
where Y is the set of true depth maps, Ŷ is the set of depth maps generated by the network, and m is the number of images in the mini-batch. The training mini-batches are sequential, so the network can learn the depth-from-motion relationships in frame sequences. Sampling the starting index stochastically decorrelates the optimization steps, stabilizing the training process.
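Equations (2) and (3) amount to a mean of per-example L1 norms, which can be sketched directly (the function names and toy batch are ours):

```python
import numpy as np

def l1_loss(y, y_hat):
    """Per-example L1 norm of the depth-map error, as in Eq. (2)."""
    return np.sum(np.abs(y - y_hat))

def batch_loss(Y, Y_hat):
    """Mean L1 loss over a mini-batch, as in Eq. (3)."""
    return np.mean([l1_loss(y, yh) for y, yh in zip(Y, Y_hat)])

Y = np.zeros((4, 8, 8))          # four "true" 8x8 depth maps
Y_hat = np.full((4, 8, 8), 0.5)  # constant predictions, all off by 0.5
print(batch_loss(Y, Y_hat))      # 8 * 8 * 0.5 = 32.0
```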
During operation of the network, the GRU latent states are continually updated each time a frame is passed through the graph and the output of the network for each frame is dependent upon its initial latent state. We draw our mini-batch samples stochastically, so we do not know the initial latent state corresponding to the given training sequence. Initializing this latent state to zero during training can bias the training process. To overcome this, we introduce a burn-in period for each training update, where we construct an initialization sequence of the 32 images before the selected start frame. We feed these through the network without including the error in the loss function, allowing the network to accumulate a latent state. The optimization step then proceeds with the training batch using the initialized network.
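The stochastic sampling with burn-in described above reduces to index bookkeeping. The 32-frame sequence and burn-in lengths are from the paper; the function name and episode length below are ours:

```python
import numpy as np

SEQ_LEN = 32   # training sub-sequence length (from the paper)
BURN_IN = 32   # burn-in frames used only to build up the latent state

def sample_batch(episode_len, rng):
    """Pick a start frame so that both the burn-in window and the
    training window fit inside the episode."""
    start = rng.integers(BURN_IN, episode_len - SEQ_LEN + 1)
    burn_idx = np.arange(start - BURN_IN, start)
    train_idx = np.arange(start, start + SEQ_LEN)
    return burn_idx, train_idx

rng = np.random.default_rng(0)
burn_idx, train_idx = sample_batch(episode_len=500, rng=rng)
# Frames in burn_idx are fed through the network to accumulate a latent
# state but contribute nothing to the loss; the optimizer then steps on
# the loss over train_idx only.
print(len(burn_idx), len(train_idx))
```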
IV. Experimental Setup
We ran a series of experiments to qualitatively and quantitatively evaluate the performance of our method using Microsoft's AirSim UAV simulator. We used AirSim to gather our training dataset as well as a separate dataset used for testing. The datasets consisted of images from an on-board synthetic camera along with its corresponding depth map. Figure 3 shows an example image pair. Our training dataset has 1873 total image pairs in sequence, for which we use 80% for training and hold out 20% as a validation set for hyperparameter tuning. These were gathered by manually piloting the AirSim car model through various trajectories in AirSim. Our test dataset has 750 image pairs gathered from the same environment.
In addition to evaluating our method, we implemented three baseline CNNs for comparison: Pix2Pix, CycleGAN, and multi-scale deep network. The first two baselines are commonly used for image-to-image translation problems. They implement Generative Adversarial Networks (GANs) to create outputs that are perceptually similar to the target image class. Pix2Pix is a conditional GAN developed for general image-to-image translation tasks [30]. An extension to Pix2Pix is CycleGAN, which not only learns to map an input image to an output image but ensures that the output image can be used to recreate the input image [31]. The final method uses a multi-scale CNN, which first trains a coarse depth map generator and then trains a second network to refine the coarse depth map [17]. These networks were chosen because they have achieved state-of-the-art performance in various image translation tasks.
Fig. 3 (Left) Synthetic camera image; (Right) Corresponding depth map
We trained our Convolutional GRU network using stochastic sequential mini-batches for 10,000 epochs with an initial learning rate of 10⁻³. We trained and evaluated the baseline networks with the same data sets, with hyperparameters determined by cross-validation for each network. After training, we generated depth maps for the data in the test set and measured the pixel-average mean-square error (MSE), pixel-average absolute error (AE), and pixel-average root mean-square logistic error (RMSLE) for all four models. All of the errors are averaged over the individual pixel color-channel values of the real and network-generated depth images, labeled d_real and d_network respectively. Because our depth maps are in gray-scale, pixel values in all three color channels are identical. Let c be the total number of color channels of each pixel, let n be the total number of pixels in the test set, and let d_real^(i,j) and d_network^(i,j) be the i-th color channel of the j-th pixel of the real and network-generated depth maps, with i ∈ {1, ..., c} and j ∈ {1, ..., n}. These error terms are defined as
MSE = (1/c)(1/n) Σᵢ₌₁ᶜ Σⱼ₌₁ⁿ ( d_real^(i,j) − d_network^(i,j) )²    (4)

AE = (1/c)(1/n) Σᵢ₌₁ᶜ Σⱼ₌₁ⁿ | d_real^(i,j) − d_network^(i,j) |    (5)

RMSLE = sqrt( (1/c)(1/n) Σᵢ₌₁ᶜ Σⱼ₌₁ⁿ ( log(256 − d_real^(i,j)) − log(256 − d_network^(i,j)) )² )    (6)
While all of these metrics provide a measure of image accuracy, each tends to weight a different aspect of perceptual image quality. The MSE tends to emphasize the low-frequency content of an image caused by large objects. AE tends to more heavily weight the high-frequency content of an image, such as object textures. An image that performs well in MSE but not in AE will often be a blurred rendering of the true image. RMSLE captures the accuracy of features relative to the intensity of the feature pixels. For example, an error of 10 on a feature whose average pixel value is 200 is penalized far less than an error of 10 on a feature whose average pixel value is 20. In this way, RMSLE can be interpreted as a perceptually weighted loss.
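The three metrics can be computed directly from a pair of depth maps. The sketch below follows the definitions above, assuming 8-bit pixel values in [0, 255]; the function name and toy maps are ours:

```python
import numpy as np

def depth_metrics(d_real, d_net):
    """MSE, AE, and RMSLE between depth maps, per Eqs. (4)-(6).
    Pixel values are assumed to lie in [0, 255]."""
    diff = d_real.astype(float) - d_net.astype(float)
    mse = np.mean(diff ** 2)
    ae = np.mean(np.abs(diff))
    # 256 - d is always positive for 8-bit values, so the logs are defined
    log_diff = np.log(256.0 - d_real) - np.log(256.0 - d_net)
    rmsle = np.sqrt(np.mean(log_diff ** 2))
    return mse, ae, rmsle

d_real = np.full((4, 4), 100.0)  # true map: constant depth 100
d_net = np.full((4, 4), 110.0)   # prediction: constant error of 10
mse, ae, rmsle = depth_metrics(d_real, d_net)
print(mse, ae)  # 100.0 10.0
```

Note that the same absolute error of 10 would yield a much larger RMSLE at high pixel values (near the 256 ceiling), which is the perceptual weighting described above.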
In addition to the test set comparison, we also evaluated the effectiveness of each method for obstacle avoidance. To do so, we created an obstacle course in AirSim by placing cars in the road that block an unmanned aircraft from reaching its goal. To avoid the cars and reach the goal, the aircraft uses the depth prediction models to assess when it should stop before flying into a car as well as how much turning is needed to avoid the car. Cars parked perpendicularly in the road were not present in the environment from which the training data was gathered. In addition, no collisions with cars were seen in the training set, so this experiment evaluates the ability of the networks to generalize to new scenarios with new obstacles. Figure 4 shows an example trajectory as well as a top-down view of the trajectory flown when using the true depth map to guide the aircraft.
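The paper does not give the avoidance controller in detail, so the following is a hypothetical minimal policy of the same flavor: stop when the predicted depth ahead drops below a threshold, then turn toward the side with more free space. All thresholds, region choices, and names here are ours:

```python
import numpy as np

def avoid_command(depth_map, stop_thresh=40.0):
    """Toy policy: stop if the center of the predicted depth map is too
    close, then turn toward the side with more open space.
    Returns (stop, turn) with turn in {-1, 0, +1} (left/straight/right)."""
    h, w = depth_map.shape
    # look at the middle third of the image, roughly the flight path
    center = depth_map[h // 3: 2 * h // 3, w // 3: 2 * w // 3]
    stop = center.mean() < stop_thresh
    if not stop:
        return False, 0
    left = depth_map[:, : w // 2].mean()
    right = depth_map[:, w // 2:].mean()
    return True, (-1 if left > right else 1)

depth = np.full((60, 80), 200.0)  # mostly open scene
depth[20:40, 30:60] = 10.0        # near obstacle ahead, biased right
print(avoid_command(depth))       # stop, and turn left (more open space)
```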
V. Results
We qualitatively assessed the depth maps generated by each network. Figure 5 shows a sample sequence from the different networks. Pix2Pix generated a depth map for each input frame that tended not to vary with the input; it is likely that this map represents the average scene observed in the dataset. At first glance, CycleGAN appears to provide the clearest resolution. Upon closer inspection, however, it can be seen that, while visually detailed, the actual depth values are typically inaccurate. This is likely an effect of the CycleGAN loss, which prioritizes the perceptual qualities of the image (presence of textures, shapes, etc.) over agreement with the target image. This effect can be clearly seen in the frame-to-frame evolution of the GAN depth maps: the car is initially textured as a bush and continues to morph in shape throughout the progression. Additionally, several background features appear and disappear throughout the sequence, and notably the house is absent in all frames. The multi-scale CNN captures the very low-frequency features of the image (e.g., the car), though the blur renders most of the background indistinguishable.
Our proposed network appears to provide the best resolution of details while accurately matching the values of the true depth map features. Table 2 provides a quantitative comparison between our method and the baselines. Our recurrent method outperforms the baseline methods on all metrics. In agreement with the qualitative assessment, Pix2Pix has the worst quantitative performance in all categories. Our qualitative assessment of CycleGAN is supported by the quantitative results: while features in its depth maps were clearly resolved, the accuracy of the resulting maps is fairly poor, performing worse than the multi-scale CNN and the Convolutional GRU. The multi-scale CNN nearly matches the performance of the Convolutional GRU in MSE, with only a 4.5% difference. This fits with our qualitative assessment, as the multi-scale CNN maps were observed to accurately capture blurry representations of large features in the scene. Its absolute error, however, is 20.0% higher than that of the Convolutional GRU. This is again consistent with our observations, as the Convolutional GRU features were much more clearly resolved. The improved performance of the Convolutional GRU over the baseline methods is likely due to its ability to make implicit depth-from-motion estimates from the image sequences, while the baseline methods only consider a single frame per estimate.

The Pix2Pix depth model was not evaluated on the obstacle course because it performed poorly on the test image set. For the other three models, 30 simulations were conducted with randomized starting locations. Figure 6 shows a top-down view of the 30 simulated trajectories flown with each method. In the cases where the trajectories appear to intersect the bottom car, the aircraft flies over the trunk of the car. The simulation results are summarized in Table 3.
The Convolutional GRU performs the best and stops before the cars at a consistent distance. A few times the aircraft turns farther than needed because the predicted depth map does not show an open path forward until the aircraft is facing almost the opposite direction. The aircraft is able to reach the goal in 25 of the 30 simulations and only crashes into a car twice. The remaining three trajectories timed out because the aircraft took wrong turns and could not reach the goal in the given time. In some trials, the aircraft pitched significantly to stop, and the depth estimate noise caused the car to be lost. This is likely due to these types of maneuvers not being seen in the training data. After the aircraft remained stationary for a few moments, the car was re-acquired and travel resumed.

The multi-scale CNN performs the second best in the simulations. The aircraft reaches the finish in 19 of the 30 simulations and crashes into a car 8 times. The multi-scale CNN depth predictions are not as well resolved as those of the Convolutional GRU, and as a result the depth predictions are not as accurate or reliable.

Finally, the CycleGAN model performs poorly. The aircraft reaches the finish in only 6 of the 30 simulations and crashes into a car in the remaining 24 trajectories. Although CycleGAN predictions often seem the most visually detailed, they are often inaccurate. The resulting depth predictions do not get low enough to alert the vehicle that a car is in the path of the aircraft. As the aircraft approaches the car, the car seems to blend in with the road, which makes avoiding it difficult.
VI. Conclusion
In this work, we presented a novel method to estimate object distances from a simple monocular camera using recurrent convolutional neural networks. We proposed a neural network architecture and design based on convolutional Gated Recurrent Units and a method to train the network using sequential stochastic mini-batch training. We also introduced hidden-state burn-in to reduce the bias induced by the stochastic training process.
We demonstrated the effectiveness of this approach in a simulated environment. Our approach quantitatively outperformed state-of-the-art convolutional neural style transfer methods in three common objective quality metrics. We showed that with only an on-board monocular camera, the aircraft was able to resolve object depth with sufficient accuracy to avoid obstacles. Future work will investigate the effectiveness of this method in different visual environments and with moving obstacles. While our recurrent network was able to outperform previous methods, the network architecture was relatively simple. Future work will explore the inclusion of architectural features of more complex style-transfer networks while retaining the recurrent units to improve the resolution of depth features. Currently, training is conducted with a simple L1 loss function. Loss functions tailored to specific depth map use cases, such as sense-and-avoid or SLAM, will also be investigated. Additionally, incorporating vehicle telemetry (velocity, orientation, etc.) as an additional network input will be explored to improve depth-from-motion estimation.
Fig. 1 Convolutional filter example: input array (left), output array (mid), filter (right)

Fig. 4 A snapshot of the AirSim simulation (left) and a top-down view of the obstacle course with the expected trajectory (right)

Fig. 5 Example frame sequences. Rows (top to bottom): visual input image, true depth map, Pix2Pix, CycleGAN, multi-scale CNN, Convolutional GRU

Fig. 6 Thirty simulated trajectories using a Convolutional GRU (left), multi-scale CNN (middle), and CycleGAN (right)
Table 1 Neural network design parameters

Layer     Filter Size   Stride   Depth   Activation
E0        (3,3)         2        64      LReLU
E1        (3,3)         2        256     GRU
E2        (3,3)         2        512     GRU
E3        (3,3)         2        512     GRU
D0        (1,1)         1        512     LReLU
Reshape
D1        (3,3)         1        512     GRU
Reshape
D2        (3,3)         1        256     GRU
Reshape
D3        (3,3)         1        256     GRU
Reshape
D4        (3,3)         1        128     GRU
D5        (1,1)         1        3       tanh
Table 2 Test set results

Network             MSE      AE      RMSLE
Pix2Pix             3534.6   36.41   0.419
CycleGAN            720.8    14.63   0.242
Multi-scale CNN     478.6    12.83   0.234
Convolutional GRU   457.4    10.72   0.081
Table 3 Results from 30 simulations

Network             Finishes   Crashes
CycleGAN            6          24
Multi-scale CNN     19         8
Convolutional GRU   25         2
References

Valavanis, K. P., and Vachtsevanos, G. J., "Future of unmanned aviation," 2015, pp. 2993-3009.

Sahawneh, L. R., "Airborne Collision Detection and Avoidance for Small UAS Sense and Avoid Systems," Ph.D. thesis, Brigham Young University, 2016.

Pinheiro, P. H., and Collobert, R., "Recurrent convolutional neural networks for scene labeling," International Conference on Machine Learning (ICML), 2014.

Shah, S., Dey, D., Lovett, C., and Kapoor, A., "AirSim: High-Fidelity Visual and Physical Simulation for Autonomous Vehicles," Field and Service Robotics Conference, 2017.

Lucas, B. D., and Kanade, T., "An Iterative Image Registration Technique with an Application to Stereo Vision," International Joint Conference on Artificial Intelligence (IJCAI), 1981.

Alvertos, N., "Resolution limitations and error analysis for stereo camera models," IEEE SouthEastCon, 1988, pp. 220-224.

Heo, Y. S., Lee, K. M., and Lee, S. U., "Joint depth map and color consistency estimation for stereo images with different illuminations and cameras," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 35, No. 5, 2013, pp. 1094-1106.

Im, J., Jung, J., and Paik, J., "Single Camera-Based Depth Estimation and Improved Continuously Adaptive Mean Shift Algorithm for Tracking Occluded Objects," Pacific-Rim Conference on Advances in Multimedia Information Processing, Springer-Verlag, Berlin, Heidelberg, 2015, pp. 246-252.

Luo, W., Schwing, A. G., and Urtasun, R., "Efficient Deep Learning for Stereo Matching," IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

Zbontar, J., and LeCun, Y., "Stereo matching by training a convolutional neural network to compare image patches," Journal of Machine Learning Research, Vol. 17, No. 1-32, 2016, p. 2.

Battiato, S., Curti, S., La Cascia, M., Tortora, M., and Scordato, E., "Depth map generation by image classification," Three-Dimensional Image Capture and Applications VI, Vol. 5302, International Society for Optics and Photonics, 2004, pp. 95-105.

Tsai, Y. M., Chang, Y. L., and Chen, L. G., "Block-based Vanishing Line and Vanishing Point Detection for 3D Scene Reconstruction," International Symposium on Intelligent Signal Processing and Communications, 2006.

Silberman, N., Hoiem, D., Kohli, P., and Fergus, R., "Indoor Segmentation and Support Inference from RGBD Images," European Conference on Computer Vision (ECCV), 2012.

Saxena, A., Chung, S. H., and Ng, A. Y., "3-D depth reconstruction from a single still image," International Journal of Computer Vision, Vol. 76, No. 1, 2008, pp. 53-69.

Laina, I., Rupprecht, C., Belagiannis, V., Tombari, F., and Navab, N., "Deeper depth prediction with fully convolutional residual networks," IEEE International Conference on 3D Vision (3DV), IEEE, 2016, pp. 239-248.

Eigen, D., Puhrsch, C., and Fergus, R., "Depth Map Prediction from a Single Image using a Multi-Scale Deep Network," Advances in Neural Information Processing Systems (NIPS), 2014.

Liu, F., Shen, C., and Lin, G., "Deep convolutional neural fields for depth estimation from a single image," IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 5162-5170.

Garg, R., BG, V. K., Carneiro, G., and Reid, I., "Unsupervised CNN for single view depth estimation: Geometry to the rescue," European Conference on Computer Vision (ECCV), 2016.

Chang, Y. L., Fang, C. Y., Ding, L. F., Chen, S. Y., and Chen, L. G., "Depth Map Generation for 2D-to-3D Conversion by Short-Term Motion Assisted Color Segmentation," IEEE International Conference on Multimedia and Expo, 2007.

Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., and Manzagol, P.-A., "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion," Journal of Machine Learning Research, Vol. 11, No. Dec, 2010, pp. 3371-3408.

Ronneberger, O., Fischer, P., and Brox, T., "U-Net: Convolutional Networks for Biomedical Image Segmentation," Medical Image Computing and Computer-Assisted Intervention (MICCAI), LNCS, Vol. 9351, Springer, 2015, pp. 234-241.

Kuurkova, V., "Kolmogorov's theorem and multilayer neural networks," Neural Networks, Vol. 5, No. 3, 1992, pp. 501-506.

Alom, M. Z., Taha, T. M., Yakopcic, C., Westberg, S., Hasan, M., Van Esesn, B. C., Awwal, A. A. S., and Asari, V. K., "The History Began from AlexNet: A Comprehensive Survey on Deep Learning Approaches," arXiv preprint, 2018, 1803.01164.

Sperling, G., and Dosher, B. A., "Depth from motion," Early Vision and Beyond, 1994, pp. 133-142.

Cho, K., van Merrienboer, B., Gülçehre, Ç., Bougares, F., Schwenk, H., and Bengio, Y., "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation," Conference on Empirical Methods in Natural Language Processing, 2014.

Toderici, G., Vincent, D., Johnston, N., Hwang, S. J., Minnen, D., Shor, J., and Covell, M., "Full resolution image compression with recurrent neural networks," IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

Kingma, D., and Ba, J., "Adam: A method for stochastic optimization," International Conference on Learning Representations (ICLR), 2015.

Pascanu, R., Mikolov, T., and Bengio, Y., "On the difficulty of training recurrent neural networks," International Conference on Machine Learning (ICML), 2013, pp. 1310-1318.

Isola, P., Zhu, J.-Y., Zhou, T., and Efros, A. A., "Image-to-Image Translation with Conditional Adversarial Networks," IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2017, pp. 5967-5976.
Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. J Y Zhu, T Park, P Isola, A A Efros, IEEE International Conference on Computer Vision (ICCV. Zhu, J. Y., Park, T., Isola, P., and Efros, A. A., "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks," IEEE International Conference on Computer Vision (ICCV), 2017.
|
[] |
[
"ANALYSIS OF THE EXTENDED COUPLED-CLUSTER METHOD IN QUANTUM CHEMISTRY *"
] |
[
"Andre Laestadius ",
"Simen Kvaal "
] |
[] |
[] |
The mathematical foundation of the so-called extended coupled-cluster method for the solution of the many-fermion Schrödinger equation is here developed. We prove an existence and uniqueness result, both in the full infinite-dimensional amplitude space as well as for discretized versions of it. The extended coupled-cluster method is formulated as a critical point of an energy function using a generalization of the Rayleigh-Ritz principle: the bivariational principle. This gives a quadratic bound for the energy error in the discretized case. The existence and uniqueness results are proved using a type of monotonicity property for the flipped gradient of the energy function.Comparisons to the analysis of the standard coupled-cluster method is made, and it is argued that the bivariational principle is a useful tool, both for studying coupled-cluster type methods, and for developing new computational schemes in general.
|
10.1137/17m1116611
|
[
"https://arxiv.org/pdf/1702.04317v1.pdf"
] | 43,924,413 |
1702.04317
|
14dc7554796f1cfa7e30798b270fe1145b9c7748
|
ANALYSIS OF THE EXTENDED COUPLED-CLUSTER METHOD IN QUANTUM CHEMISTRY *
14 Feb 2017
Andre Laestadius
Simen Kvaal
Keywords: quantum chemistry; coupled-cluster method; extended coupled-cluster method; bivariational principle; uniqueness and existence; error estimates. AMS subject classifications: 65Z05, 81-08, 81V55
1. Introduction. The coupled-cluster (CC) method is today the de facto standard wavefunction-based method for electronic-structure calculations, and has a complex and interesting history [14,11,4,2]. To cut a long story short, it was invented by Coester and Kümmel in the 1950s as a method for dealing with the strong correlations inside an atomic nucleus [5,6]. From nuclear physics, the idea migrated to the field of quantum chemistry in the 1960s due to the seminal work of researchers such as Sinanoglu, Čížek, Paldus, and Shavitt [19,3,15]. An interesting turn of events is that the method returned to nuclear physics in the 1990s, when Dean and Hjorth-Jensen applied the now mature methodology to nuclear structure calculations [7].
The main feature of the CC method is the use of an exponential parametrization of the wavefunction. This ensures proper scaling of the computed energy with system size (number of particles), i.e., the method is size extensive. At the same time, the CC method is only polynomially scaling with respect to system size. These factors have led to the popularity of the method.
However, the theory does not satisfy the (Rayleigh-Ritz) variational principle, i.e., the computed CC energy is not guaranteed to be an upper bound to the exact energy. This has traditionally been the main criticism of CC calculations, as an error estimate is not readily available. Furthermore, in the original formulation it was not variational in the sense that the solution was not formulated as a stationary point of some function(al).
Helgaker and Jørgensen later formulated the CC method in terms of a Lagrangian [9,10], viewing the solution of the CC amplitude equations as a constrained optimization of the energy, the set of cluster amplitude equations becoming constraints. This is today the standard formulation of the CC method.
Already in 1983, Arponen [1] derived the so-called extended CC method (ECC) from a generalization of the Rayleigh-Ritz variational principle, the bivariational principle. This principle formally relaxes the condition of the Hamiltonian being symmetric, and thus introduces the left eigenvector as a variable as well as the right eigenvector. Arponen noted that the standard CC method can be viewed as an approximation to ECC, and continued to write down the standard CC Lagrangian. In the bivariational interpretation, Helgaker and Jørgensen's Lagrange multipliers are actually wavefunction parameters on equal footing with the cluster amplitudes. No distinction is being made.
Both Helgaker and Jørgensen's CC Lagrangian and Arponen's bivariational formulation cast CC theory in a variational (stationary point) setting. However, only the bivariational point of view allows, at least formally, systematic improvement by adding other degrees of freedom than the cluster amplitudes to the ansatz. The bivariational principle is therefore of potential great use when developing novel wavefunction-based methods, see for example Ref. [12], where the single-particle functions are introduced as (bi)variational parameters in a time-dependent setting. However, while the bivariational principle is rigorous, it is not known how to introduce approximations by parameterizations of the wavefunctions, such that one can obtain existence and uniqueness results as well as error estimates.
In this article, we will provide a rigorous analysis of a version of the ECC method. The idea is, starting from the bivariational quotient, to choose a function F (see Eq. (7)) that is (locally and strongly) monotone and where F = 0 is equivalent to a critical point of the bivariational quotient. Until now, ECC has not been turned into a practical tool in chemistry due to its complexity. On the other hand, the analysis herein is a step towards obtaining a rigorous foundation for the application of the bivariational principle. We believe that the approach taken, by showing the monotonicity of the flipped gradient F , is an approach that may allow existence and uniqueness results in much more general settings.
We build our analysis on articles by Rohwedder and Schneider, who fairly recently put the standard CC method on sound mathematical ground [18,16,17]. They proved, among other important results, a uniqueness and existence result of the solution of the CC amplitude equations. The result rests on a certain monotonicity property of the CC equations. Moreover, in Ref. [16] the boundedness of cluster operators (as operators on a Hilbert space that guarantees finite kinetic energy) was established, which turns out to be a rather subtle matter. They also provided error estimates for the energy using the stationarity condition of the Lagrangian.
This article is structured as follows: In Section 2 we discuss the solution of the Schrödinger equation by employing an exponential ansatz. We here present relevant results needed for this work. In particular Lemma 8 is the motivation for our choice of ECC variables and links the ECC energy function to the bivariational principle. Theorem 9 formulates the continuous ECC equations and equates the solution of these equations with the solution of the Schrödinger equation.
In Section 3 we analyze the flipped gradient of the ECC energy function and prove strong and local monotonicity for this entity. This is achieved for two complementary situations. Theorem 16 proves this property under assumptions on the structure of the solution, whereas Theorem 17 under assumptions on the Hamiltonian. Along the lines of the analysis of Rohwedder and Schneider for the CC theory, we prove existence and uniqueness for the solution of the (continuous) ECC equation and truncated (discrete) versions of it, see Theorem 19. This theorem also guarantees convergence towards the full solution as the truncated amplitude spaces tend to the continuous ones. Theorem 22 formulates a sufficient condition for the truncated amplitude spaces to grant a unique solution of the discrete ECC equation. Again the monotonicity is used for the flipped gradient. Lastly, in Theorem 24 we obtain error estimates for the truncated ECC energy. The energy estimates are obtained without the use of a Lagrangian and are instead based on the bivariational formulation of the theory.
2. Solving the Schrödinger equation using the exponential ansatz.
2.1. Traditional CC theory in a rigorous manner. In this section we consider the exponential parametrization for the N -electron ground-state wavefunction ψ * satisfying the N -electron Schrödinger equation (SE)
Hψ * = E * ψ * .
Here, E * is the ground-state energy and H is the Hamiltonian of a molecule in the Born-Oppenheimer approximation. We assume that ψ * exists and that it is nondegenerate, and we denote by γ * > 0 the spectral gap (for definition see Section 3.2).
The set of admissible wavefunctions is a Hilbert space H ⊂ L²_N of finite-kinetic-energy wavefunctions, with norm ‖ψ‖²_H = ‖ψ‖² + ‖∇ψ‖². Here, L²_N is the space of totally antisymmetric square-integrable functions ψ : (R³ × {↑, ↓})^N → R, with norm ‖·‖ and inner product ⟨·, ·⟩. In this work, we restrict our attention to the real space L²_N, and thus to real Hamiltonians. We will furthermore assume that the ground-state wavefunction ψ* is nonorthogonal to a (fixed) reference determinantal wavefunction φ0 ∈ H; thus, using intermediate normalization, we have ψ* = φ0 + ψ⊥, where ⟨φ0, ψ⊥⟩ = 0.
The molecular Hamiltonian has a set of useful properties that make the SE well-posed [20]. The operator H : H → H′ is a bounded (continuous) operator into the dual H′, i.e., there exists a constant C ≥ 0 such that for all ψ, ψ′ ∈ H,

(1a) |⟨ψ′, Hψ⟩| ≤ C ‖ψ′‖_H ‖ψ‖_H.

Moreover, H is bounded below by a constant e ∈ R such that H + e is H-coercive, i.e., there exists a constant c > 0 such that for all ψ ∈ H,

(1b) ⟨ψ, (H + e)ψ⟩ ≥ c ‖ψ‖²_H.

The latter inequality is often referred to as a Gårding estimate, and it is immediate that e > −E*. Finally, H is symmetric,

(1c) ⟨ψ, Hψ′⟩ = ⟨ψ′, Hψ⟩.

Equations (1a)-(1c) form assumptions on H that will be used frequently.
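As a concrete sanity check of the Gårding estimate (1b), one can discretize a one-body model H = −∆ + V on a grid with Dirichlet boundary conditions. The grid size, potential, and constants below are illustrative choices for this sketch, not taken from the paper:

```python
import numpy as np

# Toy check of a Garding estimate: for H = -Laplacian + V with V bounded
# below, the shift e = -min(V) + 1 makes H + e coercive with c = 1 in the
# discrete H-norm ||psi||_H^2 = ||psi||^2 + ||D psi||^2.
rng = np.random.default_rng(0)
n, h = 200, 1.0 / 201

# Forward-difference operator D with Dirichlet boundaries: D^T D is the
# usual tridiagonal discrete Laplacian, so <psi, -Lap psi> = ||D psi||^2.
D = (np.eye(n + 1, n, k=0) - np.eye(n + 1, n, k=-1)) / h

x = np.linspace(h, 1.0 - h, n)
V = -10.0 * np.sin(3 * np.pi * x)          # potential, bounded below by -10
H = D.T @ D + np.diag(V)

e = -V.min() + 1.0                          # shift making H + e coercive
c = 1.0

def garding_gap(psi):
    """<psi,(H+e)psi> - c*||psi||_H^2; nonnegative if (1b) holds with c=1."""
    lhs = psi @ (H @ psi) + e * (psi @ psi)
    rhs = c * (psi @ psi + np.linalg.norm(D @ psi) ** 2)
    return lhs - rhs

gaps = [garding_gap(rng.standard_normal(n)) for _ in range(100)]
print(min(gaps))  # nonnegative for every sample
```

Here the estimate holds with c = 1 by construction, since ⟨ψ, (H + e − 1)ψ⟩ = ‖Dψ‖² + ⟨ψ, (V − min V)ψ⟩ ≥ ‖Dψ‖².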
In a standard fashion, we introduce a basis for H of determinantal wavefunctions built from the N "occupied" functions χ_i (forming φ0) as well as "virtual" functions χ_a, a = N + 1, N + 2, · · · . Assuming that {χ_p : p = 1, 2, · · · } is an L²₁-orthonormal basis, the corresponding determinantal basis {φ_µ} is L²_N-orthonormal. Additionally, we must require ‖∇χ_p‖ < +∞.

Each φ_µ can be written in the form φ_µ = X_µ φ0, where X_µ is an operator that creates up to N particle-hole pairs, i.e., {X_µ}_{µ≠0} are excitation operators, and for an arbitrary ψ ∈ H with ⟨φ0, ψ⟩ = 1 we have

ψ = φ0 + Σ_{µ≠0} c_µ φ_µ = (I + C)φ0,

with C = Σ_{µ≠0} c_µ X_µ being a cluster operator. The sequence c = {c_µ}_{µ≠0} consists of the corresponding cluster amplitudes. One says that φ0 spans the "reference space" P := span{φ0}, while {φ_µ}_{µ≠0} forms a basis for Q = P^⊥, the "excluded space". It is clear that P ⊕ Q = H. (Here P^⊥ denotes the L²_N-orthogonal complement of P, i.e., with respect to the inner product ⟨·, ·⟩.)
We introduce the convention that to each cluster amplitude sequence c = {c_µ}_{µ≠0}, t = {t_µ}_{µ≠0}, etc., the corresponding cluster operator is denoted by the capital letter, i.e., C = Σ_µ c_µ X_µ, T = Σ_µ t_µ X_µ, etc. Cluster operators by definition exclude µ = 0, so unless otherwise specified, in the sequel all sums over µ run over excited determinants only. Moreover, we group the excitations according to the number of "particle-hole pairs" they create, i.e., T = T1 + T2 + · · · + T_N, etc.
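The particle-hole bookkeeping behind the operators X_µ can be made concrete in a minimal sketch where determinants are sets of occupied orbital indices (signs and the orbital basis are ignored; everything here is illustrative, not the paper's formalism):

```python
# Minimal model of determinants and excitation operators X_mu.
# A determinant is a frozenset of occupied orbital indices; an excitation
# replaces "hole" orbitals by "particle" orbitals (sign bookkeeping omitted).
def excite(det, holes, particles):
    """Apply X_(holes -> particles) to a determinant; None if it annihilates."""
    holes, particles = frozenset(holes), frozenset(particles)
    if not holes <= det or det & particles:
        return None          # hole not occupied, or particle already occupied
    return (det - holes) | particles

phi0 = frozenset({1, 2, 3})              # reference: N = 3 occupied orbitals

single = excite(phi0, {1}, {4})          # a T1-type excitation: 1 -> 4
double = excite(phi0, {1, 2}, {4, 5})    # a T2-type excitation: (1,2) -> (4,5)

# Composing two commuting singles yields a doubly excited determinant: this
# is how e^T = I + T + T^2/2 + ... creates high excitations from low-rank T.
composed = excite(excite(phi0, {1}, {4}), {2}, {5})
print(single, double, composed)
```

The composition at the end is the "disconnected cluster" mechanism that makes the exponential parametrization size extensive.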
We follow Ref. [17] and introduce a Banach space of cluster amplitudes (in fact, it is a Hilbert space). We say that t ∈ V if and only if ‖t‖_V := ‖Tφ0‖_H < +∞. Thus, t ∈ V if and only if {t_µ} are the amplitudes of a wavefunction of finite kinetic energy in the excluded space, i.e., Tφ0 ∈ Q. We remark that the space of cluster operators corresponding to amplitudes from V depends only on the choice of the reference φ0 (i.e., the space P), and not on the choice of the virtual orbitals {χ_a}, as long as {φ_µ} is an orthonormal basis of Q.
If the Hilbert space were finite-dimensional, every linear operator would be bounded, and the exponential map T → e^T would always be well-defined. A cornerstone of formal CC theory is therefore the well-definedness of the exponential map for general Hilbert spaces and cluster operators (see Lemma 2.3 in [17]):
Theorem 1 (Rohwedder and Schneider, the exponential mapping). T and T† are bounded operators on H if and only if t ∈ V. Moreover, the exponential map T → e^T is a (Fréchet) C∞ isomorphism between C := {T : t ∈ V} and C0 := {I + T : t ∈ V}. For ψ ∈ H such that ⟨φ0, ψ⟩ = 1 there exists a unique t ∈ V such that ψ = e^T φ0, depending smoothly on ψ. In particular, the exponential map and its inverse are locally Lipschitz, i.e., for s, t ∈ V inside some ball, there exist constants D, D′ such that

(2) ‖s − t‖_V ≤ D ‖e^S φ0 − e^T φ0‖_H ≤ D′ ‖s − t‖_V.
Remark 2. Note that the above theorem does not hold for a general subspace (truncation) V_d ⊂ V. To see this, let {χ_p} be an orthonormal set but not necessarily a (complete) basis, consider a subset V_d corresponding to only single excitations (T = T1, S = S1, etc.), and assume N > 1. Then the relation e^T = I + S implies T1 + T1²/2 + · · · + T1^N/N! = S1. Thus, in general we cannot find T1 such that e^{T1} = I + S1 for a given single excitation S1.
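Remark 2 can be checked in a tiny matrix model: if X1 and X2 are commuting nilpotent "excitation" matrices with X1X2 ≠ 0, then for T1 = t1X1 + t2X2 the exponential series terminates but contains a double-excitation component beyond I + T1. The matrices below are illustrative stand-ins, not the paper's operators:

```python
import numpy as np

# Two-site toy space spanned by |00>, |10>, |01>, |11>.  X1 and X2 "excite"
# site 1 and site 2, respectively; both are nilpotent and commute.
up = np.array([[0.0, 0.0], [1.0, 0.0]])   # nilpotent raise on one site
I2 = np.eye(2)
X1, X2 = np.kron(up, I2), np.kron(I2, up)

t1, t2 = 0.3, -0.7
T1 = t1 * X1 + t2 * X2                    # pure "singles" operator

# T1^3 = 0 here, so the exponential series terminates after T1^2/2:
expT1 = np.eye(4) + T1 + T1 @ T1 / 2.0

# What survives beyond I + T1 is a genuine double-excitation component,
# so e^{T1} cannot equal I + S1 for a single excitation S1:
double_part = expT1 - (np.eye(4) + T1)
print(np.allclose(double_part, t1 * t2 * X1 @ X2))  # True
```

Here T1² = 2 t1 t2 X1X2 exactly, which is the k = 2 term of the series in Remark 2.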
The CC ansatz uses that the exponential is a bijection between the sets C and C0, such that ψ* = e^{T*} φ0 for some T* satisfying e^{T*} = I + C*. We then have (see Theorem 5. in [16]):

Theorem 3 (Continuous coupled-cluster equations). ψ* = e^{T*} φ0 solves the SE with eigenvalue E* if and only if

(3) f(t*) = 0 and E_CC(t*) = E*,

where f : V → V′ is given by
f_µ(t) := ⟨φ_µ, e^{−T} H e^{T} φ0⟩,

and where E_CC : V → R is given by

E_CC(t) := ⟨φ0, e^{−T} H e^{T} φ0⟩.
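A finite-dimensional caricature of Theorem 3 may help fix ideas: take φ0 = e_0 and X_µ = |e_µ⟩⟨e_0|, so the cluster operator is nilpotent, e^T = I + T, and Newton's method on the amplitude equations recovers the exact ground-state eigenvalue. The model, solver, and all numerical choices below are illustrative assumptions, not the paper's infinite-dimensional construction:

```python
import numpy as np

# Toy CC: H symmetric, phi0 = e_0, T = sum_mu t_mu |e_mu><e_0| (so T^2 = 0
# and e^{+-T} = I +- T).  Solving f(t) = 0 makes (I+T)e_0 an eigenvector.
rng = np.random.default_rng(1)
n = 6
S = rng.standard_normal((n, n))
H = np.diag(np.arange(n, dtype=float)) + 0.1 * (S + S.T)

def f(t):
    """Amplitude residual f_mu(t) = <e_mu, (I - T) H (I + T) e_0>."""
    T = np.zeros((n, n)); T[1:, 0] = t
    return ((np.eye(n) - T) @ H @ (np.eye(n) + T))[1:, 0]

def E_cc(t):
    """CC energy <e_0, (I - T) H (I + T) e_0>."""
    T = np.zeros((n, n)); T[1:, 0] = t
    return ((np.eye(n) - T) @ H @ (np.eye(n) + T))[0, 0]

t = np.zeros(n - 1)
for _ in range(50):                       # Newton with a FD Jacobian
    r = f(t)
    J = np.empty((n - 1, n - 1))
    for j in range(n - 1):
        d = np.zeros(n - 1); d[j] = 1e-7
        J[:, j] = (f(t + d) - r) / 1e-7
    t -= np.linalg.solve(J, r)

E0 = np.linalg.eigvalsh(H).min()
print(abs(E_cc(t) - E0))   # tiny: CC energy equals the ground eigenvalue
```

In this toy, f(t) = 0 forces (I − T)H(I + T)e_0 ∝ e_0, i.e., H(I + T)e_0 = E_CC (I + T)e_0, which is exactly the eigenvalue statement of Theorem 3.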
Remark 4. (i) Equation (3) is the usual untruncated amplitude and energy equations of CC theory, formulated in the infinite-dimensional case, with f : V → V′. This formulation was derived and named the continuous CC method in Ref. [16], being a mathematically rigorous formulation of the electronic SE using the exponential ansatz. Continuous here means that the excluded space Q is not discretized.

(ii) A remark on a frequently used notation in this article is in place. Since f(t) is an element of the dual space of V, f(t) ∈ V′, the pairing with any s ∈ V is continuous in s and given by the infinite series ⟨f(t), s⟩ = Σ_µ s_µ f_µ(t). It should be clear from context whether ⟨·, ·⟩ refers to the L²_N inner product or the just stated infinite series.

Even if Theorem 3 reformulates the SE, it is not clear that truncations of T, either with respect to basis set or excitation level (or both), will give discretizations that yield existence and uniqueness of solutions as well as error estimates. The main tool here is the concept of local strong monotonicity of f : V → V′. The following theorem is basically a local application of a classical theorem by Zarantonello [21]; see also Theorem 4.1 in Ref. [17] and Theorem 25.B and Corollary 25.7 in [22]. We will have great use of this result when studying the extended CC method of Arponen. Let X be a Hilbert space and define, for a subspace Y ⊂ X and x ∈ X, the distance d(Y, x) between Y and x by d(Y, x) := inf_{y∈Y} ‖y − x‖_X.
We recall that if Y is closed then there exists a minimizer y_m, i.e., d(Y, x) = ‖y_m − x‖_X. This minimizer is the orthogonal projection of x onto Y. We now state without proof:
Theorem 5 (Local version of Zarantonello's theorem). Let f : X → X′ be a map between a Hilbert space X and its dual X′, and let x* ∈ B_δ be a root, f(x*) = 0, where B_δ is an open ball of radius δ around x*.

Assume that f is Lipschitz continuous in B_δ, i.e., that for all x1, x2 ∈ B_δ,

‖f(x1) − f(x2)‖_{X′} ≤ L ‖x1 − x2‖_X,

for a constant L. Secondly, assume that f is locally strongly monotone in B_δ, i.e., that

⟨f(x1) − f(x2), x1 − x2⟩ ≥ γ ‖x1 − x2‖²_X for all x1, x2 ∈ B_δ,

for some constant γ > 0. Then, the following holds:

1) The root x* is unique in B_δ. Indeed, there is a ball C_ε ⊂ X′ with 0 ∈ C_ε such that the solution map f^{−1} : C_ε → X exists and is Lipschitz continuous, implying that the equation

f(x* + ∆x) = y

has a unique solution ∆x = f^{−1}(y) − x*, depending continuously on y, with norm ‖∆x‖_X ≤ δ.

2) Moreover, let X_d ⊂ X be a closed subspace such that x* can be approximated sufficiently well, i.e., the distance d(x*, X_d) is small. Then the projected problem f_d(x_d) = 0 has a unique solution x_d ∈ X_d ∩ B_δ, and

‖x* − x_d‖_X ≤ (L/γ) d(x*, X_d).
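The contraction argument behind Theorem 5 is easy to see in finite dimensions: for an L-Lipschitz, γ-strongly monotone map, the damped iteration x ↦ x − (γ/L²) f(x) contracts with factor √(1 − γ²/L²), so the root is unique and the iteration converges. The matrix in this sketch is an arbitrary illustrative choice, not from the paper:

```python
import numpy as np

# f(x) = A x - b with nonsymmetric A whose symmetric part is positive
# definite: then f is strongly monotone with gamma = min eig((A+A^T)/2)
# and Lipschitz with L = ||A||_2.
A = np.array([[2.0, -1.0],
              [1.0,  2.0]])
b = np.array([1.0, 1.0])
f = lambda x: A @ x - b

gamma = np.linalg.eigvalsh((A + A.T) / 2).min()   # monotonicity constant
L = np.linalg.norm(A, 2)                          # Lipschitz constant
alpha = gamma / L**2                              # Zarantonello step size

x = np.zeros(2)
for _ in range(100):
    x = x - alpha * f(x)                          # contraction iteration

x_star = np.linalg.solve(A, b)
print(np.allclose(x, x_star))  # True
```

Strong monotonicity is what kills the cross term in ‖(x − αf(x)) − (y − αf(y))‖²; the optimal step α = γ/L² gives the contraction factor quoted above.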
Rohwedder and Schneider proved under certain assumptions (see Theorems 3.4 and 3.7 and Assumptions A and B in [17]) that the amplitude equations f : V → V′ are indeed locally strongly monotone. (Lipschitz continuity follows from the differentiability of f.) Thus, the second part of Theorem 5 guarantees that the truncated CC equations have a unique solution, and that the error tends to zero as we increase the basis size and the truncation level of T, provided the amplitude equation map f is locally strongly monotone and Lipschitz continuous.
Before addressing the extended CC method, we follow Helgaker and Jørgensen [9] and remark that one can view the CC method as minimization of E_CC(t) over V under the constraint f(t) = 0. The Lagrangian in this case becomes

(4) L(t, s) := ⟨φ0, e^{−T} H e^{T} φ0⟩ + Σ_µ s_µ ⟨φ_µ, e^{−T} H e^{T} φ0⟩ = ⟨φ0, (I + S†) e^{−T} H e^{T} φ0⟩,

where s = (s_µ)_{µ≠0} ∈ V is the multiplier, which can be gathered into an excitation operator S = Σ_µ s_µ X_µ. Note that D_{s_µ} L = f_µ, since L(t, s) = E_CC(t) + ⟨f(t), s⟩.
We shall in the next section see that the Lagrangian formulation is contained in the bivariational formulation of CC theory.
2.2. The extended coupled-cluster method.
To link the forthcoming discussion to the previous section, we note that Arponen [1] derived the CC Lagrangian starting from the bivariational Rayleigh-Ritz quotient E_bivar : H × H → R,

E_bivar(ψ, ψ′) := ⟨ψ′, Hψ⟩ / ⟨ψ′, ψ⟩.
Vis-à-vis the usual Rayleigh-Ritz quotient, ψ and ψ′ are here truly independent variables (not only treated as such in a formal manner). (See also the discussion following Eq. (24) in [13].) The stationary condition DE_bivar = 0 yields the left (and right) eigenvector(s) of H with eigenvalue E*; in fact, by straightforward differentiation we obtain the following result:
Theorem 6 (Bivariational principle). Let H : H → H′ be a bounded operator. Then E_bivar is an infinitely differentiable function at all points where ⟨ψ′, ψ⟩ ≠ 0, and D_ψ E_bivar = D_{ψ′} E_bivar = 0 if and only if the left and right SE is satisfied,

Hψ = Eψ, H†ψ′ = Eψ′, ⟨ψ′, ψ⟩ ≠ 0.

Here, H† : H → H′ is defined by ⟨H†ψ′, ψ⟩ := ⟨ψ′, Hψ⟩.
Remark 7. If we assume that H satisfies all the requirements (1a)-(1c), in particular that H is symmetric, the left and right eigenvalue problems become identical, being the weak formulation of the eigenvalue problem of a unique self-adjoint Ĥ over L²_N. Suppose that Ĥ is close to self-adjoint, e.g., self-adjoint up to an L²_N-bounded perturbation. It is then reasonable that the left and right eigenvalue problems can be simultaneously solved (but with ψ′ ≠ ψ). Thus, the bivariational principle can be thought of as a generalization of Rayleigh-Ritz to at least certain non-symmetric problems.
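Theorem 6 is easy to probe numerically for a small nonsymmetric matrix with real spectrum: the bivariational quotient is stationary exactly at a left/right eigenvector pair, and its value there is the eigenvalue. The matrix below is an arbitrary illustrative choice:

```python
import numpy as np

# Nonsymmetric tridiagonal matrix with positive off-diagonal products,
# hence a real spectrum (similar to a symmetric tridiagonal matrix).
A = np.array([[2.0, 1.0, 0.0],
              [0.5, 3.0, 1.0],
              [0.0, 0.5, 4.0]])

def E_bivar(psi_p, psi):
    """Bivariational quotient <psi', A psi> / <psi', psi>."""
    return psi_p @ (A @ psi) / (psi_p @ psi)

# Right and left eigenvectors for the smallest eigenvalue:
w, Vr = np.linalg.eig(A)
k = np.argmin(w.real)
psi = Vr[:, k].real
wl, Vl = np.linalg.eig(A.T)
kl = np.argmin(wl.real)
psi_p = Vl[:, kl].real

def grad(psi_p, psi, h=1e-6):
    """Central-difference gradient of E_bivar in all 6 variables."""
    g = []
    for i in range(3):
        d = np.zeros(3); d[i] = h
        g.append((E_bivar(psi_p + d, psi) - E_bivar(psi_p - d, psi)) / (2 * h))
        g.append((E_bivar(psi_p, psi + d) - E_bivar(psi_p, psi - d)) / (2 * h))
    return np.array(g)

print(E_bivar(psi_p, psi), np.abs(grad(psi_p, psi)).max())
```

Note that ψ and ψ′ are varied independently, which is exactly the relaxation that the bivariational principle introduces relative to Rayleigh-Ritz.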
We now introduce an exponential ansatz also for the wavefunction ψ̃. Following Arponen [1], we eliminate the denominator by changing the normalization of ψ′, i.e., we set ψ̃ = ψ′/⟨ψ′, ψ⟩. The two scalar constraints lead to a smooth submanifold M ⊂ H × H of codimension 2,

(5) M := {(ψ, ψ̃) ∈ H × H | ⟨φ0, ψ⟩ = ⟨ψ̃, ψ⟩ = 1}.
The next lemma shows that this manifold M can be parameterized using cluster amplitudes.
Lemma 8 (Extended CC parameterization). Suppose (ψ, ψ̃) satisfies ⟨φ0, ψ⟩ = ⟨ψ̃, ψ⟩ = 1. Then there exist unique (t, λ) ∈ V × V, depending smoothly on (ψ, ψ̃) ∈ M, such that

ψ = e^{T} φ0, and ψ̃ = e^{−T†} e^{Λ} φ0.

In other words, the map Φ : V × V → M, Φ(t, λ) := (ψ(t), ψ̃(t, λ)), is a smooth map with a smooth inverse.

Proof. By Theorem 1, t exists and is unique, depending smoothly on ψ and vice versa. Consider ω = e^{T(ψ)†} ψ̃, which depends smoothly on (ψ, ψ̃). We have ⟨φ0, ω⟩ = 1, so by Theorem 1 there exists a unique λ depending smoothly on ω, and hence on (ψ, ψ̃), such that ω = e^{Λ} φ0. Now ψ̃ = e^{−T†} e^{Λ} φ0, a smooth map of (t, λ).
We define the extended coupled-cluster energy functional E : V × V → R by E = E_bivar ∘ Φ, viz.,

(6) E(t, λ) = ⟨φ0, e^{Λ†} e^{−T} H e^{T} φ0⟩.
Eq. (6) defines Arponen's ECC energy functional in a continuous, infinite dimensional formulation.
Theorem 9 (Continuous extended coupled-cluster equations). Let the Hamiltonian H : H → H′ be as before. Then

Hψ* = E*ψ*, and Hψ̃* = E*ψ̃*, with normalization ⟨φ0, ψ*⟩ = ⟨ψ̃*, ψ*⟩ = 1,

if and only if DE(t*, λ*) = 0, i.e., D_t E(t*, λ*) = 0 and D_λ E(t*, λ*) = 0, where

(7a) D_{t_µ} E(t, λ) = ⟨φ0, e^{Λ†} [e^{−T} H e^{T}, X_µ] φ0⟩,
(7b) D_{λ_µ} E(t, λ) = ⟨φ_µ, e^{Λ†} e^{−T} H e^{T} φ0⟩,

and where (ψ*, ψ̃*) = Φ(t*, λ*).
Proof. Φ is differentiable with a differentiable inverse on M, which is precisely the set of function pairs satisfying the normalization constraints. Thus DE(t*, λ*) = D[E_bivar ∘ Φ](t*, λ*) = 0 if and only if DE_bivar(ψ*, ψ̃*) = 0 with the side condition ⟨ψ̃*, ψ*⟩ = ⟨φ0, ψ*⟩ = 1. Moreover, E(t*, λ*) = E_bivar(ψ*, ψ̃*) = E*. The formulas for the partial derivatives of E follow by elementary differentiation strategies.
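In the same finite toy model as before (φ0 = e_0, X_µ = |e_µ⟩⟨e_0|, so e^T = I + T and e^Λ = I + Λ), one can build (t*, λ*) from the exact ground state via the recipe in Lemma 8 and verify the conclusion of Theorem 9 numerically. Everything below is an illustrative sketch, not the paper's construction:

```python
import numpy as np

n = 5
rng = np.random.default_rng(2)
S = rng.standard_normal((n, n))
H = np.diag(np.arange(n, dtype=float)) + 0.1 * (S + S.T)
e0 = np.eye(n)[:, 0]

w, V = np.linalg.eigh(H)
psi = V[:, 0] / V[0, 0]                  # intermediate normalization
E_star = w[0]

t_star = psi[1:]                         # psi = (I + T) e_0
psi_tilde = psi / (psi @ psi)            # psi' / <psi', psi>, with psi' = psi
omega = psi_tilde + (t_star @ psi_tilde[1:]) * e0   # (I + T^dag) psi_tilde
lam_star = omega[1:]                     # omega = (I + Lambda) e_0

def E(t, lam):
    """ECC energy <phi0, e^{Lam^dag} e^{-T} H e^{T} phi0> in the toy model."""
    T = np.zeros((n, n)); T[1:, 0] = t
    Lam = np.zeros((n, n)); Lam[1:, 0] = lam
    return ((np.eye(n) + Lam) @ e0) @ ((np.eye(n) - T) @ H @ (np.eye(n) + T) @ e0)

def num_grad(t, lam, h=1e-6):
    g = []
    for i in range(n - 1):
        d = np.zeros(n - 1); d[i] = h
        g += [(E(t + d, lam) - E(t - d, lam)) / (2 * h),
              (E(t, lam + d) - E(t, lam - d)) / (2 * h)]
    return np.array(g)

print(E(t_star, lam_star) - E_star, np.abs(num_grad(t_star, lam_star)).max())
```

Both printed numbers are at rounding level: the ECC energy equals E* and (t*, λ*) is a stationary point, as Theorem 9 asserts.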
As in the case of standard CC theory, the continuous ECC equations do not imply that truncations in amplitude space or basis set give a well-behaved approximate method. To achieve this is the goal of the next section.
Remark 10. (i) We note that both ψ and ψ′ are parameterized in an explicitly multiplicatively separable manner when the system is decomposed into non-interacting subsystems. This is the main advantage of the ECC parameterization. We observe that the CC Lagrangian (given by Eq. (4)) is obtained by a further change of variables S† := e^{Λ†} − I, which destroys this property of ψ′. Alternatively, one can view the CC Lagrangian as a first-order approximation to the ECC functional in terms of λ.
(ii) Arponen defined a further change of variables through

t′_µ = ⟨φ0, e^{Λ†} X†_µ T φ0⟩,

where the inverse t = t(t′, λ) is explicitly given by

t_µ = ⟨φ0, e^{−Λ†} X†_µ T′ φ0⟩,

see Eqs. (5.6) and (5.7) in [1]. The variables (t′, λ) turn out to be canonical in the sense of classical Hamiltonian mechanics, i.e., the time-dependent Schrödinger equation is equivalent to Hamilton's equations of motion,

i ṫ′_µ = D_{λ_µ} E′, i λ̇_µ = −D_{t′_µ} E′,

where E′(t′, λ) := E(t(t′, λ), λ), and ṫ (resp. λ̇) denotes the time derivative of the amplitudes t (resp. λ). The canonical variables have a computational advantage over the earlier defined non-canonical variables. As it turns out, they introduce cancellations in the (linked) diagram series for E* compared to when using the non-canonical (t, λ). We shall not use the variables (t′, λ) here, as the analysis becomes considerably more complicated, and instead relegate their study to future work.
3. Analysis of ECC from monotonicity.
3.1. The flipped gradient F. We will discuss the stationary point of E corresponding to the ground-state energy E* in terms of a map F : V × V → V′ × V′ defined by flipping the components of the (Fréchet) derivative DE = (D_t E, D_λ E), i.e.,

(9) F := (D_t E, D_λ E) (0 1; 1 0) = (D_λ E, D_t E).

The components of the derivative are given in Eqs. (7). For the forthcoming discussion, let B_δ(t, λ) denote the ball of radius δ > 0 centered at (t, λ) ∈ V × V. Here the norm is ‖(·, ··)‖²_{V×V} := ‖·‖²_V + ‖··‖²_V. Let (t*, λ*) ∈ V × V be the optimal amplitudes corresponding to the ground-state pair (ψ*, ψ̃*); in particular, F(t*, λ*) = 0. For the extended CC function F we now want to establish:
(i) F is locally Lipschitz, i.e., for (t, λ) ∈ V × V there exists δ > 0 such that (t_i, λ_i) ∈ B_δ(t, λ) implies

‖F(t1, λ1) − F(t2, λ2)‖_{V′×V′} ≤ L ‖(t1, λ1) − (t2, λ2)‖_{V×V}

for some (Lipschitz) constant L > 0, possibly depending only on (t, λ) and δ.

(ii) F is locally and strongly monotone at (t*, λ*) ∈ V × V, i.e., there exist δ, γ > 0 such that

⟨F(t1, λ1) − F(t2, λ2), (t1, λ1) − (t2, λ2)⟩ ≥ γ (‖t1 − t2‖²_V + ‖λ1 − λ2‖²_V)

holds for all (t1, λ1), (t2, λ2) ∈ B_δ(t*, λ*).
Item (i) above is readily established using the fact that F is the flipped gradient of a smooth function. For (ii), we shall formulate two sets of assumptions (Assumption 1 and Assumption 2 below), each of which is enough to give strong monotonicity for F locally at (t*, λ*). Having proved (i) and (ii), we can apply Theorem 5 to obtain existence and uniqueness results, also for truncated schemes. The definition of local strong monotonicity of the map F reduces to the existence of a γ > 0 such that, for (t_i, λ_i) close to (t*, λ*), the quantity

(10) ∆1(t1, λ1, t2, λ2) + ∆2(t1, λ1, t2, λ2) := ⟨D_λ E(t1, λ1) − D_λ E(t2, λ2), t1 − t2⟩ + ⟨D_t E(t1, λ1) − D_t E(t2, λ2), λ1 − λ2⟩

satisfies

(11) ∆1(t1, λ1, t2, λ2) + ∆2(t1, λ1, t2, λ2) ≥ γ (‖t1 − t2‖²_V + ‖λ1 − λ2‖²_V).
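The inequality (11) can be probed numerically in the finite toy model used earlier (φ0 = e_0, X_µ = |e_µ⟩⟨e_0|) by evaluating the flipped gradient with central differences and sampling pairs near the stationary point. This is illustrative evidence for the toy model only, not a proof:

```python
import numpy as np

n = 4
rng = np.random.default_rng(3)
S = rng.standard_normal((n, n))
H = np.diag(np.arange(n, dtype=float)) + 0.05 * (S + S.T)
e0 = np.eye(n)[:, 0]

def E(t, lam):
    """Toy ECC energy; T and Lambda are nilpotent, so exponentials truncate."""
    T = np.zeros((n, n)); T[1:, 0] = t
    Lam = np.zeros((n, n)); Lam[1:, 0] = lam
    return ((np.eye(n) + Lam) @ e0) @ ((np.eye(n) - T) @ H @ (np.eye(n) + T) @ e0)

def F(t, lam, h=1e-6):
    """Flipped gradient (D_lam E, D_t E) by central differences, cf. Eq. (9)."""
    gt, gl = np.zeros(n - 1), np.zeros(n - 1)
    for i in range(n - 1):
        d = np.zeros(n - 1); d[i] = h
        gt[i] = (E(t + d, lam) - E(t - d, lam)) / (2 * h)
        gl[i] = (E(t, lam + d) - E(t, lam - d)) / (2 * h)
    return np.concatenate([gl, gt])

# Stationary point from the exact ground state (Lemma 8 recipe):
w, V = np.linalg.eigh(H)
psi = V[:, 0] / V[0, 0]
t_s = psi[1:]
psi_t = psi / (psi @ psi)
lam_s = (psi_t + (t_s @ psi_t[1:]) * e0)[1:]

vals = []
for _ in range(200):
    d1, d2 = 0.05 * rng.standard_normal((2, 2 * (n - 1)))
    x1 = np.concatenate([t_s, lam_s]) + d1
    x2 = np.concatenate([t_s, lam_s]) + d2
    dF = F(x1[:n - 1], x1[n - 1:]) - F(x2[:n - 1], x2[n - 1:])
    vals.append(dF @ (x1 - x2))
print(min(vals) > 0)
```

In this toy, positivity is driven by the spectral gap of H above its lowest eigenvalue, mirroring the role of the gap assumptions introduced below.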
The choice of the map F can be motivated as follows. It is clear that DE cannot be locally strongly monotone since, just like for E_bivar, all the critical points of E are intuitively saddle points (we will not prove this claim). On the other hand, in Ref. [17] the map f(t) from Theorem 3 was considered and demonstrated to be locally strongly monotone under suitable assumptions. We observe that f = D_s L, a partial derivative of the Lagrangian, which is linear in s, so that f is only a function of t. In Ref. [17] it was demonstrated that (locally at t*)

(12) ∆(t1, t2) = ⟨[D_s L](t1) − [D_s L](t2), t1 − t2⟩ ≥ γ ‖t1 − t2‖²_V,

for some constant γ > 0. Thus, Eq. (12) is "half" of the inequality (11). In extended CC, the functional E is nonlinear in λ, indicating that we should include λ in the monotonicity argument.
3.2. Assumptions and preparation. The analysis of Arponen's ECC method conducted here will be based on two complementary assumptions, Assumption 1 and Assumption 2. The former deals with the accuracy of the ansatz, i.e., the accuracy of the reference φ0, while the latter considers a splitting of the Hamiltonian, e.g., the smallness of the fluctuation potential when a Hartree-Fock reference is used. We thus obtain two complementary monotonicity results applicable in different situations. However, both assumptions rest on conditions on spectral gaps. Recall that P denotes the reference space, and moreover set P* := span{ψ*}. Let P and P* denote the L²_N-orthogonal projections onto P and P*, respectively. Essential for the analysis, we then either have to assume that there exists γ* > 0 such that (Assumption 1)

(13) ⟨(I − P*)ψ, (H − E*)(I − P*)ψ⟩ ≥ γ* ‖(I − P*)ψ‖²,

or that there exists γ0 > 0 such that (Assumption 2)

(14) ⟨(I − P)ψ, (F − e0)(I − P)ψ⟩ ≥ γ0 ‖(I − P)ψ‖²,

for all ψ ∈ H. Here F is a one-body operator that has φ0 as ground state with ground-state energy e0. A Hamiltonian splitting is then given by H = F + (H − F), and will be dealt with below in connection with Assumption 2. We note that Eq. (13) expresses the fact that E* is the leftmost eigenvalue of H, that this eigenvalue exists, and that it has multiplicity 1. We reiterate that throughout the analysis we assume that the system Hamiltonian is bounded as a quadratic form and additionally satisfies a Gårding estimate; see the discussion in Section 2.1, and in particular Eqs. (1a)-(1c). We first state a slight upgrade of Lemma 3.5 in [17]. Note that for ψ ∈ H, (I − P)ψ ∈ Q. Also recall that in our notation ‖·‖ is the L²_N norm.

Lemma 11. With ψ* = φ0 + ψ⊥, where ψ⊥ ∈ Q is the correction to φ0, we have:

(i) Assume that (13) holds with γ* > 0 and that ‖ψ⊥‖_H < ε. Then there exists a γ_ε ∈ (0, γ*] such that, for all ψ ∈ Q,
(15) ⟨ψ, (H − E*)ψ⟩ ≥ (γ_ε c)/(γ_ε + e + E*) ‖ψ‖²_H,

where γ_ε → γ* as ε → 0+.

(ii) Assume Fφ0 = e0φ0, that (14) holds with γ0 > 0, and that F satisfies the Gårding estimate given in (1b) (with constants e_F and c_F). Then

(16) ⟨ψ, (F − e0)ψ⟩ ≥ (γ0 c_F)/(γ0 + e_F + e0) ‖ψ‖²_H for all ψ ∈ Q.
Proof. (i) Let ψ ∈ Q. We first show that for γ ε > 0 (and where γ ε → γ * as ε → 0+) there holds (17) ψ
, (H − E * )ψ ≥ γ ε ψ 2 .
Following the argument in the proof of Lemma 2.4 in [17], we then have with 0 < q := γ ε /(γ ε + e + E * ) < 1 (recall that e + E * > 0 by necessity of the Gårding estimate)
ψ, (H − E * )ψ = q ψ, (H − E * )ψ + (1 − q) ψ, (H − E * )ψ ≥ qc ψ 2 H + (γ ε − q(γ ε + e + E * )) ψ 2 .
Thus, if (17) holds we are done. Let P and P * be as above. We use that
P − P * B(L 2 N ) ≤ 2 φ 0 − ψ ′ * ,
where ψ ′ * = ψ * / ψ * . Since ψ * = φ 0 + ψ ⊥ , with α := ψ ⊥ we have P − P * B(L 2 N ) ≤ 2 2 − 2(1 + α 2 ) −1/2 1/2 =: j(α). Note that j(α) is an increasing function for α > 0 and j(α) = 2α + O(α 2 ).
Since (H − E * )P * ψ = 0 (and H is symmetric), the left-hand side of (17) equals
(I − P * )ψ, (H − E * )(I − P * )ψ ,
which by (13) is bounded from below by γ * (I − P * )ψ 2 . Thus for α sufficiently small
ψ, (H − E * )ψ ≥ γ * ( (I − P )ψ − (P − P * )ψ ) 2 ≥ γ * (1 − j(α)) 2 ψ 2 .
Since ε > ψ ⊥ H ≥ α, we have that (17) holds with γ ε := γ * (1 − j(ε)) 2 . It is clear that γ ε → γ * as ε tends to zero from above because j(ε) → 0.
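The projection perturbation bound used above, that the operator norm of P − P * is at most twice the distance between the normalized vectors, can be checked numerically in a finite-dimensional toy model. A sketch with numpy; the dimension, seed and vectors are arbitrary stand-ins, not the actual L 2 N objects:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Two normalized vectors playing the roles of phi_0 and psi'_* = psi_*/||psi_*||.
phi0 = rng.standard_normal(n)
phi0 /= np.linalg.norm(phi0)
psi = phi0 + 0.3 * rng.standard_normal(n)   # a perturbation of phi0
psi /= np.linalg.norm(psi)

# Rank-one orthogonal projections P = phi0 phi0^T and P_* = psi psi^T.
P = np.outer(phi0, phi0)
Pstar = np.outer(psi, psi)

# Operator (spectral) norm of the difference versus the bound 2 ||phi0 - psi||.
lhs = np.linalg.norm(P - Pstar, ord=2)
rhs = 2.0 * np.linalg.norm(phi0, ) if False else 2.0 * np.linalg.norm(phi0 - psi)
assert lhs <= rhs + 1e-12
```

The inequality follows from writing P − P * = u(u − v)ᵀ + (u − v)vᵀ for unit vectors u, v, which is the same decomposition the proof relies on.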
(ii) With q F := γ 0 /(γ 0 + e F + e 0 ) we have 0 < q F < 1 since e F > −e 0 (equivalent to e > −E * ). Thus we can repeat the above scheme with q = q F to complete the proof.
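The convex-combination trick in the proof of Lemma 11 — interpolating between the spectral-gap bound and the Gårding estimate with weight q = γ/(γ + a) so that the ‖ψ‖² terms cancel — can be illustrated on matrices. A numpy sketch under the assumption that the H-norm is induced by an SPD matrix M ⪰ I; all constants here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8

# H-norm given by an SPD matrix M >= I, so ||psi||_H >= ||psi||.
B = rng.standard_normal((n, n))
M = np.eye(n) + B.T @ B

# A symmetric operator with spectral gap gamma: <psi, A psi> >= gamma ||psi||^2.
gamma = 0.5
C = rng.standard_normal((n, n))
A = gamma * np.eye(n) + C.T @ C

# A Garding estimate <psi, A psi> >= c ||psi||_H^2 - a ||psi||^2 holds with, e.g.,
c = 0.1
a = c * np.linalg.eigvalsh(M).max()          # then A - (c M - a I) is PSD
assert np.linalg.eigvalsh(A - (c * M - a * np.eye(n))).min() >= -1e-8

# The convex-combination trick: with q = gamma/(gamma + a), the ||psi||^2 terms
# cancel and <psi, A psi> >= q c ||psi||_H^2 for every psi, i.e. A - q c M is PSD.
q = gamma / (gamma + a)
assert 0 < q < 1
assert np.linalg.eigvalsh(A - q * c * M).min() >= -1e-8
```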
Because the relation ψ ⊥ = (e T * − I)φ 0 holds, it is immediate that ψ ⊥ H is small if and only if t * V is. It is a fact that the operator norm T B(H) is equivalent to the norm t V , see Ref. [17]. We now state the first assumption: Assumption 1. Let η ε := γ ε c/(γ ε + e + E * ). We assume the following: (a) Eq. (13) holds with a strictly positive spectral gap γ * > 0. (b) The optimal amplitudes t * and λ * are sufficiently small in · V norm. With
C * := C + |E * | we then assume ψ ⊥ H < ε, where ε > 0 is chosen such that b * (t * , λ * ) := e −T † * e Λ * − I B(H) + e −T † * e Λ * B(H) e T * − I B(H) + K φ 0 H e −T † * B(H) e T * B(H) e Λ * − I B(H) < η ε /C * .(18)
Here, K is a constant such that T B(H) ≤ K t V , which exists since the norms are equivalent.
Remark 12. It is in fact possible to choose ε > 0 such that (18) holds. Indeed, ε = 0 is equivalent to t * = λ * = 0, and b * (t * , λ * ) = b(ε), a smooth function of ε. Since, b(ε) → 0+ as ε → 0+ and γ ε tends to the spectral gap γ * , there exists a ε 0 such that b * < η ε /C * for ε ≤ ε 0 . Furthermore, at ε = 0 we have ψ * = φ 0 , such that γ * = γ 0 and P * = P.
We next define the similarity transformed Hamiltonian H t and the doubly similarity transformed Hamiltonian H t,λ as given by
H t := e −T He T , H t,λ := e Λ † H t e −Λ † .
Note that (H t ) λ = H t,λ . Since e T * φ 0 solves the SE with eigenvalue E * , φ 0 is an eigenfunction of H t * with the same eigenvalue. This fact and e Λ † * φ 0 = e −Λ † * φ 0 = φ 0 make it easy to verify (i) in Lemma 13. Lemma 13. Let f (t * ) = F (t * , λ * ) = 0 and E * = E(t * , λ * ). Then (i) H t * φ 0 = E * φ 0 and H t * ,λ * φ 0 = E * φ 0 . (ii) H † t * ,λ * φ 0 = E * φ 0 . Proof. It remains to prove (ii). We know that (by definition of the left eigenfunction of H)
φ 0 , e Λ † * e −T * H = E * φ 0 , e Λ † * e −T * . Thus φ 0 , e Λ † * H t * equals E * φ 0 , e Λ † * , i.e., H † t * e Λ * φ 0 = E * e Λ * φ 0 . Remark 14.
Note that Lemma 13 is valid for any critical point (t c , λ c ) with corresponding eigenvalue E c , not only the ground state ((t * , λ * ) and E * ). Furthermore, as stated in Lemma 13, the double similarity transform makes φ 0 both the left and right eigenvector of H t * ,λ * with the same eigenvalue.
We now move on to Assumption 2, which corresponds to an assumption made in Ref. [17], but suitable for ECC. Roughly speaking, instead of assuming that the reference φ 0 is sufficiently accurate, in Assumption 2 we assume that we have a splitting H = F + W where F is a one-body operator, and where W is sufficiently small in some appropriate sense. For example, F can be the Fock operator and W the fluctuation potential of a molecule in the Born-Oppenheimer approximation. Moreover, we assume that F φ 0 = e 0 φ 0 and that (14) holds, where γ 0 is the so-called HOMO-LUMO gap.
It can be remarked that, due to the structure of H, the Baker-Campbell-Hausdorff (BCH) expansion for H t terminates identically after four nested commutators in the case of a two-body interaction operator, i.e., H t is actually a polynomial of low order, independently of the number of particles.
The expansion for the outer similarity transform in H t,λ also truncates, albeit at a higher order. Thus, we have a finite sum
H t,λ = m,n 1 n!m! [[H, T ] (n) , −Λ † ] (m) .
Here [A, B] (n) denotes A n-fold commuted with B and [A, B] (0) := A. For (t, λ) ∈ V × V, we define the operator O(t, λ) through the relation
(19) H t,λ = H + [F, T ] + [Λ † , F ] + O(t, λ).
The significance of O(t, λ) is that (19) implies
(20) E(t, λ) − φ 0 , Hφ 0 = φ 0 , O(t, λ)φ 0 ,
i.e., O(t, λ) gives all nontrivial contributions to E. In the Hartree-Fock case, the right-hand side of Eq. (20) is the correlation energy functional, since the Hartree-Fock energy is given by E HF = φ 0 , Hφ 0 . The idea is that if the reference φ 0 is sufficiently good, the mapping (t, λ) → O(t, λ) will be well-behaved. In fact, since O(t, λ) is a (Fréchet-)smooth map, it is locally Lipschitz: Given (t, λ) ∈ V × V, there exist δ, L > 0 such that for all
(t i , λ i ) ∈ B δ (t, λ), O(t 1 , λ 1 ) − O(t 2 , λ 2 ) B(H,H ′ ) ≤ L (t 1 − t 2 , λ 1 − λ 2 ) V×V .
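The termination of the BCH series for nilpotent cluster operators, the mechanism behind the truncation of H t noted above, can be verified directly on matrices. A minimal numpy sketch, using a block-triangular T with T² = 0 so that the series already stops after the doubly nested commutator:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6

H = rng.standard_normal((n, n))
H = H + H.T                                   # a symmetric "Hamiltonian"

# A nilpotent "cluster operator": nonzero only in the top-right block, so T @ T = 0.
T = np.zeros((n, n))
T[:3, 3:] = rng.standard_normal((3, 3))
assert np.allclose(T @ T, 0.0)

def comm(X, Y):
    return X @ Y - Y @ X

# Since T^2 = 0, e^{T} = I + T exactly, and the BCH series for e^{-T} H e^{T}
# terminates after the doubly nested commutator:
Ht_exact = (np.eye(n) - T) @ H @ (np.eye(n) + T)
Ht_bch = H + comm(H, T) + 0.5 * comm(comm(H, T), T)
assert np.allclose(Ht_exact, Ht_bch)
```

For the physical two-body H the same mechanism (excitation operators commute among themselves and carry at most two annihilators into H) is what stops the series after four nested commutators.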
In our case, we assume that L is sufficiently small at (t * , λ * ). This, in a sense, measures the smallness of W . Assumption 2. Let H = F + W and η 0 := γ 0 c F /(γ 0 + e F + e 0 ). We assume the following:
(a) F : H → H ′ is a one-body operator that satisfies the same conditions as H, i.e., it is symmetric, bounded, and satisfies a Gårding estimate (with constants e F , c F ), as in Eqs. (1a-1c). The constant that bounds F is denoted C F and we set C 0 := C F + |e 0 |. (b) F φ 0 = e 0 φ 0 where e 0 is the smallest eigenvalue of F . Eq. (14) holds with a γ 0 > 0, i.e., there is a strictly positive HOMO-LUMO gap. In particular, Lemma 11 gives that (16) holds for all ψ ∈ Q. (c) The Lipschitz constant L at (t * , λ * ) and λ * V are not too large, so that that the following inequality holds:
0 < γ := η 0 − 1 2 L φ 0 H 3 + K (e Λ * − 1)φ 0 H + e Λ * φ 0 H / φ 0 H + 2 e Λ * B(H) − C 0 e Λ * − 1 B(H) .(21)
Here, K is a constant such that T B(H) ≤ K t V , which exists since the norms are equivalent.
Remark 15. Assumption 2(c) does not assume that λ * is small compared to λ 1 − λ 2 . However, λ * (and L) cannot be too large, since then γ eventually becomes negative. If we do assume that λ * V < δ, we obtain some simplifications, see Corollary 18 below.
Proof of Monotonicity.
We set ∆ := ∆ 1 + ∆ 2 , the left-hand side of Eq. (10). We then wish to prove
(22) ∆ ≥ γ t 1 − t 2 2 V + λ 1 − λ 2 2 V
where (t i , λ i ) ∈ B δ (t * , λ * ) and γ, δ > 0. To simplify notation we define T = (T 1 + T 2 )/2 and δT = T 1 − T 2 , and similarly Λ = (Λ 1 + Λ 2 )/2 and δΛ = Λ 1 − Λ 2 . Consequently, we write δt V and δλ V for t 1 − t 2 V and λ 1 − λ 2 V , respectively.
Theorem 16. Assume that Assumption 1 holds. Then F is strongly monotone locally at (t * , λ * ), F (t * , λ * ) = 0, belonging to the ground-state energy E * = E(t * , λ * ).
Proof. Using the formulas (7) for the partial derivatives, we obtain for the two terms in Eq. (10),
∆ 1 = δT φ 0 , e Λ † 1 H t1 − e Λ † 2 H t2 φ 0 , ∆ 2 = φ 0 , e Λ † 1 [H t1 , δΛ] − e Λ † 2 [H t2 , δΛ] φ 0 .
Moreover, we make use of the following notation g i := t i − t * , k i := λ i − λ * and define the excitation operators G i := µ (g i ) µ X µ and K i := µ (k i ) µ X µ . Also we write δG and δK as for T and Λ, where of course δG = δT and δK = δΛ. As in [17], we note that the similarity transformed Hamiltonians H ti can be expanded in terms of H t * as
(23) H ti = H t * + [H t * , G i ] + O( g i 2 V ).
Let ∆̃ be the second-order Taylor expansion of ∆ around (t * , λ * ), i.e., ∆ = ∆̃ + O( (δt, δλ) 3 V×V ). We will demonstrate the claim by first showing that ∆̃ satisfies (22) for some γ̃ > 0, using Assumption 1. Now by (23) and Λ i = K i + Λ * , we see that
∆ 1 = δT φ 0 , e K † 1 e Λ † * (H t * + [H t * , G 1 ] + O( g 1 2 V )) − e K † 2 e Λ † * (H t * + [H t * , G 2 ] + O( g 2 2 V )) φ 0 .
With the aid of Lemma 13 and since e K † i φ 0 = φ 0 , it holds
∆ 1 = δT φ 0 , e K † 1 e Λ † * [H t * , G 1 ] − e K † 2 e Λ † * [H t * , G 2 ] + O( g 1 2 V ) + O( g 2 2 V ) φ 0 .
As a next step we truncate e K † i = I + O( k i V ) and there holds
∆ 1 = δT φ 0 , e Λ † * [H t * , δT ]φ 0 + 3 k=0 O( g i k V k i 3−k V ) = δT φ 0 , e Λ † * (H t * − E * )δT φ 0 + 3 k=0 O( g i k V k i 3−k V ).
Again we have made use of Lemma 13. Equation (15) from Lemma 11 and (1a) give two useful bounds,
(24) ψ ′ , (H − E * )ψ ≥ η ε ψ 2 H − C * ψ ′ − ψ H ψ H ,
(25) ψ ′ , (H − E * )ψ ≥ −C * ψ ′ H ψ H .
Using these,
∆ 1 = δT φ 0 , e Λ † * (H t * − E * )δT φ 0 = e −T † * e Λ * δT φ 0 , (H − E * )δT φ 0 + e −T † * e Λ * δT φ 0 , (H − E * )(e T * − I)δT φ 0 ≥ η ε δT φ 0 2 H − C * e −T † * e Λ * − I B(H) δT φ 0 2 H − C * e −T † * e Λ * B(H) e T * − I B(H) δT φ 0 2 H = δt 2 V η ε − C * ( e −T † * e Λ * − I B(H) + e −T † * e Λ * B(H) e T * − I B(H) ) .
Next, we look at ∆ 2 . Proceeding in a similar fashion, we compute
∆ 2 = φ 0 , (I + K † 1 + O( k 1 2 V ))e Λ † * [H t * + [H t * , G 1 ] + O( g 1 2 V ), δΛ]φ 0 − φ 0 , (I + K † 2 + O( k 2 2 V ))e Λ † * [H t * + [H t * , G 2 ] + O( g 2 2 V ), δΛ]φ 0 = φ 0 , δΛ † e Λ † * (H t * − E * )δΛφ 0 + φ 0 , e Λ † * [[H t * , δT ], δΛ] φ 0 + 3 k=0 O( g i k V k i 3−k V ) =: ∆̃ 2,1 + ∆̃ 2,2 + 3 k=0 O( g i k V k i 3−k V ),(26)
where the last equality defines ∆̃ 2,1 and ∆̃ 2,2 . For ∆̃ 2,1 in (26), we again employ Eqs. (24) and (25) to obtain

∆̃ 2,1 = φ 0 , δΛ † e Λ † * (H t * − E * )δΛφ 0 = e −T † * e Λ * δΛφ 0 , (H − E * )δΛφ 0 + e −T † * e Λ * δΛφ 0 , (H − E * )(e T * − I)δΛφ 0 ≥ η ε δΛφ 0 2 H − C * e −T † * e Λ * − I B(H) δΛφ 0 2 H − C * e −T † * e Λ * B(H) e T * − I B(H) δΛφ 0 2 H = δλ 2 V η ε − C * ( e −T † * e Λ * − I B(H) + e −T † * e Λ * B(H) e T * − I B(H) ) .

Turning to ∆̃ 2,2 in (26), we have by Lemma 13

∆̃ 2,2 = φ 0 , e Λ † * [[H t * , δT ], δΛ] φ 0 = e Λ * φ 0 , (H t * δT − δT H t * )δΛ − δΛ(H t * δT − δT H t * ) φ 0 = e Λ * φ 0 , δT (E * − H t * )δΛ − δΛ(H t * − E * )δT φ 0 .

Since δT (E * − H t * )δΛ − δΛ(H t * − E * )δT φ 0 ∈ Q, we only need to keep that part of e Λ * φ 0 that belongs to Q. Using Eq. (25), it holds that

∆̃ 2,2 = e −T † * δT † (e Λ * − I)φ 0 , (E * − H)e T * δΛφ 0 + e −T † * δΛ † (e Λ * − I)φ 0 , (E * − H)e T * δT φ 0 ≥ −2C * K φ 0 H e −T † * B(H) e T * B(H) e Λ * − I B(H) δt V δλ V ≥ −C * K φ 0 H e −T † * B(H) e T * B(H) e Λ * − I B(H) δλ 2 V + δt 2 V .
To summarize, collecting the lower bounds for ∆̃ 1 and ∆̃ 2,i we can now conclude by means of the definition given by (18)

∆̃ ≥ (η ε − C * b * (t * , λ * )) ( δt 2 V + δλ 2 V ).

By Assumption 1, γ̃ := η ε − C * b * (t * , λ * ) > 0 such that

∆̃ ≥ γ̃ ( δt 2 V + δλ 2 V ), γ̃ > 0,(27)
holds. To conclude the proof, we just have to note that by (27)
∆ ≥γ δt 2 V + δλ 2 V + O( (δt, δλ) 3 V×V )
and by choosing δ sufficiently small there holds for some γ ∈ (0,γ]
∆ ≥ γ δt 2 V + δλ 2 V for (t i , λ i ) ∈ B δ (t * , λ * ).
Theorem 17. Assume that Assumption 2 holds. Then F is strongly monotone locally at (t * , λ * ), F (t * , λ * ) = 0, belonging to the ground-state energy E * = E(t * , λ * ).
Proof. As in the proof of Theorem 16, we study ∆ 1 and ∆ 2 separately before adding them together. We begin by noting that
∆ 1 = δT φ 0 , (e Λ † 1 H t1 − e Λ † 2 H t2 )φ 0 = δT φ 0 , (H t1,λ1 − H t2,λ2 )φ 0 ,
because any de-excitation of the reference φ 0 gives zero identically. Now, using Assumption 2 and the definition (19) of the operator O(t, λ) we immediately obtain the following lower bound for ∆ 1 ,
∆ 1 = δT φ 0 , (H t1,λ1 − H t2,λ2 )φ 0 = δT φ 0 , [F, δT ] + [δΛ † , F ] + O(t 1 , λ 1 ) − O(t 2 , λ 2 ) φ 0 = δT φ 0 , (F − e 0 )δT φ 0 + δT φ 0 , (O(t 1 , λ 1 ) − O(t 2 , λ 2 ))φ 0 ≥ η 0 δT φ 0 2 H − L δT φ 0 H (δt, δλ) V×V φ 0 H = η 0 δt 2 V − L φ 0 H δt V ( δt 2 V + δλ 2 V ) 1/2 ≥ η 0 δt 2 V − L φ 0 H δt V ( δt V + δλ V ).
We next turn to ∆ 2 . It holds,
(28) e Λ1 − e Λ2 = eΛδΛ + O( δλ 2 V ).
We compute
∆ 2 = φ 0 , e Λ † 1 [H t1 , δΛ] − e Λ † 2 [H t2 δΛ] φ 0 = φ 0 , (e Λ † 1 − e Λ † 2 )[H t1 , δΛ] + e Λ † 2 [H t1 − H t2 , δΛ] φ 0 = φ 0 , eΛ † δΛ † [F + W + O(t 1 , 0), δΛ] − eΛ † [O(t 1 , 0) − O(t 2 , 0), δΛ] φ 0 + O( δλ 3 V ) + O( δλ V δt 2 V ) + O( δλ 2 V δt V ).(29)
In the last equality, we exploited that the second-order nested commutator of F with two excitation operators vanishes. This is so since for µ ≠ 0 we have that [F, X µ ] is an excitation operator and consequently [[F, T ], T ′ ] = 0. Moreover, we used that
O(t 1 , 0) − O(t 2 , 0) = O( δt )
, allowing us to replace Λ 2 with Λ = Λ 2 + δΛ/2, a change which only affects the higher-order terms. Define ∆̃ 2 as the leading second-order term of ∆ 2 , i.e., the first term in the last line of Eq. (29), neglecting the third-order remainders (note that these are in total O( (δt, δλ) 3 V×V )). We will start by finding γ̃ > 0 such that

∆ 1 + ∆̃ 2 ≥ γ̃ ( δt 2 V + δλ 2 V ).

We split ∆̃ 2 into two contributions, ∆̃ 2,i , i = 1, 2.
Since O(t, 0) + W = e −T W e T , the BCH formula gives,
O(t + δλ, 0) − O(t, 0) = [O(t, 0) + W, δΛ] + O( δλ 2 ).
This gives us the directional derivative of O(·, 0) in the direction δλ,
DO(t, 0)(δλ) = [O(t, 0) + W, δΛ].
On the other hand, O is Lipschitz, so that
[O(t 1 , 0) + W, δΛ] B(H,H ′ ) ≤ DO(t 1 , 0) B(V,B(H,H ′ )) δλ V ≤ (L + K ′ δ) δλ V ,
for some constant K ′ . A useful bound is obtained from Eq. (16) from Lemma 11,
(30) ψ ′ , (F − e 0 )ψ ≥ η 0 ψ 2 H − C 0 ψ ′ − ψ H ψ H .
The first contribution becomes
∆ 2,1 = φ 0 , eΛ † δΛ † [F, δΛ]φ 0 + φ 0 , eΛ † δΛ † [O(t 1 , 0) + W, δΛ]φ 0 = eΛδΛφ 0 , (F − e 0 )δΛφ 0 + δΛeΛφ 0 , [O(t 1 , 0) + W, δΛ]φ 0 ≥ η 0 δΛφ 0 2 H − C 0 (eΛ − 1)δΛφ 0 H δΛφ 0 H − eΛδΛφ 0 H (L + K ′ δ) δλ V φ 0 H ≥ η 0 − C 0 eΛ − 1 B(H) − (L + K ′ δ) φ 0 H eΛ B(H) δλ 2 V .
The second contribution is
∆ 2,2 = φ 0 , eΛ † [O(t 1 , 0) − O(t 2 , 0), δΛ]φ 0 = eΛφ 0 , (O(t 1 , 0) − O(t 2 , 0))δΛφ 0 − eΛφ 0 , δΛ(O(t 1 , 0) − O(t 2 , 0))φ 0 ≥ −L eΛφ 0 H δλ V δt V − L δΛ † (eΛ − 1)φ 0 H φ 0 H δt V ≥ −L eΛφ 0 H δλ V δt V − LK (eΛ − 1)φ 0 H φ 0 H δλ V δt V = −L(K (eΛ − 1)φ 0 H + eΛφ 0 H / φ 0 H ) φ 0 H δλ V δt V .
We gather and obtain,
∆ 1 +∆ 2 ≥ η 0 δt 2 V − L φ 0 H δt V ( δt V + δλ V ) + η 0 − F − e 0 B(H,H ′ ) eΛ − 1 B(H) − (L + K ′ δ) φ 0 H eΛ B(H) δλ 2 V − L(K (eΛ − 1)φ 0 H + eΛφ 0 H / φ 0 H ) φ 0 H δλ V δt V ≥ η 0 − 1 2 L φ 0 H 3 + K (eΛ − 1)φ 0 H + eΛφ 0 H / φ 0 H + 2 eΛ B(H) − F − e 0 B(H,H ′ ) eΛ − 1 B(H) (δt, δλ) 2 V×V − K ′ δ φ 0 H eΛ B(H) (δt, δλ) 2 V×V =:γ(t,λ) (δt, δλ) 2 V×V .
We now note that, by Taylor's Theorem, γ̃(t,λ) = γ + ε(t,λ) − K ′ δ, with γ = γ̃(t * , λ * ) > 0 by Eq. (21) in Assumption 2, and |ε| ≤ Cδ for some C ≥ 0. Thus,
∆ 1 +∆ 2 ≥ (γ − (C + K ′ )δ) (δt, δλ) 2 V×V .
Finally,
∆ 1 + ∆ 2 ≥ (γ − (C + K ′ )δ) (δt, δλ) 2 V×V + O( (δt, δλ) 3 V×V ).
Since the third-order term cannot beat the second order term, by shrinking δ, we get
∆ 1 + ∆ 2 ≥ (γ − (C + K ′ )δ ′ ) (t 1 − t 2 , λ 1 − λ 2 ) 2 V×V whenever (t i , λ i ) ∈ B δ ′ (t * , λ * ).
Corollary 18. Assume Assumption 2(a-b) holds, and additionally that we have λ * V < δ. Also, assume that
(31) 0 < η 0 − 3L φ 0 H .
Then F is locally strongly monotone at the root (t * , λ * ) belonging to the ground-state energy.
Proof. It is enough to observe that we need to Taylor expand γ = γ̃(t * , λ * ) to zeroth order, i.e., setting λ * = 0 in Eq. (21). The reader can readily verify that this gives Eq. (31).
3.4. Existence, uniqueness, truncations and error estimates. Having obtained sufficient conditions for F to be locally strongly monotone at (t * , λ * ), we can now apply the local version of Zarantonello's theorem, Theorem 5, to obtain existence and local uniqueness of solutions, also for truncated versions of ECC.
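The mechanism behind Zarantonello's theorem — that x ↦ x − τF(x) is a contraction for a strongly monotone, Lipschitz F when τ = γ/L², so a unique root exists and the averaged iteration finds it — can be illustrated on a toy nonlinear map. A numpy sketch; the map F below is an arbitrary stand-in for the flipped gradient, not the actual ECC map:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5

# A toy strongly monotone, Lipschitz map: F(x) = A x + 0.1 tanh(x) - b with A SPD.
Q = rng.standard_normal((n, n))
A = np.eye(n) + 0.1 * (Q.T @ Q)
b = rng.standard_normal(n)

def F(x):
    return A @ x + 0.1 * np.tanh(x) - b

gamma = np.linalg.eigvalsh(A).min()      # monotonicity constant (tanh is monotone)
L = np.linalg.eigvalsh(A).max() + 0.1    # Lipschitz constant

# Zarantonello's contractive averaging: x -> x - tau F(x) with tau = gamma/L^2
# contracts with factor sqrt(1 - gamma^2/L^2) < 1, hence converges to the root.
tau = gamma / L**2
x = np.zeros(n)
for _ in range(500):
    x = x - tau * F(x)

assert np.linalg.norm(F(x)) < 1e-8
```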
In our setting, a (family of) truncated amplitude spaces V d × V d is such that if we let the dimension d → +∞, we can approximate (t * , λ * ) arbitrarily well. Of course, the usual truncation scheme defined by all excitations up to a given excitation level and additionally the restriction to a finite set of virtual orbitals, conforms with this. In the sequel it will be assumed that V d is closed in V.
The truncated ECC functional is the restriction E d : V d × V d → R of E, giving the critical point problem DE d = 0, i.e., find (t d , λ d ) ∈ V d × V d such that ∂E(t d , λ d )/∂t µ = ∂E(t d , λ d )/∂λ µ = 0,
where t µ (λ µ ) are the components of t ∈ V d (λ ∈ V d ) in some arbitrary orthonormal basis. Since the flipping map in Eq. (9) commutes with projection onto V d × V d , the truncated ECC equations can be written F d (t d , λ d ) = 0. While stated as a theorem, our main result is really a corollary of Theorems 16 and 17, and an elementary application of Theorem 5. The only point to check is that F is locally Lipschitz. However, F is (in fact infinitely) continuously differentiable in the Fréchet sense. Such functions are always locally Lipschitz.
Theorem 19. Assume that Assumption 1 or 2 holds such that F is locally strongly monotone (with constant γ) on B δ (t * , λ * ), for some δ > 0. Here, (t * , λ * ) is the root of F belonging to the ground-state energy. Furthermore, let L be the local Lipschitz constant of F at (t * , λ * ).
(i) The solution (t * , λ * ) of the continuous ECC equation
DE(t, λ) = 0 on V × V is locally unique. (ii) For sufficiently large d, the projected ECC problem DE d (t, λ) = 0 has a unique solution (t d , λ d ) in the neighborhood B δ (t * , λ * ) ∩ (V d × V d ). The truncated solution (t d , λ d ) satisfies the estimate (t d , λ d ) − (t * , λ * ) V×V ≤ (L/γ) d(V d × V d , (t * , λ * )).(32)
Remark 20. (i) The local uniqueness is also a direct consequence of the assumption that the ground state is non-degenerate and Lemma 8.
(ii) By the definition of the norm on V × V, (32) implies
(33) t d − t * 2 V + λ d − λ * 2 V ≤ (L 2 /γ 2 ) ( d(V d , t * ) 2 + d(V d , λ * ) 2 ),
and furthermore that (t d , λ d ) → (t * , λ * ) as d → +∞.
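The quasi-optimality estimate (32) is a Céa-type bound for strongly monotone problems: the Galerkin error is within a factor L/γ of the best-approximation error. A linear finite-dimensional illustration with numpy; the nonsymmetric matrix A is an arbitrary stand-in, chosen so that the Galerkin solution is not simply the orthogonal projection of the exact one:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 12, 5

# F(x) = A x - b with A strongly monotone (symmetric part = I) but nonsymmetric.
S = rng.standard_normal((n, n))
A = np.eye(n) + 0.3 * (S - S.T)
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)

gamma = np.linalg.eigvalsh(0.5 * (A + A.T)).min()   # monotonicity constant (= 1 here)
Lip = np.linalg.norm(A, ord=2)                      # Lipschitz constant

# Projected (Galerkin) problem on V_d = span of the first d coordinates.
V = np.eye(n)[:, :d]
x_d = V @ np.linalg.solve(V.T @ A @ V, V.T @ b)

# Cea-type bound: ||x_d - x_star|| <= (Lip/gamma) * d(V_d, x_star).
best = np.linalg.norm(x_star - V @ (V.T @ x_star))
assert np.linalg.norm(x_d - x_star) <= (Lip / gamma) * best + 1e-12
```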
Theorem 19 guarantees that for sufficiently large discrete amplitude spaces V d , the ECC equations actually have locally unique solutions that approximate the exact solution. However, we do not yet know what "sufficiently large" means.
By slightly adapting the proof of Theorem 4.1 in Ref. [17], we can obtain a sufficient condition on V d . This argument rests on Brouwer's fixed point theorem: any continuous function from a closed ball in R n into itself has a fixed point. Here, we employ a version of this result [8].
Lemma 21. Equip R n with any norm · n , and let B R be the closed ball of radius R centered at x = 0. Let h : B R → R n be continuous and assume that on the boundary of B R , h( x), x = h( x) · x ≥ 0. Then h( x) = 0 for some x ∈ B R .
Proof. Assume that h ≠ 0 everywhere. Then f ( x) := −Rh( x)/ h( x) n is continuous, mapping the ball into itself (in fact, onto its boundary). Therefore, f has a fixed point, say x 0 , i.e., x 0 = −Rh( x 0 )/ h( x 0 ) n . However, this gives the contradiction 0 < x 0 · x 0 = −R h( x 0 ), x 0 / h( x 0 ) n ≤ 0.
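A concrete finite-dimensional check of Lemma 21 can be made with numpy; here M, c and the radius R are illustrative choices for which the boundary condition ⟨h(x), x⟩ ≥ 0 can be verified by sampling and the zero exhibited explicitly:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4

S = rng.standard_normal((n, n))
M = np.eye(n) + 0.2 * (S - S.T)   # symmetric part = I, so <M x, x> = |x|^2
c = rng.standard_normal(n)

def h(x):
    return M @ x + c

# On the sphere of radius R = |c|: <h(x), x> = |x|^2 + <c, x> >= R^2 - |c| R = 0.
R = np.linalg.norm(c)
xs = rng.standard_normal((1000, n))
xs = R * xs / np.linalg.norm(xs, axis=1, keepdims=True)
assert all(x @ M @ x + c @ x >= -1e-10 for x in xs)

# ... and indeed h has a zero inside the closed ball, as Lemma 21 predicts
# (here sigma_min(M) >= 1 gives |x0| <= |c| = R).
x0 = np.linalg.solve(M, -c)
assert np.allclose(h(x0), 0.0)
assert np.linalg.norm(x0) <= R + 1e-12
```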
Following [17], the idea is now to choose h d such that F d = 0 is equivalent to h d = 0 and use the above argument.

Theorem 22. Let V d be a finite-dimensional subspace of V and set

(34) κ d := min (t,λ)∈V d ×V d (t, λ) − (t * , λ * ) V×V = (t m , λ m ) − (t * , λ * ) V×V .

Assume that κ d satisfies
(35) κ d ≤ δγ/(γ + L),
where γ and L are the monotonicity and Lipschitz constants, respectively, that hold on B δ (t * , λ * ). Then the projected extended coupled-cluster problem F d (t, λ) = 0 has a unique solution (t d , λ d ) in the neighborhood B δ (t * , λ * ) ∩ (V d × V d ).
Proof. Let d := dim V d and {b j } d j=1 be an orthonormal basis of V d . Define the continuous vector-valued function h d : R 2d → R 2d by h d ( x) = h d ( v, w) = ( y, z), where y j = D λ E(t m + v, λ m + w), b j , z j = D t E(t m + v, λ m + w), b j , and v = d j=1 v j b j , v = (v 1 , . . . , v d ), w = d j=1 w j b j , w = (w 1 , . . . , w d ). Let ( v, w) 2d := (v, w) V×V , a norm on R 2d (a fact that can be easily checked). By definition, h d = 0 is equivalent to F d = 0.
We now choose R := δ − κ d ≥ δL/(γ + L) > 0 and note that ( v, w) ∈ B R (t m , λ m ) implies (v, w) ∈ B δ (t * , λ * ). For x that satisfies x 2d = R, we have using monotonicity and Lipschitz continuity of F ,
h d ( x), x = d j=1 (y j v j + z j w j ) = F (t m + v, λ m + w), (v, w) = F (t m + v, λ m + w) − F (t m , λ m ), (v, w) + F (t m , λ m ) − F (t * , λ * ), (v, w) + F (t * , λ * ), (v, w) ≥ γ (v, w) 2 V×V − Lκ d (v, w) V×V . Since γR − Lκ d = γδ − κ d (γ + L) ≥ 0, we can conclude h d ( x), x = R(γR − Lκ d ) ≥ 0.
Lemma 21 now establishes that h d ( x * ) = 0 for some x * with x 2d = (v * , w * ) V×V ≤ R, which is equivalent to that (t d , λ d ) := (t m + v * , λ m + w * ) solves the projected problem F d = 0. The uniqueness follows from Theorem 19 applied to F d .
We will next show the power of the bivariational principle as far as the ECC method is concerned. The standard variational formulation of CC theory introduces a Lagrangian. Error estimates for the CC energy then require that the dual problem has a solution. (See [17] where this non-trivial step has been done by means of the Lax-Milgram theorem.) However, the ECC method is based on the bivariational principle and the energy itself is stationary in this formulation, i.e., the solution (t * , λ * ) is a critical point of the bivariational energy. When (t d , λ d ) is close to the exact solution, we are guaranteed a quadratic error estimate for free. As our last order of business we will discuss this further.
Under the assumption that H supports a ground state with ground-state energy E * , the Rayleigh-Ritz variational principle states that E * ≤ E var (ψ) := ψ, Hψ / ψ, ψ for any ψ ∈ H. Minimizing E var over trial wavefunctions (say, considering H appr ⊂ H) yields an approximate energy E appr that also provides an upper bound to E * , i.e., E appr ≥ E * . Furthermore, since D ψ E var (ψ * ) = 0, we obtain a second-order error estimate of the energy (see for instance Eq. (1.4) in [17] and the reference given in connection for more refined estimates) 0 ≤ E appr − E * ≤ C ψ appr − ψ * 2 H ≤ C ′ d(H appr , ψ * ) 2 . In a similar fashion, the critical point condition DE bivar (ψ * , ψ ′ * ) = 0 of the bivariational quotient will give us a second-order error estimate of the ECC energy.
Theorem 3 (Rohwedder and Schneider, continuous CC formulation). Under the assumptions on H stated in Eqs. (1a) and (1b), ψ * = e T * φ 0 solves Hψ * = E * ψ * if and only if f (t * ) = 0, and E CC (t * ) = E * .
As far as truncations of the double wavefunction space M ⊂ H × H are concerned (see Eq. (5)), where the bivariational pair (ψ,ψ) is an element, we will use

Since M d is closed (we assume that V d is closed, see the next lemma), we define the distance d(M d , (ψ * ,ψ * )) := min

where (·, ··) 2 H×H := · 2 H + · · 2 H .

Lemma 23. Assume that V d is closed. Then M d is closed. Moreover, it holds

for some constant C.

Proof. By Lemma 8, the map Φ : (t, λ) → (e T φ 0 , e −T † e Λ φ 0 ) and its inverse are smooth and

For (36), we first note that

This gives (where we let C be a constant that is redefined and reused at leisure)

The desired inequality then follows from,

Theorem 24. Let δ > 0 be such that F is strongly monotone (with constant γ) and Lipschitz continuous (with constant L) for (t, λ) ∈ B δ (t * , λ * ) and assume that V d is a sufficiently good approximation of V.

Furthermore, since e Λ * and K d commute, we obtain

e Λ * φ 0 , G d (E * − H t * )G d φ 0 + 2 e Λ * K d φ 0 , (H t * − E * )G d φ 0 ,

where we in the last step defined the constants D 1 := D 1 (t * , λ * , φ 0 ) and D 2 := D 2 (t * , λ * ). Thus, by (41) we can choose d 1 and d 2 , under the assumption that max(d(V d , t * ), d(V d , λ * )) is sufficiently small, such that (37) holds.

To obtain (38), we see that (42) gives

where we used (33).

(ii) Next, using Theorem 1 (equation (2)), (42) gives

Furthermore, we use

and we obtain

Inserting (44) into (43), gives

Repeating the argument made in (i) for (37), we can find constants d̃ 1 , d̃ 2 such that (39) holds. To finish the proof, we use (36) in Lemma 23 that together with the proof of (i) give (40).

Conclusions.

In this article we have put the formalism of Arponen's ECC method on firm mathematical ground. This has been achieved by generalizing the continuous (infinite dimensional) formulation of standard CC theory in Refs. [16, 17] to the ECC formalism. The bivariational principle plays an important role in our analysis.
With the bivariational energy E(t, λ) (and its derivatives) as the main object of study, we have derived existence and uniqueness results for the extended CC equation F = 0 (the flipped gradient) and its discretizations F d = 0. The key aspect of the analysis is the establishment of locally strong monotonicity of F at the exact solution (t * , λ * ). This has been achieved by either assuming that the reference φ 0 is a sufficiently good approximation of the exact solution ψ * , or by considering certain splittings of the Hamiltonian H.

We have formulated and proved quadratic error estimates in terms of the quality of the truncated amplitude space V d . The energy error has been bounded in terms of d(V d , t * ) and d(V d , λ * ), or equivalently d(M d , (ψ * ,ψ * )), where (ψ * ,ψ * ) is the exact wavefunction pair and M d the truncation of H × H.

It is interesting to note, as ECC is variational by construction, i.e., the solution (t * , λ * ) is a critical point of the smooth map E, that the error estimate is obtained basically for free. Indeed, the CC Lagrangian L can be thought of as a linearized formulation of ECC where the second set of amplitudes {λ µ } are the Lagrange multipliers {z µ }. The dual problem of CC is, as it were, already built into the ECC theory. This again illustrates the benefit of applying the bivariational point of view.

Here, ECC has been formulated in a set of cluster amplitude coordinates that are not usually employed. A next step in the study of the ECC method would be to repeat the analysis of the monotonicity of F and to obtain error estimates using the so-called canonical cluster amplitudes, cf. Remark 10.

Even if ECC is currently not a practical tool in computational chemistry due to its complexity, our analysis demonstrates an important fact: The bivariational principle can be utilized to devise computational schemes that are not obtainable from the standard Rayleigh-Ritz principle, but still have a quadratic error estimate.
Such schemes include both the traditional CC method and the ECC method. Indeed, not being variational in the Rayleigh-Ritz sense has been the single most important critique of the coupled-cluster method, precisely due to the lack of a quadratic error estimate. Moreover, we believe that the approach taken in this article, by showing the monotonicity of the flipped gradient F , is an approach that may allow existence and uniqueness results in much more general settings.
Variational principles and linked-cluster exp S expansions for static and dynamic many-body problems. J Arponen, Annals of Physics. J. Arponen, Variational principles and linked-cluster exp S expansions for static and dynamic many-body problems, Annals of Physics, 151 (1983), pp. 311-382.
An overview of coupled cluster theory and its applications in physics. R Bishop, Theor. Chim. Acta. 80R. Bishop, An overview of coupled cluster theory and its applications in physics, Theor. Chim. Acta, 80 (1991), pp. 95-148.
On the Correlation Problem in Atomic and Molecular Systems. Calculation of Wavefunction Components in Ursell-Type Expansion Using Quantum-Field Theoretical Methods. J Čížek, J. Chem. Phys. 45J.Čížek, On the Correlation Problem in Atomic and Molecular Systems. Calculation of Wave- function Components in Ursell-Type Expansion Using Quantum-Field Theoretical Meth- ods, J. Chem. Phys., 45 (1966), pp. 4256-4266.
Origins of the coupled cluster technique for atoms and molecules. J Čížek, Theor. Chim. Acta. 80J.Čížek, Origins of the coupled cluster technique for atoms and molecules, Theor. Chim. Acta, 80 (1991), pp. 91-94.
Bound states of a many-particle system. F Coester, Nucl. Phys. 7F. Coester, Bound states of a many-particle system, Nucl. Phys., 7 (1958), pp. 421-424.
Short-range correlations in nuclear wave functions. F Coester, H Kümmel, Nucl. Phys. 17F. Coester and H. Kümmel, Short-range correlations in nuclear wave functions, Nucl. Phys., 17 (1960), pp. 477-485.
Coupled-cluster approach to nuclear physics. D J Dean, M Hjorth-Jensen, Phys. Rev. C. 6954320D. J. Dean and M. Hjorth-Jensen, Coupled-cluster approach to nuclear physics, Phys. Rev. C, 69 (2004), p. 054320.
A. Laestadius and S. Kvaal
. E Emmrich, Gewöhnliche Operator-Differentialgleichungen, Vieweg, Wiesbaden, GermanyE. Emmrich, Gewöhnliche und Operator-Differentialgleichungen, Vieweg, Wiesbaden, Ger- many, 2004.
Analytical Calculation of Geometrical Derivatives in Molecular Electronic Structure Theory. T Helgaker, P Jørgensen, Adv. Quant. Chem. 19T. Helgaker and P. Jørgensen, Analytical Calculation of Geometrical Derivatives in Molec- ular Electronic Structure Theory, Adv. Quant. Chem., 19 (1988), pp. 183-245.
Configuration-interaction energy derivatives in a fully variational formulation. T Helgaker, P Jørgensen, Theor. Chim. Acta. 75T. Helgaker and P. Jørgensen, Configuration-interaction energy derivatives in a fully vari- ational formulation, Theor. Chim. Acta, 75 (1989), pp. 111-127.
Origins of the coupled cluster method. H Kümmel, Theor. Chim. Acta. 80H. Kümmel, Origins of the coupled cluster method, Theor. Chim. Acta, 80 (1991), pp. 81-89.
Ab initio quantum dynamics using coupled-cluster. S , J. Chem. Phys. 136194109S. Kvaal, Ab initio quantum dynamics using coupled-cluster, J. Chem. Phys., 136 (2012), p. 194109.
Variational formulations of the coupled-cluster method in quantum chemistry. S , Mol. Phys. 111S. Kvaal, Variational formulations of the coupled-cluster method in quantum chemistry, Mol. Phys., 111 (2013), pp. 1100-1108.
The beginnings of coupled-cluster theory: an eyewitness account, in Theory and Applications of Computational Chemistry: The First Forty Years. J Paldus, C. Dykstra, G. Frenking, K. Kim, and G. ScuseriaElsevier115J. Paldus, The beginnings of coupled-cluster theory: an eyewitness account, in Theory and Applications of Computational Chemistry: The First Forty Years, C. Dykstra, G. Frenking, K. Kim, and G. Scuseria, eds., Elsevier, 2005, ch. 7, p. 115.
Correlation Problems in Atomic and Molecular Systems. IV. Extended Coupled-Pair Many-Electron Theory and Its Application to the BH3 Moleciule. J Paldus, J Čížek, I Shavitt, Phys. Rev. A. 5J. Paldus, J.Čížek, and I. Shavitt, Correlation Problems in Atomic and Molecular Sys- tems. IV. Extended Coupled-Pair Many-Electron Theory and Its Application to the BH3 Moleciule, Phys. Rev. A, 5 (1972), pp. 50-67.
The continuous coupled cluster formulation for the electronic Schrödinger equation. T Rohwedder, ESAIM: Math. Mod. Num. Anal. 47T. Rohwedder, The continuous coupled cluster formulation for the electronic Schrödinger equation, ESAIM: Math. Mod. Num. Anal., 47 (2013), pp. 421-447.
Error estimates for the coupled cluster method. T Rohwedder, R Schneider, ESAIM: Math. Mod. Num. Anal. 47T. Rohwedder and R. Schneider, Error estimates for the coupled cluster method, ESAIM: Math. Mod. Num. Anal., 47 (2013), pp. 1553-1582.
Analysis of the projected Coupled Cluster Method in Electronic Structure Calculation. R Schneider, Numer. Math. 113R. Schneider, Analysis of the projected Coupled Cluster Method in Electronic Structure Cal- culation, Numer. Math., 113 (2009), pp. 433-471.
Many-Electron Theory of Atoms and Molecules. I. Shells, Electron Pairs vs. Many-Electron Correlations. O Sinanoglu, J. Chem. Phys. 36O. Sinanoglu, Many-Electron Theory of Atoms and Molecules. I. Shells, Electron Pairs vs. Many-Electron Correlations, J. Chem. Phys., 36 (1962), pp. 706-717.
Regularity and approximability of electronic wavefunctions. H Yserentant, Lecture Notes In Mathematics. SpringerH. Yserentant, Regularity and approximability of electronic wavefunctions, Lecture Notes In Mathematics, Springer, New York, Heidelberg, Berlin, 2010.
Solving functional equations by contractive averaging. E Zarantonello, Army Math. Res. Centre. Tech. Report 160E. Zarantonello, Solving functional equations by contractive averaging, Tech. Report 160, U.S. Army Math. Res. Centre, Madison, WI., 1960.
Nonlinear Functional Analysis and its Application II/B. E Zeidler, SpringerNew York, Heidelberg, BerlinE. Zeidler, Nonlinear Functional Analysis and its Application II/B, Springer, New York, Heidelberg, Berlin, 1990.
|
[] |
[
"Schwarzian mechanics via nonlinear realizations",
"Schwarzian mechanics via nonlinear realizations"
] |
[
"Anton Galajinsky [email protected] \nTomsk Polytechnic University\nLenin Ave. 30634050TomskRussia\n"
] |
[
"Tomsk Polytechnic University\nLenin Ave. 30634050TomskRussia"
] |
[] |
The method of nonlinear realizations is used to clarify some conceptual and technical issues related to the Schwarzian mechanics. It is shown that the Schwarzian derivative arises naturally, if one applies the method to SL(2, R)×R group and decides to keep the number of the independent Goldstone fields to a minimum. The Schwarzian derivative is linked to the invariant Maurer-Cartan one-forms, which make its SL(2, R)invariance manifest. A Lagrangian formulation for a variant of the Schwarzian mechanics studied recently in [Nucl. Phys. B 936 (2018) 661] is built and its geometric description in terms of 4d metric of the ultrahyperbolic signature is given.
|
10.1016/j.physletb.2019.05.054
|
[
"https://arxiv.org/pdf/1905.01935v2.pdf"
] | 146,121,021 |
1905.01935
|
aab5955dc45fc5a59e114ce9aa562641bc4024f7
|
Schwarzian mechanics via nonlinear realizations
30 May 2019
Anton Galajinsky [email protected]
Tomsk Polytechnic University
Lenin Ave. 30634050TomskRussia
Keywords: the method of nonlinear realizations, Schwarzian mechanics
Introduction
When first encountering the Schwarzian derivative [1,2]

$$S(\rho(t)) = \frac{\dddot{\rho}(t)}{\dot{\rho}(t)} - \frac{3}{2}\left(\frac{\ddot{\rho}(t)}{\dot{\rho}(t)}\right)^2, \qquad (1)$$

where ρ(t) is a real function, one may be amazed by its invariance under the SL(2, R) transformations¹

$$\rho'(t) = \frac{a\rho(t) + b}{c\rho(t) + d}, \quad ad - cb = 1, \qquad \Rightarrow \qquad S(\rho'(t)) = S(\rho(t)). \qquad (2)$$
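This invariance is easy to confirm symbolically; the following SymPy sketch (ours, purely illustrative) checks that the Schwarzian of a generic fractional linear image of ρ(t) equals that of ρ(t):

```python
import sympy as sp

t, a, b, c, d = sp.symbols('t a b c d')
rho = sp.Function('rho')(t)

def schwarzian(f):
    # S(f) = f'''/f' - (3/2) (f''/f')^2
    return sp.diff(f, t, 3)/sp.diff(f, t) \
        - sp.Rational(3, 2)*(sp.diff(f, t, 2)/sp.diff(f, t))**2

# fractional linear image of rho(t)
rho_prime = (a*rho + b)/(c*rho + d)

# the difference of the two Schwarzians vanishes identically
assert sp.simplify(schwarzian(rho_prime) - schwarzian(rho)) == 0
```

Note that the check goes through for arbitrary a, b, c, d, in accord with the footnote: only the nondegeneracy of the matrix is actually needed.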
Because sl(2, R) is a finite-dimensional subalgebra of the Virasoro algebra, the Schwarzian derivative arises naturally within the context of string theory and related field theories (see, e.g., Chapter 4 in Ref. [3]). In recent years there has been a burst of activity in studying 1d quantum mechanics that arises as the low energy limit of a solvable theory displaying maximally chaotic behaviour, the so-called Sachdev-Ye-Kitaev model.² A peculiar feature of the system is that its Lagrangian density is proportional to the Schwarzian derivative of a specific function. As S(ρ(t)) in (1) is SL(2, R)-invariant, any function of it can be used to define the equation of motion of a higher derivative 1d mechanics enjoying SL(2, R) symmetry. In a recent work [5], a variant of the Schwarzian mechanics was studied which was governed by the third order equation of motion S(ρ(t)) = λ, where λ is a coupling constant. It was shown that in general the model undergoes stable evolution except for one fixed point solution which exhibits runaway behavior. Conserved charges associated with the SL(2, R) symmetry were constructed by integrating the equation of motion and linking the constants of integration to ρ(t) and its derivatives. Yet, the Hamiltonian formulation in [5] was unconventional and a Lagrangian formulation was missing.
The goal of this paper is to apply the method of nonlinear realizations [6] so as to clarify some conceptual and technical issues related to the Schwarzian mechanics.
We begin by demonstrating in Sect. 2 that the Schwarzian derivative arises naturally, if one applies the method to the SL(2, R) × R group and decides to keep the number of the independent Goldstone fields to a minimum. Furthermore, S(ρ(t)) is linked to the invariant Maurer-Cartan one-forms, which make its SL(2, R)-invariance manifest.
In Sect. 3, the Maurer-Cartan one-forms are used to build a Lagrangian formulation for a variant of the Schwarzian mechanics studied recently in [5]. The full set of conserved charges is found. Similarities and differences between the Schwarzian mechanics and the conformal mechanics by de Alfaro, Fubini and Furlan [7] are discussed.

¹ To be more precise, S(ρ(t)) holds invariant under the fractional linear transformation $\rho'(t) = \frac{a\rho(t)+b}{c\rho(t)+d}$ with $ad - cb \neq 0$. Because the matrices $\pm\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ result in the same transformation, the actual symmetry is GL(2, R)/Z₂.

² The literature on the subject is rather extensive. For an introduction and references to the original literature see, e.g., [4].
Sect. 4 is focused on a geometric description of the Schwarzian mechanics in terms of 4d metric of the ultrahyperbolic signature which obeys the Einstein equations.
Some final remarks are gathered in the concluding Sect. 5.
Schwarzian derivative via the method of nonlinear realizations
In what follows, we will need the infinitesimal form of the SL(2, R) transformation exposed in Eq. (2) above
$$\rho'(t) = \rho(t) + \alpha, \qquad \rho'(t) = \rho(t) + \beta\rho(t), \qquad \rho'(t) = \rho(t) + \gamma\rho^2(t). \qquad (3)$$
The corresponding generators
$$P = i\partial_\rho, \qquad D = i\rho\,\partial_\rho, \qquad K = i\rho^2\partial_\rho \qquad (4)$$
are associated with the translation, dilatation, and special conformal transformation acting upon the form of the field ρ(t). They obey the structure relations of SL(2, R) algebra
$$[P, D] = iP, \qquad [P, K] = 2iD, \qquad [D, K] = iK. \qquad (5)$$
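A quick SymPy check (ours, for illustration) of these commutators in the realization (4), applied to a test function f(ρ):

```python
import sympy as sp

r = sp.symbols('rho')
f = sp.Function('f')(r)

# generators (4) acting on functions of rho
P = lambda F: sp.I*sp.diff(F, r)
D = lambda F: sp.I*r*sp.diff(F, r)
K = lambda F: sp.I*r**2*sp.diff(F, r)

comm = lambda A, B, F: sp.expand(A(B(F)) - B(A(F)))

# [P, D] = iP, [P, K] = 2iD, [D, K] = iK
assert comm(P, D, f) == sp.expand(sp.I*P(f))
assert comm(P, K, f) == sp.expand(2*sp.I*D(f))
assert comm(D, K, f) == sp.expand(sp.I*K(f))
```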
One more symmetry operator, which commutes with (P, D, K), is related to the time translation
$$H = i\partial_t \quad \Rightarrow \quad t' = t + \sigma, \qquad \rho'(t') = \rho(t). \qquad (6)$$
Let us demonstrate that the Schwarzian derivative (1) comes about naturally, if one applies the method of nonlinear realizations [6] to the SL(2, R) × R group and keeps the number of the independent Goldstone fields to a minimum.
As the first step, one considers a space parametrized by the temporal variable t and equipped with the Goldstone fields ρ(t), s(t), u(t), whose generic element reads³

$$g = e^{itH}\, e^{i\rho(t)P}\, e^{is(t)K}\, e^{iu(t)D}. \qquad (7)$$
The left multiplication by a group element g̃, g′ = g̃ · g, where $\tilde g = e^{i\sigma H} e^{i\alpha P} e^{i\gamma K} e^{i\beta D}$ and (σ, α, γ, β) are infinitesimal parameters, defines the action of the group on the space. Taking into account the Baker-Campbell-Hausdorff formula
$$e^{iA}\, T\, e^{-iA} = T + \sum_{n=1}^{\infty} \frac{i^n}{n!}\, \underbrace{[A, [A, \dots [A}_{n\ \text{times}}, T] \dots ]], \qquad (8)$$
one gets
$$\rho'(t) = \rho(t) + \alpha, \qquad s'(t) = s(t), \qquad u'(t) = u(t),$$
$$\rho'(t) = \rho(t) + \beta\rho(t), \qquad s'(t) = s(t) - \beta s(t), \qquad u'(t) = u(t) + \beta,$$
$$\rho'(t) = \rho(t) + \gamma\rho^2(t), \qquad s'(t) = s(t) + \gamma(1 - 2\rho(t)s(t)), \qquad u'(t) = u(t) + 2\gamma\rho(t), \qquad (9)$$
along with
$$t' = t + \sigma, \qquad \rho'(t') = \rho(t), \qquad s'(t') = s(t), \qquad u'(t') = u(t). \qquad (10)$$
As a sample calculation used in the derivation of (9), we display below the chain of relations involving the infinitesimal parameter β:

$$e^{i\beta D}\, e^{i\rho(t)P} = e^{i\rho(t)P}\, e^{-i\rho(t)P}\, e^{i\beta D}\, e^{i\rho(t)P} = e^{i\rho(t)P}\, e^{-i\rho(t)P}\, (1 + i\beta D)\, e^{i\rho(t)P}$$
$$= e^{i\rho(t)P}\left(1 + i\beta\, e^{-i\rho(t)P}\, D\, e^{i\rho(t)P}\right) = e^{i\rho(t)P}\left(1 + i\beta[D + \rho(t)P]\right) = e^{i\rho(t)(1+\beta)P}\, e^{i\beta D}. \qquad (11)$$
Note that the Baker-Campbell-Hausdorff formula was used only in the penultimate step. Because β is infinitesimal, one can approximate $(1 + i\beta[D + \rho(t)P])$ by $e^{i\beta\rho(t)P}\, e^{i\beta D}$. As the second step, one computes the Maurer-Cartan one-forms
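The first-order matching of the two orderings can be cross-checked in the fundamental 2×2 representation of sl(2, R); the matrices p, d below are our illustrative choice obeying [p, d] = p, which reproduces [P, D] = iP upon setting P = ip, D = id (so that $e^{i\beta D} = e^{-\beta d}$):

```python
import sympy as sp

beta, r = sp.symbols('beta rho')

# 2x2 matrices with [p, d] = p; then P = i p, D = i d obey [P, D] = iP
p = sp.Matrix([[0, 1], [0, 0]])
d = sp.Matrix([[-sp.Rational(1, 2), 0], [0, sp.Rational(1, 2)]])

# e^{i beta D} e^{i rho P} = e^{-beta d} e^{-rho p}, and similarly for the RHS
lhs = (-beta*d).exp() * (-r*p).exp()
rhs = (-r*(1 + beta)*p).exp() * (-beta*d).exp()

# both orderings agree to first order in the infinitesimal parameter beta
diff = (lhs - rhs).applyfunc(
    lambda e: sp.expand(sp.series(e, beta, 0, 2).removeO()))
assert diff == sp.zeros(2, 2)
```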
$$g^{-1} dg = i\left(\omega_H H + \omega_P P + \omega_K K + \omega_D D\right), \qquad (12)$$
where
$$\omega_H = dt, \qquad \omega_P = \dot\rho\, e^{-u}\, dt, \qquad \omega_K = e^{u}\left(\dot s + s^2\dot\rho\right) dt, \qquad \omega_D = \left(\dot u - 2s\dot\rho\right) dt, \qquad (13)$$
which are invariant under the transformation g′ = g̃ · g represented by Eqs. (9), (10) above. These provide convenient building blocks for constructing invariant action functionals or equations of motion. If, for some reason, it is desirable to reduce the number of the independent Goldstone fields, one can use (13) to impose constraints. For instance, choosing the following restrictions:
$$\omega_P - \mu\,\omega_H = 0, \qquad \omega_D + 2\nu\,\omega_H = 0, \qquad (14)$$
where µ and ν are arbitrary constants, one can express u and s in terms of ρ
$$e^{-u} = \frac{\mu}{\dot\rho}, \qquad s = \frac{\nu}{\dot\rho} + \frac{\ddot\rho}{2\dot\rho^2}. \qquad (15)$$
Substituting these relations into the remaining form ω_K, multiplying by 2μ, and subtracting 2ν²ω_H, one gets

$$2\mu\,\omega_K - 2\nu^2\omega_H = S(\rho(t))\, dt, \qquad (16)$$

with S(ρ(t)) defined in (1). Thus, the Schwarzian derivative arises quite naturally, if one applies the method of nonlinear realizations to the SL(2, R) × R group and decides to keep the number of the independent Goldstone fields to a minimum. Note that within the group-theoretic framework the SL(2, R)-invariance of the derivative is obvious, as it is built in terms of the invariant Maurer-Cartan one-forms.
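The identity (16) can be confirmed by direct substitution of the constraints (15) into ω_K; a SymPy sketch (ours):

```python
import sympy as sp

t, mu, nu = sp.symbols('t mu nu')
rho = sp.Function('rho')(t)
rd = sp.diff(rho, t)

# constraints (15): e^{-u} = mu/rho_dot, s = nu/rho_dot + rho_ddot/(2 rho_dot^2)
s = nu/rd + sp.diff(rho, t, 2)/(2*rd**2)
omega_K = (rd/mu)*(sp.diff(s, t) + s**2*rd)   # dt-coefficient of omega_K, e^u = rho_dot/mu

S = sp.diff(rho, t, 3)/rd - sp.Rational(3, 2)*(sp.diff(rho, t, 2)/rd)**2

# eq. (16): 2 mu omega_K - 2 nu^2 omega_H = S(rho) dt
assert sp.simplify(2*mu*omega_K - 2*nu**2 - S) == 0
```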
Lagrangian formulation for the Schwarzian mechanics
In a recent work [5], a variant of the Schwarzian mechanics was studied, which was obtained by setting S(ρ(t)) equal to a coupling constant λ:

$$S(\rho(t)) = \lambda. \qquad (17)$$
The consideration in the preceding section allows us to immediately build a Lagrangian formulation which reproduces (17). Consider the action functional composed of the invariant Maurer-Cartan one-forms (13):

$$\int dt\; \omega_H^{-2}\left(\omega_P\,\omega_K + \nu\,\omega_H\,\omega_D\right) = \int dt\; \dot\rho\left(\dot s + s^2\dot\rho - 2\nu s\right), \qquad (18)$$

where ν is an arbitrary nonzero constant. Note that in (18) we dropped the total derivative term $\nu\dot u$. A variation of the action with respect to s yields the equation
$$s = \frac{\nu}{\dot\rho} + \frac{\ddot\rho}{2\dot\rho^2}, \qquad (19)$$
which links s to ρ. This is identical to the rightmost constraint in Eq. (15) above. Varying the action with respect to ρ, one finds
$$\frac{d}{dt}\left(\dot s + 2s^2\dot\rho - 2\nu s\right) = 0. \qquad (20)$$
Upon substitution of (19) into (20), one gets
$$\frac{d}{dt}\left(\dot s + 2s^2\dot\rho - 2\nu s\right) = \frac{1}{2\dot\rho}\,\frac{d}{dt}\, S(\rho(t)) = 0 \quad \Rightarrow \quad S(\rho(t)) = \lambda, \qquad (21)$$
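The first equality in (21) is an off-shell identity once (19) is used; a SymPy sketch (ours):

```python
import sympy as sp

t, nu = sp.symbols('t nu')
rho = sp.Function('rho')(t)
rd = sp.diff(rho, t)

s = nu/rd + sp.diff(rho, t, 2)/(2*rd**2)              # eq. (19)
lhs = sp.diff(sp.diff(s, t) + 2*s**2*rd - 2*nu*s, t)  # left hand side of (20)

S = sp.diff(rho, t, 3)/rd - sp.Rational(3, 2)*(sp.diff(rho, t, 2)/rd)**2
assert sp.simplify(lhs - sp.diff(S, t)/(2*rd)) == 0
```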
where λ is a constant of integration. Identifying the latter with a coupling constant, one reproduces the dynamical system (17). Because the action functional (18) is given in terms of the Maurer-Cartan one-forms, it does not change under the transformations (9) and (10). Constructing the Noether charges by conventional means⁴, one finds

$$H = \dot\rho\left(\dot s + s^2\dot\rho\right), \qquad P = \dot s + 2s^2\dot\rho - 2\nu s,$$
$$D = \rho P - s\dot\rho, \qquad K = \rho^2 P + (1 - 2s\rho)\dot\rho + 2\nu\rho. \qquad (22)$$
Taking into account the condition (19) and the equation of motion (17), one finds that H degenerates to a constant
$$H = \frac{\lambda}{2} + \nu^2, \qquad (23)$$
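Indeed, once (19) is imposed one has the off-shell identity H = S(ρ)/2 + ν², so that (17) yields (23); a SymPy sketch (ours):

```python
import sympy as sp

t, nu = sp.symbols('t nu')
rho = sp.Function('rho')(t)
rd = sp.diff(rho, t)

s = nu/rd + sp.diff(rho, t, 2)/(2*rd**2)   # eq. (19)
H = rd*(sp.diff(s, t) + s**2*rd)           # eq. (22)

S = sp.diff(rho, t, 3)/rd - sp.Rational(3, 2)*(sp.diff(rho, t, 2)/rd)**2
assert sp.simplify(H - S/2 - nu**2) == 0
```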
while P , D, and K yield the integrals of motion associated with Eq. (17)
$$P = \frac{1}{2\dot\rho}\left[\lambda + \frac{1}{2}\left(\frac{\ddot\rho}{\dot\rho}\right)^2\right], \qquad D = \rho P - \frac{\ddot\rho}{2\dot\rho}, \qquad K = \rho^2 P + \dot\rho - \frac{\rho\ddot\rho}{\dot\rho}. \qquad (24)$$
The expressions for P and D agree with those found in [5] by the explicit integration of (17), while K proves to be functionally dependent:

$$PK - D^2 - \frac{\lambda}{2} = 0. \qquad (25)$$
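As an illustration (our particular solution, not taken from [5]), ρ(t) = tan(√(λ/2) t) solves (17) for λ > 0, and the conservation of the charges (24) together with the identity (25) can be verified on it with SymPy:

```python
import sympy as sp

t = sp.symbols('t', real=True)
lam = sp.symbols('lam', positive=True)

rho = sp.tan(sp.sqrt(lam/2)*t)      # solves S(rho) = lam
rd, rdd = sp.diff(rho, t), sp.diff(rho, t, 2)

P = (lam + sp.Rational(1, 2)*(rdd/rd)**2)/(2*rd)
D = rho*P - rdd/(2*rd)
K = rho**2*P + rd - rho*rdd/rd

# P, D, K are conserved and obey P K - D^2 = lam/2
assert all(sp.simplify(sp.diff(Q, t)) == 0 for Q in (P, D, K))
assert sp.simplify(P*K - D**2 - lam/2) == 0
```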
Concluding this section, it is worth emphasising that, within the context of the Schwarzian mechanics, the SL(2, R) group acts in the space of the Goldstone fields (9) by affecting their form only. This is to be contrasted with the 1d conformal mechanics by de Alfaro, Fubini and Furlan [7], in which SL(2, R) is realized in the 1d space parametrized by the temporal variable t (cf. (9)):

$$t' = t + \alpha + \beta t + \gamma t^2. \qquad (26)$$
This is accompanied by the transformation law of the Goldstone field
$$\rho'(t') = \rho(t) + \frac{1}{2}(\beta + 2\gamma t)\rho(t), \qquad (27)$$
and gives rise to the second order invariant equation

$$\ddot\rho(t) = \frac{\lambda^2}{\rho(t)^3}, \qquad (28)$$
where λ is a coupling constant. The derivation of the conformal mechanics [7] by the method of nonlinear realizations was reported in [8].
For more details concerning the Eisenhart lift of a generic 2d mechanics see a recent work [13].
Conclusion
To summarize, in this work we applied the method of nonlinear realizations to the SL(2, R) × R group and demonstrated that the Schwarzian derivative arises naturally, if one decides to keep the number of the independent Goldstone fields to a minimum. A Lagrangian formulation for a variant of the Schwarzian mechanics studied in [5] was constructed in terms of the invariant Maurer-Cartan one-forms and the full set of the integrals of motion was exposed.
A geometric formulation has been constructed in terms of a 4d metric of the ultrahyperbolic signature which obeys the Einstein equations. Turning to possible further developments, it would be interesting to generalise the analysis in this work to the case of the super Schwarzian derivative (see, e.g., [4] and references therein) and the supersymmetric Schwarzian mechanics. A possible link of the Hamiltonian formulation (29) to the Ostrogradsky method is worth studying. Last but not least, it would be interesting to analyse whether the method in Sect. 2 may result in other interesting higher order derivatives enjoying a given symmetry group.
³ It should be borne in mind that both the form of the SL(2, R) transformations and the invariant Maurer-Cartan one-forms essentially depend on the order of the factors entering the right hand side of (7). The choice (7) proves to be optimal.
⁴ Lagrangian symmetries, which we consider in this work, are of the form $t' = t + \delta t(t)$, $x'_i(t') = x_i(t) + \delta x_i(t, x(t))$. If the action $S = \int dt\, L(x, \dot x)$ holds invariant up to a total derivative, $\delta S = \int dt\, \frac{dF}{dt}$, the conserved quantity is derived from $\delta x_i \frac{\partial L}{\partial \dot x_i} - \delta t\left(\dot x_i \frac{\partial L}{\partial \dot x_i} - L\right) - F$ by discarding an infinitesimal parameter of the transformation. For the case at hand, only the special conformal transformation associated with the last line in (9) yields a total derivative term, F = −2νγρ.
⁵ Interestingly enough, although the Hamiltonian (29) does describe a higher derivative theory, it is apparently not of the Ostrogradsky type.
Acknowledgements

This work was supported by the Russian Science Foundation, grant No 19-11-00005.

Geometric description of the Schwarzian mechanics

Within the general relativistic framework, the conventional method of describing a classical mechanical system with d degrees of freedom is to embed its equations of motion into the null geodesic equations associated with a Brinkmann-type metric defined on a (d + 2)-dimensional spacetime of Lorentzian signature [9]. Let us discuss a geometric formulation for the Schwarzian mechanics (17).

As the first step, one constructs the Hamiltonian⁵ H_2d corresponding to the action functional (18), where (p_ρ, p_s) designate the momenta canonically conjugate to the configuration space variables (ρ, s). As the second step, one introduces two more canonical pairs (t, p_t), (v, p_v) and promotes H_2d to a function which is homogeneously polynomial of degree two in the momenta (see Sect. 2 in [10] and related earlier work [11]). Identifying the latter with the 4d geodesic Hamiltonian $H_{4d} = \frac{1}{2} g^{MN} p_M p_N$, in which $p_M = (p_t, p_v, p_\rho, p_s)$, one finally gets the Eisenhart metric (31), where we denoted $Z^M = (t, v, \rho, s)$. By construction, the null reduction of the geodesic equations associated with (31) along v reproduces (17).

A few comments are in order. Firstly, the metric (31) is given in the global coordinate system. Secondly, it is of the ultrahyperbolic signature (+, +, −, −), which is in agreement with the geometric analysis of higher derivative models in [12]. Thirdly, the metric admits five Killing vector fields, of which ∂_v is covariantly constant. Two of these can be used to construct the energy-momentum tensor, which allows one to regard (31) as a solution to the Einstein equations.
[1] H. Schwarz, Gesammelte mathematische Abhandlungen, Springer, Berlin, 1890.
[2] V. Ovsienko, S. Tabachnikov, What is the Schwarzian derivative?, Notices of the AMS 56 (2009) 34.
[3] D. Lüst, S. Theisen, Lectures on string theory, Lect. Notes Phys. 346, 1989.
[4] T.G. Mertens, G.J. Turiaci, H.L. Verlinde, Solving the Schwarzian via the conformal bootstrap, JHEP 1708 (2017) 136, arXiv:1705.08408.
[5] A. Galajinsky, A variant of Schwarzian mechanics, Nucl. Phys. B 936 (2018) 661, arXiv:1809.00904.
[6] S.R. Coleman, J. Wess, B. Zumino, Structure of phenomenological Lagrangians. I, Phys. Rev. 177 (1969) 2239.
[7] V. de Alfaro, S. Fubini, G. Furlan, Conformal invariance in quantum mechanics, Nuovo Cim. A 34 (1976) 569.
[8] E. Ivanov, S. Krivonos, V. Leviant, Geometry of conformal mechanics, J. Phys. A 22 (1989) 345.
[9] L. Eisenhart, Dynamical trajectories and geodesics, Ann. Math. 30 (1929) 591.
[10] G.W. Gibbons, T. Houri, D. Kubiznak, C. Warnick, Some spacetimes with higher rank Killing-Stackel tensors, Phys. Lett. B 700 (2011) 68, arXiv:1103.5366.
[11] C. Duval, G.W. Gibbons, P.A. Horvathy, Celestial mechanics, conformal structures and gravitational waves, Phys. Rev. D 43 (1991) 3907, arXiv:hep-th/0512188.
[12] A. Galajinsky, I. Masterov, Eisenhart lift for higher derivative systems, Phys. Lett. B 765 (2017) 86, arXiv:1611.04294.
[13] A.P. Fordy, A. Galajinsky, Eisenhart lift of 2-dimensional mechanics, Eur. Phys. J. C 79 (2019) 301, arXiv:1901.03699.
|
[] |
[
"Prepared for submission to JHEP Rotating black holes in an expanding universe from fake supergravity",
"Prepared for submission to JHEP Rotating black holes in an expanding universe from fake supergravity",
"Prepared for submission to JHEP Rotating black holes in an expanding universe from fake supergravity",
"Prepared for submission to JHEP Rotating black holes in an expanding universe from fake supergravity"
] |
[
"Samuele Chimento [email protected] \nDipartimento di Fisica\nUniversità di Milano\nINFN\nSezione di Milano\nVia Celoria 1620133MilanoItaly\n",
"Dietmar Klemm [email protected] \nDipartimento di Fisica\nUniversità di Milano\nINFN\nSezione di Milano\nVia Celoria 1620133MilanoItaly\n",
"Samuele Chimento [email protected] \nDipartimento di Fisica\nUniversità di Milano\nINFN\nSezione di Milano\nVia Celoria 1620133MilanoItaly\n",
"Dietmar Klemm [email protected] \nDipartimento di Fisica\nUniversità di Milano\nINFN\nSezione di Milano\nVia Celoria 1620133MilanoItaly\n"
] |
[
"Dipartimento di Fisica\nUniversità di Milano\nINFN\nSezione di Milano\nVia Celoria 1620133MilanoItaly",
"Dipartimento di Fisica\nUniversità di Milano\nINFN\nSezione di Milano\nVia Celoria 1620133MilanoItaly",
"Dipartimento di Fisica\nUniversità di Milano\nINFN\nSezione di Milano\nVia Celoria 1620133MilanoItaly",
"Dipartimento di Fisica\nUniversità di Milano\nINFN\nSezione di Milano\nVia Celoria 1620133MilanoItaly"
] |
[] |
Using the recipe of arXiv:0902.4814, where all fake supersymmetric backgrounds of matter-coupled fake N = 2, d = 4 gauged supergravity were classified, we construct dynamical rotating black holes in an expanding FLRW universe. This is done for two different prepotentials that are both truncations of the stu model and correspond to just one vector multiplet. In this scenario, the cosmic expansion is driven by two U(1) gauge fields and by a complex scalar that rolls down its potential. Generically, the solutions of arXiv:0902.4814 are fibrations over a Gauduchon-Tod base space, and we make three different choices for this base, namely flat space, the three-sphere and the Berger sphere. In the first two cases, the black holes are determined by harmonic functions on the base, while in the last case they obey a deformed Laplace equation that contains the squashing parameter of the Berger sphere. This is the generalization to a cosmological context of the usual recipe in ungauged supergravity, where black holes are given in terms of harmonic functions on three-dimensional Euclidean space. The constructed solutions may be instrumental in addressing analytically questions like black hole collisions and violation of cosmic censorship.
|
10.1088/0264-9381/32/4/045006
|
[
"https://arxiv.org/pdf/1405.5343v1.pdf"
] | 118,528,589 |
1212.5494
|
83204f26c7efa3f84a61e3ab9925432977b094b1
|
Prepared for submission to JHEP Rotating black holes in an expanding universe from fake supergravity
21 May 2014
Samuele Chimento [email protected]
Dipartimento di Fisica
Università di Milano
INFN
Sezione di Milano
Via Celoria 1620133MilanoItaly
Dietmar Klemm [email protected]
Dipartimento di Fisica
Università di Milano
INFN
Sezione di Milano
Via Celoria 1620133MilanoItaly
Keywords: Black Holes, Supergravity Models, Black Holes in String Theory
Introduction
Black holes are the natural test ground for quantum gravity. Much of the current knowledge on quantum effects in strong gravitational fields indeed comes from the study of stationary black holes. However many interesting open questions, such as the validity of the cosmic censorship conjecture or what happens when black holes collide, are dynamical in nature and thus require the study of time-dependent black hole solutions.
One well-known such solution is the McVittie spacetime [1], whose interpretation as a black hole, or a mass particle, in an FLRW universe has been the subject of some controversy in the literature [2][3][4]. Another example, which however violates the energy conditions, was constructed by Sultana and Dyer [5] using conformal methods. Kastor and Traschen (KT) [6] obtained a solution representing an arbitrary number of electrically charged black holes, with charge equal to the mass, in a de Sitter universe. This solution allows an analytical discussion of black hole collisions and of the issue whether such processes lead to a violation of cosmic censorship [6,7]. The KT solution is a time-dependent generalization of the Majumdar-Papapetrou spacetime [8,9], which describes maximally charged Reissner-Nordström black holes in static equilibrium in an asymptotically flat space. The MP solution is supersymmetric, and the existence of a Killing spinor, satisfying a first order differential equation, explains why one can take arbitrary superpositions of black holes despite the high non-linearity of Einstein's equations. Supersymmetry however is only compatible with a negative or vanishing cosmological constant, thus no true Killing spinor can exist in a theory with positive cosmological constant. It was shown in [10] that the KT solution admits instead a fake Killing spinor, i.e., a solution of first order equations which are related to the Killing spinor equations of supergravity but do not come from an underlying supersymmetry.
Maeda, Ohta and Uzawa (MOU) obtained four-and five-dimensional black holes in an FLRW universe filled with stiff matter from the compactification of higher dimensional intersecting brane solutions [11]. In [12] Gibbons and Maeda presented a class of spacetimes interpolating between the KT and the four-dimensional MOU black holes as solutions to a theory with a Liouville-type scalar potential, later generalized to arbitrary dimension and further analyzed in [13]. In [14] the four-dimensional case was generalized to a scalar potential given by a sum of exponentials and the black holes were shown to admit a fake Killing spinor, explaining the superposition principle observed in the solution.
Only a few time-dependent rotating black hole solutions are known. A spinning generalization of the KT solution in a string-inspired theory was given by Shiromizu in [15]. Five-dimensional multi-centered rotating charged de Sitter black holes were constructed in [16,17]. A rotating generalization of the five-dimensional MOU solution was obtained in [18] by solving fake Killing spinor equations.
In this paper we will use the classification of all the fake supersymmetric solutions of Wick-rotated 1 N = 2, d = 4 gauged supergravity coupled to (non)abelian vector multiplets given in [19] 2 to build explicit time-dependent black hole solutions. We will restrict ourselves to the case of a single abelian vector multiplet, corresponding to a theory with two U (1) gauge fields and a single complex scalar field. Unlike what we did in [14], we will not require the scalar to be real (or equivalently imaginary). This will allow us to obtain solutions with rotation and NUT-charge that are generalizations of a subclass of those in [14]. For one choice of the prepotential defining the theory, these can be written in terms of two complex harmonic functions in a form similar to the IWP class of metrics [21,22], of which they are generalizations. We will also present solutions whose spatial slices have non-flat geometry. If the three-dimensional base space is spherical the solutions are given in terms of functions that are harmonics on the three-sphere.
The paper is organized as follows. In section 2 we briefly review fake N = 2, d = 4 gauged supergravity coupled to abelian vector multiplets and present the recipe of [19] to construct fake supersymmetric solutions. In section 3 we consider three different geometries for the three-dimensional base space and obtain some results that are independent of the specific theory (i.e., of the prepotential) under consideration. We also show, for flat or spherical geometry, how to obtain multi-centered solutions. In sections 4 and 5 we obtain explicit solutions for two different choices of the prepotential. In section 6 we conclude with some final remarks.
2 Fake N = 2, d = 4 gauged supergravity
Special geometry
In N = 2, d = 4 supergravity coupled to n V vector multiplets, the complex scalars of the multiplets parametrize an n V -dimensional Kähler-Hodge manifold, which is the base of a symplectic bundle with the covariantly holomorphic sections 3
$$V = \begin{pmatrix} L^\Lambda \\ M_\Lambda \end{pmatrix}, \qquad D_{\bar\imath} V \equiv \partial_{\bar\imath} V - \frac{1}{2}\left(\partial_{\bar\imath} K\right) V = 0, \qquad (2.1)$$

obeying the constraint

$$\langle V, \bar V\rangle \equiv \bar L^\Lambda M_\Lambda - L^\Lambda \bar M_\Lambda = -i, \qquad (2.2)$$
where K is the Kähler potential. We also introduce the explicitly holomorphic section
$$\Omega \equiv e^{-K/2}\, V \equiv \begin{pmatrix} \chi^\Lambda \\ F_\Lambda \end{pmatrix}. \qquad (2.3)$$
If the theory is defined by a prepotential F(χ), then F Λ = ∂ Λ F. In terms of the section Ω the constraint (2.2) becomes
$$\langle \Omega, \bar\Omega\rangle \equiv \bar\chi^\Lambda F_\Lambda - \chi^\Lambda \bar F_\Lambda = -i\, e^{-K}. \qquad (2.4)$$
The couplings of the vectors to the scalars are determined by the matrix N , defined by the relations
$$M_\Lambda = N_{\Lambda\Sigma}\, L^\Sigma, \qquad D_{\bar\imath} \bar M_\Lambda = N_{\Lambda\Sigma}\, D_{\bar\imath} \bar L^\Sigma. \qquad (2.5)$$
In a theory with a prepotential, N is given by
$$N_{\Lambda\Sigma} = \bar F_{\Lambda\Sigma} + 2i\, \frac{\mathrm{Im}(F)_{\Lambda\Lambda'}\chi^{\Lambda'}\, \mathrm{Im}(F)_{\Sigma\Sigma'}\chi^{\Sigma'}}{\chi^{\Omega}\, \mathrm{Im}(F)_{\Omega\Omega'}\chi^{\Omega'}}, \qquad (2.6)$$

where $F_{\Lambda\Sigma} = \partial_\Lambda \partial_\Sigma F$.
The bosonic Lagrangian in the case of abelian vector multiplets, and with Fayet-Iliopoulos (FI) gauging of a U(1) R-symmetry subgroup, takes the form
$$e^{-1}\mathcal L_{\rm bos} = R + 2\, G_{i\bar\jmath}\, \partial_a Z^i\, \partial^a \bar Z^{\bar\jmath} - V + 2\,\mathrm{Im}(N)_{\Lambda\Sigma}\, F^\Lambda_{ab} F^{\Sigma\, ab} - 2\,\mathrm{Re}(N)_{\Lambda\Sigma}\, F^\Lambda_{ab} \star F^{\Sigma\, ab}, \qquad (2.7)$$
with the scalar potential
$$V = -\frac{g^2}{2}\left(4\left|C_\Lambda L^\Lambda\right|^2 + \frac{1}{2}\,\mathrm{Im}(N)^{-1|\Lambda\Sigma}\, C_\Lambda C_\Sigma\right). \qquad (2.8)$$
Here g denotes the gauge coupling constant, and the FI parameters C Λ determine the linear combination C Λ A Λ that is used to gauge the U(1). Since the matrix Im(N ) ΛΣ appears in the kinetic term of the vector fields, it must be negative definite and thus invertible. It can therefore be used as a 'metric' to raise and lower Λ, Σ, . . . indices.
Fake Killing spinors
If we perform a Wick rotation on the gauge coupling constant, g → ig, we obtain a new, non-supersymmetric theory with V → −V and a gauged R-symmetry 4 . The Killing spinor equations, coming from the vanishing of the fermionic supersymmetry variations, become
$$D_a \epsilon_I = \left(-2i L^\Lambda F^{\Lambda+}_{ab}\gamma^b - \frac{ig}{4}\, C_\Lambda L^\Lambda \gamma_a\right)\varepsilon_{IJ}\,\epsilon^J, \qquad i\not\partial Z^i\, \epsilon_I = \left(f^i{}_\Lambda \not F^{\Lambda+} - \frac{g}{2}\, C_\Lambda \bar f^{i\Lambda}\right)\varepsilon_{IJ}\,\epsilon^J, \qquad (2.9)$$
where
$$D_a \epsilon_I \equiv \left(\nabla_a + \frac{i}{2}\, Q_a - \frac{g}{2}\, C_\Lambda A^\Lambda_a\right)\epsilon_I, \qquad Q_a = (2i)^{-1}\left(\partial_a Z^i\, \partial_i K - \partial_a \bar Z^{\bar\imath}\, \partial_{\bar\imath} K\right)$$

is the gauge field of the Kähler U(1), and $f^\Lambda_i \equiv D_i L^\Lambda = \left(\partial_i + \frac{1}{2}\partial_i K\right) L^\Lambda$.
Since these equations do not come from supersymmetry, they are called fake Killing spinor equations, and solutions for which they are satisfied are known as fake supersymmetric.
From the fake Killing spinors one can construct the bilinears

$$X = \frac{1}{2}\,\varepsilon^{IJ}\bar\epsilon_I \epsilon_J, \qquad V_a = i\,\bar\epsilon^I \gamma_a \epsilon_I, \qquad V^x_a = i\,(\sigma^x)_I{}^J\, \bar\epsilon^I \gamma_a \epsilon_J, \qquad (2.10)$$

and the real symplectic sections of Kähler weight zero

$$R \equiv \mathrm{Re}(V/X), \qquad I \equiv \mathrm{Im}(V/X). \qquad (2.11)$$
In [19], Meessen and Palomo-Lozano presented a general method to obtain fake supersymmetric solutions to fake N = 2, d = 4 gauged supergravity coupled to nonabelian vector multiplets. We will restrict ourselves here to the case of just abelian multiplets and FI gauging. We will also consider only the timelike case of [19], which means that we take the norm of V defined in (2.10) to be positive. With these restrictions, the fake supersymmetric solutions always assume the form [19]
$$ds^2 = 2|X|^2 (d\tau + \omega)^2 - \frac{1}{2|X|^2}\, h_{mn}\, dy^m dy^n, \qquad (2.12)$$
$$A^\Lambda = -\frac{1}{2}\, R^\Lambda V + \tilde A^\Lambda_m\, dy^m, \qquad (2.13)$$
$$Z^\Lambda = \frac{L^\Lambda}{L^0} = \frac{R^\Lambda + iI^\Lambda}{R^0 + iI^0}, \qquad (2.14)$$
where V = 2 √ 2 |X| 2 (dτ + ω), ω = ω m dy m is a 1-form which can in general depend on τ , and h is the metric on a three-dimensional Gauduchon-Tod [24] base space. In particular there must exist a dreibein W x for h satisfying
$$dW^x = g\, C_\Lambda \tilde A^\Lambda \wedge W^x + \frac{g}{2\sqrt 2}\, C_\Lambda I^\Lambda\, \varepsilon^{xyz}\, W^y \wedge W^z. \qquad (2.15)$$
Furthermore the following equations must hold:
$$\omega = g\, C_\Lambda \tilde A^\Lambda\, \tau + \tilde\omega, \qquad (2.16)$$
$$\tilde F^\Lambda_{xy} = -\frac{1}{\sqrt 2}\, \varepsilon_{xyz}\, \tilde D_z I^\Lambda, \qquad (2.17)$$
$$\partial_\tau I^\Lambda = 0, \qquad \partial_\tau I_\Lambda = -\frac{g}{2\sqrt 2}\, C_\Lambda, \qquad (2.18)$$
$$\tilde D^2_x \tilde I_\Lambda - \tilde D_x \tilde\omega_x\, \partial_\tau I_\Lambda = 0, \qquad (2.19)$$
$$\tilde D\tilde\omega = \varepsilon^{xyz}\left(\langle\, \tilde I \mid \partial_x \tilde I\,\rangle - \tilde\omega_x \langle\, \tilde I \mid \partial_\tau I\,\rangle\right) W^y \wedge W^z, \qquad (2.20)$$

with

$$\tilde F^\Lambda \equiv d\tilde A^\Lambda, \qquad \tilde\omega \equiv \omega|_{\tau=0}, \qquad \tilde I \equiv I|_{\tau=0}, \qquad (2.21)$$
$$\tilde D_m I \equiv \partial_m I + g\, C_\Lambda \tilde A^\Lambda_m\, I, \qquad \tilde D_x I \equiv W^m{}_x\, \tilde D_m I. \qquad (2.22)$$
To obtain a specific solution we will then have to take the following steps:
1. Choose the number of vector multiplets, the real constants C_Λ and the prepotential F. This completely determines the bosonic action and permits one to derive the dependence of the R's on the I's, the so-called stabilization equations.

2. Choose a three-dimensional Gauduchon-Tod base space, i.e., a dreibein W^x satisfying (2.15).

5. Solve the stabilization equations to find the R's and finally write down the metric and the other fields of the solution using (2.16) and $1/|X|^2 = 2\langle R \mid I\rangle$.

In the next sections, we will use this procedure to find some solutions to theories with one vector multiplet, so that there will be only one physical scalar Z¹ ≡ Z.
3 Choice of base space
Flat space
The simplest solution of eq. (2.15) is three-dimensional flat space, with
$$W^x{}_m = \delta^x{}_m, \qquad C_\Lambda \tilde A^\Lambda = C_\Lambda I^\Lambda = 0. \qquad (3.1)$$
With this choice for the base space we don't need to distinguish between x, y, z, . . . and lower m, n, p, . . . indices. If C_0 = C_1 = 0, C_Λ I^Λ = 0 is automatically satisfied and the section I is time-independent. Using equation (2.17) and the Bianchi identity dF^Λ = 0 it can be seen that the I^Λ must be harmonic,
I 0 ≡ √ 2H 0 , I 1 ≡ √ 2H 1 . (3.2)
Moreover, (2.19) implies that the I Λ are harmonic as well,
I 0 ≡ H 0 2 √ 2 , I 1 ≡ H 1 2 √ 2 . (3.3) Equ. (2.20) becomes dω = 3 H 0 dH 0 + H 1 dH 1 − H 0 dH 0 − H 1 dH 1 . (3.4)
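Equation (3.4) is consistent precisely because the H's are harmonic: applying d to both sides, the right-hand side is closed since div(H dH' − H' dH) = H ΔH' − H' ΔH. A small numerical sketch of this harmonicity input (finite differences on flat R³; all function names are ours, not the paper's):

```python
import math

def lap(f, p, h=1e-3):
    """Second-order finite-difference Laplacian on R^3."""
    return sum(
        (f([c + h*(i == j) for j, c in enumerate(p)]) - 2.0*f(p)
         + f([c - h*(i == j) for j, c in enumerate(p)])) / h**2
        for i in range(3))

def coulomb(center):
    """1/|x - center|: harmonic away from its pole."""
    return lambda p: 1.0 / math.dist(p, center)

H0, H1 = coulomb([1.0, 0.0, 0.0]), coulomb([0.0, 1.0, 0.0])
p = [0.3, -0.2, 0.5]
assert abs(lap(H0, p)) < 1e-3 and abs(lap(H1, p)) < 1e-3
# hence H0*lap(H1) - H1*lap(H0) vanishes, and (3.4) is integrable
```

Any superposition of such poles works equally well, which is what makes the multi-centered solutions below possible.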
If at least one of the C_Λ is nonzero, e.g. C_1 ≠ 0, the condition C_Λ I^Λ = 0 implies I^1 = −(C_0/C_1) I^0. Then, (2.17) and the Bianchi identity dF̃^0 = 0 yield

I^0 = √2 H_im ,  I^1 = −√2 (C_0/C_1) H_im ,  (3.5)

where H_im is a time-independent harmonic function.⁵
(2.19) together with (2.18) implies that the time-independent combination I_0 − (C_0/C_1) I_1 is harmonic. It proves convenient to express this by defining

I_0 ≡ (C_0/C_1) ( I_1 − H_1/(2√2) ) + H_0/(2√2) ,  (3.6)

with H_0, H_1 harmonic functions independent of τ. Since there are no further constraints on Ĩ_1, the I_Λ can be written as

I_1 = (1/(2√2)) ( τ/t_1 + f ) ,  I_0 = (1/(2√2)) ( τ/t_0 + H_0 + (t_1/t_0)(f − H_1) ) ,  (3.7)

where t_Λ ≡ −(g C_Λ)^{−1} and f is a generic function of the spatial coordinates.
Equation (2.20) becomes

dω̂ = ⋆₃ [ ( H_0 − (t_1/t_0) H_1 ) dH_im − H_im d( H_0 − (t_1/t_0) H_1 ) ] ,  (3.8)

and from (2.19) one gets

∂_p ω̂_p = t_1 ∂_p ∂_p f .  (3.9)
It is always possible to set f to zero with a shift in the time coordinate, τ = t − t_1 f + t_1 H_1, and replacing ω̂ by ω̄ = ω̂ − t_1 df + t_1 dH_1, such that

I_1 = (1/(2√2)) ( t/t_1 + H_1 ) ,  I_0 = (1/(2√2)) ( t/t_0 + H_0 ) ,

dω̄ = ⋆₃ [ ( H_0 − (t_1/t_0) H_1 ) dH_im − H_im d( H_0 − (t_1/t_0) H_1 ) ] ,  ∂_p ω̄_p = 0 ,  (3.10)

dτ + ω̂ = dt + ω̄ .
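The effect of the time shift is a one-line check (our verification, not in the original); substituting τ = t − t₁f + t₁H₁ into (3.7),

```latex
I_1 = \frac{1}{2\sqrt{2}}\left(\frac{\tau}{t_1}+f\right)
    = \frac{1}{2\sqrt{2}}\left(\frac{t - t_1 f + t_1 H_1}{t_1}+f\right)
    = \frac{1}{2\sqrt{2}}\left(\frac{t}{t_1}+H_1\right).
```

The f-terms in I₀ cancel in the same way: τ/t₀ contributes −(t₁/t₀)f + (t₁/t₀)H₁, which exactly compensates the (t₁/t₀)(f − H₁) term in (3.7).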
An explicit choice for the harmonic functions, best expressed in Boyer-Lindquist coordinates (r, θ, φ) with x + iy = √(r² + a²) sinθ e^{iφ} and z = r cosθ, is

H = k + q Re(V) + Q Im(V) ,  (3.11)

with

V = 1/(r − i a cosθ) .  (3.12)
If all the harmonics have this form, (3.10) is solved by

ω̄ = (1/Σ) [ −(1/2) a sin²θ ( 2⟨kQ⟩ r + ⟨qQ⟩ ) + ⟨kq⟩ (r² + a²) cosθ ] dφ ,  (3.13)

where

Σ = r² + a² cos²θ ,  ⟨xy⟩ ≡ x̃ y_im − x_im ỹ ,  x̃ ≡ x_0 − (t_1/t_0) x_1 .  (3.14)
This choice is also suitable to be generalized to the multi-centered case. To this end, define

V(x⃗, a) = 1/√( x² + y² + (z − ia)² ) ,  (3.15)
and consider harmonic functions of the form

H = k + ∑_I [ q_I Re(V_I) + Q_I Im(V_I) ] ,  (3.16)

with V_I ≡ V(x⃗ − x⃗_I, a_I), where x⃗_I is an arbitrary point in R³ and the parameter a_I in general depends on I. As long as the charges are taken to satisfy q_im,I = α q̃_I, Q_im,I = α Q̃_I for every I, with α independent of I, (3.10) reduces to
dω̄ = ( α k̃ − k_im ) ⋆₃ dH̃ ,  (3.17)

where H̃ = H_0 − (t_1/t_0) H_1. ω̄ is thus given by a sum over I of terms of the form (3.13), with ⟨qQ⟩ = 0. More explicitly, (3.13) with these charge constraints can be written in Cartesian coordinates and generalized to

ω̄ = −2 ( α k̃ − k_im ) ∑_I [ Q̃_I Re(V_I)/( |x⃗ − x⃗_I|² + a_I² + 1/|V_I|² ) − q̃_I Im(V_I)/( |x⃗ − x⃗_I|² + a_I² − 1/|V_I|² ) ] · a_I [ (x − x_I) dy − (y − y_I) dx ] .  (3.18)
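Two properties of the potential (3.15) can be checked quickly: it reduces to the Boyer-Lindquist form (3.12), since x² + y² + (z − ia)² = (r − ia cosθ)², and its real and imaginary parts are harmonic on R³ away from the ring singularity. A numerical sketch (finite differences; names ours):

```python
import cmath, math

def V(x, y, z, a):
    """Complex potential (3.15): 1/sqrt(x^2 + y^2 + (z - i*a)^2)."""
    return 1.0 / cmath.sqrt(x*x + y*y + (z - 1j*a)**2)

# agreement with the Boyer-Lindquist form (3.12):
a, r, th, phi = 0.7, 1.3, 0.9, 0.4
rho = math.sqrt(r*r + a*a) * math.sin(th)
x, y, z = rho*math.cos(phi), rho*math.sin(phi), r*math.cos(th)
assert abs(V(x, y, z, a) - 1.0/(r - 1j*a*math.cos(th))) < 1e-12

# Re V and Im V are harmonic away from the ring singularity:
def lap(f, x, y, z, h=1e-3):
    return (f(x+h, y, z) + f(x-h, y, z) + f(x, y+h, z) + f(x, y-h, z)
            + f(x, y, z+h) + f(x, y, z-h) - 6.0*f(x, y, z)) / h**2

re = lambda x, y, z: V(x, y, z, a).real
im = lambda x, y, z: V(x, y, z, a).imag
assert abs(lap(re, x, y, z)) < 1e-3 and abs(lap(im, x, y, z)) < 1e-3
```

The principal branch of the complex square root reproduces r − ia cosθ for r > 0, which is the region of interest here.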
Three-sphere
Since Gauduchon-Tod spaces are actually conformal classes, it would be possible to take any conformally flat three-dimensional manifold as a base space simply by applying a conformal transformation to the quantities in section 3.1 with appropriate conformal weights, leading to a nonzero C_Λ Ã^Λ. This would however result in the same four-dimensional solutions expressed in different coordinates.
On the other hand there is a different Gauduchon-Tod structure that can be defined on the same conformal class, giving nonequivalent four-dimensional solutions. Start from a 3-sphere, with metric in the form

ds²₃ = (1/4) [ dθ² + sin²θ dφ² + (dψ + cosθ dφ)² ] ,  (3.19)

together with a dreibein W^x for it obeying

dW^x = −ε^{xyz} W^y ∧ W^z ,  (3.21)

so that (2.15) is satisfied with

C_Λ Ã^Λ = 0 ,  C_Λ I^Λ = −2√2/g .  (3.22)
A useful consequence of (3.21) is that with this frame choice the associated spin connection satisfies ω_x{}^y{}_z − ω_x{}^z{}_y = 2 ε^{xyz}, where ω_x{}^y{}_z ≡ W_x^m ω_m{}^y{}_z, as can easily be seen from Maurer-Cartan's first structure equation. This in particular implies that for a scalar function f on the sphere

∂_x ∂_x f = ∇_m ∇^m f ,  (3.23)

where ∇ is the Levi-Civita connection associated with the metric (3.19), and

[∂_x , ∂_y] = 2 ε_{xyz} ∂_z .  (3.24)
From (3.22) it is clear that the ungauged theory, C_0 = C_1 = 0, is incompatible with this GT-structure, hence at least one of the C_Λ must be nonzero. If C_1 ≠ 0, (3.22) gives

I^1 = 2√2 t_1 − (t_1/t_0) I^0 ,

where the t_Λ were defined in section 3.1. The Bianchi identity dF̃^0 = 0, using (3.21), immediately implies ε^{xyz} ∂_x F̃^0_{yz} = 0. Plugging in the expression for F̃^0_{xy} given by (2.17) and using (3.23) one concludes that I^0 must be harmonic on the sphere,

I^0 = √2 H_im ,  I^1 = √2 ( 2 t_1 − (t_1/t_0) H_im ) .  (3.25)
Equations (2.18) and (2.19) again imply that the combination I_0 − (t_1/t_0) I_1 is harmonic on the base space,

I_0 = (t_1/t_0) ( I_1 − H_1/(2√2) ) + H_0/(2√2) ,  (3.26)

while no additional constraint is imposed on Ĩ_1, so one has

I_1 = (1/(2√2)) ( τ/t_1 + f ) ,  I_0 = (1/(2√2)) ( τ/t_0 + H_0 + (t_1/t_0)(f − H_1) ) ,  (3.27)
where a generic function f on S 3 was introduced. (2.20) becomes
dω̂ = ⋆₃ [ ( H_0 − (t_1/t_0) H_1 ) dH_im − H_im d( H_0 − (t_1/t_0) H_1 ) − 2 t_1 df + 2 ω̂ ] ,  (3.28)

with ∂_x ω̂_x = t_1 ∂_x ∂_x f due to (2.19). Setting as before f = 0 by taking τ = t − t_1 f + t_1 H_1 and ω̄ = ω̂ + t_1 df − t_1 dH_1, one gets
I_0 = (1/(2√2)) ( t/t_0 + H_0 ) ,  I_1 = (1/(2√2)) ( t/t_1 + H_1 ) ,  (3.29)

and ω̄ satisfies

dω̄ = ⋆₃ ( H̃ dH_im − H_im dH̃ − 2 t_1 dH_1 + 2 ω̄ ) ,  ∂_x ω̄_x = ∇_m ω̄^m = 0 ,  (3.30)

with H̃ ≡ H_0 − (t_1/t_0) H_1. If the harmonics are chosen such as to satisfy dH_im ∧ dH̃ = 0, the simplest solution to these equations is ω̄ = (1/2) H_im dH̃ − (1/2) H̃ dH_im + t_1 dH_1, with dω̄ = 0, and all other solutions can be obtained by adding arbitrary solutions of dω − 2 ⋆₃ ω = 0, which implies ∇_m ω^m = 0; these are clearly independent of the choice of harmonic functions.
To make an explicit choice for ω̄ and the harmonics it is convenient to work with the usual hyperspherical coordinates,

ds²_{S³} = dΨ² + sin²Ψ ( dΘ² + sin²Θ dΦ² ) .  (3.31)
In these coordinates the simplest nontrivial choice of harmonic function on S 3 is
H = k + q cosΨ/sinΨ ,  (3.32)

which is singular in the points Ψ = 0, π. In a neighbourhood of the singularities the metric on S³ is well approximated by the flat metric in spherical coordinates with Ψ playing the role of a radial coordinate, and H ∼ k + q/Ψ. If all the harmonics are chosen to be of the form (3.32), the minimal ω̄ becomes

ω̄ = (1/2) ( k̃ q_im − k_im q̃ − 2 q_1 t_1 ) dΨ/sin²Ψ ,  (3.33)

which is the differential of a harmonic function and as such can be set to zero by a shift in the time coordinate and a redefinition of the harmonics H_0 and H_1. This is equivalent to taking ω̄ = 0 from the beginning by imposing the constraint

k̃ q_im − k_im q̃ − 2 q_1 t_1 = 0 .  (3.34)
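That the seed function (3.32) is harmonic on the round S³ can be seen directly: for f = f(Ψ) the Laplace-Beltrami operator of (3.31) reduces to sin⁻²Ψ ∂_Ψ(sin²Ψ ∂_Ψ f), and sin²Ψ ∂_Ψ(cosΨ/sinΨ) = −1 is constant. A numerical sketch (names ours):

```python
import math

def H(psi, k=0.5, q=2.0):
    """Seed harmonic (3.32) on the round S^3: k + q*cos(psi)/sin(psi)."""
    return k + q*math.cos(psi)/math.sin(psi)

def lap_S3(f, psi, h=1e-4):
    """Laplace-Beltrami of (3.31) acting on f(Psi):
    sin^-2(Psi) d/dPsi ( sin^2(Psi) f'(Psi) ), by central differences."""
    flux = lambda p: math.sin(p)**2 * (f(p + h) - f(p - h)) / (2.0*h)
    return (flux(psi + h) - flux(psi - h)) / (2.0*h) / math.sin(psi)**2

for psi in (0.4, 1.1, 2.3):
    assert abs(lap_S3(H, psi)) < 1e-3
```

Since the "flux" sin²Ψ H′(Ψ) is constant (= −q), the outer derivative vanishes identically away from the poles Ψ = 0, π.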
The equation dω = 2 ⋆₃ ω, together with (3.21) and (3.24), implies ∂_x ∂_x ω_y = −8 ω_y, which means that the components of ω with respect to the dreibein W^x are spherical harmonics on S³ with eigenvalue 1 − n² = −8, i.e. n = 3. Using the well-known expressions for these spherical harmonics and rewriting the one-forms W^x in the coordinates (3.31) it is possible to obtain the most general solution for ω which is regular on the three-sphere. The metric (3.19) is obtained by considering S³ embedded in C², |z_1|² + |z_2|² = 1, and taking the parametrization

z_1 = cos(θ/2) e^{i(φ+ψ)/2} ,  z_2 = sin(θ/2) e^{i(φ−ψ)/2} .  (3.35)

Comparing this with the usual parametrization of S³ in R⁴ one obtains in the coordinates (3.31) the expressions

W^1 = −sinΘ sinΦ dΨ + sinΨ ( sinΨ cosΦ − cosΨ cosΘ sinΦ ) dΘ − sinΨ sinΘ ( cosΨ cosΦ + sinΨ cosΘ sinΦ ) dΦ ,

W^2 = sinΘ cosΦ dΨ + sinΨ ( sinΨ sinΦ + cosΨ cosΘ cosΦ ) dΘ − sinΨ sinΘ ( cosΨ sinΦ − sinΨ cosΘ cosΦ ) dΦ ,  (3.36)

W^3 = cosΘ dΨ − sinΨ cosΨ sinΘ dΘ − sin²Ψ sin²Θ dΦ ,

and the most general regular ω is

ω = ( a cosΦ − b sinΦ )( sinΘ dΨ + sinΨ cosΨ cosΘ dΘ − sin²Ψ sinΘ cosΘ dΦ )
  − sinΨ ( a sinΦ + b cosΦ )( sinΨ dΘ + cosΨ sinΘ dΦ )
  − c ( cosΘ dΨ − sinΨ cosΨ sinΘ dΘ + sin²Ψ sin²Θ dΦ ) ,  (3.37)

where a, b and c are constants. It is also possible to construct multi-centered solutions by taking sums of harmonic functions with singularities in arbitrary points on the 3-sphere. Given the standard embedding of S³ in R⁴, the harmonic function cosΨ/sinΨ can be written as

h = x_1/√(1 − x_1²) ,  (3.38)
and the analogous harmonic function with singularities in any couple of antipodal points can be simply obtained by a rotation in R⁴ sending the point (1, 0, 0, 0), corresponding to Ψ = 0, to one of the new points. However, in this case one has in general dω̄ ≠ 0, and in order to reinstate dω̄ = 0 while keeping the possibility of having an arbitrary number of black holes in arbitrary positions and with independent charges one has to impose q_im = α q̃ for each of them, where α is a proportionality constant.
Berger sphere
A more general Gauduchon-Tod space can be defined starting from the Berger sphere [24], which is a squashed S³, or an SU(2) group manifold with an SU(2) × U(1)-invariant metric

ds²₃ = dθ² + sin²θ dφ² + cos²μ (dψ + cosθ dφ)² .  (3.39)
Given the well-known expressions for the left-invariant 1-forms

σ^L_1 = sinψ dθ − sinθ cosψ dφ ,  σ^L_2 = cosψ dθ + sinθ sinψ dφ ,  σ^L_3 = dψ + cosθ dφ ,

and for the right-invariant 1-forms

σ^R_1 = sinφ dθ − sinθ cosφ dψ ,  σ^R_2 = cosφ dθ + sinθ sinφ dψ ,  σ^R_3 = dφ + cosθ dψ ,

one can define the dreibein [25]

W^1 = cosμ σ^R_1 ± sinμ ( cosθ σ^R_2 − sinθ sinφ σ^R_3 ) ,
W^2 = cosμ σ^R_2 ∓ sinμ ( cosθ σ^R_1 + sinθ cosφ σ^R_3 ) ,  (3.40)
W^3 = cosμ σ^R_3 ± sinμ sinθ ( sinφ σ^R_1 + cosφ σ^R_2 ) ,

that satisfies

dW^x = ± sinμ cosμ σ^L_3 ∧ W^x − (cosμ/2) ε^{xyz} W^y ∧ W^z ,  (3.41)

so that equation (2.15) is satisfied with

C_Λ Ã^Λ = ± (sinμ cosμ/g) σ^L_3 ,  C_Λ I^Λ = −√2 cosμ/g .  (3.42)
Using Maurer-Cartan's first structure equation it is possible to see that for a scalar function on the Berger sphere

∂_x ∂_x f ± 2 sinμ cosμ σ^L_{3 x} ∂_x f = ∇_m ∇^m f .  (3.43)
Again at least one of the C_Λ must be nonzero. If we assume C_1 ≠ 0, (3.42) yields

I^1 = √2 t_1 cosμ − (t_1/t_0) I^0 ,

where the t_Λ are defined as before. The Bianchi identity dF̃^Λ = 0, using (3.41), implies

ε^{xyz} ( ∂_x ± 2 sinμ cosμ σ^L_{3 x} ) F̃^Λ_{yz} = 0 .

Substituting the expression for F̃^Λ_{xy} given by (2.17) and using (3.43) one gets for K_im ≡ (1/√2) I^0:

∇_m ( ∇^m ± sinμ cosμ σ^{L m}_3 ) K_im = ( ∇_m ± sinμ cosμ σ^L_{3 m} ) ∇^m K_im = 0 .  (3.44)
Eqns. (2.18) and (2.19) imply that the combination K̃ ≡ 2√2 ( I_0 − (t_1/t_0) I_1 ) satisfies

( ∇_m ∇^m − sin²μ ) K̃ = 0 ,  (3.45)

while no additional constraint is imposed on Ĩ_1, so one has

I_1 = (1/(2√2)) ( τ/t_1 + f ) ,  I_0 = (1/(2√2)) ( τ/t_0 + K̃ + (t_1/t_0) f ) ,  (3.46)

where a generic function f(θ, φ, ψ) was introduced. (2.20) becomes

dω̂ ± sinμ cosμ σ^L_3 ∧ ω̂ = ⋆₃ ( K̃ dK_im − K_im dK̃ − t_1 cosμ df + cosμ ω̂ ) ,  (3.47)

with

∇_m ω̂^m ∓ sinμ cosμ σ^L_{3 m} ω̂^m = t_1 ( ∇_m ∇^m − sin²μ ) f .  (3.48)
It is possible to set f = 0 by taking τ = t − t_1 f + t_1 K_1 and ω̄ = ω̂ + t_1 d(f − K_1) ± sinμ cosμ σ^L_3 t_1 (f − K_1), where K_1(θ, φ, ψ) satisfies (3.45). In this way

I_0 = (1/(2√2)) ( t/t_0 + K_0 ) ,  I_1 = (1/(2√2)) ( t/t_1 + K_1 ) ,  (3.49)

with K_0 ≡ K̃ + (t_1/t_0) K_1, and ω̄ satisfies

dω̄ ± sinμ cosμ σ^L_3 ∧ ω̄ = ⋆₃ ( K̃ dK_im − K_im dK̃ − t_1 cosμ dK_1 + cosμ ω̄ ) ,
∇_m ω̄^m ∓ sinμ cosμ σ^L_{3 m} ω̄^m = 0 .  (3.50)
There is no obvious way of finding solutions to eqns. (3.44) and (3.45) that in the limit μ → 0 reduce to harmonic functions of the form given in section 3.2, which is what one would expect for black hole solutions. It is however possible to consider simple solutions given by the trivial choices

K_0 = K_1 = 0 ,  K_im = k_im ,  ω̄ = 0 ,  (3.51)

with k_im constant.

4 The F(χ) = −(i/4) χ^0 χ^1 model
Given this prepotential, from (2.4) we can derive the Kähler potential
e^{−K} = Re(Z) ,  (4.1)

where we fixed |χ^0| = 1. The Kähler metric is then

G = ∂_Z ∂_Z̄ K = (1/4) Re(Z)^{−2} .  (4.2)
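A quick way to check (4.2): with Z = x + iy one has ∂_Z ∂_Z̄ = (1/4)(∂_x² + ∂_y²), and K = −ln Re(Z) = −ln x, so G = (1/4)(1/x²) = (1/4) Re(Z)^{−2}. A finite-difference sketch (names ours):

```python
import math

def K(x, y):
    """Kaehler potential of (4.1): K = -ln Re(Z), with Z = x + i*y."""
    return -math.log(x)

def G(x, y, h=1e-4):
    """G = d_Z d_Zbar K = (1/4)(d_x^2 + d_y^2) K, by central differences."""
    d2x = (K(x + h, y) - 2.0*K(x, y) + K(x - h, y)) / h**2
    d2y = (K(x, y + h) - 2.0*K(x, y) + K(x, y - h)) / h**2
    return 0.25*(d2x + d2y)

assert abs(G(1.7, -0.6) - 0.25/1.7**2) < 1e-6
```

The same one-liner with K = −3 ln Im(Z) reproduces the metric (5.2) of the cubic model below.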
From equation (2.6) one obtains

N = −(i/4) diag( Z , 1/Z ) ,  (4.3)
and for the scalar potential (2.8) one gets

V = g² [ C_0² Re(Z) + 4 C_0 C_1 + C_1² Re(1/Z) ] .  (4.4)

(2.11) leads to

R^0 = −4 I_1 ,  R^1 = −4 I_0 ,  R_0 = (1/4) I^1 ,  R_1 = (1/4) I^0 ,  (4.5)

as well as

1/(2|X|²) = ⟨R|I⟩ = (1/2) I^0 I^1 + 8 I_0 I_1 .  (4.6)
Flat base space
Using the results of section 3.1, one gets in the ungauged case from (4.6)

1/(2|X|²) = H^0 H^1 + H_0 H_1 ,  (4.7)

and the solution takes the well-known form [26]

ds² = 2|X|² (dτ + ω̂)² − (1/(2|X|²)) dy⃗² ,  Z = ( H_0 − i H^1 )/( H_1 − i H^0 ) ,  (4.8)

F^0 = d[ 2|X|² H_1 (dτ + ω̂) ] − ⋆₃ dH^0 ,  F^1 = d[ 2|X|² H_0 (dτ + ω̂) ] − ⋆₃ dH^1 ,
with ω̂ satisfying (3.4). In the gauged case the solution can be written as

ds² = 2|X|² (dt + ω̄)² − (1/(2|X|²)) dy⃗² ,  Z = ( t/t_0 + H_0 + i (t_1/t_0) H_im )/( t/t_1 + H_1 − i H_im ) ,  (4.9)

F^0 = d[ 2|X|² ( t/t_1 + H_1 ) (dt + ω̄) ] − ⋆₃ dH_im ,  F^1 = d[ 2|X|² ( t/t_0 + H_0 ) (dt + ω̄) ] + (t_1/t_0) ⋆₃ dH_im ,

where

1/(2|X|²) = ( t/t_0 + H_0 )( t/t_1 + H_1 ) − (t_1/t_0) H_im²  (4.10)
and ω̄ ≡ ω̂ − t_1 df + t_1 dH_1 satisfies eq. (3.10). Both solutions can also be rewritten in terms of two complex harmonic functions 𝓗^Λ as follows:

ds² = ( 1/Re(𝓗^0 𝓗̄^1) ) (dt + ω)² − Re(𝓗^0 𝓗̄^1) dy⃗² ,  Z = 𝓗^0/𝓗^1 ,  (4.11)

F^0 = d[ ( Re(𝓗^1)/Re(𝓗^0 𝓗̄^1) ) (dt + ω) ] + ⋆₃ dIm(𝓗^1) ,  F^1 = d[ ( Re(𝓗^0)/Re(𝓗^0 𝓗̄^1) ) (dt + ω) ] + ⋆₃ dIm(𝓗^0) ,

where ω is time-independent and satisfies

dω = ⋆₃ Im ( 𝓗̄^0 d𝓗^1 + 𝓗̄^1 d𝓗^0 ) .  (4.12)
In the ungauged case, the only additional constraint on the complex harmonics is that they are independent of time. In terms of the harmonics defined above they are given by
𝓗^0 = H_0 − i H^1 ,  𝓗^1 = H_1 − i H^0 .  (4.13)
In the gauged case the time dependence of the harmonics is completely determined by ∂_t 𝓗^Λ = 1/t_Λ.⁶ In addition they must satisfy Im(𝓗^0) = −(t_1/t_0) Im(𝓗^1), and thus

𝓗^0 = t/t_0 + H_0 + i (t_1/t_0) H_im ,  𝓗^1 = t/t_1 + H_1 − i H_im .  (4.14)
In this case there is also the additional constraint ∂_p ω_p = 0. Notice that (4.11) reduces to the Israel-Wilson-Perjés [21, 22] solution for 𝓗^0 = 𝓗^1. This means in particular that we can recover the Kerr-Newman solution with mass equal to the charge by taking

𝓗^0 = 𝓗^1 = 1 + qV ≡ 1 + q/(r − i a cosθ) ,  ω = ( q a sin²θ (2r + q)/(r² + a² cos²θ) ) dφ ,  (4.15)

expressed in Boyer-Lindquist coordinates.⁷ This construction suggests the more general form (3.11) for the harmonics, with ω given by (3.13). With these choices the gauged solution explicitly reads
ds² = (Σ²/∆) dt² + (Σ/∆) [ −a sin²θ ( 2⟨kQ⟩ r + ⟨qQ⟩ ) + 2⟨kq⟩ (r² + a²) cosθ ] dt dφ − ( ∆/(Σ (r² + a²)) ) dr² − (∆/Σ) dθ²
 + { (1/(4∆)) [ −a sin²θ ( 2⟨kQ⟩ r + ⟨qQ⟩ ) + 2⟨kq⟩ (r² + a²) cosθ ]² − (∆/Σ²) (r² + a²) sin²θ } dφ² ,  (4.16)

A^0 = (Σ/∆) ( Σ (t/t_1 + k_1) + q_1 r + Q_1 a cosθ ) dt
 − (1/2) [ (Σ/∆) ( Σ (t/t_1 + k_1) + q_1 r + Q_1 a cosθ ) ( 2⟨kQ⟩ r + ⟨qQ⟩ ) − 2 Q_im r ] ( a sin²θ/Σ ) dφ
 + [ (Σ/∆) ( Σ (t/t_1 + k_1) + q_1 r + Q_1 a cosθ ) ⟨kq⟩ − q_im ] ( (r² + a²) cosθ/Σ ) dφ ,  (4.17)
⁶ Here one recognizes the substitution principle originally put forward by Behrndt and Cvetič in [27], which amounts to adding a linear time dependence to the harmonic functions in a supersymmetric black hole of ungauged N = 2, d = 4 supergravity.
⁷ One might ask whether the solution (4.11) has a minimal fake gauged supergravity limit. However, it is easy to see that requiring the scalar Z to be constant implies t_Λ → ∞ with t_1/t_0 fixed, and thus g → 0, which brings us back to the ungauged case. This is consistent with the fact that (for nonvanishing rotation) the Kerr-Newman-de Sitter solution can never admit fake Killing spinors, as can be seen by analytically continuing the BPS condition (3.27) of [28] for the Carter-Plebański solution with Λ < 0, whose KNdS limit cannot be taken. We thank M. Nozawa for pointing this out.
Note finally that the scalar field (4.22) assumes the constant value Z = t_1/t_0 (where the potential (4.4) has an extremum⁸) if t_0 H_0 = t_1 H_1 and H_im = t_0. In this case, H̃ = 0 and ω̄ = t_1 dH_1. If we take ω = 0 and define a new time coordinate τ by t + t_1 H_1 = t_0 t_1 sinh τ, the metric becomes

ds² = t_0 t_1 ( dτ² − cosh²τ ds²_{S³} ) ,  (4.24)

and the gauge field strengths F^Λ vanish, so that the solution is dS₄. For ω ≠ 0, one gets a deformation of dS₄ with nonzero F^Λ. This is what happens also in the 'asymptotic' limit Ψ ∼ π/2 of the solution with the explicit choice (3.32) and with t_0 k_0 = t_1 k_1, k_im = t_0.
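The algebra behind the de Sitter limit can be checked numerically. For the spherical base, (4.6) with I^0 = √2 H_im and I^1 = √2(2t₁ − (t₁/t₀)H_im) from (3.25) gives 1/(2|X|²) = (t/t₀ + H₀)(t/t₁ + H₁) + H_im(2t₁ − (t₁/t₀)H_im); imposing H_im = t₀, t₀H₀ = t₁H₁ and t + t₁H₁ = t₀t₁ sinh τ should collapse this to the de Sitter conformal factor t₀t₁ cosh²τ. A sketch (names ours):

```python
import math, random

random.seed(1)
for _ in range(100):
    t0, t1 = random.uniform(0.5, 2.0), random.uniform(0.5, 2.0)
    H1 = random.uniform(-1.0, 1.0)
    t = random.uniform(-3.0, 3.0)
    H0, Him = t1*H1/t0, t0                    # constraints t0*H0 = t1*H1, H_im = t0
    inv2X2 = (t/t0 + H0)*(t/t1 + H1) + Him*(2.0*t1 - (t1/t0)*Him)
    tau = math.asinh((t + t1*H1)/(t0*t1))     # time redefinition t + t1*H1 = t0*t1*sinh(tau)
    assert abs(inv2X2 - t0*t1*math.cosh(tau)**2) < 1e-9
```

The identity reduces to (t + t₁H₁)²/(t₀t₁) + t₀t₁ = t₀t₁(sinh²τ + 1), i.e. cosh²τ, which is how the hyperbolic time redefinition turns the conformal factor into that of global dS₄.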
Berger sphere
For this base space, the results of section 3.3 imply that the complete solution can be written in the form

ds² = 2|X|² ( dt ± sinμ cosμ σ^L_3 t + ω̄ )² − (1/(2|X|²)) ds²₃ ,  (4.25)

with

α_0 = t_1 k_im ,  α_1 = t_0 t_1 cosμ − α_0 = t_0 t_1 cosμ − t_1 k_im .  (4.28)
Imposing α 0 = α 1 , the scalar becomes constant and one obtains a solution of Einstein-Maxwell-de Sitter theory already found by Meessen [29]. This can be seen as a deformation of dS 4 , which is recovered for µ = 0.
5 The F(χ) = −(1/8) (χ^1)³/χ^0 model
Using (2.4) this prepotential leads to the Kähler potential
e^{−K} = Im(Z)³ ,  (5.1)

where we took |χ^0| = 1, and to the Kähler metric

G = ∂_Z ∂_Z̄ K = (3/4) Im(Z)^{−2} .  (5.2)
The vectors' kinetic matrix is, according to eq. (2.6),

N = (1/4) ( −Z̄ Re(Z)² − (i/2)|Z|² Im(Z)    (3/2) Z̄ Re(Z)
             (3/2) Z̄ Re(Z)                 −3 Z̄ + i (3/2) Im(Z) ) ,  (5.3)
and from (2.8) one gets the scalar potential

V = (4/3) g² C_1² Im(Z) .  (5.4)
It is worth noting that for the choice C_1 = 0 (and C_0 arbitrary) the potential vanishes (so-called flat gauging), and the fake supersymmetric solutions constructed here are also solutions to the equations of motion of the corresponding ungauged supergravity. Requiring Re(Z), Im(Z) ≠ 0 and ⟨R|I⟩ > 0, the stabilization equations give

R^0 = (1/(2S)) [ (I^1)³ + 4 I^0 I^1 I_1 + 4 I_0 (I^0)² ] ,
R^1 = −(2/(9S)) [ 16 I^0 (I_1)² + 3 I_1 (I^1)² − 9 I^0 I_0 I^1 ] ,
R_0 = (2/(27S)) [ 16 (I_1)³ − 27 (I_0)² I^0 − 27 I_0 I^1 I_1 ] ,
R_1 = (1/(6S)) [ 4 (I_1)² I^1 − 12 I^0 I_0 I_1 − 9 I_0 (I^1)² ] ,  (5.5)

with

S ≡ √[ −4 (I^0 I_0)² + (4/3) (I^1 I_1)² + (128/27) I^0 (I_1)³ − 2 I_0 (I^1)³ − 8 I^0 I_0 I^1 I_1 ] ,  (5.6)

and

1/(2|X|²) = ⟨R|I⟩ = S .  (5.7)
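The stabilization data can be verified numerically: adopting our reading of the index placement in (5.5)-(5.6) (which is fixed uniquely by requiring each term to be covariant under the t³-model scaling I^0 → λ³I^0, I^1 → λI^1, I_1 → λ⁻¹I_1, I_0 → λ⁻³I_0), the symplectic pairing ⟨R|I⟩ = R_Λ I^Λ − R^Λ I_Λ indeed reproduces S, as required by (5.7). A sketch (names ours; 'u'/'l' mark upper/lower indices):

```python
import math, random

def quartic(I0u, I1u, I0l, I1l):
    """S^2 of (5.6)."""
    return (-4.0*(I0u*I0l)**2 + (4.0/3)*(I1u*I1l)**2
            + (128.0/27)*I0u*I1l**3 - 2.0*I0l*I1u**3 - 8.0*I0u*I0l*I1u*I1l)

def stabilization(I0u, I1u, I0l, I1l):
    """R's and S of (5.5)-(5.6)."""
    S = math.sqrt(quartic(I0u, I1u, I0l, I1l))
    R0u = (I1u**3 + 4.0*I0u*I1u*I1l + 4.0*I0l*I0u**2) / (2.0*S)
    R1u = -(2.0/(9.0*S)) * (16.0*I0u*I1l**2 + 3.0*I1l*I1u**2 - 9.0*I0u*I0l*I1u)
    R0l = (2.0/(27.0*S)) * (16.0*I1l**3 - 27.0*I0l**2*I0u - 27.0*I0l*I1u*I1l)
    R1l = (1.0/(6.0*S)) * (4.0*I1l**2*I1u - 12.0*I0u*I0l*I1l - 9.0*I0l*I1u**2)
    return R0u, R1u, R0l, R1l, S

random.seed(0)
checked = 0
while checked < 20:
    I = [random.uniform(-1.0, 1.0) for _ in range(4)]
    if quartic(*I) <= 1e-3:      # restrict to the physical branch S^2 > 0
        continue
    R0u, R1u, R0l, R1l, S = stabilization(*I)
    assert abs((R0l*I[0] + R1l*I[1] - R0u*I[2] - R1u*I[3]) - S) < 1e-9
    checked += 1
```

We checked the identity ⟨R|I⟩ = S symbolically as well; every monomial in the expansion of R_Λ I^Λ − R^Λ I_Λ matches the corresponding term of S² term by term.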
Berger sphere
Making use of the results of section 3.3 the complete solution can be written as

ds² = S^{−1} ( dt ± sinμ cosμ σ^L_3 t + ω̄ )² − S ds²₃ ,  Z = −( T_1 − i S K̃_im )/( T_0 + i S K_im ) ,  (5.25)

F^0 = −d[ (T_0/S²) ( dt ± sinμ cosμ σ^L_3 t + ω̄ ) ] − ⋆₃ ( dK_im ± sinμ cosμ σ^L_3 K_im ) ,
F^1 = d[ (T_1/S²) ( dt ± sinμ cosμ σ^L_3 t + ω̄ ) ] − ⋆₃ ( dK̃_im ± sinμ cosμ σ^L_3 K̃_im ) ,

where

S = √[ −K̂_0 ( T_0 + K̃_im³ ) + K̂_1 T_1 − (4/27) K_im K̂_1³ ] ,
T_0 = K̃_im³ + K_im K̃_im K̂_1 + K_im² K̂_0 ,
T_1 = (4/9) K_im K̂_1² + (1/3) K̃_im² K̂_1 − K_im K̃_im K̂_0 ,
K̂_Λ = t/t_Λ + K_Λ ,  K̃_im = t_1 cosμ − (t_1/t_0) K_im .  (5.26)

Here the functions K_0 and K_1 satisfy eq. (3.45), K_im obeys (3.44), and the time-independent one-form ω̄ is a solution of (3.50).
Conclusions
In this paper, we used the results of [19], where all solutions to matter-coupled fake N = 2, d = 4 gauged supergravity admitting covariantly constant spinors were classified, to construct dynamical rotating black holes in an expanding FLRW universe. This was done for two different prepotentials that are both truncations of the stu model and correspond to just one vector multiplet. The cosmic expansion was thereby driven by two U(1) gauge fields and by a complex scalar that rolls down its potential. We considered three different choices for the Gauduchon-Tod base space over which the four-dimensional geometry is fibered, namely flat space, the three-sphere and the Berger sphere, and saw how the usual recipe in ungauged supergravity, where extremal black holes are given in terms of harmonic functions on three-dimensional Euclidean space, generalizes to a cosmological context. Some possible extensions and questions for future work are:
• Study more in detail the physics of the constructed solutions, for instance the presence of trapping horizons [30], and see whether a first law of trapping horizons [31] holds.
• Extend the analytic studies of nonrotating black hole collisions in de Sitter space performed in [6,7] to the more general solutions considered here, and see how the results depend on the rotation, the cosmological scale factor different from dS, and the spatial curvature of the underlying FLRW cosmology.
We hope to come back to these points in a future publication.
¹ In this context, by 'Wick rotation' we mean g → ig, where g denotes the coupling constant.
² For a classification without matter coupling (pure fake N = 2, d = 4 gauged supergravity) see [20].
³ Here and in what follows we use the conventions of [19].
⁴ Note that the resulting theory is different from the so-called de Sitter supergravities [23]. To get the latter, one also takes A_μ → iA_μ, which leads to gauge field kinetic terms with the wrong sign, and thus to ghosts. In the theory considered here, the kinetic terms of the gauge fields come with the correct sign. We thank P. Meessen for clarifying discussions on this point.
⁵ Since H_im is related to the imaginary part I^Λ, the label 'im' stands for 'imaginary'.
Z = (t_1/t_0) ( t − i α_1 )/( t − i α_0 ) ,  (4.27)

⁸ We assume t_1/t_0 > 0.
A^1 = (Σ/∆) ( Σ (t/t_0 + k_0) + q_0 r + Q_0 a cosθ ) dt
 − (1/2) [ (Σ/∆) ( Σ (t/t_0 + k_0) + q_0 r + Q_0 a cosθ ) ( 2⟨kQ⟩ r + ⟨qQ⟩ ) + 2 (t_1/t_0) Q_im r ] ( a sin²θ/Σ ) dφ
 + [ (Σ/∆) ( Σ (t/t_0 + k_0) + q_0 r + Q_0 a cosθ ) ⟨kq⟩ + (t_1/t_0) q_im ] ( (r² + a²) cosθ/Σ ) dφ ,  (4.18)

Z = ( Σ (t/t_0 + k_0) + q_0 r + Q_0 a cosθ + i (t_1/t_0) ( Σ k_im + q_im r + Q_im a cosθ ) )/( Σ (t/t_1 + k_1) + q_1 r + Q_1 a cosθ − i ( Σ k_im + q_im r + Q_im a cosθ ) ) ,  (4.19)

where

∆ = [ Σ (t/t_0 + k_0) + q_0 r + Q_0 a cosθ ] [ Σ (t/t_1 + k_1) + q_1 r + Q_1 a cosθ ] − (t_1/t_0) ( Σ k_im + q_im r + Q_im a cosθ )² .

It can be seen from these expressions that the constant ⟨kq⟩ in ω represents essentially a NUT charge.

Spherical base space

Using the results of section 3.2, the complete solution can be written in terms of harmonic functions H_im, H_0, H_1 on S³ and a time-independent one-form ω̄, where ω̄ satisfies (3.30). In particular the harmonics can be taken to be of the form (3.32), with ω̄ as in section 3.2. The curvature scalars R, R_{μν}R^{μν} and R_{μνρσ}R^{μνρσ} are singular for 1/(2|X|²) = 0, but not in the points ψ = 0, π unless q_0 q_1 = (t_1/t_0) q_im².

Here the functions K_0 and K_1 satisfy (3.45), K_im satisfies (3.44), and the time-independent one-form ω̄ is a solution of (3.50). With the trivial choices (3.51) the solution reduces to

Flat base space

Using again the results of section 3.1, the solution in the gauged case can be written in terms of harmonic functions H_0, H_1 and H_im and a time-independent one-form ω, while ω solves eq. (3.10). In the case C_0 = 0 (t_0 → ∞) and with the convenient redefinitions H_1 → √(3/2) H_1, t_1 → √(3/2) t_1 the solution simplifies. With the choice (3.11) and (3.13), this can be explicitly written as

( 2⟨kQ⟩ r + ⟨qQ⟩ ) a sin²θ + ⟨kq⟩ (r² + a²) cosθ dφ ,  (5.14)

In the case of flat gauging, C_1 = 0 (which is inequivalent to C_0 = 0 for this model), the results of section 3.1 are still valid provided one exchanges 0 and 1 indices everywhere.
Redefining H_1 → √3 H_1, the solution simplifies. Since the potential vanishes for C_1 = 0, this is also a (non-supersymmetric) time-dependent solution of ungauged supergravity. The metric with the same harmonic functions and ω as before can again be written in the form (5.12), but where now

( 2⟨kQ⟩ r + ⟨qQ⟩ ) a sin²θ − 2⟨kq⟩ (r² + a²) cosθ dφ ,  (5.20)

Spherical base space

Using the results of section 3.2, the complete solution can be written in terms of harmonics on S³, where ω̄ satisfies (3.30). An explicit solution can be obtained with harmonics of the form (3.32), obeying the constraint (3.34), and ω̄ given by (3.37).
References

[1] G. C. McVittie, "The mass-particle in an expanding universe," Mon. Not. Roy. Astron. Soc. 93 (1933) 325.
[2] B. C. Nolan, "A point mass in an isotropic universe: Existence, uniqueness and basic properties," Phys. Rev. D 58 (1998) 064006 [gr-qc/9805041].
[3] B. C. Nolan, "A point mass in an isotropic universe. 2. Global properties," Class. Quant. Grav. 16 (1999) 1227.
[4] N. Kaloper, M. Kleban and D. Martin, "McVittie's legacy: Black holes in an expanding universe," Phys. Rev. D 81 (2010) 104044 [arXiv:1003.4777 [hep-th]].
[5] J. Sultana and C. C. Dyer, "Cosmological black holes: A black hole in the Einstein-de Sitter universe," Gen. Rel. Grav. 37 (2005) 1347.
[6] D. Kastor and J. H. Traschen, "Cosmological multi-black hole solutions," Phys. Rev. D 47 (1993) 5370 [hep-th/9212035].
[7] D. R. Brill, G. T. Horowitz, D. Kastor and J. H. Traschen, "Testing cosmic censorship with black hole collisions," Phys. Rev. D 49 (1994) 840 [gr-qc/9307014].
[8] S. D. Majumdar, "A class of exact solutions of Einstein's field equations," Phys. Rev. 72 (1947) 390.
[9] A. Papapetrou, "A static solution of the equations of the gravitational field for an arbitrary charge distribution," Proc. R. Irish Acad. A 51 (1947) 191-204.
[10] D. Kastor and J. H. Traschen, "Particle production and positive energy theorems for charged black holes in De Sitter," Class. Quant. Grav. 13 (1996) 2753 [gr-qc/9311025].
[11] K.-i. Maeda, N. Ohta and K. Uzawa, "Dynamics of intersecting brane systems - Classification and their applications -," JHEP 0906 (2009) 051 [arXiv:0903.5483 [hep-th]].
[12] G. W. Gibbons and K.-i. Maeda, "Black holes in an expanding universe," Phys. Rev. Lett. 104 (2010) 131101 [arXiv:0912.2809 [gr-qc]].
[13] K.-i. Maeda and M. Nozawa, "Black hole in the expanding universe with arbitrary power-law expansion," Phys. Rev. D 81 (2010) 124038 [arXiv:1003.2849 [gr-qc]].
[14] S. Chimento and D. Klemm, "Black holes in an expanding universe from fake supergravity," JHEP 1304 (2013) 129 [arXiv:1212.5494].
[15] T. Shiromizu, "Cosmological spinning multi-black hole solution in string theory," Prog. Theor. Phys. 102 (1999) 1207 [hep-th/9910176].
[16] D. Klemm and W. A. Sabra, "Charged rotating black holes in 5-D Einstein-Maxwell (A)dS gravity," Phys. Lett. B 503 (2001) 147 [hep-th/0010200].
[17] D. Klemm and W. A. Sabra, "General (anti-)de Sitter black holes in five dimensions," JHEP 0102 (2001) 031 [hep-th/0011016].
[18] M. Nozawa and K.-i. Maeda, "Cosmological rotating black holes in five-dimensional fake supergravity," Phys. Rev. D 83 (2011) 024018 [arXiv:1009.3688 [hep-th]].
[19] P. Meessen and A. Palomo-Lozano, "Cosmological solutions from fake N = 2 EYM supergravity," JHEP 0905 (2009) 042 [arXiv:0902.4814 [hep-th]].
[20] J. B. Gutowski and W. A. Sabra, "Solutions of minimal four-dimensional de Sitter supergravity," Class. Quant. Grav. 27 (2010) 235017 [arXiv:0903.0179 [hep-th]].
[21] W. Israel and G. A. Wilson, "A class of stationary electromagnetic vacuum fields," J. Math. Phys. 13 (1972) 865.
[22] Z. Perjés, "Solutions of the coupled Einstein-Maxwell equations representing the fields of spinning sources," Phys. Rev. Lett. 27 (1971) 1668.
[23] K. Pilch, P. van Nieuwenhuizen and M. F. Sohnius, "De Sitter superalgebras and supergravity," Commun. Math. Phys. 98 (1985) 105.
[24] P. Gauduchon and K. P. Tod, "Hyper-Hermitian metrics with symmetry," J. Geom. Phys. 25 (1998) 291.
[25] T. Chave, G. Valent and K. P. Tod, "(4,0) and (4,4) sigma models with a triholomorphic Killing vector," Phys. Lett. B 383 (1996) 262.
[26] K. Behrndt, D. Lüst and W. A. Sabra, "Stationary solutions of N = 2 supergravity," Nucl. Phys. B 510 (1998) 264 [hep-th/9705169].
[27] K. Behrndt and M. Cvetič, "Time dependent backgrounds from supergravity with gauged noncompact R-symmetry," Class. Quant. Grav. 20 (2003) 4177 [hep-th/0303266].
[28] N. Alonso-Alberca, P. Meessen and T. Ortín, "Supersymmetry of topological Kerr-Newman-Taub-NUT-AdS space-times," Class. Quant. Grav. 17 (2000) 2783 [hep-th/0003071].
[29] P. Meessen, unpublished notes (2010).
[30] S. A. Hayward, "General laws of black hole dynamics," Phys. Rev. D 49 (1994) 6467.
[31] S. A. Hayward, "Unified first law of black hole dynamics and relativistic thermodynamics," Class. Quant. Grav. 15 (1998) 3147 [gr-qc/9710089].
Relativistic mean-field model with density-dependent meson-nucleon couplings
arXiv:nucl-th/0703039v1 13 Mar 2007
Kenta Minagawa
Department of Physics
Faculty of Science and Technology
Tokyo University of Science
278-8510NodaJapan
Masahiro Kawabata
Department of Physics
Faculty of Science and Technology
Tokyo University of Science
278-8510NodaJapan
Koichi Saito
Department of Physics
Faculty of Science and Technology
Tokyo University of Science
278-8510NodaJapan
Relativistic mean-field model with density-dependent meson-nucleon couplings
arXiv:nucl-th/0703039v1 13 Mar 2007
Within the relativistic mean-field approach, we extend the Miyazaki model, where the NNσ and NNω interactions are modified to suppress the couplings between positive-and negative-energy states of a nucleon in matter. Assuming appropriate density-dependence of the meson-nucleon couplings, we study nuclear matter and finite nuclei. The model can reproduce the observed properties of 16 O and 40 Ca well. We also examine if the model is natural.typeset using PTPT E X.cls Ver.0.9
Recently, the relativistic mean-field approach with density-dependent mesonnucleon couplings draws much attention. 1) It is an effective model for the Dirac-Brueckner-Hartree-Fock (DBHF) theory, 2) which can reproduce the saturation property of nuclear matter using the one-boson exchange potentials extracted from the nucleon-nucleon scattering data. In the DBHF calculation, the relativistic effect provides a strong density-dependent repulsion, which is originated from the nucleonantinucleon pair term (Z graph), and it is vital to obtain the nuclear saturation property. It should be noticed that a nuclear model based on the quark substructure of a nucleon, for example, the quark-meson coupling (QMC) model, 3) the quark-mean field (QMF) model, 4) also gives density-dependent meson-nucleon couplings through the scalar field in a nuclear medium, namely the scalar polarizability. 3) Thus, it seems quite natural that the meson-nucleon couplings depend on the nuclear environment.
About a decade ago, Miyazaki 5) has proposed an interesting, relativistic meanfield model for nuclear matter, in which the NNσ and NNω vertices are modified to reduce the couplings between positive-and negative-energy states of the in-medium nucleon (the +− couplings). Although the +− couplings play an important role in the relativistic nuclear models including nucleon-nucleus (NA) scattering (with the relativistic impulse approximation (RIA)) at intermediate energies, it is known that the effect of the coupling to negative states is too strong to produce the NA scattering observables at low energies. 6) Tjon and Wallace have remedied this problem by developing a generalized RIA, in which the different +− couplings from the usual RIA are introduced. 6) The vertex modification studied by Miyazaki 5) may enable us to include such variation of the +− couplings at the relativistic mean-field level. The modified vertices finally result in the density-dependent NNσ and NNω couplings, which can simultaneously reproduce the nuclear matter properties and the Dirac scalar and vector optical potentials given by the DBHF calculation.
In this Letter, we generalize the Miyazaki model, and study not only the nuclear matter properties but also single-particle energies of finite nuclei. Lastly, we discuss naturalness of the model. 7) We now modify the vertices of the NNσ and NNω couplings using the energy projection operators, Λ_±(p) = (±\not{p} + M)/2M, where p is the four-momentum of a nucleon and M is its mass. Since the vertex, Γ (= I or γ^μ), is expressed by

$$\Gamma = \Lambda_+(p')\Gamma\Lambda_+(p) + \Lambda_-(p')\Gamma\Lambda_-(p) + \Lambda_+(p')\Gamma\Lambda_-(p) + \Lambda_-(p')\Gamma\Lambda_+(p), \eqno(1)$$

it may be possible to vary the strength of the +− couplings by introducing two parameters, 0 ≤ λ_1, λ_2 ≤ 1, as 5)

$$\Gamma \to \lambda_1\left[\Lambda_+(p')\Gamma\Lambda_+(p) + \Lambda_-(p')\Gamma\Lambda_-(p)\right] + \lambda_2\left[\Lambda_+(p')\Gamma\Lambda_-(p) + \Lambda_-(p')\Gamma\Lambda_+(p)\right] \eqno(2)$$

$$\phantom{\Gamma} = \frac{(\lambda_1-\lambda_2)\,\not{p}'\,\Gamma\,\not{p} + (\lambda_1+\lambda_2)\,\Gamma\,M^2}{2M^2}. \eqno(3)$$
In the original Miyazaki model, 5) λ 1 is chosen to be unity for the scalar (I) vertex, while λ 2 is unity for the vector (γ µ ) vertex, because the parameters are supposed to be constants. However, in general, the strength of the +− couplings may depend on the nuclear environment through the Pauli blocking, Z graphs etc. 8) To take account of those effects in the model, we here suppose that λ simply depends on the nuclear density, ρ v :
$$\lambda = 1 - a\left(\frac{\rho_v}{\rho_0}\right)^{b}, \eqno(4)$$
where ρ_0 is the saturation density, and each λ has two parameters, a and b. Note that, in the limit ρ_v → 0, Γ is identical to the original form, Eq. (1). Using the vertex (3) and the mean-field approximation for the meson fields, the Lagrangian density is given by 5)

$$\mathcal{L} = \bar\psi(\not{p} - M)\psi - \frac{1}{2}m_\sigma^2\sigma^2 + \frac{1}{2}m_\omega^2\omega^2 + \frac{g_\sigma}{2M^2}\left[(\lambda_1^s-\lambda_2^s)(\bar\psi\,\overleftarrow{\not{p}})(\not{p}\,\psi) + (\lambda_1^s+\lambda_2^s)M^2\,\bar\psi\psi\right]\sigma - \frac{g_\omega}{2M^2}\left[(\lambda_1^v-\lambda_2^v)(\bar\psi\,\overleftarrow{\not{p}})\gamma^0(\not{p}\,\psi) + (\lambda_1^v+\lambda_2^v)M^2\,\bar\psi\gamma^0\psi\right]\omega, \eqno(5)$$
where σ and ω are respectively the mean-field values of the σ and ω mesons, and λ_i^{s(v)} (i = 1, 2) is the parameter for the scalar (vector) vertex. The meson mass and the NNσ(ω) coupling constant in vacuum are respectively denoted by m_{σ(ω)} and g_{σ(ω)}.
Following the prescription explained in Ref. 5) , we can construct an effective Lagrangian density, in which the effect of variation of the +− couplings in matter is included,
$$\mathcal{L}_{\rm eff} = \bar\psi(\not{p} - \gamma^0 U_v - M^*)\psi - \frac{1}{2}m_\sigma^2\sigma^2 + \frac{1}{2}m_\omega^2\omega^2, \eqno(6)$$
where the effective nucleon mass, M*, and the Dirac scalar, U_s, and vector, U_v, potentials in matter are defined as

$$M - M^* = g_\sigma^*\,\sigma = -U_s, \qquad U_v = g_\omega^*\,\omega, \eqno(7)$$

with the effective coupling constants

$$g_\sigma^* = \frac{1}{2}\left[(\lambda_1^s+\lambda_2^s) + (\lambda_1^s-\lambda_2^s)(m^{*2}-v^2)\right]g_\sigma, \eqno(8)$$

$$g_\omega^* = \frac{1}{2}\left[(\lambda_1^v+\lambda_2^v) - (\lambda_1^v-\lambda_2^v)(m^{*2}-v^2)\right]g_\omega, \eqno(9)$$

where m* = M*/M and v = U_v/M. Note that, when λ_1^s = λ_2^v = 1, the effective coupling constants coincide with those in the Miyazaki model.
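As a quick illustration (not code from the paper), Eqs. (8) and (9) can be evaluated directly for given λ parameters; the numerical values of m*, v and the λ's used below are placeholders, and only the vacuum limit λ_i = 1, (m*, v) = (1, 0) is fixed by the text.

```python
def g_sigma_eff(g_sigma, lam1s, lam2s, mstar, v):
    # Eq. (8): g*_sigma = (1/2)[(l1 + l2) + (l1 - l2)(m*^2 - v^2)] g_sigma
    return 0.5 * ((lam1s + lam2s) + (lam1s - lam2s) * (mstar**2 - v**2)) * g_sigma

def g_omega_eff(g_omega, lam1v, lam2v, mstar, v):
    # Eq. (9): g*_omega = (1/2)[(l1 + l2) - (l1 - l2)(m*^2 - v^2)] g_omega
    return 0.5 * ((lam1v + lam2v) - (lam1v - lam2v) * (mstar**2 - v**2)) * g_omega
```

In the vacuum limit (ρ_v → 0, so λ_1 = λ_2 = 1, m* = 1, v = 0) both functions return the bare couplings, as required.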
The energy per nucleon, W, for symmetric nuclear matter is then written as

$$W = \frac{3}{4}E_F^* + \frac{1}{4}M^*\frac{\rho_s}{\rho_v} + U_v - M + \frac{2M}{C_s\tilde\rho}\left[\frac{1-m^*}{\lambda_1^s+\lambda_2^s+(\lambda_1^s-\lambda_2^s)(m^{*2}-v^2)}\right]^2 - \frac{2M}{C_v\tilde\rho}\left[\frac{v}{\lambda_1^v+\lambda_2^v-(\lambda_1^v-\lambda_2^v)(m^{*2}-v^2)}\right]^2, \eqno(10)$$

where E_F^* = (k_F^2 + M^{*2})^{1/2} (k_F being the Fermi momentum), ρ_v = 2k_F^3/3π², ρ̃ = ρ_v/ρ_0, C_{s(v)} = g_{σ(ω)}^2 ρ_0/m_{σ(ω)}^2 M, and ρ_s = (M*/π²)[k_F E_F^* − M^{*2} ln((k_F + E_F^*)/M^*)] (the scalar density).
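For concreteness, the auxiliary quantities defined above can be computed as follows (a minimal sketch, with k_F and M* in fm⁻¹; the numerical inputs used for checking are not the paper's fitted values):

```python
import math

def fermi_quantities(kF, Mstar):
    """Return (E*_F, rho_v, rho_s) for symmetric nuclear matter."""
    EF = math.sqrt(kF**2 + Mstar**2)                 # E*_F = (k_F^2 + M*^2)^(1/2)
    rho_v = 2.0 * kF**3 / (3.0 * math.pi**2)         # rho_v = 2 k_F^3 / (3 pi^2)
    rho_s = (Mstar / math.pi**2) * (
        kF * EF - Mstar**2 * math.log((kF + EF) / Mstar)
    )                                                # scalar density rho_s
    return EF, rho_v, rho_s
```

For k_F ≃ 1.30 fm⁻¹ this gives ρ_v ≃ 0.148 fm⁻³, consistent with the saturation density ρ_0 = 0.15 fm⁻³ used in the text.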
From the self-consistency conditions, (∂W/∂m*) = 0 and (∂W/∂v) = 0, which the meson fields should satisfy, one finds

$$C_s = \frac{4c}{a_s^3\, b_s\, \tilde\rho}\,(1-m^*), \qquad C_v = \frac{4c}{a_v^3\, b_v\, \tilde\rho}\, v, \eqno(11)$$

where

$$a_s = \lambda_1^s + \lambda_2^s + (\lambda_1^s-\lambda_2^s)(m^{*2}-v^2), \eqno(12)$$

$$a_v = \lambda_1^v + \lambda_2^v - (\lambda_1^v-\lambda_2^v)(m^{*2}-v^2), \eqno(13)$$

$$b_s = \left[\lambda_1^v + \lambda_2^v - (\lambda_1^v-\lambda_2^v)(m^{*2}+v^2)\right]\frac{\rho_s}{\rho_v} - 2(\lambda_1^v-\lambda_2^v)m^*v, \eqno(14)$$

$$b_v = \lambda_1^s + \lambda_2^s - (\lambda_1^s-\lambda_2^s)(m^{*2}-2m^*+v^2) + 2(\lambda_1^s-\lambda_2^s)(1-m^*)v\,\frac{\rho_s}{\rho_v}, \eqno(15)$$

$$c = \left[\lambda_1^s + \lambda_2^s - (\lambda_1^s-\lambda_2^s)(m^{*2}-2m^*+v^2)\right]\left[\lambda_1^v + \lambda_2^v - (\lambda_1^v-\lambda_2^v)(m^{*2}+v^2)\right] + 4(\lambda_1^s-\lambda_2^s)(\lambda_1^v-\lambda_2^v)(1-m^*)m^*v^2. \eqno(16)$$
Given the λ parameters, the coupling constants in vacuum, g_σ and g_ω, are determined so as to fulfill the saturation condition, ∂W/∂ρ̃|_{ρ̃=1} = 0 and W(ρ̃ = 1) = −15.75 MeV (ρ_0 = 0.15 fm^{-3}). Using those coupling constants, one can calculate the effective nucleon mass (m*) and the vector potential (v) at any density.
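The kind of self-consistent solution involved can be sketched in the λ → 1 limit, where the scalar part of Eq. (11) reduces to the familiar σ-ω relation M − M* = (g_σ/m_σ)² ρ_s(k_F, M*). The coupling strength used below is a placeholder, not one of the paper's fitted values; a damped fixed-point iteration is enough:

```python
import math

def scalar_density(kF, Mstar):
    # rho_s = (M*/pi^2)[k_F E*_F - M*^2 ln((k_F + E*_F)/M*)], units fm^-3
    EF = math.sqrt(kF**2 + Mstar**2)
    return (Mstar / math.pi**2) * (kF * EF - Mstar**2 * math.log((kF + EF) / Mstar))

def solve_mstar(kF, M, g2_over_m2, n_iter=500, mix=0.5):
    """Solve M* = M - (g_sigma/m_sigma)^2 * rho_s(k_F, M*) by damped iteration."""
    Mstar = M
    for _ in range(n_iter):
        target = M - g2_over_m2 * scalar_density(kF, Mstar)
        Mstar = mix * target + (1.0 - mix) * Mstar
    return Mstar
```

The iteration converges quickly because the right-hand side varies slowly with M* near the solution; the full model replaces the single scalar equation with the coupled pair (11) in (m*, v), solved the same way.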
In the present calculation we, however, set λ_1^s = λ_2^v = 1 − a_A(ρ_v/ρ_0)^{b_A} and λ_2^s = λ_1^v = 1 − c_A(ρ_v/ρ_0)^{d_A} to reduce the total number of parameters. (We call this "type A".) Thus, for type A the density-dependence of the NNσ coupling is identical to that of the NNω coupling, i.e., g*_σ/g_σ = g*_ω/g_ω (see Eqs. (8) and (9)). This choice may be justified, because it has already been found in Ref. 5) that the case of λ_1^s = λ_2^v and λ_2^s = λ_1^v gives the best result for the nuclear matter properties. In contrast, we shall also study an alternative: λ_1^s = λ_1^v = 1 − a_B(ρ_v/ρ_0)^{b_B} and λ_2^s = λ_2^v = 1 − c_B(ρ_v/ρ_0)^{d_B}. (We call this "type B".) Thus, each type eventually has four parameters (a–d) to fit the observed data.

Now we are in a position to show our numerical results. We determine the coupling constants, g_σ and g_ω, so as to exactly reproduce the saturation condition of nuclear matter. In addition, the parameters, a–d, are tuned so as to produce the scalar (U_s) and vector (U_v) potentials of the DBHF calculation and the observed incompressibility (K = 210 ± 30 MeV) as precisely as possible.
We find that g_σ = 12.24 (16.97) and g_ω = 14.97 (20.69) for type A (B). In Table I, we list the parameter sets for type A and B. In Figs. 1 and 2, the scalar and vector potentials calculated with the present parameter sets are shown, together with the results of the DBHF calculation 2) and Quantum Hadrodynamics (QHD). 9) The DBHF result is well reproduced by the present model up to ρ_v/ρ_0 = 2.0–2.5. We again see from Table I that g*_σ/g_σ ≃ g*_ω/g_ω even in type B, because b_B = d_B and a_B ≈ c_B. Thus, for fitting the scalar and vector potentials of the DBHF calculation and the observed incompressibility simultaneously, it may be favorable that the density-dependence of the NNσ interaction is very close to that of the NNω interaction.

Fig. 3. Charge density distribution for 40Ca compared with that of QHD. The curves are labeled as in Fig. 1.
For finite nuclei, Eq. (6) gives a set of coupled non-linear differential equations, which may be solved by a standard iteration procedure. 10) For example, we have calculated the properties of 16O and 40Ca, and the result is presented in Table II. In the calculation, we have adjusted the σ mass (m_σ) so as to yield the observed root-mean-square (rms) charge radius of 40Ca: r_ch(40Ca) = 3.48 fm.
In Table III, we give the single-particle energies for 40Ca. We can see from the table that the model, especially type B, produces good results. The spin-orbit force in the present model is thus sufficient to reproduce the observed energy levels, and it is comparable to that of QHD. The charge density distribution for 40Ca is also illustrated in Fig. 3, together with the QHD result. Note that the observed distribution, which is not shown here, is very close to that of type B.
Finally, we shall examine the present model using Georgi's "naive dimensional analysis" (NDA). 7), 11) In general, an effective field theory at low energy will contain an infinite number of interaction terms, which incorporate the compositeness of the low-energy degrees of freedom, i.e., hadrons, and it is then expected to involve numerous couplings which may be nonrenormalizable. The NDA gives a systematic way to manage such complicated effective field theories. After extracting the dimensional factors and some appropriate counting factors using NDA, the remaining dimensionless coefficients are all assumed to be of order unity. This is the so-called naturalness assumption. If a theory is natural, one can then control the effective Lagrangian, at least at the tree level. In the present case, the model involves the NNσ and NNω interactions. Using NDA, we then find that the dimensionless coefficients corresponding to those couplings are all smaller than 2.0. Thus, the model is natural.
In summary, we have extended the Miyazaki model, 5) where the NNσ and NNω couplings are modified to suppress the +− couplings, and studied two (A and B) types of density-dependent meson-nucleon vertices. Assuming an appropriate density-dependent form at the vertex, the parameters are adjusted so as to produce the scalar and vector potentials of the DBHF calculation and the observed incompressibility K as precisely as possible. A density-dependence such as g*_σ/g_σ ≈ g*_ω/g_ω may eventually be favorable for fitting the DBHF result and the observed nuclear data. Using such coupling constants, we have studied the properties of nuclear matter and finite nuclei (16O and 40Ca). The model can reproduce the experimental data well. Furthermore, NDA tells us that the model is natural. It is thus vital to include appropriate density-dependence of the meson-nucleon interactions in the relativistic mean-field approach, which may be attributed to many-body effects (like the Pauli exclusion) and the quark substructure of an in-medium nucleon. 3)
Fig. 1. The scalar potential. The dashed, solid and dotted curves are, respectively, for type A, B and QHD. The DBHF result is shown by solid squares.
Fig. 2. The vector potential. The curves are labeled as in Fig. 1.
Table I. Parameter sets for type A and B. The nuclear incompressibility K (in MeV) is also shown.

type    a      b      c      d      K
A       0.15   0.9    0.26   0.1    239.3
B       0.40   0.3    0.37   0.3    202.9
Table II. Binding energy per nucleon W (in MeV), rms charge radius r_ch (in fm) and difference between nuclear radii for neutrons and protons r_n − r_p (in fm). The QHD result is also included.

                ---------- 40Ca ----------    ----------- 16O -----------
       mσ(MeV)  r_ch    W      r_n − r_p      r_ch    W      r_n − r_p
A      463.4    3.482   5.78   −0.080         2.83    4.22   −0.047
B      513.0    3.482   7.33   −0.074         2.77    6.06   −0.041
QHD    523.8    3.482   6.24   −0.055         2.75    4.85   −0.033
Exp.            3.482   8.45   0.05±0.05      2.73    7.98   0
Table III. Model predictions for the energy spectrum of 40Ca.

         ----------- Proton -----------    ----------- Neutron ----------
          A      B      QHD    Exp.         A      B      QHD    Exp.
1s1/2    45.8   48.5   46.7   50±10        55.1   57.6   55.0   51.9
1p3/2    30.2   32.9   30.8   34±6         38.7   41.3   38.7   36.6
1p1/2    26.7   29.2   25.3   34±6         35.2   37.7   33.2   34.5
1d5/2    15.6   17.9   15.2   15.5         23.3   25.6   22.6   21.6
2s1/2    11.5   12.6    7.2   10.9         19.2   20.4   14.4   18.9
1d3/2    10.2   11.9    6.7    8.3         17.9   19.6   14.1   18.4
1) R. Brockmann and H. Toki, Phys. Rev. Lett. 68 (1992), 3408; C. Fuchs, H. Lenske and H.H. Wolter, Phys. Rev. C 52 (1995), 3043; H. Shen, Y. Sugahara and H. Toki, Phys. Rev. C 55 (1997), 1211.
2) R. Brockmann and R. Machleidt, Phys. Rev. C 42 (1990), 1965; G.Q. Li, R. Machleidt and R. Brockmann, Phys. Rev. C 45 (1992), 2782.
3) K. Saito, K. Tsushima and A.W. Thomas, Prog. Part. Nucl. Phys. 58 (2007), 1; P.A.M. Guichon, Phys. Lett. B 200 (1988), 235; K. Saito and A.W. Thomas, Phys. Lett. B 327 (1994), 9.
4) H. Toki, U. Meyer, A. Faessler and R. Brockmann, Phys. Rev. C 58 (1998), 3749; H. Shen and H. Toki, Phys. Rev. C 61 (2000), 045205.
5) K. Miyazaki, Prog. Theor. Phys. 93 (1995), 137.
6) J.A. Tjon and S.J. Wallace, Phys. Rev. C 32 (1985), 267; ibid. 32 (1985), 1667.
7) A. Manohar and H. Georgi, Nucl. Phys. B 234 (1984), 189; H. Georgi, Phys. Lett. B 298 (1993), 187.
8) M.R. Anastasio, L.S. Celenza, W.S. Pong and C.M. Shakin, Phys. Rep. 100 (1983), 327.
9) B.D. Serot and J.D. Walecka, Adv. Nucl. Phys. 16 (1986), 1.
10) K. Saito, K. Tsushima and A.W. Thomas, Nucl. Phys. A 609 (1996), 339.
11) K. Saito, K. Tsushima and A.W. Thomas, Phys. Lett. B 406 (1997), 287.
|
[] |
[
"VLT/UVES and FORS2 spectroscopy of the GRB 081008 afterglow ⋆",
"VLT/UVES and FORS2 spectroscopy of the GRB 081008 afterglow ⋆"
] |
[
"V D'elia \nINAF-Osservatorio Astronomico di Roma\nVia Frascati 33I-00040Monteporzio CatoneItaly\n\nASI-Science Data Centre\nVia Galileo GalileiI-00044FrascatiItaly\n",
"S Campana \nINAF\nOsservatorio Astronomico di Brera\nVia E. Bianchi 4623807MerateLCItaly\n",
"S Covino \nINAF\nOsservatorio Astronomico di Brera\nVia E. Bianchi 4623807MerateLCItaly\n",
"P D'avanzo \nINAF\nOsservatorio Astronomico di Brera\nVia E. Bianchi 4623807MerateLCItaly\n",
"S Piranomonte \nINAF-Osservatorio Astronomico di Roma\nVia Frascati 33I-00040Monteporzio CatoneItaly\n",
"G Tagliaferri \nINAF\nOsservatorio Astronomico di Brera\nVia E. Bianchi 4623807MerateLCItaly\n"
] |
[
"INAF-Osservatorio Astronomico di Roma\nVia Frascati 33I-00040Monteporzio CatoneItaly",
"ASI-Science Data Centre\nVia Galileo GalileiI-00044FrascatiItaly",
"INAF\nOsservatorio Astronomico di Brera\nVia E. Bianchi 4623807MerateLCItaly",
"INAF\nOsservatorio Astronomico di Brera\nVia E. Bianchi 4623807MerateLCItaly",
"INAF\nOsservatorio Astronomico di Brera\nVia E. Bianchi 4623807MerateLCItaly",
"INAF-Osservatorio Astronomico di Roma\nVia Frascati 33I-00040Monteporzio CatoneItaly",
"INAF\nOsservatorio Astronomico di Brera\nVia E. Bianchi 4623807MerateLCItaly"
] |
[
"Mon. Not. R. Astron. Soc"
] |
We aim at studying the gamma-ray burst GRB 081008 environment by analysing the spectra of its optical afterglow. UVES/VLT high resolution spectroscopy of GRB 081008 was secured ∼ 5 hr after the Swift-BAT trigger. Our dataset comprises also three VLT/FORS2 nearly simultaneous spectra of the same source. The availability of nearly simultaneous high and low resolution spectra for a GRB afterglow is an extremely rare event. The GRB-Damped Lyman Alpha system at z = 1.9683 shows that the interstellar medium (ISM) of the host galaxy is constituted by at least three components which contribute to the line profiles. Component I is the redmost one, and is 20 km/s and 78 km/s redward component II and III, respectively. We detect several ground state and excited absorption features in components I and II. These features have been used to compute the distances between the GRB and the absorbers. Component I is found to be 52 ± 6 pc away from the GRB, while component II presents few excited transitions and its distance is 200 +60 −80 pc. Component III only features a few, low ionization and saturated lines suggesting that it is even farther from the GRB. Component I represents the closest absorber ever detected near a GRB. This (relatively) low distance can possibly be a consequence of a dense GRB environment, which prevents the GRB prompt/afterglow emission to strongly affect the ISM up to higher distances. The hydrogen column density associated to GRB 081008 is log N H /cm −2 = 21.11 ± 0.10, and the metallicity of the host galaxy is in the range [X/H] = −1.29 to −0.52. In particular, we found [Fe/H]= −1.19±0.11 and [Zn/H]= −0.52 ± 0.11 with respect to solar values. This discrepancy can be explained by the presence of dust in the GRB ISM, given the opposite refractory properties of iron and zinc. By deriving the depletion pattern for GRB 081008, we find the optical extinction in the visual band to be A V ∼ 0.19 mag. 
The Curve of Growth analysis applied to the FORS2 spectra brings column densities consistent at the 3σ level to that evaluated from the UVES data using the line fitting procedure. This reflects the low saturation of the detected GRB 081008 absorption features.
| null |
[
"https://arxiv.org/pdf/1108.1084v1.pdf"
] | 119,220,581 |
1108.1084
|
091be75ac578f6427c8dce949740ceb6a0b702cf
|
VLT/UVES and FORS2 spectroscopy of the GRB 081008 afterglow ⋆
Aug 2011. 2002
V D'elia
INAF-Osservatorio Astronomico di Roma
Via Frascati 33I-00040Monteporzio CatoneItaly
ASI-Science Data Centre
Via Galileo GalileiI-00044FrascatiItaly
S Campana
INAF
Osservatorio Astronomico di Brera
Via E. Bianchi 4623807MerateLCItaly
S Covino
INAF
Osservatorio Astronomico di Brera
Via E. Bianchi 4623807MerateLCItaly
P D'avanzo
INAF
Osservatorio Astronomico di Brera
Via E. Bianchi 4623807MerateLCItaly
S Piranomonte
INAF-Osservatorio Astronomico di Roma
Via Frascati 33I-00040Monteporzio CatoneItaly
G Tagliaferri
INAF
Osservatorio Astronomico di Brera
Via E. Bianchi 4623807MerateLCItaly
VLT/UVES and FORS2 spectroscopy of the GRB 081008 afterglow ⋆
Mon. Not. R. Astron. Soc
000 (Aug 2011). Printed 28 April 2013 (MN LaTeX style file v2.2). Accepted ... Received ...; in original form ... Key words: gamma-rays: bursts - ISM: abundances - line: profiles - atomic data
We aim at studying the gamma-ray burst GRB 081008 environment by analysing the spectra of its optical afterglow. UVES/VLT high resolution spectroscopy of GRB 081008 was secured ∼ 5 hr after the Swift-BAT trigger. Our dataset comprises also three VLT/FORS2 nearly simultaneous spectra of the same source. The availability of nearly simultaneous high and low resolution spectra for a GRB afterglow is an extremely rare event. The GRB-Damped Lyman Alpha system at z = 1.9683 shows that the interstellar medium (ISM) of the host galaxy is constituted by at least three components which contribute to the line profiles. Component I is the redmost one, and is 20 km/s and 78 km/s redward component II and III, respectively. We detect several ground state and excited absorption features in components I and II. These features have been used to compute the distances between the GRB and the absorbers. Component I is found to be 52 ± 6 pc away from the GRB, while component II presents few excited transitions and its distance is 200 +60 −80 pc. Component III only features a few, low ionization and saturated lines suggesting that it is even farther from the GRB. Component I represents the closest absorber ever detected near a GRB. This (relatively) low distance can possibly be a consequence of a dense GRB environment, which prevents the GRB prompt/afterglow emission to strongly affect the ISM up to higher distances. The hydrogen column density associated to GRB 081008 is log N H /cm −2 = 21.11 ± 0.10, and the metallicity of the host galaxy is in the range [X/H] = −1.29 to −0.52. In particular, we found [Fe/H]= −1.19±0.11 and [Zn/H]= −0.52 ± 0.11 with respect to solar values. This discrepancy can be explained by the presence of dust in the GRB ISM, given the opposite refractory properties of iron and zinc. By deriving the depletion pattern for GRB 081008, we find the optical extinction in the visual band to be A V ∼ 0.19 mag. 
The Curve of Growth analysis applied to the FORS2 spectra brings column densities consistent at the 3σ level to that evaluated from the UVES data using the line fitting procedure. This reflects the low saturation of the detected GRB 081008 absorption features.
by an afterglow at longer wavelengths, which is crucial in order to understand the physics of these sources, but also to investigate the nature of the interstellar medium (ISM) of high redshift galaxies. Before GRBs, such studies made use of Lyman-break galaxies (LBGs, see e.g. Steidel et al. 1999) and galaxies that happen to be along the lines of sight to bright background quasars, commonly referred to as QSO-Damped Lyman Alpha (DLA) systems. However, both classes are affected by selection effects. In fact, LBGs fall in the bright end of the galaxy luminosity function and may not entirely represent typical high-redshift galaxies. On the other hand, QSO sightlines preferentially probe galaxy halos, rather than bulges or discs, for cross-section effects (Fynbo et al. 2008). Indeed, Savaglio et al. (2004; 2005) studied the ISM of a sample of faint K-band selected galaxies at 1.4 < z < 2.0, finding MgII and FeII abundances much higher than in QSO systems but similar to those in gamma-ray burst hosts. Unfortunately, these galaxies are too faint to be spectroscopically studied up to higher redshifts, using 8m class telescopes. In this context, long GRBs can be used as torchlights to illuminate the high-redshift ISM, and thus represent an independent tool to study high-redshift galaxies. Several papers report a metallic content in GRB host galaxies in the range 10^{-2} − 1 with respect to solar values (see e.g., Fynbo et al. 2006; Savaglio 2006; Prochaska et al. 2007). The GRB host metallicity is thus on average higher than in QSO-DLA systems, supporting the notion that GRBs explode well within their hosts. Since long GRBs are linked to the death of massive stars, they are thought to originate in molecular clouds. In this scenario, absorption from ground-state and vibrationally excited levels of H2 and other molecules is expected, but not observed (Vreeswijk et al. 2004; Tumlinson et al. 2007).
The non-detection of these molecular states (with the exception of GRB 080607, see Prochaska et al. 2009;Sheffer et al. 2009) could be a consequence of the intense UV flux from the GRB afterglow, which photo-dissociates the molecules. However, molecular hydrogen is not detected in QSO-DLA either (e.g., Noterdaeme et al. 2008;Tumlinson et al. 2007), possibly indicating that these molecules are just hard to see at high redshift. This is just an example of how a GRB can modify its surrounding medium. The most impressive manifestation of the transient nature of GRBs in optical spectroscopy is the detection of strong absorption features related to the excited levels of the OI, FeII, NiII, SiII and CII species and their time variability (Vreeswijk et al. 2007). This variability can not be explained assuming infrared excitation or collisional processes (Prochaska, Chen & Bloom 2006;Vreeswijk et al. 2007;D'Elia et al. 2009a), thus excitation by the intense GRB UV flux is the leading mechanism to produce these features. In this framework, the GRB/absorber distance can be evaluated comparing the observed ground state and excited level abundances with that predicted by timedependent photo-excitation codes. This distance turns out to be in the range ∼ 0.1 − 1 kpc (Vreeswijk et al. 2007;D'Elia et al. 2009a,b;Ledoux et al. 2009).
Within the described framework, the best and most complete tool to perform these kind of studies is high resolution spectroscopy. In fact, it is the only way to disentangle the GRB interstellar medium in components and to separate the contribution to the absorption coming from the excited levels from the ground state ones. In addition, a high spectral resolution allows us to check for saturation of lines a few km s −1 wide that may appear unsaturated in lower resolution spectra (see e.g. Penprase et al. 2010).
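As a reminder of the saturation issue just mentioned: on the linear part of the curve of growth the rest-frame equivalent width scales directly with column density, W_r = (πe²/m_e c²) N f λ²; once a line saturates this relation breaks down and column densities inferred from low-resolution equivalent widths become lower limits. A minimal sketch (the column density and oscillator strength below are illustrative, not measured values for GRB 081008):

```python
def ew_linear_cog(N_cm2, f_osc, wavelength_A):
    """Rest-frame equivalent width (Angstrom) in the optically thin limit:
    W = (pi e^2 / m_e c^2) * N * f * lambda^2, where the classical constant
    pi e^2 / (m_e c^2) = 8.8525e-13 cm."""
    K = 8.8525e-13                      # pi e^2 / (m_e c^2), in cm
    lam_cm = wavelength_A * 1.0e-8      # Angstrom -> cm
    return K * N_cm2 * f_osc * lam_cm**2 * 1.0e8  # cm -> Angstrom
```

Doubling N doubles W_r only in this regime; deviations from this linearity signal saturation, which high-resolution profiles expose directly.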
In this paper we present data on GRB 081008, observed both in high and low resolution UVES and FORS2 at the VLT. The paper is organized as follows. Section 2 summarizes the GRB 081008 detection and observations from the literature; Sect. 3 presents the UVES observations and data reduction; Sect. 4 is devoted to the study of the features from the host galaxy, in particular their metallicity and distance from the GRB explosion site; Sect. 5 presents the FORS2 data and makes a comparison with the UVES ones; finally in Sect. 6 the results are discussed and conclusions are drawn. We assume a concordance cosmology with H0 = 70 km s −1 Mpc −1 , Ωm = 0.3, ΩΛ = 0.7. Hereafter, with [X/H] we refer to the X element abundance relative to solar values.
GRB 081008
GRB 081008 was discovered by Swift/BAT on October 8, 2008, at 19:58:29 UT, and was detected by both the XRT and the UVOT instruments (Racusin et al. 2008). The UVOT magnitude in the white filter was reported to be 15.0 at 96 s from the trigger. The afterglow was also detected in all filters (from B to K) by SMARTS/ANDICAM ∼ 4 hr post burst (Cobb 2008). The redshift was secured by the Gemini-South/GMOS, which observed the afterglow 5 hr after the Swift trigger, reporting a redshift of z = 1.967 (Cucchiara et al. 2008a). This value was later confirmed by our VLT/UVES+FORS2 data (D'Avanzo et al. 2008). The host galaxy was identified in the Gemini-South/GMOS acquisition image, and spectroscopically confirmed to be at the GRB redshift. The host of GRB 081008 has R = 20.75 ± 0.01, which corresponds to an absolute AB magnitude of −21.5 (Cucchiara et al. 2008b). A multiwavelength study of the prompt event and the early afterglow phase of GRB 081008 is reported by Yuan et al. (2010, hereafter Y10), which present Swift (BAT+XRT+UVOT), ROTSE-III and GROND data.
UVES OBSERVATIONS AND DATA REDUCTION
The GRB 081008 afterglow was observed with the high resolution UV-visual echelle spectrograph (UVES, Dekker et al. 2000), mounted at the VLT-UT2 telescope, in the framework of the ESO program 082.A-0755. Observations began on 9th October 2008 at 00:16:43 UT (∼ 4.25 hr after the Swift/BAT trigger), when the magnitude of the afterglow was R ∼ 18.5. Data were acquired under good observing conditions, with seeing ∼ 0.7′′. Only the UVES dichroic-1 (red and blue arm) was used due to observational and scheduling constraints. The net exposure time of the observation is 30 minutes. The slit width was set to be 1′′ (corresponding to a resolution of R = 40000) and the read-out mode was rebinned to 2 × 2 pixels. The spectral range of our observation is ∼3300Å to ∼3870Å, ∼4780Å to ∼5750Å, and ∼5830Å to ∼6810Å. Table 1 summarizes our observations. The data reduction was performed using the UVES pipeline (version 2.9.7, Ballester et al. 2000). The signal-to-noise ratio per pixel is ∼ 3 − 5 in the blue arm and ∼ 5 − 8 in the red one. The noise spectrum, used to determine the errors in the best-fit line parameters, was calculated from the real, background-subtracted spectrum, using line-free regions to evaluate the standard deviation of continuum pixels. Since the noise spectrum has been produced after the pipeline processing and the background subtraction, it takes into account possible systematic errors coming from the data reduction process. Fig. 1 shows the full, smoothed and normalized UVES spectrum.
UVES DATA ANALYSIS
The gas residing in the GRB host galaxy is responsible for many features observed in the GRB 081008 afterglow spectrum. Metallic features are apparent from neutral (OI) and low-ionization (AlII, AlIII, SiII, CrII, FeII, NiII, ZnII) species. In addition, strong absorption lines from the fine structure levels of SiII, FeII and from the metastable levels of FeII and NiII are identified, suggesting that the intense radiation field from the GRB excites such features. Table 2 gives a summary of all the absorption lines due to the host galaxy gas and reports their rest-frame equivalent widths (Wr). The spectral features were analyzed with FITLYMAN (Fontana & Ballester 1995), using the atomic parameters given in Morton (2003). The probed ISM of the host galaxy is resolved into two main components separated by 20 km s−1 (Figs. 2 and 3). The wealth of metal-line transitions allows us to precisely determine the redshift of the GRB host galaxy. This yields a vacuum-heliocentric value of z = 1.9683 ± 0.0001, setting the reference point to the redmost component (hereafter component I). The absorption features have been fitted with Voigt profiles, fixing the redshift of the two components when studying different lines. All transitions appear to be nicely lined up in redshift, with the exception of component II of SiIIλ1808. We attribute this misalignment to contamination by another feature, and fit just the redmost side of component II. All ground state and metastable species present absorption features in both components, while fine structure levels appear in component I only. Two sharp features can be seen at v = ±80 km s−1 from the FeII a 6D5/2 line (Fig. 2). They cannot be separated in the FORS2 spectrum, thus we cannot safely assess if they are real or not. The Doppler b parameter has been linked between different excited transitions belonging to the same species.
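For orientation, each absorption component fitted here is described by a column density N, a redshift and a Doppler parameter b. Keeping only the Gaussian (Doppler) core of the Voigt profile, which is adequate away from the damping wings, the normalized flux can be sketched as follows (the atomic data and cloud parameters below are illustrative placeholders):

```python
import math

def tau(v_kms, N_cm2, f_osc, lambda0_A, b_kms):
    """Optical depth at velocity offset v: tau(v) = tau0 * exp(-(v/b)^2), with
    tau0 = sqrt(pi) e^2/(m_e c) * f * lambda0 * N / b
         = 1.497e-15 * f * lambda0[A] * N[cm^-2] / b[km/s]."""
    tau0 = 1.497e-15 * f_osc * lambda0_A * N_cm2 / b_kms
    return tau0 * math.exp(-(v_kms / b_kms) ** 2)

def norm_flux(v_kms, N_cm2, f_osc, lambda0_A, b_kms):
    # Continuum-normalized flux of a single absorption component
    return math.exp(-tau(v_kms, N_cm2, f_osc, lambda0_A, b_kms))
```

Fitting packages adjust (N, z, b) of each component so that the product of such profiles, convolved with the instrumental resolution, matches the data.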
A small variation of the Doppler parameter is allowed among different species, but the fits are quite good even fixing it. The values for components I and II are ∼ 10 and ∼ 20 km s −1 , respectively. An exception to this behaviour is represented by component II of ZnII. In order to obtain a good fit, a b ∼ 4 and ∼ 50 km s −1 is required for component I and II, respectively. This large b value in component II is necessary to adequately fit what appears as a low level of the continuum in particular in the ZnIIλ2026 feature. The large difference between the b parameters deduced for ZnII and CrII is odd, but we do not have a simple explanation for it. The column densities and b parameters for all the elements and ions of the host galaxy's absorbing gas are reported in Table 3. A third component is actually identified at −78 km s −1 for some low-ionization lines only, i.e., the OIλ1302, AlIIλ1670 and SiIIλ1260, and for the fine structure level SiIIλ1264 (see
Abundances
The GRB 081008 redshift was high enough for the hydrogen Lyα line to enter the UVES spectral window. Unfortunately, the UVES spectrum is extremely noisy in this region, and the derived hydrogen column density, log(NHI/cm−2) = 21.33 ± 0.12, is quite uncertain. The fit is plotted in Fig. 5 (top panel), superimposed on the smoothed UVES spectrum. The fit is particularly poor on the wings, possibly because more than one component is needed to model the absorption. To obtain a better estimate of the column density, we used the FORS2 spectrum, which has a better S/N (see next section for details). The two-component fit shown in Fig. 5 (bottom panel) gives a better representation of the data. The two components are centered at z1 = 1.944 and z2 = 1.975 and have column densities of log(NHI/cm−2) = 20.82 ± 0.14 and log(NHI/cm−2) = 20.79 ± 0.12, respectively. The FORS2 total column density is log(NHI/cm−2) = 21.11 ± 0.10. This is our best-fit result for NH and will be used in the following. The metallicity has been derived by summing all non-saturated component, excited-level and ionic contributions belonging to the same atom, dividing these values by NH and comparing them to the corresponding solar values given in Asplund et al. (2009). The upper limits of component III would increase the total column densities by < 20% in the worst cases, so they were not included in the computation. The results are listed in Table 4. Column 2 reports the total abundance of each atom, while columns 3 and 4 report the absolute and solar-scaled NX/NH ratios, respectively, with X the corresponding element in column 1. Lower limits are reported whenever saturation does not allow us to securely fit the metallic column densities (see e.g. Fig. 4, where the line profiles reach the zero value of the normalized flux).
In particular, for OI and AlII we also considered the values of the third, saturated component, while for SiII this has not been considered, since the fit to SiIIλ1260 resulted in an N value for component III which is considerably lower than that of components I and II. We derived metallicity values between 0.3 and 0.05 of the solar ones. We caution, however, that many transitions belonging to other ionization states, which are commonly observed in GRB afterglow spectra, could not be taken into account, because they are outside the UVES dichroic-1 spectral range. In addition, dust depletion can prevent the observation of part of the metallic content of the GRB 081008 host galaxy. The reported relative abundances should then be considered as lower limits to the true GRB 081008 metallicity. However, some considerations on higher ionization states are possible by analyzing the FORS2 spectra, and the dust content can be investigated through the study of the depletion pattern (see Sects. 5 and 6).
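The abundance bookkeeping just described (summing component column densities in linear space, normalizing by NH and comparing to solar) can be sketched as follows; the solar values for Fe and Zn are taken from Asplund et al. (2009), and the column densities from Tables 3 and 4:

```python
import math

def sum_log_columns(log_Ns):
    """Add column densities given as log10(N / cm^-2) in linear space."""
    return math.log10(sum(10**n for n in log_Ns))

def x_over_h(log_NX, log_NH, log_solar_XH):
    """Solar-scaled abundance: [X/H] = log(N_X/N_H) - log(X/H)_solar."""
    return (log_NX - log_NH) - log_solar_XH

log_NH = 21.11                       # FORS2 Ly-alpha fit (this work)
solar = {'Fe': -4.50, 'Zn': -7.44}   # log(X/H)_sun, Asplund et al. (2009)

# FeII components I + II (15.21 and 14.98) recover the Table 4 total
log_NFe = sum_log_columns([15.21, 14.98])      # ~ 15.41

print(x_over_h(log_NFe, log_NH, solar['Fe']))  # [Fe/H] ~ -1.2
print(x_over_h(13.15, log_NH, solar['Zn']))    # [Zn/H] ~ -0.5
```

Note that the sum must be done in linear, not logarithmic, units: adding logs would multiply the column densities instead.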
(Table 3, metal-line section: N (cm−2) and b (km s−1) for components I (0 km s−1), II (−20 km s−1) and III (−78 km s−1); the individual entries were not recoverable.)
Excited levels
The level structure of an atom or ion is characterized by a principal quantum number n, which defines the atomic level, and by the spin-orbit coupling (described by the quantum number j), which splits these levels into fine structure sublevels. Excited features are routinely detected in GRB absorption spectroscopy, at the host redshift, due to the population of both n > 1 and/or n = 1 fine structure levels. GRB 081008 behaves the same way. In fact, component I features the first and second fine structure levels of the FeII ground state (a 6 D), the first fine structure level of the SiII 2 P 0 , and the FeII a 4 F 9/2 , FeII a 4 D 7/2 , NiII 4 F 9/2 metastable levels (the subscript represents the spin-orbit quantum number j). Moreover, the FeII a 4 F 7/2 and NiII 4 F 9/2 excited states are also detected in component II (see Table 3 for details).
There is an extensive literature on the population of excited states in the medium surrounding GRBs and on their detection in afterglow spectra (see e.g. Prochaska, Chen & Bloom 2006; Vreeswijk et al. 2007; D'Elia et al. 2010 and references therein). There is general consensus that these features are produced by indirect UV pumping by the afterglow, i.e., through the population of higher levels followed by the depopulation into the states responsible for the absorption features. This has been demonstrated both by the detection of variability of fine structure lines in multi-epoch spectroscopy (Vreeswijk et al. 2007; D'Elia et al. 2009a), and through the column density ratios of different excited levels when multiple spectra were not available (Ledoux et al. 2009; D'Elia et al. 2009b).
Concerning GRB 081008, the high column density of the first metastable level of FeII (a4F9/2) with respect to the fine structure levels of the ground state can hardly be explained with a level population distribution given by a Boltzmann function (Vreeswijk et al. 2007), meaning that collisional excitation can be safely rejected. The lack of multi-epoch spectroscopy does not allow us to completely rule out the possibility that the exciting UV flux comes from regions of high star formation rather than from the GRB; in fact, fine structure emission lines are present in Lyman-break, high-redshift galaxies (see Shapley et al. 2003). If we assume that this flux comes from the GRB, we can estimate the GRB/absorber distance by comparing observed column densities to those predicted by a time-dependent photo-excitation code for the time when the spectroscopic observations were acquired. The photo-excitation code is that used by Vreeswijk et al. (2007) and D'Elia et al. (2009a), to which we refer the reader for more details. Our equations take into account the (4π)−1 correction factor to the flux experienced by the absorbing gas described by Vreeswijk
(2011). We assume that the species for which we are running the code are in the ground state before the GRB blast wave reaches the gas. The GRB flux behavior before the UVES observation was estimated using the data in Y10 (lightcurve and spectral index), with no spectral variation assumed during the time interval between the burst and our observation. We concentrate on FeII and SiII levels because the NiII ground state has a column density not far from the 90% confidence limit of log(NNiII/cm−2) = 13.3, and thus the uncertainties on such values are high (Table 3). In addition, the NiII ground state is detected only through the λ1741 transition, because the lower oscillator strengths of the λ1709 and λ1751 lines prevent a detection of these features above the 90% level. Fig. 6 (top) shows the model that best fits the FeII data, obtained for a distance of 50 pc and a Doppler parameter of 20 km s−1. Fig. 6 (bottom) reproduces the behaviour of the reduced χ2 as a function of the GRB/absorber distance. The distance of component I from the GRB explosion site is dI,FeII = 51+21−11 pc at the 90% confidence level. The same calculation was performed using the SiII atomic data. The results are displayed in Fig. 7, and the estimated distance is dI,SiII = 52 ± 6 pc, which is consistent with what was estimated using the FeII data.
For component II we have far fewer excited transitions. Fig. 8 shows the model which best fits the FeII data and the two theoretical curves compatible within the error bars with the FeII a4F9/2 excited-level column density, which is actually the only one with a positive detection in component II. The resulting distance between the GRB and this absorbing component is dII,FeII = 200+60−80 pc (90% confidence level), a larger value than dI, as expected given the paucity of excited transitions in component II.
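The distance estimate itself reduces to a one-parameter χ2 scan: the model level populations are recomputed for each trial GRB/absorber distance and compared with the measured column densities. The following toy version is not the time-dependent code of Vreeswijk et al. (2007) used in this work; it replaces the full radiative-transfer calculation with a simple inverse-square flux scaling of a reference model, and all numerical values are invented for illustration:

```python
import numpy as np

def chi2_scan(distances_pc, observed, sigma, model_at_50pc):
    """Toy chi^2 scan over the GRB--absorber distance.  In this toy model
    the excited-level column densities scale with the pumping flux, i.e.
    as (50 pc / d)^2 relative to a reference model computed at 50 pc; the
    real calculation integrates the time-dependent level populations."""
    chi2 = []
    for d in distances_pc:
        model = model_at_50pc * (50.0 / d) ** 2
        chi2.append(np.sum(((observed - model) / sigma) ** 2))
    chi2 = np.asarray(chi2)
    return distances_pc[np.argmin(chi2)], chi2

# hypothetical excited-level columns (linear units) and 1-sigma errors
obs = np.array([2.1e14, 4.0e13])
sig = np.array([0.3e14, 0.8e13])
ref = np.array([2.0e14, 4.2e13])   # model prediction at 50 pc

best_d, chi2 = chi2_scan(np.linspace(10, 300, 581), obs, sig, ref)
```

The confidence interval quoted in the text corresponds to the range of distances where χ2 stays within the appropriate increment above its minimum.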
FORS2 SPECTROSCOPY
In the framework of the ESO program 082.A-0755, we observed the afterglow of GRB 081008 also with the FORS2 low-resolution spectrograph (R = 780), mounted on VLT/UT1. We took three spectra of 900 s each, starting around Oct 09 at 00:20 UT (about 4.4 hours after the burst). We used the 600B grism, whose spectral coverage is 330−630 nm. The extraction of the spectra was performed within the MIDAS environment. Wavelength and flux calibration of the three spectra were obtained by using the helium-argon lamp and observing spectrophotometric stars. Table 1 reports a summary of our FORS2 observations. We searched for variability in the Wr of the FORS2 absorption lines, but we found none at the 2σ level. This is not surprising, since fine structure and excited lines are expected to vary by less than ∼ 0.05 dex in column density during the acquisition time of the FORS2 spectra, which is ∼ 15 min rest frame (see Figs. 6-8). Since no variability is detected, we co-added the three spectra to improve the signal-to-noise ratio of our data, obtaining S/N ∼ 60 − 80 at λ > 4000 Å. The resulting spectrum is presented in Fig. 9, together with the spectral features identified at z = 1.97, a redshift consistent with that estimated using the UVES data. A list of the features detected in the FORS2 spectrum is reported in the first column of Table 5.
The Voigt-fitting procedure is not adequate to compute the column densities of metallic species in low-resolution spectroscopy. In this case, the Curve of Growth (COG) analysis (see e.g. Spitzer 1978) must be applied. For weak absorption lines, with width Wr < 0.1 Å, and for Doppler parameters b > 20 km s−1, Wr is proportional to the column density N and virtually insensitive to the Doppler parameter itself. For stronger lines this no longer holds, and the relation between Wr and N is described by a COG, which is a function of b. In order to fit the correct COG to the data and to estimate b, different transitions (with different oscillator strengths f) of the same species are needed. Following Spitzer (1978), we built a code to perform this fit on our FORS2 data. To test our code, we computed Wr for all the UVES transitions featuring two components, and applied our fitting program. The result of the fit is shown in Fig. 10 (top panel), and the estimated column densities are reported in Table 6 (errors are given at the 1σ level).
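The linear part of the COG mentioned above has a closed form: Wr = (πe²/mec²) N f λ² = 8.85 × 10⁻²¹ N f λ², with N in cm⁻² and λ in Å. A minimal sketch (with illustrative values; our fitting code follows the full Spitzer 1978 treatment, which also covers the saturated regime):

```python
LINEAR_COG = 8.85e-21   # pi e^2 / (m_e c^2), in Angstrom-friendly units

def equivalent_width_linear(N, f, wav_A):
    """Rest-frame Wr (Angstrom) on the linear part of the curve of growth.
    Valid only for weak, unsaturated lines (Wr <~ 0.1 A)."""
    return LINEAR_COG * N * f * wav_A ** 2

def column_from_width(Wr_A, f, wav_A):
    """Invert the linear relation: N (cm^-2) from a measured Wr (A)."""
    return Wr_A / (LINEAR_COG * f * wav_A ** 2)
```

On the flat part of the COG this inversion underestimates N, which is exactly the failure mode of low-resolution analyses discussed in the Conclusions.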
The effective Doppler parameter evaluated from the fit, b = 23 km s−1, is compatible with that estimated from the line-profile fitting. To compare the column densities estimated with the two methods, we sum for each species the contribution coming from the two components using the line-fitting method (see values in Table 3), and report the results in Table 6. The agreement between the line-fitting and COG analyses is very good: each column density is within 1σ of the corresponding value estimated using the other method. The only exception is FeII5s, whose column density values however overlap at the 2σ level.

Figure 9. The flux-calibrated, co-added FORS2 spectrum of the GRB 081008 afterglow, together with the spectral features identified at z = 1.9683.
We now apply the COG analysis to the FORS2 spectrum. First of all, we compute the Wr from the data. As shown in Table 5, despite the identification of nearly 30 transitions, reliable Wr values can be evaluated only for 13 (second column of the table). This is because the lower FORS2 resolution does not enable us to separate many of these transitions, which are blended with each other. We then run the COG code using the FORS2 Wr, and evaluate the corresponding column densities. The results are shown in the third column of Table 5, while the last column shows the UVES column densities for comparison. Errors are again at the 1σ level, and Fig. 10 (bottom panel) shows the graphical output of the fit. The estimated effective Doppler parameter (b = 31 ± 2 km s−1) reproduces quite well the combination of the values computed for components I and II (∼ 10 and ∼ 20 km s−1, respectively, separated by ∼ 20 km s−1) using the line-fitting method. The FORS2 and UVES spectra give consistent column densities, with the 3σ confidence regions overlapping in the worst cases.
CONCLUSIONS AND DISCUSSION
In this paper we present high and low resolution spectroscopy of the optical afterglow of GRB 081008, observed using UVES and FORS2 spectrographs at the VLT ∼ 5 hr after the trigger. We detect several absorption features (both neutral and excited) at the common redshift of z = 1.9683. The spectra show that the gas absorbing the GRB afterglow light can be described with three components identified in this paper as I, II and III, according to their decreasing velocity values.
We estimated the distances between the GRB and the absorbers. For component I we find dFeII,I = 51+21−11 pc and dSiII,I = 52 ± 6 pc, using FeII and SiII, respectively. SiII leads to a smaller uncertainty because its fine structure level is more sensitive to the flux experienced by the absorber. Other papers mainly use FeII as a distance estimator, so for a safer comparison it is better to consider our FeII value. For component II this distance is greater, dII = 200+60−80 pc. We stress that these values are obtained assuming a three-component absorber. However, we cannot exclude a higher number of components, because our spectrum has a low S/N and a limited resolution. Component II is farther away from the GRB than component I, as expected given the lack of fine structure lines in this absorber. Component III does not show excited levels at all, and only shows low-ionization states. Therefore, it is produced by an absorber located even farther from the GRB, in a region which is not significantly influenced by the prompt/afterglow emission.
Component I of GRB 081008 is the closest to a GRB ever recorded. In fact, for the six other GRBs for which GRB/absorber distances have been estimated, the closest components are at d = 80−700 pc from the GRB (Vreeswijk et al. 2007; D'Elia et al. 2009a,b; Ledoux et al. 2009; D'Elia et al. 2010; Thöne et al. 2011). The values reported in the literature have been corrected for the (4π)−1 factor discussed by Vreeswijk (2011). This behaviour can be interpreted as due to a dense environment close to the GRB explosion site. This high density is possibly witnessed by a non-negligible dust amount (see below) and by the metal content of the GRB surrounding medium. In fact, the GRB 081008 surroundings have the highest metallicity and the highest abundances of, e.g., FeII and NiII among this sub-sample of GRBs. This high density in the GRB surroundings could constitute a barrier to the GRB prompt/afterglow emission, which is then not able to strongly excite the interstellar medium up to the distances reached by the other GRBs.
The neutral hydrogen column density is log(NH,opt/cm−2) = 21.11 ± 0.10, while that estimated from Swift XRT data is log(NH,X/cm−2) = 21.66+0.14−0.26 (Campana et al. 2010). The latter value is for a solar-abundance medium. Using NH,opt we evaluate the GRB 081008 host galaxy's metallicity. The values we find are in the range [X/H] = −1.29 to −0.52 with respect to the solar abundances. This value lies in the middle of the GRB distribution (Savaglio 2006; Prochaska et al. 2007; Savaglio, Glazebrook & Le Borgne 2009). From X-ray data a limit of [X/H] > −1.83 (90% confidence limit) can be set assuming

Table 5. GRB 081008 absorption features detected in the FORS2 spectrum, together with their Wr and column densities. UVES data are shown for comparison.
a solar abundance pattern and requiring that the absorbing medium is not Thomson thick. If we set the metallicity to [X/H] = −0.5, the absorbing column density in the X-rays is higher, namely log(NH,X/cm−2) = 22.24+0.19−0.30, and higher still for lower metallicities. Fynbo et al. (2009) and Campana et al. (2010) show that in GRBs with a detectable Lyα feature (i.e., those at z > 2) NH,X is on average a factor of 10 higher than NH,opt, and GRB 081008 follows this trend. The intense GRB flux, which ionizes the hydrogen and prevents part of it from being optically detected, is the common explanation for this discrepancy (Fynbo et al. 2009; Campana et al. 2010; Schady et al. 2011).
Transition   Wr (Å)a   N b   UVES N b
CIIλ1334     BLEND     -     -
CIVλ1548     BLEND     -     -
CIVλ1550     BLEND     -     -
OIλ1302      BLEND     -     SAT
OIλ1304      BLEND     -     -
OIλ1306      BLEND     -     -
AlIIλ1670
It is worth noting that the observed abundances of FeII and ZnII are significantly different ([Fe/H] = −1.19 ± 0.11 and [Zn/H] = −0.52 ± 0.11). This can be ascribed to the different refractory properties of the two elements: the former preferentially tends to be locked into dust grains, while the latter prefers the gas phase. The comparison between these 'opposite' elements can thus provide information on the dust content in GRB environments. In order to be more quantitative, we derive the dust-depletion pattern for the GRB 081008 environment, following the method described in Savaglio (2000). We consider the four depletion patterns observed in the Milky Way, namely those in the warm halo (WH), warm disk + halo (WHD), warm disk (WD) and cool disk (CD) clouds (Savage & Sembach 1996). We find that the best fit to our data is given by the WH cloud pattern, with a metallicity of log ZGRB/Z⊙ ∼ −0.5 and a GRB dust-to-metal ratio comparable to that of the WH environment, i.e., d/dWH = 1 (Fig. 11). This metallicity value is consistent with our [Zn/H] measurement. This agreement is self-consistent with the use of zinc as a good indicator of metallicity. Since the latter quantity is linked to the extinction (see e.g. Savaglio, Fall & Fiore 2003), we derive AV ∼ 0.19 mag along the GRB 081008 line of sight. We check this value by modeling the flux-calibrated FORS2 spectrum. The SED is dominated by the Lyα, which is difficult to model given the high fluctuations and other absorption lines. Anyway, the inferred AV value is low and compatible with that evaluated from the dust depletion. Another hint of dust is the non-detection of FeII in the third component. The FeII column densities in components I and II are very similar, and this leads us to believe that in component III iron is present as well, but in dust form. The higher presence of dust in components far away from the GRB has already been pointed out by D'Elia et al. (2007).
They report a possible presence of dust in component III of GRB 050730, while the closer component II (featuring FeII fine structure lines) shows more iron in the gas phase. The detection of more dust far away from the GRB can be explained by the fact that dust grains containing iron tend to be efficiently destroyed by the blast wave following a GRB explosion (Perna, Lazzati & Fiore 2003).
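The refractory-versus-volatile comparison underlying Fig. 11 can be condensed into the [Fe/Zn] ratio; a sketch using our Table 4 totals and solar values from Asplund et al. (2009) (the full fit instead compares all elements against the four Milky Way depletion patterns):

```python
def relative_abundance(log_NA, log_NB, log_solar_A, log_solar_B):
    """[A/B] = log(N_A/N_B) - log(A/B)_solar.  For a refractory/volatile
    pair such as Fe/Zn, values well below zero signal dust depletion."""
    return (log_NA - log_NB) - (log_solar_A - log_solar_B)

# GRB 081008 totals (Table 4); solar log(X/H): Fe = -4.50, Zn = -7.44
fe_zn = relative_abundance(15.42, 13.15, -4.50, -7.44)
print(round(fe_zn, 2))   # -0.67: a significant fraction of Fe is in dust
```

A convenient property of this ratio is that the hydrogen column density cancels out, so it is unaffected by the NH uncertainty discussed above.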
The analysis of the FORS2 spectra extends our surveyed wavelength range, allowing the detection of higher-ionization species, such as CIV and SiIV. Anyway, line profiles of high- and low-ionization species rarely match in redshift space, and often, when they do, it is because the line blending
Figure 11. Depletion patterns in the absorbing gas of GRB 081008. Filled squares are taken from average gas-phase abundance measurements in warm halo (blue), warm disk + halo (green), warm disk (red) and cool disk (cyan) clouds of the Milky Way (Savage & Sembach 1996). Filled circles represent our data points, which are best fitted by the warm halo cloud pattern.
cannot be resolved in a spectrum, regardless of resolution and S/N.
We stress that the availability of simultaneous high- and low-resolution spectra of a GRB afterglow is an extremely rare event. In this context, the comparison of the column densities obtained by fitting the line profiles of a high-resolution spectrum with those estimated through the Curve of Growth analysis applied to a low-resolution one can be extremely important. In fact, this can help to determine a range of column densities for which it is safe to apply the Curve of Growth analysis when high-resolution data are missing. This is because high column densities can result in saturation, a problem that is difficult to address using low-resolution spectra only (see e.g. Penprase et al. 2010). Prochaska (2006) discusses at length the limits and perils of the COG analysis applied to low-resolution data, finding that this kind of analysis tends to underestimate the column densities of the absorbing species. This is because strong transitions drive the COG fit, since the relative error associated with their Wr is smaller than that for weak ones. Nevertheless, strong transitions are more affected by saturation, and in order to match their observed column densities, the COG fit is forced towards high values of the effective Doppler parameter. High-resolution data often show that the main contribution to the column density of strong transitions comes from one narrow component; on the other hand, the main contribution to the Wr comes from other components which account for a small fraction of the column density. These inferred high values of the effective Doppler parameter are thus mimicking a more complex situation, with the result of underestimating the real column densities. As for GRB 081008, the UVES observations show no or just mild saturation even for the strongest transitions, and the two main components give a similar contribution to the total column densities.
This is the reason why there is a good agreement between the COG analysis of the low-resolution data and the line-fitting analysis of the high-resolution ones for this particular GRB (within 3σ in the worst cases).
Finally, we detect two weak intervening systems in our spectra. The first one is a CIV absorber in the FORS2 spectrum at z = 1.78, and the second one is a MgII system in the UVES spectrum at z = 1.286. This last system has Wr(M gIIλ2796) = 0.3Å, the detection limit being 0.1Å at the 2σ confidence level. The redshift path analyzed for MgII is z = 0.18 − 0.38 and z = 0.71 − 1.43 for the UVES spectrum, and z = 0.36 − 1.21 for the FORS2 one.
Figure 1. The full, smoothed and normalized UVES spectrum. Solid lines indicate the noise level as a function of wavelength.
These lines (see Fig. 4) are heavily saturated, and their column densities reported in the table just set a lower limit to the true values. The reported upper limits are at the 90% confidence level.
Figure 2. The FeII ground and excited absorption features. Solid lines represent the two-Voigt-component best-fit model. Vertical lines identify the component velocities. The zero point has been arbitrarily placed at the redshift of the redmost component (z = 1.9683). g.s. and n* indicate ground state and n-th excited transitions, respectively.
Figure 3. The NiII (top panel), AlIII and SiII (middle panel), CrII and ZnII (bottom panel) absorption features. Solid lines represent the two-Voigt-component best-fit model. Vertical lines identify the component velocities. The zero point has been arbitrarily placed at the redshift of the redmost component (z = 1.9683). g.s. and n* indicate ground state and n-th excited transitions, respectively.

Figure 4. The OIλ1302, AlIIλ1670, SiIIλ1260 ground state and SiIIλ1264 fine structure transitions. These lines need a three-Voigt-component model to be fitted and are heavily saturated. The zero point has been arbitrarily placed at the redshift of the redmost component (z = 1.9683). g.s. and n* indicate ground state and n-th excited transitions, respectively.
Figure 5. The Lyα absorption feature at the GRB 081008 redshift. The top panel shows the single-Voigt-component best-fit model for the UVES spectrum. The bottom panel shows the double-component best-fit model for the FORS2 spectrum. The UVES fit is poor, while the FORS2 one gives a more reliable description of NH.
Figure 6. Top panel: FeII column densities for the ground level (open circle), fine structure levels of the ground state (filled circles), first metastable (open square) and second metastable (open triangle) transitions for component I in the spectrum of GRB 081008. Column density predictions from our time-dependent photo-excitation code are also shown. They refer to the ground level (dotted line), fine structure level (solid lines), first and second excited level (dashed and thick solid lines, respectively) transitions, in the case of an absorber placed at 50 pc from the GRB. Bottom panel: the reduced χ2 as a function of the distance for the model reproduced in the upper panel. Dashed lines indicate the best-fit distance and enclose the 90% confidence range.
Figure 7. The SiII column densities for the ground level (open circle) and first fine structure level (filled circle) transitions for component I in the spectrum of GRB 081008. Column density predictions from our time-dependent photo-excitation code are also shown. They refer to the ground level (dotted line) and first fine structure level (thick solid line) transitions, in the case of an absorber placed at 52 pc from the GRB. The two thin solid lines display the models which enclose the fine structure level data at the 90% confidence level (error bars for this transition are drawn both at 1σ and 90% confidence levels).
Figure 8. The FeII column densities for the ground level (open circle), first fine structure level (upper limit) and first excited level (open square) transitions for component II in the spectrum of GRB 081008. Column density predictions from our time-dependent photo-excitation code are also shown. They refer to the ground level (dotted line), first fine structure level (solid line) and first excited level (thick dashed line) transitions, in the case of an absorber placed at 200 pc from the GRB. The two thin dashed lines display the models which enclose the excited level data at the 90% confidence level (error bars for this transition are drawn both at 1σ and 90% confidence levels).
Figure 10. Top panel: the COG analysis tested using the UVES species featuring two components. Bottom panel: COG analysis applied to the FORS2 lines with a measured Wr. Solid lines represent the best fit obtained using the reported b values. Dashed lines show the b = ∞ curve for comparison. The COG fits components I and II together in both plots.
Table 1. UVES and FORS2 setups.

Instrument  Setup (nm)   Time from burst (hr)  Exposure (s)  Wavelength (Å)  Slit width  Resolution  S/N
UVES        Dic 1, 346   4.30                  1800          3300-3870       1"          40 000      ∼ 3-5
UVES        Dic 1, 580   4.30                  1800          4780-6810       1"          40 000      ∼ 5-8
FORS2       600B+22 (A)  4.37                  900           3300-6300       1"          780         ∼ 35-50
FORS2       600B+22 (B)  4.63                  900           3300-6300       1"          780         ∼ 35-50
FORS2       600B+22 (C)  4.88                  900           3300-6300       1"          780         ∼ 35-50
FORS2       A+B+C        4.63                  2700          3300-6300       1"          780         ∼ 60-80
Table 2. Rest-frame equivalent widths of the UVES features.

Species              Transition  Wr (Å)  ∆Wr (Å, 1σ)
OI 3P2 (g.s.)        1302        0.57    0.05
AlII 1S0 (g.s.)      1670        0.67    0.01
AlIII 2S1/2 (g.s.)   1854        0.22    0.01
                     1862        0.13    0.02
SiII 2P0 1/2 (g.s.)  1260        0.63    0.07
                     1808        0.20    0.02
SiII 2P0 3/2 (1*)    1264        0.63    0.07
                     1816        0.06    0.02
CrII 2S1/2 (g.s.)    2056        0.20    0.02
                     2062        0.14    0.02
                     2066        0.11    0.02
FeII a6D9/2 (g.s.)   2249        0.12    0.01
                     2260        0.20    0.02
FeII a6D7/2 (1*)     1618        0.05    0.01
                     1621        0.11    0.01
FeII a6D5/2 (2*)     1629        0.03    0.01
FeII a6D3/2 (3*)     1634        0.03    0.01
                     1636        0.03    0.01
FeII5s a4F9/2 (5*)   1637        0.04    0.01
                     1612        0.12    0.01
                     1702        0.21    0.02
FeII a4D7/2 (9*)     1635        0.03    0.01
NiII 2D5/2 (g.s.)    1741        0.07    0.02
NiII 4F9/2 (2*)      2166        0.19    0.01
                     2217        0.27    0.01
                     2223        0.09    0.02
ZnII 2S1/2 (g.s.)    2026        0.19    0.02
                     2062        0.09    0.02
Table 3. Absorption-line logarithmic column densities for the three components of the main system, derived from the UVES spectrum.

Species   Observed transitions  N (cm−2)
HI 2S1/2  Lyα (UVES)            21.33 ± 0.12
HI 2S1/2  Lyα (FORS2)           21.11 ± 0.10
          Components            20.82 ± 0.14 (z1 = 1.944)
                                20.79 ± 0.12 (z2 = 1.975)
Table 4. Metallicity computed from the UVES data.

Element X  log NX/cm−2     log NX/NH       [X/H]
O          > 15.12 ± 0.06  > −5.99 ± 0.13  > −2.68 ± 0.11
Al         > 13.70 ± 0.04  > −7.41 ± 0.13  > −1.86 ± 0.11
Si         15.75 ± 0.04    −5.36 ± 0.12    −0.87 ± 0.10
Cr         13.83 ± 0.03    −7.28 ± 0.08    −0.92 ± 0.10
Fe         15.42 ± 0.04    −5.69 ± 0.13    −1.19 ± 0.11
Ni         13.74 ± 0.07    −7.37 ± 0.13    −1.29 ± 0.12
Zn         13.15 ± 0.04    −7.96 ± 0.13    −0.52 ± 0.11
The initial column densities of the ground states were computed from the observed column densities of all the levels of each ion, i.e., we are assuming that the species are not excited at t = 0. The initial values for FeII and SiII are log(NSiII/cm−2) = 15.63 ± 0.03 and log(NFeII/cm−2) = 15.21 ± 0.02 for component I, and log(NFeII/cm−2) = 14.98 ± 0.04 for component II. Finally, the Doppler parameter used as input to this model has been left free to vary between 10 and 20 km s−1, i.e. the range of values that best fit the absorption features of components I and II.
Table 6. Comparison between UVES column densities evaluated with the line-fitting and COG methods. All values are logarithmic (in cm−2).

Species       N (COG analysis)   N (line fitting)
AlIII         13.29 +0.05 −0.10  13.30 ± 0.03
SiII          15.66 +0.06 −0.12  15.60 ± 0.03
CrII          13.83 +0.03 −0.07  13.83 ± 0.03
FeII (g.s.)   15.31 +0.01 −0.07  15.33 ± 0.02
FeII a4F9/2   14.13 +0.07 −0.11  14.33 ± 0.05
NiII (g.s.)   13.81 +0.01 −0.03  13.74 ± 0.07
NiII a4F9/2   13.73 +0.05 −0.09  13.75 ± 0.02
ZnII          13.14 +0.05 −0.10  13.15 ± 0.04
ACKNOWLEDGMENTS

We thank an anonymous referee for a deep and critical reading of the paper, which strongly increased its quality. This work was partially supported by ASI (I/l/011/07/0).
REFERENCES

Asplund, M., Grevesse, N., Sauval, A.J. & Scott, P. 2009, ARA&A, 47, 481
Ballester, P., Modigliani, A., Boitquin, O., et al. 2000, ESO Messenger, 101, 31
Campana, S., Thöne, C.C., de Ugarte Postigo, A. et al. 2010, MNRAS, 402, 2429
Cobb, B.E. 2008, GCN Circ, 8356
Cucchiara, A. et al. 2008, GCN Circ, 8346
Cucchiara, A. et al. 2008, GCN Circ, 8372
Dekker, H., D'Odorico, S., Kaufer, A., Delabre, B., Kotzlowski, H. 2000, SPIE, 4008, 534
D'Avanzo, P. et al. 2008, GCN Circ, 8350
D'Elia, V., Fiore, F., Meurs, E.J.A. et al. 2007, A&A, 467, 629
D'Elia, V., Fiore, F., Perna, R. et al. 2009a, ApJ, 694, 332
D'Elia, V., Fiore, F., Perna, R. et al. 2009b, A&A, 503, 437
D'Elia, V., Fynbo, J.P.U., Covino, S. et al. 2010, A&A, 523, 36
Fynbo, J.P.U., Starling, R.L., Ledoux, C. et al. 2006, A&A, 451, L47
Fynbo, J.P.U., Prochaska, J.X., Sommer-Larsen, J., Dessauges-Zavadsky, M., Moller, P. 2008, ApJ, 683, 321
Fynbo, J.P.U., Jakobsson, P., Prochaska, J.X. et al. 2009, ApJS, 185, 526
Ledoux, C., Vreeswijk, P.M., Smette, A. et al. 2009, A&A, 506, 661
Morton, D.C. 2003, ApJS, 149, 205
Noterdaeme, P., Ledoux, C., Petitjean, P., Srianand, R. 2008, A&A, 481, 327
Penprase, B.E., Prochaska, J.X., Sargent, W.L.W., Toro Martines, I., Beeler, D.J. 2010, ApJ, 721, 1
Perna, R., Lazzati, D., Fiore, F. 2003, ApJ, 585, 775
Prochaska, J.X. 2006, ApJ, 650, 272
Prochaska, J.X., Chen, H.W., Bloom, J.S. 2006, ApJ, 648, 95
Prochaska, J.X., Chen, H.W., Dessauges-Zavadsky, M., Bloom, J.S. 2007, ApJ, 666, 267
Prochaska, J.X., Sheffer, Y., Perley, D.A. et al. 2009, ApJ, 691, L27
Racusin, J.L. et al. 2008, GCN Circ, 8344
Savage, B.D., Sembach, K.R. 1996, ARA&A, 34, 279
Savaglio, S. 2000, in IAU Symp. 204, The Infrared Background and Its Cosmological Implications, ed. M. Harwit & M.G. Hauser (San Francisco: ASP), 307
Savaglio, S., Fall, S.M., Fiore, F. 2003, ApJ, 585, 638
Savaglio, S., Glazebrook, K., Crampton, D. et al. 2004, ApJ, 602, 51
Savaglio, S., Glazebrook, K., Le Borgne, D. et al. 2005, ApJ, 635, 260
Savaglio, S. 2006, New J. Phys., 8, 195
Savaglio, S., Glazebrook, K. & Le Borgne, D. 2009, ApJ, 691, 182
Schady, P., Savaglio, S., Krühler, T., Greiner, J., Rau, A. 2011, A&A, 727, 5
Shapley, A.E., Steidel, C.C., Pettini, M., Adelberger, K.L. 2003, ApJ, 588, 65
Sheffer, Y., Prochaska, J.X., Draine, B.T., Perley, D.A., Bloom, J.S. 2009, ApJ, 701, L63
Spitzer, L. 1978, Physical Processes in the Interstellar Medium (New York: Wiley)
Steidel, C.C., Adelberger, K.L., Giavalisco, M., Dickinson, M., Pettini, M. 1999, ApJ, 519, 1
Thöne, C.C., Campana, S., Lazzati, D. et al. 2011, MNRAS, 414, 479
Tumlinson, J., Prochaska, J.X., Chen, H.-W., Dessauges-Zavadsky, M., Bloom, J.S. 2007, ApJ, 668, 667
. P M Vreeswijk, S L Ellison, C Ledoux, A&A. 419927Vreeswijk, P.M., Ellison, S.L., Ledoux, C. et al. 2004, A&A, 419, 927
. P M Vreeswijk, C Ledoux, A Smette, A&A. 46883Vreeswijk, P.M., Ledoux, C., Smette, A. et al. 2007, A&A, 468, 83
GRB as Probes: From the Progenitor's Environment to the High Redshift Universe. P M Vreeswijk, ComoVreeswijk, P.M. 2011, In "GRB as Probes: From the Progenitor's Environment to the High Redshift Universe", Como, 16-20
. P Yuan, P Schady, J L Racusin, ApJ. 711Y10Yuan, P., Schady, P., Racusin, J.L. et al. 2010, ApJ, 711, 870 (Y10)
DOI: 10.3847/2041-8213/ac573d
arXiv: 2202.04273
Anomalous Flux in the Cosmic Optical Background Detected With New Horizons Observations

February 22, 2022 (arXiv: 21 Feb 2022)
Tod R. Lauer (NSF's National Optical Infrared Astronomy Research Laboratory, P.O. Box 26732, Tucson, AZ 85726)
Marc Postman (Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218; operated by AURA, Inc., for the National Aeronautics and Space Administration)
John R. Spencer (Department of Space Studies, Southwest Research Institute, 1050 Walnut St., Suite 300, Boulder, CO 80302)
Harold A. Weaver (The Johns Hopkins University Applied Physics Laboratory, Laurel, MD 20723-6099)
S. Alan Stern (Space Science and Engineering Division, Southwest Research Institute, 1050 Walnut St., Suite 300, Boulder, CO 80302)
G. Randall Gladstone (Southwest Research Institute, San Antonio, TX 78238, and University of Texas at San Antonio, San Antonio, TX 78249)
Richard P. Binzel (Department of Earth, Atmospheric, and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139)
Daniel T. Britt (Department of Physics, University of Central Florida, Orlando, FL 32816)
Marc W. Buie (Department of Space Studies, Southwest Research Institute, 1050 Walnut St., Suite 300, Boulder, CO 80302)
Bonnie J. Buratti (Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109)
Andrew F. Cheng (The Johns Hopkins University Applied Physics Laboratory, Laurel, MD 20723-6099)
W. M. Grundy (Lowell Observatory, Flagstaff, AZ 86001)
Mihaly Horányi (Laboratory for Atmospheric and Space Physics, University of Colorado, Boulder, CO 80303)
J. J. Kavelaars (National Research Council of Canada, Victoria, BC, and Department of Physics and Astronomy, University of Victoria, Victoria, BC)
Ivan R. Linscott (Independent consultant, Mountain View, CA 94043)
Carey M. Lisse (The Johns Hopkins University Applied Physics Laboratory, Laurel, MD 20723-6099)
William B. McKinnon (Dept. of Earth and Planetary Sciences, McDonnell Center for the Space Sciences, Washington University, St. Louis, MO 63130)
Ralph L. McNutt (The Johns Hopkins University Applied Physics Laboratory, Laurel, MD 20723-6099)
Jeffrey M. Moore (Space Science Division, NASA Ames Research Center, Moffett Field, CA 94035)
J. I. Núñez (The Johns Hopkins University Applied Physics Laboratory, Laurel, MD 20723-6099)
Catherine B. Olkin (Department of Space Studies, Southwest Research Institute, 1050 Walnut St., Suite 300, Boulder, CO 80302)
Joel W. Parker (Department of Space Studies, Southwest Research Institute, 1050 Walnut St., Suite 300, Boulder, CO 80302)
Simon B. Porter (Department of Space Studies, Southwest Research Institute, 1050 Walnut St., Suite 300, Boulder, CO 80302)
Dennis C. Reuter (NASA Goddard Space Flight Center, Greenbelt, MD 20771)
Stuart J. Robbins (Department of Space Studies, Southwest Research Institute, 1050 Walnut St., Suite 300, Boulder, CO 80302)
Paul M. Schenk (Lunar and Planetary Institute, Houston, TX 77058)
Mark R. Showalter (SETI Institute, Mountain View, CA 94043)
Kelsi N. Singer (Department of Space Studies, Southwest Research Institute, 1050 Walnut St., Suite 300, Boulder, CO 80302)
Anne J. Verbiscer (University of Virginia, Charlottesville, VA 22904)
Leslie A. Young (Department of Space Studies, Southwest Research Institute, 1050 Walnut St., Suite 300, Boulder, CO 80302)
Draft version February 22, 2022. Typeset using LaTeX twocolumn style in AASTeX63. Accepted for publication in The Astrophysical Journal Letters.

Keywords: cosmic background radiation; dark ages, reionization, first stars; diffuse radiation
We used New Horizons LORRI images to measure the optical-band (0.4 ≲ λ ≲ 0.9 µm) sky brightness within a high galactic-latitude field selected to have reduced diffuse scattered light from the Milky Way galaxy (DGL), as inferred from the IRIS all-sky 100 µm map. We also selected the field to significantly reduce the scattered light from bright stars (SSL) outside the LORRI field. Suppression of DGL and SSL reduced the large uncertainties in the background flux levels present in our earlier New Horizons COB results. The raw total sky level, measured when New Horizons was 51.3 AU from the Sun, is 24.22 ± 0.80 nW m−2 sr−1. Isolating the COB contribution to the raw total required subtracting scattered light from bright stars and galaxies, faint stars below the photometric detection-limit within the field, and the hydrogen plus ionized-helium two-photon continua. This yielded a highly significant detection of the COB at 16.37 ± 1.47 nW m−2 sr−1 at the LORRI pivot wavelength of 0.608 µm. This result is in strong tension with the hypothesis that the COB only comprises the integrated light of external galaxies (IGL) presently known from deep HST counts. Subtraction of the estimated IGL flux from the total COB level leaves a flux component of unknown origin at 8.06 ± 1.92 nW m−2 sr−1. Its amplitude is equal to the IGL.

(a) The NSF's OIR Lab is operated by AURA, Inc. under cooperative agreement with NSF.
A TARGETED OBSERVATION OF THE COSMIC OPTICAL BACKGROUND
The cosmic optical background (COB) is the flux of visible light photons averaged over the surface of the observable Universe. As it integrates over all processes that generate optical-band photons, it is a test of how well we understand what that integral should comprise. One way to pose this question is to ask if the galaxies that we see in cosmologically deep surveys are sufficient to account for the COB, or if there are significant sources of light yet to be recognized (Cooray 2016).
NASA's New Horizons spacecraft, which is presently over 50 AU away from the Sun, is an excellent platform for COB observations. Its sky is completely free of zodiacal light (ZL), which is sunlight scattered by interplanetary dust. ZL strongly dominates the sky brightness in the inner solar system. Zemcov et al. (2017) produced a "proof of concept" demonstration that New Horizons' LORRI camera (Cheng et al. 2008; Weaver et al. 2020) should be useful for COB observations, but had to contend with the dearth of useful archival images available at the time for measuring the COB flux. Lauer et al. (2021), in contrast, had a rich set of deep images to draw from and conducted a thorough examination of the calibration of New Horizons' LORRI camera for low light-level observations. Based on seven fields, they measured the COB flux to be in the range 15.9 ± 4.2 (1.8 stat., 3.7 sys.) nW m−2 sr−1 to 18.7 ± 3.8 (1.8 stat., 3.3 sys.) nW m−2 sr−1 at the LORRI pivot wavelength of 0.608 µm, where the range reflects two different DGL corrections (diffuse Galactic light from the Milky Way scattered by infrared cirrus).
When the estimated integrated light of galaxies (IGL) fainter than the LORRI photometric detection-limit was subtracted from this flux, a component of unknown origin in the range 8.8 ± 4.9 (1.8 stat., 4.5 sys.) nW m −2 sr −1 to 11.9 ± 4.6 (1.8 stat., 4.2 sys.) nW m −2 sr −1 remained. These measures are the most significant detections of the COB, and any unknown non-IGL component, to date.
The Lauer et al. (2021) image sets, however, were still drawn from archival observations. The strongest foreground sources of light were DGL and scattered starlight (SSL) from bright field stars entering the LORRI camera from large angles. DGL and SSL vary strongly over the sky, however, which means that fields can be targeted that greatly minimize the contributions of both foregrounds. In this work we selected a field for pointed New Horizons COB observations that was estimated to markedly reduce DGL and SSL, compared to even the darkest field in Lauer et al. (2021). As our analysis builds on Lauer et al. (2021), we will frequently refer the reader to that work (hereafter NH21) for brevity.
MEASURING THE COB FLUX
Selecting the Sky Field
To identify fields with low foregrounds, we computed the SSL and DGL intensity levels for 60,000 randomly distributed positions in a 7320 deg² area of sky bounded by galactic latitude |b| ≥ 40° and a requirement that the fields' solar elongation angles (SEA) were > 90°. Measurement of the background sky levels in LORRI images as a function of SEA < 90° shows that the camera accepts scattered sunlight from large angles, which means that scattered starlight must be accounted for in fields within the New Horizons shadow (SEA > 90°). Only fields with SEA > 90° are suitable for COB observations in order to avoid sunlight entering the camera.
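The screening step above can be sketched as a Monte Carlo over sky positions with the two geometric cuts applied. The Sun direction and the uniform-sphere sampling below are illustrative stand-ins; the actual selection also ranked the surviving fields by their estimated SSL + DGL.

```python
import math
import random

random.seed(1)

def random_sky_position():
    """Uniform random point on the celestial sphere, in galactic (l, b) degrees."""
    l = random.uniform(0.0, 360.0)
    b = math.degrees(math.asin(random.uniform(-1.0, 1.0)))
    return l, b

def unit_vector(l_deg, b_deg):
    l, b = math.radians(l_deg), math.radians(b_deg)
    return (math.cos(b) * math.cos(l), math.cos(b) * math.sin(l), math.sin(b))

def solar_elongation(field, sun):
    """Angle (deg) between the field direction and the Sun direction."""
    dot = sum(f * s for f, s in zip(field, sun))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

# Hypothetical Sun direction expressed in the same (galactic) frame.
sun_dir = unit_vector(30.0, 10.0)

candidates = []
for _ in range(60_000):
    l, b = random_sky_position()
    if abs(b) < 40.0:
        continue  # stay at high galactic latitude to suppress DGL
    if solar_elongation(unit_vector(l, b), sun_dir) <= 90.0:
        continue  # stay inside the spacecraft shadow (no direct sunlight)
    candidates.append((l, b))

print(len(candidates), "candidate fields pass both cuts")
```

Each survivor would then be scored by its predicted SSL and DGL, as described next.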
We estimated the DGL component at each position from the strength of the 100 µm flux, which is due to the thermal emission of infrared "cirrus". The fluxes are provided by the "IRIS" reprocessing of the IRAS full-sky thermal-IR maps (Miville-Deschênes & Lagache 2005). As we discuss in NH21, we subtracted a constant cosmic infrared background (CIB) level of 0.78 MJy sr−1 (Puget et al. 1996; Fixsen et al. 1996) from the IRIS map. Even though there is no significant zodiacal light background at the distances from the Sun where the observations were obtained, we still must correct for any residual zodi signature in the IRIS data that remains even after the IRIS team applied its major zodi-subtraction. In NH21 we show that there is indeed a residual zodi-signature remaining in the IRIS flux values and we apply a smooth correction to the fluxes as a function of ecliptic latitude (see Figure 16 and Equation 8 in NH21) to remove this residual zodiacal light from the map.
The preliminary SSL at each location in the sky was estimated by convolving stars with V < 11 mag drawn from the Tycho2 star catalog (Høg et al. 2000), and the Yale Bright Star catalog v5.0 (Hoffleit & Warren 1995), with the New Horizons scattered light response measured from preflight calibrations and inflight images. At each position we included stars up to 45 • away.
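A minimal sketch of that convolution: each catalog star's flux is weighted by a large-angle response evaluated at its separation from the field center. The `scatter_response` fall-off and the toy star list are hypothetical stand-ins for the measured LORRI response and the Tycho2/Yale catalog queries.

```python
import math

def ab_mag_to_nu_f_nu(v_mag):
    """Approximate nu*f_nu (nW m^-2) of a star of AB magnitude v_mag,
    evaluated at the 0.608 um LORRI pivot (f_nu = 3631 Jy at AB = 0)."""
    f_nu = 3631e-26 * 10 ** (-0.4 * v_mag)   # W m^-2 Hz^-1
    nu = 2.998e8 / 0.608e-6                  # Hz
    return f_nu * nu * 1e9                   # nW m^-2

def scatter_response(theta_deg):
    """Hypothetical stand-in for the LORRI large-angle scattered-light
    response: surface brightness in the field per unit incident stellar
    flux (sr^-1), falling steeply with off-axis angle."""
    return 1.0e-2 / (theta_deg + 1.0) ** 2

def ssl_estimate(stars):
    """Sum the scattered contributions of stars within 45 deg of the field."""
    return sum(
        ab_mag_to_nu_f_nu(v) * scatter_response(sep)
        for v, sep in stars
        if 0.0 < sep <= 45.0
    )

# Toy (V mag, separation in deg) pairs standing in for a real catalog query.
stars = [(3.5, 12.0), (6.0, 5.0), (9.5, 30.0), (2.0, 60.0)]  # last star is beyond the cut
print(f"SSL ~ {ssl_estimate(stars):.2f} nW m^-2 sr^-1")
```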
We then sorted the fields based on their combined SSL and DGL intensities to identify fields with significantly reduced DGL and SSL foregrounds, as compared to our earlier fields. We gave highest priority to fields with low DGL intensities. Once a final field was selected we recomputed the SSL by adding in fainter stars (11 ≤ V < 20 mag) from the Gaia DR2 catalog (Gaia Collaboration et al. 2016, 2018).
The selected field center is at J2000 α = 0°.0756, δ = −21°.5451; the galactic latitude is b = −77°.1, and the ecliptic latitude is β = −19°.7. This position has SEA = 113°.9, putting the aperture of LORRI safely within the spacecraft shadow.
At the time of the observations New Horizons was 51.3 AU from the Sun, thus no ZL foreground was present. However, the ecliptic latitude is still important for understanding the 100 µm flux derived from the Earth-based IRIS maps needed to estimate DGL. The 100 µm flux measured from the IRIS map at this position, prior to any background subtractions or corrections, is 1.756 ± 0.042 MJy sr−1. This value is the mean IRIS flux within a circular area of radius 0°.2 centered on the above position. This area corresponds to the circle that fully inscribes the LORRI FOV. We subtract the 0.78 MJy sr−1 CIB flux, and the NH21 residual ZL correction of 0.724 MJy sr−1 at β = 19°.7, from the mean map value of 1.756 MJy sr−1, leaving 0.252 ± 0.055 MJy sr−1 as the estimated flux from any IR-cirrus in the field. With the Zemcov et al. (2017) scaling coefficient, this implies a DGL flux of only 2.22 ± 1.00 nW m−2 sr−1, with most of the error due to the large uncertainty in the coefficient. This DGL value is only 43% of the lowest DGL intensity of the seven NH21 fields. The field is also predicted to have an SSL foreground of 5.18 ± 0.40 nW m−2 sr−1, only 74% of the lowest SSL of the seven NH21 fields.
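The arithmetic isolating the IR-cirrus term can be checked directly; the optical DGL then follows from the NH21/Zemcov et al. (2017) scaling coefficient, which is not re-derived here.

```python
# Isolate the IR-cirrus flux in the IRIS 100 um map at the field position,
# following the subtractions described in the text (all in MJy sr^-1).
iris_raw      = 1.756   # mean IRIS 100 um flux in a 0.2 deg radius aperture
cib           = 0.780   # constant cosmic infrared background (Puget/Fixsen)
residual_zodi = 0.724   # NH21 residual zodiacal-light correction at beta = 19.7 deg

cirrus = iris_raw - cib - residual_zodi
print(f"IR-cirrus flux: {cirrus:.3f} MJy sr^-1")   # 0.252, as quoted
```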
Images of the Field
The COB images were obtained with LORRI (the Long-Range Reconnaissance Imager) (Cheng et al. 2008; Weaver et al. 2020) on 2021 September 24 (UT) as a sequence of sixteen 65 s exposures (only 30 s exposures were used in NH21). The MET (mission elapsed time) IDs of the images were 0494832182 to 0494833607. The pointing was dithered by a few pixels between subsets of four images. A stack of the first subset is shown in Figure 1. To avoid the LORRI "background fade" anomaly associated with the activation of the camera (NH21), the exposure sequence was initiated five minutes after the camera was powered on. As a check, a fit to the sky levels of the 16 exposures as a function of time showed an insignificant drift of only 0.10 ± 0.10 DN (data number) over the 1040 s duration of the sequence.
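The drift check is an ordinary straight-line fit of sky level against time. The sketch below uses synthetic sky values (a constant 1.058 DN plus hypothetical per-frame scatter) in place of the real measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the 16 measured sky levels (DN) at mid-exposure times (s).
t = np.linspace(0.0, 1040.0, 16)
sky = 1.058 + rng.normal(0.0, 0.05, size=16)

# Straight-line fit; the drift over the sequence is slope * duration.
slope, intercept = np.polyfit(t, sky, 1)
drift = slope * (t[-1] - t[0])
print(f"drift over 1040 s: {drift:+.3f} DN")
```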
In brief, LORRI is an unfiltered (white light) 1032 × 1024 pixel CCD imager mounted on a 20.9 cm aperture Cassegrain reflector. For deep observations, the camera is operated with 4 × 4 pixel binning, producing (raw) images in 257 × 256 pixel format, including a single bias/dark column. The pixel scale in this mode is 4″.08, which provides a 17′.4 field. LORRI's sensitivity extends from the blue (0.4 µm) to NIR (0.9 µm) and is defined by the CCD response and telescope optics. The pivot wavelength is 0.608 µm. The camera is operated with a gain of 19.4 e− per 1 DN, and the read noise is 24 e−. In 4 × 4 mode the photometric zeropoint is 18.88 ± 0.01 AB magnitudes corresponding to a 1 DN/s exposure level (Weaver et al. 2020).
Image Reduction
The sky levels in the images are only slightly greater than 1 DN. The reduction of the images thus requires attention to a number of subtle effects that are only important at this level. Rather than using calibrated ("Level 2") images produced by the standard LORRI pipeline operated by the New Horizons project, we use the NH21 custom reduction of the raw ("Level 1") images to optimize accurate recovery of the faint sky signal. The first calibration step is to estimate the bias level by fitting a gaussian to the peak of the DN histogram of the bias column. This provides bias values accurate to a fraction of a DN, while until recently, the standard pipeline selected the median integer DN level.
Subsequent to NH21, we discovered an error in the analogue to digital (A/D) conversion of the video signal produced by the LORRI CCD that required a small correction to be applied to the bias determination. Histograms of raw LORRI images showed that the measurement of the least-significant bit (LSB) during the A/D conversion was slightly in error, such that the set point of the LSB was 7% too high, making even DN values 14% more common than odd DN values (errors in the higher order bits were not evident). Analysis of the effects of this error were done following the precepts of Lauer (1989), which discussed the diagnosis and correction of large A/D errors in the HST WFPC1 instrument. Briefly, bias values were recovered from simulated distributions of integer DN values generated from un-digitized gaussians of width appropriate to the LORRI readout noise. Simulated A/D conversion was done with and without the LSB error, as the fractional location of the mean value of the distribution was varied over a range of 1 DN. The measured mean value with the LSB error was always 0.02 DN too low, allowing for a simple additive correction to the measured bias levels.
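A toy version of that simulation, under the simplifying assumption that the faulty comparator shifts every even-to-odd decision boundary up by 0.07 DN (which makes even codes ~15% more common, close to the observed 14%). Note the raw-mean shift in this model is ~0.035 DN; the 0.02 DN correction quoted above comes from the paper's histogram-peak estimator, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(2)

def digitize(x, lsb_error=0.0):
    """Quantize to integer DN. With lsb_error > 0 the comparator that sets
    the least-significant bit trips late, so each even->odd decision
    boundary sits lsb_error DN too high (a simple model of the fault)."""
    base = np.floor((x + 0.5) / 2.0) * 2.0   # nearest even code at/below x
    frac = x - base                           # offset from that code, in [-0.5, 1.5)
    odd = frac >= 0.5 + lsb_error
    return (base + odd).astype(int)

sigma = 24.0 / 19.4                # read noise in DN
true_mean = 10.37                  # arbitrary sub-DN bias level
x = rng.normal(true_mean, sigma, size=2_000_000)

good = digitize(x)
bad = digitize(x, lsb_error=0.07)

evens = np.mean(bad % 2 == 0)
print(f"even-code fraction with fault: {evens:.3f}")
print(f"raw-mean bias with fault: {bad.mean() - good.mean():+.4f} DN")
```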
The second step is to correct for the "jail bar" pattern, where bias level of the even-numbered columns in the CCD are offset by +0.5 or −0.5 DN from that of the odd-numbered columns (which includes the bias column). The sign of the offset is set randomly when the camera is powered on; in the present sequence the offset of the even columns is +0.5 DN. This calibration step is not included in the standard pipeline. The final calibration steps are subtraction of a "super-bias" frame, charge-smear correction, and standard flat-field calibration. The charge-smear correction is an improved version of that in the standard pipeline (Weaver et al. 2020), and we also exclude bright cosmic ray hits and negative amplifier under-shoot artifacts associated with over-exposed stars from the charge-smear calculations, as they are not smeared.
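The jail-bar step can be sketched as follows; the frame is synthetic, and the snap-to-half-DN logic is an assumption about how the known ±0.5 DN quantized offset would be estimated from the data.

```python
import numpy as np

rng = np.random.default_rng(3)

def fix_jail_bars(img):
    """Remove the +/-0.5 DN even/odd column offset. The sign is estimated
    from the data and snapped to the nearest of -0.5, 0, +0.5 DN."""
    even_med = np.median(img[:, 0::2])
    odd_med = np.median(img[:, 1::2])
    offset = 0.5 * round(float(even_med - odd_med) / 0.5)
    out = img.copy()
    out[:, 0::2] -= offset
    return out, offset

# Toy frame: flat 1.06 DN sky plus noise, with even columns raised by 0.5 DN.
img = 1.06 + rng.normal(0.0, 0.2, size=(256, 256))
img[:, 0::2] += 0.5
fixed, offset = fix_jail_bars(img)
print(f"estimated offset: {offset:+.1f} DN")
```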
Measuring the Sky Level
The procedures for measuring the sky level are discussed extensively in NH21. In brief, we measure the sky for each individual exposure by first masking out foreground stars, galaxies, hot pixels, and cosmic ray events, and then fitting a gaussian to the peak of the intensity histogram of the remaining unmasked pixels. Masking is done by flagging all pixels above 8 DN intensity, and excluding all pixels within 3 pixels, or 12″, in radius around that pixel. This threshold is somewhat arbitrary; it is a compromise between detecting faint sources versus selecting on background noise. Low level wings at larger radii from the stars do remain in the image, but these are corrected for in the estimation of the scattered starlight (SSL) components in the field. In practice the masking procedure deletes all objects with V < 19.9 (this threshold is 0.8 mag deeper than the V < 19.1 used in NH21, given the present 65 s, rather than 30 s, exposures).
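A compact sketch of mask-then-fit on a synthetic frame: bright pixels and a 3-pixel halo are masked, and the sky is taken from a parabola fit to the log of the histogram near its peak (a gaussian peak is a parabola in log space). The thresholds mirror the text; the source list and noise level are made up.

```python
import numpy as np

rng = np.random.default_rng(4)

def mask_sources(img, threshold=8.0, grow=3):
    """Mask pixels above `threshold` DN plus a `grow`-pixel-radius halo."""
    mask = img > threshold
    yy, xx = np.indices(img.shape)
    for y, x in zip(*np.nonzero(img > threshold)):
        mask |= (yy - y) ** 2 + (xx - x) ** 2 <= grow ** 2
    return mask

def sky_from_histogram(values, bin_width=0.2):
    """Refine the histogram mode with a parabola fit to the log counts
    of the bins around the peak."""
    counts, edges = np.histogram(
        values, bins=np.arange(values.min(), values.max(), bin_width)
    )
    centers = 0.5 * (edges[:-1] + edges[1:])
    i = int(np.argmax(counts))
    sel = slice(max(i - 5, 0), i + 6)
    a, b, _ = np.polyfit(centers[sel], np.log(counts[sel] + 1.0), 2)
    return -b / (2.0 * a)   # vertex of the fitted parabola

# Synthetic frame: 1.058 DN sky, 1.24 DN read noise, a sprinkle of bright stars.
img = rng.normal(1.058, 1.24, size=(128, 128))
for _ in range(20):
    y, x = rng.integers(5, 123, size=2)
    img[y, x] += rng.uniform(20.0, 200.0)

sky = sky_from_histogram(img[~mask_sources(img)])
print(f"sky ~ {sky:.3f} DN")
```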
While nearly all the objects masked are stars, the galaxies deleted need to be accounted for, as their flux should be included in the COB. The LORRI angular resolution is too poor to allow classification of most galaxies above the detection threshold as non-stellar, thus our solution is to add the light from masked galaxies with V < 19.9, as catalogued by the PANSTARRS survey (Flewelling et al. 2020), to the IGL flux (see §3.7).
The histogram fitting algorithm is designed to take into account fine scale structure of the distribution of pixel intensity values that results from the image calibration operations applied to the initially integer raw pixel values. The histogram fitting procedure also ignores all pixels with values well away from the histogram peak. In application we find the sky following the masking procedure is only 7% less than the sky measured with no masking at all. The average sky value of the 16 images is 1.058 ± 0.035 DN, or a V-band surface brightness of 26.4 mag/arcsec 2 ; the associated error is statistical and is the error in the mean of the 16 images. This corresponds to 24.22 ± 0.80 nW m −2 sr −1 in flux units at the LORRI pivot wavelength of 0.608 µm. As shown in Figure 2, this sky level is significantly less than the typical raw sky levels of the 7 fields of NH21, but is essentially as expected given the estimated reduction of the DGL and SSL components.
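The quoted surface brightness follows from the photometric zeropoint and pixel scale given above. The sketch below reproduces the 26.4 mag arcsec⁻² figure; a simple monochromatic pivot-wavelength conversion then lands within ~15% of the quoted 24.22 nW m⁻² sr⁻¹ (the exact value folds in the full LORRI bandpass calibration).

```python
import math

sky_dn    = 1.058      # mean sky per 4x4-binned pixel (DN)
t_exp     = 65.0       # s
zeropoint = 18.88      # AB mag corresponding to 1 DN/s (4x4 mode)
pix_scale = 4.08       # arcsec per binned pixel

# AB surface brightness: DN/s -> magnitude, then normalize by pixel solid angle.
m_pix = zeropoint - 2.5 * math.log10(sky_dn / t_exp)
sb = m_pix + 2.5 * math.log10(pix_scale ** 2)
print(f"sky surface brightness: {sb:.1f} AB mag/arcsec^2")   # 26.4, as quoted

# Approximate nu*I_nu at the 0.608 um pivot wavelength.
f_nu = 3631e-26 * 10 ** (-0.4 * sb)          # W m^-2 Hz^-1 per arcsec^2
arcsec2_per_sr = (180.0 / math.pi * 3600.0) ** 2
nu = 2.998e8 / 0.608e-6
flux = f_nu * arcsec2_per_sr * nu * 1e9      # nW m^-2 sr^-1
print(f"nu*I_nu ~ {flux:.1f} nW m^-2 sr^-1")
```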
THE COSMIC OPTICAL BACKGROUND FLUX
Isolating the COB flux from the total sky requires correcting for a number of foreground sources. We describe these in detail in NH21, but present their specific contributions to the present field here. A summary of the decomposition of the total sky is shown in Figure 3. The fluxes in all components and their associated errors are listed in Table 1. We break down the errors into systematic and statistical terms, as we discussed in detail in NH21. Understanding which uncertainties are systematic is critical when combining the measurements in several fields as we did in NH21. In the present case of a single field the errors in all flux components are independent, but again, this is no longer true when we compare the present results to those in NH21.
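Since the component errors for a single field are independent, the COB uncertainty is their quadrature sum; using the 1σ values quoted in the text and Table 1, this reproduces the ±1.47 nW m⁻² sr⁻¹ attached to the COB detection.

```python
import math

# Component 1-sigma errors (nW m^-2 sr^-1) from the text and Table 1.
errors = {
    "total sky": 0.80,
    "SSL":       0.52,
    "SGL":       0.01,
    "DGL":       1.00,
    "FSL":       0.18,
    "2PC":       0.47,
}

# Independent errors add in quadrature.
cob_err = math.sqrt(sum(e * e for e in errors.values()))
print(f"COB error: {cob_err:.2f} nW m^-2 sr^-1")   # ~1.47, matching the quoted value
```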
Scattered Light from Bright Stars (SSL) and Galaxies (SGL)
As noted in §2.1 the field was selected for its low SSL of 5.17 ± 0.52 nW m −2 sr −1 . The error is systematic and is dominated by uncertainty in the New Horizons scattered light function. The SGL term comes from scattered light contributed by bright galaxies outside the LORRI field.
The surface density of bright galaxies is so low that this flux, 0.07 ± 0.01 nW m−2 sr−1, is almost negligible.

Figure 2. The total sky levels for the present (TF01 = Test Field 1) and seven NH21 fields are plotted as a function of the total known flux components present. A line with unit slope going through the point representing the present field is shown. This demonstrates that the total sky level in the present field decreased by the amount expected as compared to the NH21 fields, given its reduced foreground flux components.

Diffuse Galactic Light (DGL)

The selection of the field was done to minimize the DGL foreground, as discussed in §2.1. We repeat the estimated DGL foreground flux here as 2.22 ± 1.00 nW m−2 sr−1, based on the Zemcov et al. (2017) conversion of the 100 µm flux, as given by NH21 eqn. (7) with C100 = 9.8 ± 3.9 nW m−2 sr−1.
As one check on our conversion we integrated the Onishi et al. (2018) DGL coefficients derived from their WD01 and WLJ15 dust models (normalized to the 1.1 µm measurement of their MBM32 field), as a function of wavelength over the LORRI response-function. This produced a mean coefficient only 10% larger than ours (when we compute our coefficient for the same galactic latitude as their MBM32 field), which is well within our assumed ∼ 40% errors.
As a second check, we subtracted all the known flux components from the present and NH21 total sky fluxes, except any estimate for the DGL flux, and fitted a line to the residuals (which also contained the presumably-constant anomalous flux) as a function of 100 µm flux. The slope of the line provides an estimate of the conversion coefficient. We recovered C100 = 10.1 ± 5.2 nW m−2 sr−1, in good agreement with the scaling used in NH21.
The systematic component in the DGL error dominates and is mainly due to the large error in the fluxconversion coefficient, with a smaller contribution from the error in the 100 µm flux. The statistical error is due to uncertainty in the correction of the 100 µm map for residual zodiacal light. See Table 1 for both components.
Integrated Faint Starlight (FSL)
The integrated light of faint stars (FSL) below the LORRI photometric detection limit is another foreground source that must be accounted for. Our approach is to integrate TRILEGAL models (Girardi et al. 2005, 2012) of the expected population of faint stars within our fields, following the procedures developed in NH21. The only difference is that for this field the bright limit of the integral (Eq. 3 in NH21) is V = 19.9. For our specific field we estimate the FSL component as 1.16 ± 0.18 nW m−2 sr−1. The systematic and statistical components in the error (Table 1) are due to uncertainties in the TRILEGAL model parameters, and estimated fluctuations in the star counts, respectively.
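The shape of that calculation can be sketched with a hypothetical exponential star-count law standing in for the TRILEGAL prediction (the parameters `a` and `b` below are illustrative, not fitted to anything).

```python
import math

def fsl_nw(v_bright, v_faint=30.0, dv=0.1):
    """Integrate nu*f_nu of stars fainter than the detection limit, using a
    toy star-count model dN/dV = a * 10**(b*V) stars per mag per deg^2."""
    a, b = 1.0e-3, 0.30              # toy model parameters, not TRILEGAL values
    nu = 2.998e8 / 0.608e-6          # Hz, LORRI pivot
    sr_per_deg2 = (math.pi / 180.0) ** 2
    total = 0.0
    v = v_bright
    while v < v_faint:
        vm = v + dv / 2.0
        counts = a * 10 ** (b * vm) * dv            # stars per deg^2 in this bin
        f_nu = 3631e-26 * 10 ** (-0.4 * vm)         # W m^-2 Hz^-1 per star
        total += counts * f_nu
        v += dv
    return total * nu * 1e9 / sr_per_deg2           # nW m^-2 sr^-1

print(f"FSL (toy model, V > 19.9): {fsl_nw(19.9):.2f} nW m^-2 sr^-1")
```

With b < 0.4 the integrand converges toward faint magnitudes, so the sum is dominated by stars just below the masking limit, as in the real calculation.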
The Two-Photon Continuum (2PC)
The existence of a full-sky diffuse Ly-α background from the Milky Way (e.g. Gladstone et al. 2021) means that there is also likely to be an associated hydrogen twophoton continuum (Spitzer & Greenstein 1951). New Horizons far UV spectroscopic observations taken of our field with the Alice instrument indeed show the existence of this component. The continuum extends to all wavelengths to the red of Ly-α and thus will have some flux contribution within the LORRI passband. The Alice spectra also appear to show the minor presence of the analogous continuum from singly-ionized He. Using the spectral form of the two-photon continuum given by Nussbaumer & Schmutz (1984), we find that the contribution of the H and He+ continua integrated over the LORRI passband to be 0.93 ± 0.47 nW m −2 sr −1 , a minor contribution to the total sky level. We treat the error as systematic as it is assumed to be the same for both the present and NH21 COB fields.
Foregrounds from the Spacecraft
Measuring the COB requires that the spacecraft environment itself is dark and does not contribute significant foreground light. In NH21 we demonstrated that the spacecraft shadow was sufficiently dark such that sunlight had no indirect path of significance into the LORRI aperture for the SEAs of the COB fields. We also considered it unlikely that the exhaust of the thrusters that stabilized the NH spacecraft could generate ice crystals sufficient to scatter light into LORRI. Subsequent to NH21, we identified two more effects of potential concern, the production of Cherenkov radiation and fluorescence, induced by energetic particles penetrating the LORRI field-flattening lenses. We estimate the strength of these two sources in Appendix A, concluding that they do not contribute significant foreground flux. Related to this, as part of the analysis done in NH21, we measured the dark current of the LORRI CCD at 0.334 ± 0.039 DN in 65 s, which is well within the CCD manufacturer's specified performance. There is no evidence for any strongly increased dark current due to irradiation of the CCD over the duration of the mission. We also note that the CCD dark/bias column will also witness the average level of any charge deposited directly in the CCD by energetic particles during an exposure.
The Total Cosmic Optical Background
The COB is the flux that remains after we remove, from the observed total sky level, the artifactual scattered-light foregrounds of bright stars (SSL) and galaxies (SGL) contributed by sources outside the LORRI field, as well as the flux from faint stars (FSL) and diffuse Milky Way light scattered by IR cirrus (DGL) within the field. As the COB should also reflect the integrated flux from all external galaxies, we have also added in the light from the bright galaxies that were present in the LORRI field, but masked out in the measurement of the total sky level. This correction is small and is discussed in detail in the next section. The COB flux is thus 16.37 ± 1.47 (0.86 stat., 1.19 sys.) nW m −2 sr −1 . The error is the simple quadrature sum of all the errors associated with the first six components tabulated in Table 1. As we discuss in NH21, most of these errors are systematic, thus combining the present results with, say, the seven fields in our previous paper requires careful treatment of the correlated errors between all fields. For a single field, however, the errors can be regarded as statistical.
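The bookkeeping above can be verified directly from the Table 1 values. The sketch below sums the signed flux components and propagates the statistical and systematic errors in quadrature, reproducing the quoted COB flux and its uncertainties.

```python
import math

# Each entry: (signed flux, stat. error, sys. error), all in nW m^-2 sr^-1,
# taken from Table 1.
components = {
    "total sky":             (24.22, 0.80, 0.00),
    "scattered starlight":   (-5.17, 0.00, 0.52),
    "DGL":                   (-2.22, 0.32, 0.95),
    "faint stars":           (-1.16, 0.06, 0.17),
    "two-photon continuum":  (-0.93, 0.00, 0.47),
    "scattered galaxy light":(-0.07, 0.00, 0.01),
    "bright field galaxies": ( 1.70, 0.04, 0.06),
}

cob_flux = sum(f for f, _, _ in components.values())
cob_stat = math.sqrt(sum(s**2 for _, s, _ in components.values()))
cob_sys  = math.sqrt(sum(y**2 for _, _, y in components.values()))
cob_err  = math.sqrt(cob_stat**2 + cob_sys**2)
# cob_flux = 16.37, cob_stat = 0.86, cob_sys = 1.19, cob_err = 1.47
```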
The present field provides the most significant detection of the COB to date.
Integrated Galaxy Light (IGL)
The COB does contain the integrated flux from all galaxies that fall within the LORRI field. This IGL component compared to the COB flux tests how well we understand the overall optical flux generated by the Universe.
For this analysis, the IGL is estimated in two steps: the bright IGL for galaxies with V < 19.9 that were masked during the sky estimation process and the faint IGL for galaxies below this LORRI detection threshold. The IGL for the bright galaxies (V < 19.9) is estimated by extracting non-stellar objects in our LORRI field of view from the second release of the PanSTARRS catalog available via the MAST archive (Flewelling et al. 2020). PanSTARRS objects with a difference greater than 0.05 mag between their PSF magnitude and their Kron magnitude in the PanSTARRS i-band are classified as galaxies. We compute a V-magnitude for each object from their g-band and r-band magnitudes provided by PanSTARRS. The transformation to V-mag from the g, r bands is derived from 8 templates of galaxy spectral energy distributions spanning the morphologies E, S0, Sa, Sb, Sc, and Ir types. We weight the templates by the morphological fractions observed in the field population of galaxies and derive an average (V − g) vs. (r − g) relationship over the redshift range 0 < z ≤ 1, typical for brighter galaxies. We then derive the IGL flux contribution based on the V magni-tude and sum up the contributions for all PanSTARRS galaxies with V < 19.9 in the LORRI field of view. The IGL contribution computed in this way comes to 1.70 ± 0.07 (0.06 (sys), 0.04 (stat)) nW m −2 sr −1 . The statistical error is derived from the photometric errors given in the PanSTARRS catalog and the systematic error is estimated by using different fitting functions and different SED templates for the (V −g) vs. (r −g) transformation.
The precepts for estimating the faint IGL due to galaxies at or below the V = 19.9 detection threshold are discussed at length in NH21. The faint IGL contribution in the present field is slightly reduced from that in our earlier fields due to the fainter V = 19.9 bright limit to the galaxy flux integral (Eq. 3 in NH21). Our NH21 estimate of 30% for the uncertainty in the faint IGL was conservative and was based on rough estimates of the variation in the faint end slope of the galaxy number count relations. We perform a more rigorous estimate of the uncertainty in our faint IGL flux by assessing the specific contribution to the error from the systematic terms (errors in the fits to the galaxy number counts) and from the statistical errors (cosmic variance). The two systematic errors associated with the fits to the galaxy number counts are from the errors in the coefficients to the power law fits used in NH21 and the error associated with the form of the fitting function (e.g., 4 power-laws vs a quadratic fit). The formal errors in the power law coefficients yield a fractional error of 13.1% in the IGL flux. The difference between the IGL derived from the power law fits versus that derived using a quadratic fit to the galaxy counts yields a fractional change in the IGL of 6.6%. Summing these two error components in quadrature yields a combined systematic fractional error of 14.7% in the IGL flux. The total error in the IGL must also include the statistical uncertainty due to the effects of cosmic variance over a single LORRI FOV. The cosmic variance error for a single LORRI field-of-view used in this work is the same as the single-field CV error adopted in NH21 (Trenti & Stiavelli 2008), which translates to an IGL fractional error of 11.8%. Summing, in quadrature, this statistical error with the above systematic error yields a total fractional error of 18.8% in the faint IGL flux.
This is smaller than our conservative estimate of 30% used in NH21 but represents a more accurate assessment of the error in the faint IGL component. Our computed faint IGL flux in the current field is 6.61 ± 1.24 (0.97 (sys), 0.78 (stat)) nW m −2 sr −1 .
Combining the bright and faint galaxy contributions to the IGL gives a total IGL flux for our field of 8.31 ± 1.24 (0.97 (sys), 0.78 (stat)) nW m −2 sr −1 . This IGL corresponds to the expected light in the LORRI bandpass from all galaxies brighter than V = 30 mag.
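The error combinations in the last two paragraphs are straightforward quadrature sums; the sketch below reproduces them from the quoted numbers (rounding agrees with the text to within 0.01).

```python
import math

# Faint IGL fractional error: fit-coefficient (13.1%) and fit-form (6.6%)
# systematics in quadrature, then combined with the 11.8% cosmic-variance
# statistical error.
sys_frac = math.hypot(0.131, 0.066)      # ~0.147 (14.7% systematic)
tot_frac = math.hypot(sys_frac, 0.118)   # ~0.188 (18.8% total)

# Combining the bright (V < 19.9) and faint IGL components, nW m^-2 sr^-1.
bright, bright_sys, bright_stat = 1.70, 0.06, 0.04
faint,  faint_sys,  faint_stat  = 6.61, 0.97, 0.78

igl      = bright + faint                      # 8.31
igl_sys  = math.hypot(bright_sys, faint_sys)   # ~0.97
igl_stat = math.hypot(bright_stat, faint_stat) # ~0.78
igl_err  = math.hypot(igl_sys, igl_stat)       # ~1.24
```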
The Detection of a Significant Anomalous Flux Background
We find that IGL accounts for only half of the COB. Subtracting it from the COB yields an anomalous unexplained flux component of 8.06 ± 1.92 (1.16 stat., 1.53 sys.) nW m −2 sr −1 . The present anomalous sky residual as compared to those in the seven NH21 fields is shown in Figure 4. The present flux is statistically consistent with all seven previous fields, but its significance is markedly greater.
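The anomalous component is simply the COB minus the IGL, with the errors of the two terms added in quadrature; a short check using the values quoted above:

```python
import math

# COB and IGL fluxes with their statistical and systematic errors,
# in nW m^-2 sr^-1 (from the text and Table 1).
cob, cob_stat, cob_sys = 16.37, 0.86, 1.19
igl, igl_stat, igl_sys =  8.31, 0.78, 0.97

anom      = cob - igl                        # 8.06
anom_stat = math.hypot(cob_stat, igl_stat)   # ~1.16
anom_sys  = math.hypot(cob_sys, igl_sys)     # ~1.53
anom_err  = math.hypot(anom_stat, anom_sys)  # ~1.92
```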
The COB from Galaxy Counts and γ-ray Absorption
A large anomalous background component would not be expected under the simple and perhaps default hypothesis that the COB and the IGL flux derived from the faint galaxies already known by HST deep counts are one and the same. The IGL has been estimated many times by many different parties, including us in NH21, and recently by Driver et al. (2016) and Saldana-Lopez et al. (2021). IGL traces from both groups are plotted as a function of wavelength over the LORRI passband in Figure 5. There is excellent agreement on the IGL level over the ensemble of estimates. Driver et al. (2016), Saldana-Lopez et al. (2021), and our own estimate, all imply a contribution to the COB flux of ∼ 8 nW m −2 sr −1 over the passband sampled by LORRI. To be fair, these results are often based on the same observations, but this at least shows that there is little interpretive "wiggle room" allowed in the analysis methodologies.
Very-high energy (VHE) γ-ray observations can also be used to estimate the COB flux. This approach is completely different from integrating over external galaxy flux, and has the virtue of depending only on the total flux density of optical photons, independent of any association with a stellar system. This is the very same quantity that we have attempted to measure with New Horizons. Observations of VHE (0.1 − 30 TeV) γ-rays from cosmologically distant AGN show that γ-rays are absorbed as a function of the distance of the source and the energy of the γ-ray photons (H.E.S.S. Collaboration et al. 2013; Fermi-LAT Collaboration et al. 2018). Quantum electrodynamics predicts that such an effect must occur (Nikishov 1962). The γ-ray photons interact with optical photons to produce e − /e + pairs. In effect, the ambient flux density of optical photons acts as an absorbing medium, attenuating the transmission of γ-rays over large distances. We show the COB constraints from five recent VHE γ-ray studies in Figure 5: Ahnen et al. (2016), the H. E. S. S. Collaboration et al. (2017), the Fermi-LAT Collaboration et al. (2018), Desai et al. (2019), and Acciari et al. (2019). The concordance of the COB inferred from galaxy counts and VHE γ-ray absorption evident in Figure 5 is a compelling argument that the COB may well be entirely due to the light of known galaxies and holds no surprises. However, while a number of the VHE γ-ray traces shown in Figure 5 do appear to be essentially coincident with the IGL traces, it is noteworthy that when the analysis allows for an arbitrary background flux as a function of wavelength, as was done in the H. E. S. S. Collaboration et al. (2017) and Acciari et al. (2019) papers, the VHE γ-ray constraints are markedly looser and pose no conflict with our result.
The Actual Optical Flux Measures Imply an Anomaly
But we should be able to measure the COB flux directly with optical observations. This is where surprises may exist. As with inferences from galaxy counts, direct detection of the COB has indeed been attempted many times by many different parties. As noted in the introduction, conducting such observations from the inner solar system is challenging, due to the strong ZL foreground. There are many clever ways to correct for ZL, but at the penalty of large errors in the observed flux. Direct COB measures generally struggle to achieve 2-σ detection significance of the total COB flux, let alone testing for an anomalous component. At the same time, formally, the direct flux measurements nearly always fall well above the flux implied by galaxy counts and γ-rays. Figure 5 shows several examples of COB measures made from Earth-space that fall within the LORRI passband. These include the HST/WFPC2 observations of Bernstein (2007), the CIBER rocket-based measures of Matsuura et al. (2017), and the "dark cloud" measures of Mattila et al. (2017). Of these, only the 0.40 µm flux of Mattila et al. (2017) and the 0.80 µm CIBER flux of Matsuura et al. (2017) detect the COB with greater than 2-σ significance. Figure 5 also shows the three "outer solar-system" COB estimates made prior to the present work. Two of these are our result from NH21 and the NH upper limit derived by Zemcov et al. (2017), which we discussed in the introduction. The third value is the COB flux derived from Pioneer 10 and 11 observations, although Matsumoto et al. (2018) has questioned whether they are true measures of the absolute sky flux. Lastly, our present COB measurement is also plotted in Figure 5. The drastically smaller error bars bracketing our result, as compared to the Earth-space measures, are due simply to having a camera far enough away from the Sun that zodiacal light no longer matters. Our COB flux is in strong tension with the integrated galaxy light flux.
The implied anomalous sky component, in fact, is essentially equal to the IGL flux, itself.
The present result represents a marked improvement over our NH21 measurement of the COB flux. The error bars have been reduced by over a factor of two, greatly improving rejection of the hypothesis that the COB measured with New Horizons is consistent with the IGL. We presented a detailed discussion of this conflict in NH21, which is still valid for the present result. In brief, Conselice et al. (2016) has argued that the galaxy counts on which the IGL is based are strongly incomplete. Cooray et al. (2012), Zemcov et al. (2014), and Matsumoto & Tsumura (2019) have argued that the COB includes a substantial component of light from stars tidally removed from galaxies, or a population of faint sources in extended halos. None of these hypotheses may be correct, but they serve to indicate that the census of extragalactic sources conducted with HST may yet be incomplete.
Finally, while many of the VHE γ-ray studies provide a constraint on the COB that is consistent with that predicted from known galaxy counts, we note speculation that the propagation of γ-rays over cosmological distances may be partially shielded from pair-production by the VHE photons oscillating into axion-like particles (ALP) and back over their trajectory (Ringwald 2014;Biteau & Meyer 2022). If this hypothesized interaction occurs, the observed VHE γ-ray attenuation might admit COB fluxes significantly higher than the IGL flux.
ACKNOWLEDGMENTS
We thank NASA for funding, and continued support of the New Horizons mission. The data presented was obtained during the Kuiper Extended Mission of New Horizons. We thank Michael Coln, Steven Conard, Bruce Draine, James Gunn, James Kinnison, and David Munro for useful conversations. We thank the referee for a prompt and thorough report, which significantly improved the paper. This work made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/ gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.
Software: astropy (Astropy Collaboration et al. 2013, 2018), matplotlib (Hunter 2007), Vista (Lauer et al. 1983)

APPENDIX
A. OPTICAL PHOTONS GENERATED IN THE LORRI OPTICS
The LORRI optics include three field-flattening lenses positioned immediately in front of the CCD (Cheng et al. 2008;Weaver et al. 2020). The lenses are roughly 2 cm in diameter by 0.5 cm thick, and are made of fused-silica (SiO 2 ). The LORRI CCD subtends ∼ 1 sr as seen from the closest element to it. A relativistic proton or electron penetrating the lenses can emit Cherenkov radiation or dislodge electrons that could excite fluorescence emission. Looking at the variety of energetic particles interacting with the lenses, it appears that γ-rays generated in the spacecraft RTG (Radioisotope Thermoelectric Generator) power supply are of the greatest concern.
While the 238 Pu isotope that generates the RTG power produces a low-level flux of relatively low-energy γ-rays, the trace contaminant 236 Pu decays to a daughter product that generates a strong flux of 2.614 MeV photons. In 2021 the RTG is estimated to generate 2.0 × 10 9 2.6 MeV photons s −1 . LORRI is 2.0 m away from the nearest end of the cylindrical RTG and would receive a flux of F γ = 3.9 × 10 3 cm −2 s −1 , assuming isotropic radiation from the RTG and no shielding. Fortuitously, LORRI is positioned only ∼ 10 • off the long axis of the RTG, and self-absorption within the RTG is substantial. For an RTG of New Horizons' design, the flux at this angle is ∼ 5× less than the isotropic assumption, or F γ = 7.8 × 10 2 cm −2 s −1 , based on the measurements provided by Shirbacheh (1984).
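The flux estimate above is simple inverse-square geometry followed by the factor-of-five off-axis attenuation; a quick check:

```python
import math

# 2.6 MeV photon flux at LORRI from the RTG: isotropic emission over a
# 2.0 m (200 cm) baseline, then reduced ~5x for the off-axis attenuation
# measured by Shirbacheh (1984).
rate = 2.0e9                 # 2.6 MeV photons s^-1 emitted by the RTG
d = 200.0                    # LORRI-RTG distance, cm
f_iso = rate / (4.0 * math.pi * d**2)   # ~3.9e3 cm^-2 s^-1 (no shielding)
f_gamma = f_iso / 5.0                   # ~7.8e2 cm^-2 s^-1 (off-axis)
```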
A.1. Cherenkov radiation from RTG γ-rays
The 2.6 MeV γ-rays will Compton-scatter electrons in the lenses with enough energy to produce Cherenkov radiation. Using the Klein-Nishina equation (Klein & Nishina 1929), we calculate the cross section for an SiO 2 molecule to Compton-scatter a 2.6 MeV photon as 1.884 × 10 −24 cm 2 . Given the density of fused-silica, ρ = 2.2 g cm −3 , we estimate that a lens of thickness 0.5 cm (the relevant dimension, as we argue in the next paragraph) will scatter P f s = 0.021 of the 2.6 MeV photons passing through it.
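The scattering probability follows from the per-molecule cross section quoted above and the molecular number density of fused silica; a sketch (the SiO2 molar mass of 60.08 g mol−1 is our assumption, not stated in the text):

```python
import math

# Probability that a 2.6 MeV photon Compton-scatters while crossing one
# 0.5 cm fused-silica lens, using the per-molecule Klein-Nishina cross
# section quoted in the text.
sigma = 1.884e-24    # cm^2 per SiO2 molecule (from the text)
rho = 2.2            # g cm^-3, fused silica
m_mol = 60.08        # g mol^-1, SiO2 molar mass (assumed)
n_avo = 6.022e23     # mol^-1

n_mol = rho / m_mol * n_avo                       # molecules cm^-3
p_fs = 1.0 - math.exp(-n_mol * sigma * 0.5)       # ~0.021
```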
The Frank-Tamm equation (Jackson 1975) provides the energy loss per unit distance traveled due to the generation of Cherenkov emission, for a relativistic electron passing through the lens. The equation gives the monochromatic energy loss at a given optical frequency and integrates it over the desired interval:
dE/dx = (e²/c²) ∫_{ω₀}^{ω₁} [1 − 1/(β² n²(ω))] ω dω,   (A1)
where e is the electron charge, β = v/c, n(ω) is the refractive index of the glass, and ω is the angular frequency of the light. For LORRI we are not concerned with the energy loss directly, but with the number of optical photons generated. Recasting the equation as the number of photons generated, using dN = dE/(ℏω), and taking into account that the refractive index of fused-silica (n = 1.5) is nearly constant over the passband:
dN/dx = (e²/(ℏc²)) [1 − 1/(β² n²)] (ω₁ − ω₀).   (A2)
Compton scattering will produce electrons with a range of energies, but for typical β = 0.92, and limiting frequencies ω 0 = 2.09 × 10 15 s −1 and ω 1 = 4.71 × 10 15 s −1 , corresponding to 0.9 µm and 0.4 µm, the Cherenkov photon production for a single scattered electron is dN/dx = 302 cm −1 .
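Eq. (A2) can be evaluated directly: in Gaussian units e²/(ℏc²) equals the fine-structure constant divided by c, so the photon yield per unit path length is

```python
# Evaluating Eq. (A2): dN/dx = (alpha/c) * (1 - 1/(beta^2 n^2)) * (w1 - w0),
# using e^2/(hbar c^2) = alpha/c in Gaussian units.
alpha = 1.0 / 137.036        # fine-structure constant
c = 2.998e10                 # cm s^-1
beta, n = 0.92, 1.5          # typical scattered electron, fused silica
w0, w1 = 2.09e15, 4.71e15    # rad s^-1 (0.9 and 0.4 micron limits)

dn_dx = (alpha / c) * (1.0 - 1.0 / (beta**2 * n**2)) * (w1 - w0)
# dn_dx is ~3.0e2 Cherenkov photons cm^-1, matching the text's 302 cm^-1
```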
If the RTG γ-ray flux produced isotropically-emitted Cherenkov radiation, that could account for ∼ 13% of the anomalous sky component. However, the Cherenkov radiation generated by γ-rays coming from the RTG is strongly anisotropic. The following calculation of the isotropic-flux example merely serves as a point of reference to establish that the anomalous sky component cannot in fact be due to Cherenkov radiation generated in the lenses. Potentially detectable Cherenkov-radiation photons are generated at the rate:
N_L = F_γ A P_fs η_L (Ω_L / 4π) (dN/dx) Δx,   (A3)
where A is the total area of the lenses, η L = 0.9 is the LORRI quantum efficiency, Ω L = 1 sr is the solid angle of the LORRI CCD as seen from the lenses, and ∆x = 0.3 cm is typical length of the scattered electron's path through the lens. Each lens has area ∼ π cm 2 ; with three lenses in series, A = 9.4 cm 2 . For these parameters, N L = 1.0 × 10 3 s −1 , while the anomalous detected sky flux in LORRI is 8.0 × 10 3 photons s −1 . Given the reality that Cherenkov radiation is highly anisotropic and aligned around the velocity vectors of the relativistic electrons, we will now demonstrate that all of the Cherenkov photons generated within the lenses will be directed up and out of the LORRI optics to the sky, rather than down into the CCD. Since, as noted, LORRI is positioned on the opposite side of the spacecraft from the RTG at an angle of only ∼ 10 • with respect to the long axis of the RTG, the RTG will appear as a relatively compact source to LORRI. Further, the γ-rays will travel outwards through LORRI roughly aligned with its optical axis. Both Compton scattering and Cherenkov radiation have strong angular dependencies. For Compton scattering, conservation of momentum demands that the scattered electron has a forward component of momentum aligned with the incoming γ-ray photon in addition to whatever perpendicular component is transferred to it. The trajectories of the electrons are thus confined to the hemisphere ahead of the photon. The energy imparted to the electron is a strong function of the angle of its trajectory with respect to the path of the incoming photon, with the maximum energy occurring at zero angular deflection. Conversely, electrons with large scattering angles correspond to those with low energy; for a 2.6 MeV photon, electrons deflected at angles larger than ∼ 80 • will not generate Cherenkov radiation.
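Plugging the stated values into Eq. (A3) reproduces the isotropic-emission reference estimate:

```python
import math

# Isotropic-emission reference estimate of Eq. (A3), values from the text.
f_gamma = 7.8e2   # cm^-2 s^-1, 2.6 MeV photon flux at LORRI
area = 9.4        # cm^2, three lenses in series
p_fs = 0.021      # scattering probability per lens crossing
eta_l = 0.9       # LORRI quantum efficiency
omega_l = 1.0     # sr, CCD solid angle seen from the lenses
dn_dx = 302.0     # Cherenkov photons cm^-1 per scattered electron
dx = 0.3          # cm, typical electron path length in a lens

n_l = f_gamma * area * p_fs * eta_l * (omega_l / (4.0 * math.pi)) * dn_dx * dx
# n_l is ~1.0e3 photons s^-1, versus the 8.0e3 photons s^-1 anomalous flux
```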
Cherenkov photons are emitted perpendicular to the surface of a cone with the scattered electron at its vertex; the cone's geometry is analogous to the "Mach cone" anchored to a supersonic aircraft. The angle of Cherenkov emission with respect to the trajectory of the electron is φ = arccos 1 βn .
For LORRI, φ is always less than 47.3°, and the Cherenkov photons are always confined to the "outgoing" hemisphere. The largest Cherenkov emission angle with respect to the optical axis is 87°, which is associated with electrons scattered at 51° from the axis (for a 2.6 MeV γ-ray moving parallel to the optical axis). Given the 10° angle of the incident photons, some small fraction of Cherenkov photons will indeed be emitted into the "CCD hemisphere," but in directions still too far from the CCD to illuminate it. We conclude that there is no direct path for Cherenkov photons generated by RTG γ-rays to illuminate the LORRI CCD, and thus explain the anomalous sky component.
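The 47.3° bound follows from the fastest Compton electron a 2.614 MeV photon can produce (the Compton edge), which sets the largest β and hence the largest φ = arccos[1/(βn)]. A sketch of that calculation (our reconstruction of the bound, using the standard Compton-edge formula):

```python
import math

# Maximum Cherenkov angle in the lenses: the Compton edge of a 2.614 MeV
# photon sets the fastest electron, hence the largest beta and phi.
e_gamma = 2.614     # MeV, incident photon energy
m_e = 0.511         # MeV, electron rest energy
n = 1.5             # refractive index of fused silica

eps = e_gamma / m_e
ke_max = e_gamma * 2 * eps / (1 + 2 * eps)    # Compton edge, ~2.38 MeV
gamma = 1.0 + ke_max / m_e
beta = math.sqrt(1.0 - 1.0 / gamma**2)        # ~0.984
phi_max = math.degrees(math.acos(1.0 / (beta * n)))   # ~47.3 degrees
```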
A.2. Fluorescent Emission induced by RTG γ-rays
Electrons scattered by γ-rays will also lose energy by Coulomb scattering (Jackson 1975) off other electrons within the lenses. In the general case, as the electrons recombine with atoms within the glass, isotropically-emitted optical photons may be generated by fluorescence. Fused-silica, however, is known for its extremely low fluorescence response, a property used by Moore et al. (2018), for example, to allow clean isolation of Cherenkov-radiation diagnostic signals in fusion experiments. As Moore et al. emphasize, ultra-pure fused silica is essentially free of optical-band fluorescent emission. Any fluorescent emission in the LORRI lenses would thus be due to trace impurities. The purity of the LORRI fused-silica glass is not known in specific detail, but the lenses were fabricated with "standard" lens-grade material stated to have impurities at the < 1 ppm level.
Simple arguments based on the energetics of the RTG γ-ray flux at LORRI, as compared to the flux of the anomalous sky component, show that the anomaly is not likely to have been generated by fluorescence in the lenses. The quantitative inputs are largely identical to those used to estimate the Cherenkov flux. The anomalous sky signal is 8.0 × 10 3 photons s −1 delivered to the CCD. In a 65 s exposure, for an average photon energy of 2 eV (true at the ∼ 6000 Å pivot wavelength), the total energy received is 1.0 MeV. The available energy provided by γ-rays is 7.8 × 10 2 cm −2 s −1 of 2.6 MeV photons, illuminating 9.4 cm 2 of glass. Only 0.021 of the photons will be scattered, and on average only half of a photon's energy will be transferred to an electron. With isotropic emission, only (4π) −1 of this energy is available for generating optical photons in the CCD. Multiplying all these factors yields a budget for generating photons of 1.1 × 10 3 MeV. As stated, pure fused-silica will absorb the energy of the scattered electrons without converting it into optical photons. With impurities in the lenses at < 10 −6 abundance, even if they converted the electron energy to optical photons at 100% efficiency, their net effect would be three orders of magnitude too small to account for the anomaly, if their molecular cross section for interacting with the scattered electrons is similar to that of SiO 2 molecules. We conclude that fluorescent emission from the lenses is unlikely to explain the anomaly.
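The energy-budget comparison above multiplies out as follows:

```python
import math

# Energy available from Compton-scattered electrons for isotropic
# fluorescence toward the CCD, versus the energy the anomalous sky signal
# deposits in one 65 s exposure. All inputs are from the text.
f_gamma, area, t_exp = 7.8e2, 9.4, 65.0   # cm^-2 s^-1, cm^2, s

budget_mev = (f_gamma * area * t_exp   # gamma-rays through the lenses
              * 0.021                  # fraction Compton-scattered
              * 0.5 * 2.6              # ~half of 2.6 MeV goes to the electron
              / (4.0 * math.pi))       # isotropic fraction toward the ~1 sr CCD

anomaly_mev = 8.0e3 * t_exp * 2.0e-6   # 8.0e3 photons/s of ~2 eV, in MeV
# budget_mev ~ 1.1e3 MeV versus anomaly_mev ~ 1.0 MeV: with impurities at
# < 1e-6 abundance the fluorescent yield falls ~3 orders of magnitude short.
```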
Lastly, we do note that the LORRI lenses are coated to reduce ghosting. Depending on the thickness of the coatings, they may effectively be regarded as a separate form of impurity; two 1000 Å thick coatings relative to the 0.5 cm thickness
Figure 1. An average of the first four images in the present dataset. The area is 17.′4 × 17.′4. The display range is 50 DN (linear stretch starting at −5 DN). The faintest stars visible are at V = 19.1. The top of the field is at PA 139.°6.
Figure 3. A stacked bar chart showing the amplitudes of the known sky components for the present field (leftmost bar) as compared to the seven NH21 fields. The black horizontal lines with error bars show our measured total sky values and their uncertainties for each field. The small flux from bright galaxies masked out in the LORRI field is not included in this figure.
Figure 4. A bar chart showing the amplitudes of the anomalous sky components for the present field (blue) as compared to the seven NH21 fields.

4. AN ANOMALOUS BACKGROUND
Figure 5. The present result is compared to previous COB measures over the wavelengths spanned by the LORRI passband. Our NH21 COB flux (for the Zemcov DGL) is shown in gray, offset to the blue for clarity. Direct COB flux measurements are shown as points with error bars. The Zemcov et al. (2017) flux limit and the Mattila et al. (2017) 0.52 µm limit are shown as 2-σ upper limits with 1-σ arrows. IGL estimates are shown as lines with 1-σ bounds. COB fluxes inferred from VHE γ-rays are shown as shaded bands.
Table 1. Sky Flux Decomposition

  Component                            nW m −2 sr −1   Stat.   Sys.
  Total Sky                            24.22 ± 0.80    0.80    0.00
  − Scattered Starlight (SSL)           5.17 ± 0.52    0.00    0.52
  − Scattered Milky Way Light (DGL)     2.22 ± 1.00    0.32    0.95
  − Faint Stars (FSL)                   1.16 ± 0.18    0.06    0.17
  − Two-photon continuum (2PC)          0.93 ± 0.47    0.00    0.47
  − Scattered Galaxy Light (SGL)        0.07 ± 0.01    0.00    0.01
  + Bright Field Galaxies               1.70 ± 0.07    0.04    0.06
  = Cosmic Optical Background          16.37 ± 1.47    0.86    1.19
  − Integrated Galaxy Light (IGL)       8.31 ± 1.24    0.78    0.97
  = Anomalous Flux                      8.06 ± 1.92    1.16    1.53
The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation.
Acciari, V. A., Ansoldi, S., Antonelli, L. A., et al. 2019, MNRAS, 486, 4233. doi:10.1093/mnras/stz943

Ahnen, M. L., Ansoldi, S., Antonelli, L. A., et al. 2016, A&A, 590, A24. doi:10.1051/0004-6361/201527256

Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33

Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123

Bernstein, R. A. 2007, ApJ, 666, 663

Biteau, J. & Meyer, M. 2022, arXiv:2202.00523

Cheng, A. F., Weaver, H. A., Conard, S. J., et al. 2008, SSRv, 140, 189

Conselice, C. J., Wilkinson, A., Duncan, K., et al. 2016, ApJ, 830, 83

Cooray, A. 2016, Royal Society Open Science, 3, 150555

Cooray, A., Smidt, J., de Bernardis, F., et al. 2012, Nature, 490, 514

Desai, A., Helgason, K., Ajello, M., et al. 2019, ApJL, 874, L7. doi:10.3847/2041-8213/ab0c10

Driver, S. P., Andrews, S. K., Davies, L. J., et al. 2016, ApJ, 827, 108

Fermi-LAT Collaboration, Abdollahi, S., Ackermann, M., et al. 2018, Science, 362, 1031. doi:10.1126/science.aat8123

Fixsen, D. J., Cheng, E. S., Gales, J. M., et al. 1996, ApJ, 473, 576

Flewelling, H. A., Magnier, E. A., Chambers, K. C., et al. 2020, ApJS, 251, 7. doi:10.3847/1538-4365/abb82d

Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2016, A&A, 595, A2

Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, A&A, 616, A1

Girardi, L., Groenewegen, M. A. T., Hatziminaoglou, E., et al. 2005, A&A, 436, 895

Girardi, L., Barbieri, M., Groenewegen, M. A. T., et al. 2012, Astrophysics and Space Science Proceedings, 26, 165

Gladstone, G. R., Pryor, W. R., Hall, D. T., et al. 2021, AJ, 162, 241. doi:10.3847/1538-3881/ac23cd

H. E. S. S. Collaboration, Abramowski, A., Acero, F., et al. 2013, A&A, 550, A4

H. E. S. S. Collaboration, Abdalla, H., Abramowski, A., et al. 2017, A&A, 606, A59. doi:10.1051/0004-6361/201731200

Hill, M. E., Allen, R. C., Kollmann, P., et al. 2020, ApJ, 905, 69. doi:10.3847/1538-4357/abb408

Hoffleit, D. & Warren, W. H. 1995, VizieR Online Data Catalog, V/50

Høg, E., Fabricius, C., Makarov, V. V., et al. 2000, A&A, 355, L27

Hunter, J. D. 2007, Computing in Science and Engineering, 9, 90

Jackson, J. D. 1975, Classical Electrodynamics, 2nd ed. (New York: Wiley)

Klein, O. & Nishina, T. 1929, Zeitschrift fur Physik, 52, 853. doi:10.1007/BF01366453

Lauer, T. R. 1989, PASP, 101, 445. doi:10.1086/132455

Lauer, T. R., Postman, M., Weaver, H. A., et al. 2021, ApJ, 906, 77. doi:10.3847/1538-4357/abc881

Lauer, T. R., Stover, R., & Terndrup, D. 1983, "The VISTA User's Guide," Lick Observatory Technical Report No. 34

Matsumoto, T., Tsumura, K., Matsuoka, Y., et al. 2018, AJ, 156, 86

Matsumoto, T. & Tsumura, K. 2019, PASJ, 71, 88

Matsuoka, Y., Ienaka, N., Kawara, K., et al. 2011, ApJ, 736, 119

Matsuura, S., Arai, T., Bock, J. J., et al. 2017, ApJ, 839, 7

Mattila, K., Väisänen, P., Lehtinen, K., et al. 2017, MNRAS, 470, 2152. doi:10.1093/mnras/stx1296

Miville-Deschênes, M.-A. & Lagache, G. 2005, ApJS, 157, 302

Moore, A. S., Schlossberg, D. J., Hartouni, E. P., et al. 2018, Review of Scientific Instruments, 89, 10I120. doi:10.1063/1.5039322

Nikishov, A. I. 1962, Soviet Phys. JETP, 14, 393

Nussbaumer, H. & Schmutz, W. 1984, A&A, 138, 495

Onishi, Y., Sano, K., Matsuura, S., et al. 2018, PASJ, 70, 76. doi:10.1093/pasj/psy070

Puget, J.-L., Abergel, A., Bernard, J.-P., et al. 1996, A&A, 308, L5

Ringwald, A. 2014, arXiv:1407.0546

Saldana-Lopez, A., Domínguez, A., Pérez-González, P. G., et al. 2021, MNRAS, 507, 5144. doi:10.1093/mnras/stab2393
International Solar Polar Mission. M Shirbacheh, GPHS-RTG Radiation Summary Report. NASA/JPL internal report. Shirbacheh, M. 1984, International Solar Polar Mission. GPHS-RTG Radiation Summary Report. NASA/JPL internal report 1628-43
. L Spitzer, J L Greenstein, 10.1086/145480ApJ. 114407Spitzer, L. & Greenstein, J. L. 1951, ApJ, 114, 407. doi:10.1086/145480
. M Trenti, M Stiavelli, ApJ. 676767Trenti, M., & Stiavelli, M. 2008, ApJ, 676, 767
. H A Weaver, A F Cheng, F Morgan, PASP. 13235003Weaver, H. A., Cheng, A. F., Morgan, F., et al. 2020, PASP, 132, 035003
. M Zemcov, J Smidt, T Arai, Science. 346732Zemcov, M., Smidt, J., Arai, T., et al. 2014, Science, 346, 732
. M Zemcov, P Immel, C Nguyen, Nature Communications. 815003Zemcov, M., Immel, P., Nguyen, C., et al. 2017, Nature Communications, 8, 15003
The coatings on each surface of the lenses, for example, represent a 2 × 10⁻⁵ relative effect (as the coatings on each surface receive only half the electrons available to molecules in the bulk of the lenses, we have reduced their efficiency by 1/2). At this writing we have been unable to locate information on the lens coatings, but again, unless they have exceptionally large cross sections for generating optical photons, it is not likely that they can account for the anomalous sky.

A.3. Cherenkov radiation from scattered RTG γ-rays
At some level, structures in the New Horizons spacecraft will scatter RTG γ-rays and direct lower-energy secondary γ-rays through the LORRI optics. High-fidelity estimates of the scattered flux require a detailed structural model of the spacecraft. However, simple arguments suggest that the effects of scattered γ-rays will be modest. The New Horizons spacecraft is optically thin to 2.6 MeV γ-rays. The fraction of γ-rays scattered is << 1, and is most likely < 0.1. LORRI thus "sees" the RTG surrounded by a low-amplitude γ-ray halo. The most energetic scattered γ-rays must be those scattered only by small angles, which will also generate outwardly-directed Cherenkov photons.
Even γ-rays entering LORRI from behind at angles 45 • from the optical axis, however, will not generate Cherenkov photons that will directly illuminate the CCD. In any case, this is where the "isotropic" Cherenkov example is useful. If the full flux at LORRI of RTG γ-rays only produces 13% of the anomalous sky component even under the (incorrect) assumption of isotropic Cherenkov radiation, a halo of scattered γ-rays down by an order of magnitude or more will certainly not be important.
A.4. Cherenkov radiation from the RTG Neutron Flux
The RTG also emits low-level neutron emission; however, that also appears to be insufficient by a number of orders of magnitude to generate Cherenkov radiation that would explain the anomalous sky. As outlined by Moore et al. (2018), neutrons do not generate Cherenkov radiation directly, having no electric charge, but collide with the nuclei of Si and O atoms in the lenses, exciting nuclear γ-ray emission, which in turn may Compton scatter electrons. Moore et al. (2018) provide cross sections for the generation of γ-rays sufficient to in turn generate relativistic electrons, which for a SiO₂ molecule is 8.3 × 10⁻²⁷ cm², over two orders of magnitude smaller than the γ-ray Compton-scattering cross section. There is also an energy threshold; only neutrons with energies > 2 MeV can excite the particular nuclear transitions needed to generate the relevant γ-rays. In 2021, the neutron flux at LORRI, assuming isotropic emission from the RTG, is 89 cm⁻² s⁻¹ over all energies. While γ-rays will be emitted isotropically by the nuclei, the net production of γ-rays within the lenses, 0.1 s⁻¹, by neutrons is negligible.
A.5. Cherenkov radiation from Cosmic Ray Protons

Cosmic ray protons with energies > 1.34 GeV will generate Cherenkov radiation directly in the LORRI lenses. These will be galactic in origin and thus radiate LORRI more or less isotropically, generating isotropic Cherenkov radiation. Their flux, 1.1 cm⁻² s⁻¹, at New Horizons (Hill et al. 2020) is insufficient to generate significant Cherenkov radiation.
ON SYMMETRIC PRIMITIVE POTENTIALS

Patrik Nabelek, Dmitry Zakharov, and Vladimir Zakharov

26 Dec 2018 · arXiv:1812.10545 · doi:10.1093/integr/xyz006 · Semantic Scholar CorpusID: 119099345
PDF: https://arxiv.org/pdf/1812.10545v1.pdf

Keywords: integrable systems, Schrödinger equation, primitive potentials

Abstract. The concept of a primitive potential for the Schrödinger operator on the line was introduced in [2, 3, 4]. Such a potential is determined by a pair of positive functions on a finite interval, called the dressing functions, which are not uniquely determined by the potential. The potential is constructed by solving a contour problem on the complex plane. In this paper, we consider a reduction where the dressing functions are equal. We show that in this case, the resulting potential is symmetric, and describe how to analytically compute the potential as a power series. In addition, we establish that if the dressing functions are both equal to one, then the resulting primitive potential is the elliptic one-gap potential.
Introduction
One of the fundamental insights underlying the modern theory of integrable systems is the discovery of an intimate relationship between certain linear differential or difference operators, on one hand, and corresponding nonlinear equations on the other. The first of these relationships to be discovered, and arguably the most important one, is the link between the one-dimensional Schrödinger equation on the real axis

(1) −ψ″ + u(x)ψ = Eψ,  −∞ < x < ∞,

and the Korteweg-de Vries equation

(2) u_t(x, t) = 6u(x, t)u_x(x, t) − u_{xxx}(x, t).
The study of solutions of the KdV equation has proceeded hand-in-hand with an analysis of the spectral properties of the Schrödinger operator appearing on the left-hand side of the Schrödinger equation (1). There are three broad methods for constructing solutions of the KdV equation, based on restricting the potentials of the Schrödinger operator. The inverse scattering method (ISM) allows us to construct potentials, and hence solutions of the KdV equation, that are rapidly vanishing as x → ±∞. Such potentials have a finite discrete spectrum for E < 0 and a doubly degenerate continuous spectrum for E > 0, and a subset of them, corresponding to multisoliton solutions of the KdV equation, are reflectionless for positive energies. The finite-gap method, on the other hand, constructs periodic and quasi-periodic potentials of the Schrödinger operator (1) whose spectrum consists of finitely many allowed bands, one infinite, separated by forbidden gaps. These potentials are reflectionless in the allowed bands.
Both of these methods construct globally defined solutions of the KdV equation. The third method, called the dressing method [1], constructs solutions locally near a given point on the (x, t)-plane. An advantage of the method is that the constructed solutions can be quite general. However, the problem of extending such solutions to the entire (x, t)-plane is a difficult one.
Our work is motivated by a pair of related questions. First, one can ask what is the exact relationship between the ISM and the finite-gap method, and whether they can both be generalized by the dressing method. It has long been known that multisoliton solutions of the KdV equation are limits of finite-gap solutions corresponding to rational degenerations of the spectral curve. However, the converse relationship, which would consist in obtaining finite-gap solutions as limits of multisoliton solutions, has not been worked out. Additionally, one can ask which potentials of the Schrödinger operator, other than the finite-gap ones, have a band-like structure.
In the papers [2,3,4], the second and third authors presented a method for constructing potentials of the Schrödinger operator (1), called primitive potentials, that provides partial answers to these questions. Primitive potentials are constructed by directly implementing the dressing method, and can be thought of as the closure of the set of multisoliton potentials. This procedure involves a reformulation of the ISM that is inherently symmetric with respect to the involution x → −x, and the resulting primitive potentials are non-uniquely determined by a pair of positive, Hölder-continuous functions, called the dressing functions, defined on a finite interval.
In this paper we continue the study of primitive potentials. We consider primitive potentials defined by a pair of dressing functions that are equal. Such potentials are symmetric with respect to the reflection x → −x. We show that the contour problem defining symmetric primitive potentials can be solved analytically, and we give an algorithm for computing the Taylor coefficients of a primitive potential. In the case when the dressing functions are both identically equal to 1, we show that the corresponding primitive potential is the elliptic one-gap potential.
Primitive potentials
In this section, we recall the definition of primitive potentials, which were first introduced in the papers [2, 3, 4] as generalizations of finite-gap potentials. Primitive potentials are constructed by taking the closure of the set of N-soliton potentials as N → ∞, so we begin by summarizing the inverse scattering method (ISM) as a contour problem (see [6], [7]). The finite-gap method is symmetric with respect to the transformation x → −x, while the ISM is not, so we give an alternative formulation of the ISM (in the reflectionless case) that takes this symmetry into account.
2.1. The inverse scattering method. Consider the self-adjoint Schrödinger operator

(3) L(t) = −d²/dx² + u(x, t)

on the Sobolev space H²(R) ⊂ L²(R). We suppose that the potential u(x, t) rapidly decays at infinity when t = 0:

(4) ∫_{−∞}^{∞} (1 + |x|)(|u(x, 0)| + |u_x(x, 0)| + |u_{xx}(x, 0)| + |u_{xxx}(x, 0)|) dx < ∞
and satisfies the KdV equation (2). Under this assumption, the spectrum of L(t) consists of an absolutely continuous part [0, ∞) and a finite number of eigenvalues −κ₁², . . . , −κ_N² that do not depend on t. There exist two Jost solutions ψ_±(k, x, t) such that

(5) L(t)ψ_±(k, x, t) = k² ψ_±(k, x, t),  Im(k) > 0,

with asymptotic behavior

(6) lim_{x→±∞} e^{∓ikx} ψ_±(k, x, t) = 1.

The Jost solutions ψ_± are analytic for Im k > 0 and continuous for Im k ≥ 0, and have the following asymptotic behavior as k → ∞ with Im k > 0:

(7) ψ_±(k, x, t) = e^{±ikx} ( 1 + Q_±(x, t)/(2ik) + O(1/k²) ),

where

(8) Q_+(x, t) = −∫_x^∞ u(y, t) dy,  Q_−(x, t) = −∫_{−∞}^x u(y, t) dy.
The Jost solutions satisfy the scattering relations

(9) T(k) ψ_∓(k, x, t) = \overline{ψ_±(k, x, t)} + R_±(k, t) ψ_±(k, x, t),  k ∈ R,

where T(k) and R_±(k, t) are the transmission and reflection coefficients, respectively. These coefficients satisfy the following properties:
Proposition 1. The transmission coefficient T(k) is meromorphic for Im k > 0 and is continuous for Im k ≥ 0. It has simple poles at iκ₁, . . . , iκ_N with residues

(10) Res_{iκ_j} T(k) = iμ_j(t) γ_j(t)²,

where

(11) γ_j(t)^{−1} = ‖ψ_+(iκ_j, ·, t)‖_{L²},  ψ_+(iκ_j, x, t) = μ_j(t) ψ_−(iκ_j, x, t).

Furthermore,

(12) T(k) \overline{R_+(k, t)} + \overline{T(k)} R_−(k, t) = 0,  |T(k)|² + |R_±(k, t)|² = 1.

If we denote R(k, t) = R_+(k, t), R(k) = R(k, 0), and γ_j = γ_j(0), then

(13) T(−k) = \overline{T(k)},  R(−k) = \overline{R(k)},  k ∈ R,

(14) |R(k)| < 1 for k ≠ 0,  R(0) = −1 if |R(0)| = 1,

and the function R(k) is in C²(R) and decays as O(1/|k|³) as |k| → ∞. The time evolution of the quantities R(k, t) and γ_j(t) is given by

(15) R(k, t) = R(k) e^{8ik³t},  γ_j(t) = γ_j e^{4κ_j³ t}.
The collection (R(k, t), k ≥ 0; κ₁, . . . , κ_N, γ₁(t), . . . , γ_N(t)) is called the scattering data of the Schrödinger operator L(t). We encode the scattering data as a contour problem in the following way. Consider the function

(16) χ(k, x, t) = T(k) ψ_−(k, x, t) e^{ikx} for Im k > 0,  χ(k, x, t) = ψ_+(−k, x, t) e^{ikx} for Im k < 0.

Proposition 2. Let (R(k); κ₁, . . . , κ_N, γ₁, . . . , γ_N) be the scattering data of the Schrödinger operator L(0). Then the function χ(k, x, t) defined by (16) is the unique function satisfying the following properties:

(1) χ is meromorphic on the complex k-plane away from the real axis and has non-tangential limits

(17) χ_±(k, x, t) = lim_{ε→0} χ(k ± iε, x, t),  k ∈ R,

on the real axis.

(2) χ has a jump on the real axis satisfying

(18) χ_+(k, x, t) − χ_−(k, x, t) = R(k) e^{2ikx+8ik³t} χ_−(−k, x, t).

(3) χ has simple poles at the points iκ₁, . . . , iκ_N and no other singularities. The residues at the poles satisfy the condition

(19) Res_{iκ_j} χ(k, x, t) = ic_j e^{−2κ_j x+8κ_j³t} χ(−iκ_j, x, t),  c_j = γ_j².

(4) χ has the asymptotic behavior

(20) χ(k, x, t) = 1 + (i/2k) Q(x, t) + O(1/k²),  |k| → ∞,  Im k ≠ 0.
The function χ is a solution of the equation

(21) χ″ − 2ikχ′ − u(x)χ = 0,

and the function u(x, t) given by the formula

(22) u(x, t) = (d/dx) Q(x, t)

is a solution of the KdV equation (2) satisfying condition (4).
Remark 3.
We note that the contour problem for χ is not symmetric with respect to the transformation k → −k. The reflection coefficient R(k) satisfies the symmetry condition (13); however, χ is required to have poles in the upper k-plane and to be analytic in the lower k-plane. This asymmetry comes from the definition (5) of the Jost functions and is therefore ultimately of physical origin: in the ISM, we consider a quantum-mechanical particle approaching the localized potential from the right; in other words, the method is not symmetric with respect to the transformation x → −x. We will see in the next section that this asymmetry prevents us from directly relating the ISM to the finite-gap method. It is common (see [7]) to instead consider the two-component vector [χ(k) χ(−k)]. The jump condition on the real axis (18) is then replaced by a local Riemann-Hilbert problem. This Riemann-Hilbert problem includes poles in the upper and lower k-planes, but the transformation k → −k merely exchanges the components, which does not fix the asymmetry.

Remark 4. It is possible to relax the constraint |R(k)| < 1 for k ≠ 0 and allow |R(k)| to be equal to 1 inside two symmetric finite intervals v < |k| < u. In this case, the Riemann-Hilbert problem (18) is still uniquely solvable and generates a potential of the Schrödinger operator and a solution of the KdV equation. However, in this case condition (4) is not satisfied, and the potential is not rapidly decaying, at least when x → −∞. This extremely interesting case is completely unexplored.
2.2. N-soliton solutions.
We now restrict our attention to the reflectionless case, in other words we assume that R(k) = 0. In this case, the function χ has no jump on the real axis and is meromorphic on the entire k-plane with simple poles at the points iκ 1 , . . . , iκ N . Hence Prop. 2 reduces to the following.
Proposition 5. Let (0; κ 1 , . . . , κ N , γ 1 , . . . , γ N ) be the scattering data of the Schrödinger operator L(0) with zero reflection coefficient. Then the function χ(k, x, t) defined by (16) is the unique function satisfying the following properties:
(1) χ is meromorphic on the complex k-plane with simple poles at the points iκ₁, . . . , iκ_N and no other singularities, and its residues satisfy condition (19).
(2) χ has the asymptotic behavior (20) as |k| → ∞.
The corresponding solution u(x, t) of the KdV equation (2), given by formula (22), is known as the N-soliton solution. Finding this solution is a linear algebra exercise. If χ is expressed in terms of its residues
(23) χ = 1 + Σ_{n=1}^{N} χ_n / (k − iκ_n),

then plugging this into equation (19) gives the linear system

(24) χ_n + c_n e^{−2κ_n x+8κ_n³ t} Σ_{m=1}^{N} χ_m / (κ_n + κ_m) = c_n e^{−2κ_n x+8κ_n³ t}.
Let A be the determinant of this system:
(25) A = Σ_{I⊂{1,...,N}} Π_{(i,j)⊂I, i<j} (κ_i − κ_j)²/(κ_i + κ_j)² Π_{i∈I} q_i e^{−2κ_i x+8κ_i³ t},  q_i = c_i/(2κ_i) > 0.
Then the corresponding N-soliton solution of the KdV equation (2) is

(26) u(x, t) = −2 (d²/dx²) log A.

2.3. The naïve limit N → ∞.
The papers [2], [3], [4] were motivated by the following question. There exists a family of solutions of the KdV equation, called the finite-gap solutions, that are parametrized by the data of a hyperelliptic algebraic curve with real branch points and a line bundle on it. The solutions are given by the Matveev-Its formula
(27) u(x, t) = −2 (d²/dx²) ln Θ(Ux + Vt + Z | B),

where Θ(· | B) is the Riemann theta function of the hyperelliptic curve, and U, V, and Z are certain vectors. The solution u(x, t) is quasiperiodic in x and in t. It is well-known that the N-soliton solutions of the KdV equation (26) can be obtained from the Matveev-Its formula by degenerating the hyperelliptic spectral curve to a rational curve with N branch points. Is it possible, conversely, to obtain the Matveev-Its formula (27) as some kind of limit of N-soliton solutions (26) when N → ∞?
We may attempt to naïvely pass to the limit N → ∞ in (26) in the following way. Let [a, b] be an interval on the positive real axis, let R₁ be a positive Hölder-continuous function on [a, b], and let μ be a non-negative measure on [a, b]. Consider the following integral equation

(28) f(p, x, t) + (R₁(p)/π) e^{−2px+8p³t} ∫_a^b f(q, x, t)/(p + q) dμ(q) = R₁(p) e^{−2px+8p³t}

imposed on a function f(p, x, t), where p ∈ [a, b]. Let a = κ₁ < κ₂ < · · · < κ_N = b be a partition of [a, b] uniformly approximating μ. Replacing the above integral with the corresponding Riemann sum, and denoting c_n = R₁(κ_n)(b − a)/πN and χ_n = f(κ_n)(b − a)/πN, we obtain equation (24). Hence equation (28) can be seen as the limit of (24) as N → ∞.
It is easy to show that (28) has a unique solution, and that the corresponding function
(29) u(x, t) = (2/π) (d/dx) ∫_a^b f(p, x, t) dμ(p)
is a bounded solution of the KdV equation, satisfying the condition −2b < u < 0. The solution is oscillating as x → −∞, but as x → +∞ it is clear that f(p, x, t) → R₁(p) e^{−2px+8p³t}, hence u(x, t) decays exponentially.
In other words, u(x, t) can be viewed as a superposition of an infinite number of solitons uniformly bounded away from +∞. In particular, no solution obtained in this way will be an even function of x at any moment of time.
It is therefore impossible to obtain the finite-gap solutions given by the Matveev-Its formula (27) in this way, since these solutions are not decreasing as x → +∞. This lack of symmetry is due to the formulation of the ISM (see Remark 3). These observations were earlier made by Krichever [5], and a rigorous study of the properties of such solutions, showing the above results, was undertaken by Girotti, Grava and McLaughlin in [8].
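The discretization described above is easy to carry out numerically. The following sketch (ours, not the authors'; it takes R₁ ≡ 1, dμ the Lebesgue measure on [a, b] = [0.5, 1], a midpoint Riemann sum, and the normalization u = (2/π) d/dx ∫ f dμ matching (42)) illustrates both claims: the solution stays between −2b and 0 and decays rapidly as x → +∞.

```python
import numpy as np

a, b, N = 0.5, 1.0, 60
d = (b - a) / N
p = a + (np.arange(N) + 0.5) * d      # midpoint nodes on [a, b]
R = np.ones(N)                        # dressing function R_1 = 1

def S(x, t=0.0):
    # Riemann-sum discretization of (28): solve for f(p_i, x, t),
    # then return the approximate integral of f over [a, b].
    e = R * np.exp(-2.0 * p * x + 8.0 * p**3 * t)
    M = np.eye(N) + (e[:, None] / np.pi) * d / (p[:, None] + p[None, :])
    return np.sum(np.linalg.solve(M, e)) * d

def u(x, t=0.0, h=1e-3):
    # u = (2/pi) d/dx of the integral of f, evaluated by a central difference
    return (2.0 / np.pi) * (S(x + h, t) - S(x - h, t)) / (2.0 * h)

vals = [u(x) for x in (-3.0, -1.0, 0.0, 2.0)]
assert all(-2.0 * b < v < 0.0 for v in vals)  # bounded between -2b and 0 (b = 1 here)
assert abs(u(8.0)) < 1e-2                     # rapid decay as x -> +infinity
```

The discretized system is exactly the N-soliton system (24) with c_n = R₁(κ_n)(b − a)/πN, so the computed profile is a superposition of solitons accumulating on the left half-axis.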
Proof. At time t = 0, the function A(x) = A(x, t) is equal to

A(x) = 1 + q₁ e^{−2κ₁x} + · · · + q_N e^{−2κ_N x} + · · · + (q₁ · · · q_N) Π_{i<j} (κ_i − κ_j)²/(κ_i + κ_j)² e^{−2(κ₁+···+κ_N)x}.

Denote Φ = κ₁ + · · · + κ_N. We observe that the function Ã(x) = e^{Φx} A(x) is symmetric: Ã(−x) = Ã(x). Therefore, so is the corresponding solution of the KdV equation:

u = −2 (d²/dx²) log A = −2 (d²/dx²) log Ã.
We now observe that if we attempt to pass to the limit N → ∞, for example by setting κ n = a + (b − a)n/N, then the coefficients q n given by (30) have small denominators and diverge. Therefore we cannot obtain finite-gap solutions by this method.
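As a sanity check on the determinant formula (25)-(26) (a numerical sketch of ours, not part of the paper): for N = 1 one has A = 1 + q e^{−2κx+8κ³t} with q = c/(2κ), and u = −2 (log A)″ must coincide with the familiar one-soliton profile −2κ² sech²(κx − 4κ³t − ½ log q).

```python
import numpy as np

def u_one_soliton(x, t, kappa, q, h=1e-4):
    # A = 1 + q*exp(-2*kappa*x + 8*kappa^3*t) is (25) with N = 1;
    # u = -2 (d/dx)^2 log A, computed by a central second difference.
    logA = lambda y: np.log1p(q * np.exp(-2.0 * kappa * y + 8.0 * kappa**3 * t))
    return -2.0 * (logA(x + h) - 2.0 * logA(x) + logA(x - h)) / h**2

kappa, q, t = 1.3, 0.7, 0.2
x = np.linspace(-3.0, 3.0, 7)
exact = -2.0 * kappa**2 / np.cosh(kappa * x - 4.0 * kappa**3 * t - 0.5 * np.log(q))**2
assert np.max(np.abs(u_one_soliton(x, t, kappa, q) - exact)) < 1e-5
```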
2.5. From the ISM to the dressing method. One of the main results of the papers [2], [3], [4] is a generalization of the ISM within the framework of the dressing method. This construction allows us to take the N → ∞ limit of the set of N-soliton solutions and obtain finite-gap solutions. We briefly describe this generalization.
An N-soliton solution is given by Eqs. (25)-(26), where the c_i and the κ_i are the scattering data of a reflectionless potential and are therefore positive. However, formally these equations make sense under the weaker assumption that κ_i + κ_j ≠ 0 for all i and j and that the c_i/κ_i are positive. The corresponding function χ has poles on both the positive and the negative parts of the imaginary axis.

Proposition 7. Let κ₁, . . . , κ_N, c₁, . . . , c_N be nonzero real numbers satisfying the following conditions:
(1) κ_i ≠ ±κ_j for i ≠ j.
(2) c_j/κ_j > 0 for all j.
Then there exists a unique function χ(k, x, t) satisfying the following properties:
(1) χ is meromorphic on the complex k-plane with simple poles at the points iκ₁, . . . , iκ_N and no other singularities, and its residues satisfy condition (19).

We emphasize that, for a given N, the set of solutions of the KdV equation obtained using this proposition is still the set of N-soliton solutions. Specifically, one can check that the solution given by (25)-(26) for the data (κ₁, . . . , κ_N, c₁, . . . , c_N) is the N-soliton solution given by the scattering data (|κ₁|, . . . , |κ_N|, c̃₁, . . . , c̃_N), where

(32) c̃_j = c_j Π_{κ_n<0} (κ_j − κ_n)²/(κ_j + κ_n)² if κ_j > 0,  c̃_j = −(4κ_j²/c_j) Π_{κ_n<0, n≠j} (κ_j − κ_n)²/(κ_j + κ_n)² if κ_j < 0.
In other words, an N-soliton solution with a given set of parameters κ_n > 0 and phases c_n > 0 is described by Prop. 7 in 2^N different ways, by choosing the signs of the κ_n arbitrarily and adjusting the coefficients c_n using the above formula. We now give an informal argument why this alternative description of N-soliton potentials allows us to obtain finite-gap potentials in the N → ∞ limit. In the previous two sections, we made two attempts to use formulas (25)-(26) with κ_n > 0 to produce N-soliton solutions with large N. We can either keep the q_n bounded, in which case all solitons end up on the left half-axis, or symmetrically distribute the solitons about x = 0, in which case the q_n (or, alternatively, the c_n) need to be large.

To obtain a symmetric distribution of N solitons using Proposition 7, we choose, as in Section 2.4, a set of parameters κ_n > 0, and set the phases q_n according to (30). We then change the signs of half of the κ_n, and change the c_n according to Eq. (32). The resulting c_n will be bounded for large N, enabling us to take the N → ∞ limit.
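The sign-flip correspondence (32) can be verified numerically from the determinant formula (25) (our sketch, with hypothetical sample values; N = 2 with one negative κ): flipping the sign of κ₂ and adjusting the coefficients as in (32) leaves u unchanged.

```python
import numpy as np
from itertools import combinations

def u_nsoliton(x, kappa, c, h=1e-4):
    # u = -2 (log A)'' with A given by the subset-sum formula (25), at t = 0
    def logA(y):
        total = 0.0
        for r in range(len(kappa) + 1):
            for I in combinations(range(len(kappa)), r):
                term = 1.0
                for i, j in combinations(I, 2):
                    term *= ((kappa[i] - kappa[j]) / (kappa[i] + kappa[j]))**2
                for i in I:
                    term *= c[i] / (2.0 * kappa[i]) * np.exp(-2.0 * kappa[i] * y)
                total += term
        return np.log(total)
    return -2.0 * (logA(x + h) - 2.0 * logA(x) + logA(x - h)) / h**2

# signed data: kappa_2 < 0 and c_2 < 0, so that c_2/kappa_2 > 0
k1, k2, c1, c2 = 0.8, -1.3, 0.5, -0.9
# the equivalent ordinary scattering data via (32)
c1t = c1 * ((k1 - k2) / (k1 + k2))**2
c2t = -4.0 * k2**2 / c2
for x in (-1.0, 0.0, 0.5, 2.0):
    assert abs(u_nsoliton(x, [k1, k2], [c1, c2])
               - u_nsoliton(x, [abs(k1), abs(k2)], [c1t, c2t])) < 1e-5
```

For N = 1 the correspondence amounts to q → 1/q in A = 1 + q e^{−2κx}, which shifts log A by a linear function of x and so leaves u = −2 (log A)″ untouched.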
2.6. Primitive potentials. In the papers [2, 3, 4] the second and third authors considered a contour problem that can be viewed as the limit of Prop. 7 as N → ∞. Let 0 < k₁ < k₂, and let R₁ and R₂ be positive Hölder-continuous functions on [k₁, k₂]. Suppose that the function χ(k, x, t) satisfies the following properties:

(1) χ is analytic on the complex k-plane away from the cuts [ik₁, ik₂] and [−ik₂, −ik₁] on the imaginary axis, and has non-tangential limits

(33) χ_±(ip, x, t) = lim_{ε→0} χ(ip ± ε, x, t),  p ∈ (−k₂, −k₁) ∪ (k₁, k₂),

on the cuts.
(2) χ has jumps on the cuts satisfying

(34) χ_+(ip, x, t) − χ_−(ip, x, t) = iR₁(p) e^{−2px+8p³t} [χ_+(−ip, x, t) + χ_−(−ip, x, t)],

(35) χ_+(−ip, x, t) − χ_−(−ip, x, t) = −iR₂(p) e^{2px−8p³t} [χ_+(ip, x, t) + χ_−(ip, x, t)],

for p ∈ [k₁, k₂].

(3) χ has asymptotic behavior at infinity

(36) χ(k, x, t) = 1 + (i/2k) Q(x, t) + O(1/k²),  |k| → ∞,  Im k ≠ 0.

(4) There exist constants C(x, t) and α < 1 such that near the points ±ik₁ and ±ik₂ the function χ satisfies

(37) |χ(k, x, t)| < C(x, t)/|k ∓ ik_j|^α,  k → ±ik_j,  j = 1, 2.
Then the function u(x, t) given by the formula
(38) u(x, t) = (d/dx) Q(x, t)
is a solution of the KdV equation (2).
We call solutions of the KdV equation obtained in this way primitive solutions. For fixed moments of time, we obtain primitive potentials of the Schrödinger operator (1).
Remark 9. Condition (37) does not appear in the papers [2,3,4] and is an oversight of the authors. It is necessary, because we consider dressing functions R 1 and R 2 that do not vanish at k 1 and k 2 . For such functions χ may have logarithmic or algebraic singularities at the endpoints. Condition (37) is needed to exclude trivial meromorphic solutions of the Riemann-Hilbert problem, having poles at ±ik j and no jump on the cuts.
We also note that formulas (34)-(35) differ from the ones in [2, 3, 4] by a factor of π; this now seems to us to be a more natural normalization of the dressing functions R₁ and R₂.
Remark 10. There is a simple observation that justifies the need to include poles in both the upper and lower half planes when producing a finite gap potential as a limit of N-soliton potentials as N → ∞. The spectrum of an N-soliton potential determined by {κ n , c n } N n=1 is purely simple for the negative energy values E = −κ 2 n , and doubly degenerate for E > 0. Therefore, a limit as N → ∞ of N-soliton solutions with poles in the upper half-plane will have a simple spectrum E ∈ [−k 2 2 , −k 2 1 ] (in the one band case) and a doubly degenerate spectrum for E > 0. This is precisely the structure of the spectrum of a one-sided primitive potential having R 2 ≡ 0, which limits to a finite gap solution as x → −∞, but a trivial solution as x → ∞.
A finite-gap potential, on the other hand, has a doubly degenerate continuous spectrum on the interior of its bands, and a simple continuous spectrum on the band ends. To produce a finite-gap potential as a limit of N-soliton potentials as N → ∞, we need to include poles in both half-planes, so that in the limit we end up with two linearly independent bounded wave functions for E in the interior of a band.
A function χ(k, x, t) satisfying properties (33)-(36) can be written in the form
(39) χ(k, x, t) = 1 + (i/π) ∫_{k₁}^{k₂} f(q, x, t)/(k − iq) dq + (i/π) ∫_{k₁}^{k₂} g(q, x, t)/(k + iq) dq,

for some functions f(q, x, t) and g(q, x, t) defined for q ∈ [k₁, k₂]. Plugging this spectral representation into (34)-(35), we obtain the following system of singular integral equations on f and g for p ∈ [k₁, k₂]:
(40) f(p, x, t) + (R₁(p)/π) e^{−2px+8p³t} [ ∫_{k₁}^{k₂} f(q, x, t)/(p + q) dq + ∫_{k₁}^{k₂} g(q, x, t)/(p − q) dq ] = R₁(p) e^{−2px+8p³t},

(41) g(p, x, t) + (R₂(p)/π) e^{2px−8p³t} [ ∫_{k₁}^{k₂} f(q, x, t)/(p − q) dq + ∫_{k₁}^{k₂} g(q, x, t)/(p + q) dq ] = −R₂(p) e^{2px−8p³t}.
The corresponding solution of the KdV equation is equal to
(42) u(x, t) = (2/π) (d/dx) ∫_{k₁}^{k₂} ( f(q, x, t) + g(q, x, t) ) dq.
Symmetric primitive potentials
In this section, we show how to solve equations (40)-(41) analytically as Taylor series in the case when R₁ = R₂. Suppose that

(43) R₁(p) = R₂(p) = R(p).

Setting g(p, x, t) = −f(p, −x, −t) then reduces the system (40)-(41) to the single equation

(44) f(p, x, t) + (R(p)/π) e^{−2px+8p³t} [ ∫_{k₁}^{k₂} f(q, x, t)/(p + q) dq − ∫_{k₁}^{k₂} f(q, −x, −t)/(p − q) dq ] = R(p) e^{−2px+8p³t}.
The corresponding primitive solution u(x, t) of the KdV equation
(45) u(x, t) = (2/π) (d/dx) ∫_{k_1}^{k_2} [ f(q, x, t) − f(q, −x, −t) ] dq
satisfies the symmetry condition
(46) u(−x, −t) = u(x, t).
In particular, the potential u(x) = u(x, 0) at t = 0 is symmetric:
(47) u(−x) = u(x).

In terms of f(q, x) := f(q, x, 0), it takes the form

(48) u(x) = (2/π) (d/dx) ∫_{k_1}^{k_2} [ f(q, x) − f(q, −x) ] dq.
Remark 11. We emphasize that, in order for a primitive potential to be symmetric, it is sufficient but not necessary for the dressing functions R 1 and R 2 to be equal.
We now denote f (p, x) = f (p, x, 0) and set t = 0 in Eq. (44):
(49) e^{2px} f(p, x) + (R(p)/π) [ ∫_{k_1}^{k_2} f(q, x)/(p + q) dq − ∫_{k_1}^{k_2} f(q, −x)/(p − q) dq ] = R(p), p ∈ [k_1, k_2].
We show that this equation can be solved analytically. Introduce the variable s = p 2 and expand f (p, x) as a Taylor series in x, separating the even and odd coefficients in the following way:
(50) f(p, x) = Σ_{k=0}^{∞} (x^{2k}/(2k)!) f_k(s) + Σ_{k=0}^{∞} (x^{2k+1}/(2k+1)!) √s h_k(s), s = p^2.
Plugging this into (49) and collecting powers of x, we obtain the following system of equations on f k (s) and h k (s), where k is a non-negative integer:
(51) f_k(s) + R(√s) H[f_k](s) = R(√s) δ_{0k} − Σ_{i=0}^{k−1} C(2k, 2i) 2^{2k−2i} s^{k−i} f_i(s) − Σ_{j=0}^{k−1} C(2k, 2j+1) 2^{2k−2j−1} s^{k−j} h_j(s),

(52) h_k(s) − R(√s) H[h_k](s) = − Σ_{i=0}^{k} C(2k+1, 2i) 2^{2k−2i+1} s^{k−i} f_i(s) − Σ_{j=0}^{k−1} C(2k+1, 2j+1) 2^{2k−2j} s^{k−j} h_j(s),

where C(n, m) denotes the binomial coefficient.
Here H is the Hilbert transform on the interval [k_1^2, k_2^2], understood in the principal-value sense:

(53) H[ψ(s)] = (1/π) PV ∫_{k_1^2}^{k_2^2} ψ(s′)/(s′ − s) ds′.
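Numerically, this finite-interval Hilbert transform can be evaluated by subtracting the singularity. The following sketch is our illustration (not the authors' code); it checks the quadrature against the elementary transform of ψ(r) = r^2:

```python
import math

def hilbert_pv(psi, s, a, b, n=20000):
    """(1/pi) PV int_a^b psi(r)/(r - s) dr via singularity subtraction:
    (psi(r) - psi(s))/(r - s) is regular, and the leftover principal-value
    integral of 1/(r - s) equals log|(b - s)/(s - a)|."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        r = a + (i + 0.5) * h          # midpoint rule
        total += (psi(r) - psi(s)) * h / (r - s)
    return (total + psi(s) * math.log(abs((b - s) / (s - a)))) / math.pi

# For psi(r) = r^2 on [a, b]:
# PV int r^2/(r - s) dr = (b^2 - a^2)/2 + s(b - a) + s^2 log|(b - s)/(s - a)|.
a, b, s = 1.0, 4.0, 2.3
num = hilbert_pv(lambda r: r * r, s, a, b)
exact = ((b * b - a * a) / 2 + s * (b - a)
         + s * s * math.log((b - s) / (s - a))) / math.pi
```

The subtraction makes the integrand smooth, so a plain midpoint rule suffices.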
The corresponding primitive potential is given by
(54) u(x) = (2/π) Σ_{k=0}^{∞} (x^{2k}/(2k)!) ∫_{k_1^2}^{k_2^2} h_k(s′) ds′.
Equations (51)-(52) can be solved recursively for f k and h k provided that we know how to invert the operators 1 ± R( √ s)H. This can be done explicitly using the following proposition.
Proposition 12. Let α(s) be a Hölder-continuous function on the interval [k_1^2, k_2^2]. The integral operator L_α defined by

(55) L_α[ψ(s)] = ψ(s) + tan(πα(s)) H[ψ(s)]

has a unique inverse given by

(56) L_α^{−1}[ϕ(s)] = cos^2(πα(s)) ϕ(s) − sin(πα(s)) e^{−πH[α(s)]} H[cos(πα(s)) e^{πH[α(s)]} ϕ(s)].

If α is constant, then L_α^{−1} can be written as

(57) L_α^{−1}[ϕ(s)] = cos^2(πα) ϕ(s) − sin(πα) cos(πα) ((s − k_1^2)/(k_2^2 − s))^α H[ ((k_2^2 − s)/(s − k_1^2))^α ϕ(s) ].

Proof. The singular integral equation L_α[ψ(s)] = ϕ(s) takes the form

(58) ψ(s) − (tan(πα(s))/π) ∫_{k_1^2}^{k_2^2} ψ(r)/(s − r) dr = ϕ(s).
We invert this equation to express ψ in terms of ϕ by reformulating it as an inhomogeneous Riemann–Hilbert problem. The function

Ψ(s) = (1/π) ∫_{k_1^2}^{k_2^2} ψ(r)/(s − r) dr

is holomorphic in s ∈ C \ [k_1^2, k_2^2].
The boundary values of Ψ from the right and the left for s ∈ [k_1^2, k_2^2] satisfy

(59) (i/2)(Ψ^+(s) − Ψ^−(s)) = ψ(s), (1/2)(Ψ^+(s) + Ψ^−(s)) = (1/π) PV ∫_{k_1^2}^{k_2^2} ψ(r)/(s − r) dr.
The integral equation (58) is then equivalent to the Privalov problem
(60) Ψ + (s) − e −2iπα(s) Ψ − (s) = −2i cos(πα(s))e −iπα(s) ϕ(s)
where Ψ is normalized by the asymptotic behavior Ψ(s) → 0 as s → ∞.
To be able to apply the Plemelj formula to solve the Privalov problem (60), we first need to remove the multiplicative factor in front of Ψ^−. We do this by looking for Ψ in the form Ψ(s) = Φ(s)Ξ(s). Here the functions Φ(s) and Ξ(s) are holomorphic in C \ [k_1^2, k_2^2] and satisfy the following conditions. The function Φ(s) satisfies the corresponding homogeneous Riemann–Hilbert problem Φ^+(s) = e^{−2iπα(s)} Φ^−(s) and has the asymptotic behavior Φ(s) → 1 as s → ∞. Such a Φ(s) is given by

Φ(s) = exp( ∫_{k_1^2}^{k_2^2} α(r)/(s − r) dr ).
The boundary values of Φ are

(61) Φ_±(s) = exp(−πH[α(s)] ∓ iπα(s)) for s ∈ [k_1^2, k_2^2].

Note that Φ → Φ^{−1} under the transformation α → −α. Using this proposition with α(s) = tan^{−1}(R(√s))/π, we can recursively solve equations (51)–(52) and obtain u(x) as a power series in x.
The case of constant R
As an example, we calculate the first two coefficients of u(x) as a Taylor series in the case when R is a constant positive function. Let α = tan^{−1}(R)/π; then 0 < α < 1/2. By Prop. 12, the operators L_{±α} can be inverted explicitly, and the equations determining f_0, h_0, f_1, h_1 read

L_α[f_0(s)] = tan(πα),
L_{−α}[h_0(s)] = −2 f_0(s),
L_α[f_1(s)] = −4s h_0(s) − 4s f_0(s),
L_{−α}[h_1(s)] = −6 f_1(s) − 12s h_0(s) − 8s f_0(s).
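The four equations above are the k = 0, 1 instances of the recursions (51)–(52). As a quick consistency check (our sketch, not part of the original text), the right-hand-side coefficients can be regenerated from the binomial formulas:

```python
from math import comb

def rhs_terms(k):
    """Coefficients of s^{k-i} f_i and s^{k-j} h_j on the right-hand sides
    of the recursions (51)-(52); the R(sqrt(s)) * delta_{0k} source term of
    (51) is omitted.  Keys are ('f', i) or ('h', j)."""
    f_eq = {('f', i): -comb(2 * k, 2 * i) * 2 ** (2 * k - 2 * i)
            for i in range(k)}
    f_eq.update({('h', j): -comb(2 * k, 2 * j + 1) * 2 ** (2 * k - 2 * j - 1)
                 for j in range(k)})
    h_eq = {('f', i): -comb(2 * k + 1, 2 * i) * 2 ** (2 * k - 2 * i + 1)
            for i in range(k + 1)}
    h_eq.update({('h', j): -comb(2 * k + 1, 2 * j + 1) * 2 ** (2 * k - 2 * j)
                 for j in range(k)})
    return f_eq, h_eq

# k = 1 reproduces L_alpha[f_1] = -4 s h_0 - 4 s f_0 and
# L_{-alpha}[h_1] = -6 f_1 - 12 s h_0 - 8 s f_0.
f1_eq, h1_eq = rhs_terms(1)
```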
We compute

L_α^{−1}[1] = cos(πα) a(s),
L_{−α}^{−1}[a(s)] = (1/2)(a(s) + a^{−1}(s)),
L_α^{−1}[s a^{−1}(s)] = (s/2)(a(s) + a^{−1}(s)) − α(k_2^2 − k_1^2) a(s),
L_{−α}^{−1}[s a(s)] = (s/2)(a(s) + a^{−1}(s)) + α(k_2^2 − k_1^2) a^{−1}(s),

where a(s) = ((s − k_1^2)/(k_2^2 − s))^α; the last formula follows from the third one under α → −α, a → a^{−1}.
We therefore obtain

h_1(s) = 24(k_2^2 − k_1^2) α sin(πα) L_{−α}^{−1}[a(s)] − 8 sin(πα) L_{−α}^{−1}[s a(s)] = (k_2^2 − k_1^2) α sin(πα)(12 a(s) + 4 a^{−1}(s)) − 4 sin(πα) s (a(s) + a^{−1}(s)).

The integrals

∫_{k_1^2}^{k_2^2} a(s) ds = ∫_{k_1^2}^{k_2^2} a^{−1}(s) ds = π(k_2^2 − k_1^2) α / sin(πα),
∫_{k_1^2}^{k_2^2} s a(s) ds = (πα/(2 sin(πα))) ((k_2^4 − k_1^4) + α(k_2^2 − k_1^2)^2),
∫_{k_1^2}^{k_2^2} s a^{−1}(s) ds = (πα/(2 sin(πα))) ((k_2^4 − k_1^4) − α(k_2^2 − k_1^2)^2),

allow us to compute

(2/π) ∫_{k_1^2}^{k_2^2} h_0(s) ds = −4(k_2^2 − k_1^2) α,
(2/π) ∫_{k_1^2}^{k_2^2} h_1(s) ds = 8(k_2^2 − k_1^2) α (4(k_2^2 − k_1^2) α − (k_2^2 + k_1^2));
therefore by Equation (54) we get
(65) u(x) = −4α(k_2^2 − k_1^2) + 4α(k_2^2 − k_1^2)(4α(k_2^2 − k_1^2) − (k_2^2 + k_1^2)) x^2 + O(x^4).

We know that R = 1 (hence α = 1/4) and k_1 = 0 produces the exact solution u(x) = −k_2^2, and indeed by the above formula we get u_0 = −k_2^2 and u_1 = 0 in this case.
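The expansion (65) is easy to evaluate; the sketch below (ours) implements the truncation and checks the sanity case just quoted, R = 1 and k_1 = 0, which must give u_0 = −k_2^2 and u_1 = 0:

```python
import math

def u_series(x, R, k1, k2):
    """Truncated expansion (65): u0 + u1*x^2, with alpha = arctan(R)/pi."""
    alpha = math.atan(R) / math.pi
    delta = k2 ** 2 - k1 ** 2
    u0 = -4 * alpha * delta
    u1 = 4 * alpha * delta * (4 * alpha * delta - (k2 ** 2 + k1 ** 2))
    return u0 + u1 * x ** 2

# Sanity case from the text: R = 1 (alpha = 1/4) and k1 = 0 must reproduce
# the constant potential u(x) = -k2^2, i.e. u0 = -k2^2 and u1 = 0.
k2 = 1.7
u0 = u_series(0.0, 1.0, 0.0, k2)
u1 = u_series(1.0, 1.0, 0.0, k2) - u0   # the x^2 coefficient, since x = 1
```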
Formula (65) has some interesting implications. In the limit as R → 0 we observe that u(0) → 0 and u′′(0) → 0. In the limit as R → ∞ (that is, α → 1/2) we observe that u(0) → −2(k_2^2 − k_1^2) and u′′(0) → 4(k_2^2 − k_1^2)(k_2^2 − 3k_1^2). Note that if k_2^2 > 3k_1^2 then u′′(0) approaches a positive number from below as R → ∞, but if k_2^2 < 3k_1^2 then u′′(0) approaches a negative number. If k_2^2 < 3k_1^2, then in fact u′′(0) is negative for all R. On the other hand, if k_2^2 ≥ 3k_1^2, then u′′(0) is negative for R ∈ (0, R_*), positive for R ∈ (R_*, ∞), and u′′(0) = 0 for R = 0 or R = R_*, where R_* = tan(π(k_2^2 + k_1^2)/(4(k_2^2 − k_1^2))).
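The closed-form integrals of a^{±1}(s) used above are Beta-function identities; for instance ∫ a(s) ds = (k_2^2 − k_1^2) B(1 + α, 1 − α) = π(k_2^2 − k_1^2) α / sin(πα). The sketch below (ours) verifies this both via math.gamma and by direct quadrature after a substitution, 1 − x = u^{1/(1−α)}, chosen to remove the endpoint singularity:

```python
import math

def int_a(k1sq, k2sq, alpha, n=100000):
    """int_{k1^2}^{k2^2} ((s - k1^2)/(k2^2 - s))^alpha ds.  With
    s = k1^2 + Delta*x this is Delta * int_0^1 x^alpha (1-x)^(-alpha) dx;
    substituting 1 - x = u^(1/(1-alpha)) makes the integrand regular."""
    delta = k2sq - k1sq
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        x = 1.0 - u ** (1.0 / (1.0 - alpha))
        total += x ** alpha * h
    return delta * total / (1.0 - alpha)

k1sq, k2sq, alpha = 1.0, 3.0, 0.3
closed_form = (k2sq - k1sq) * math.pi * alpha / math.sin(math.pi * alpha)
beta_form = (k2sq - k1sq) * math.gamma(1 + alpha) * math.gamma(1 - alpha)
numeric = int_a(k1sq, k2sq, alpha)
```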
One-zone symmetric potential
In this section, we show that the dressing R 1 = R 2 = 1 on the interval [k 1 , k 2 ] produces the elliptic one-gap potential
(66) u(x) = 2℘(x + iω ′ − ω) + e 3 .
Previously, in the papers [3, 4], the second and third authors showed that this potential arises from the dressing

(67) R_1(p) = 1/R_2(p) = ((p − k_1)(p + k_2)) / ((k_2 − p)(p + k_1)).
Our new result uses the notation and calculations of [3, 4], but relies on the results of Chapter 4. First, we observe that if R_2(p) = 1/R_1(p), then equations (34)–(35) reduce to

χ^+(ip, x, t) = i R_1(p) e^{−2px+8p^3 t} χ^+(−ip, x, t),
χ^−(ip, x, t) = −i R_1(p) e^{−2px+8p^3 t} χ^−(−ip, x, t),

for p ∈ [k_1, k_2]. When R_1(p) = 1 and t = 0, the contour problem for χ(k, x) = χ(k, x, 0) is

(68) χ^+(ip, x) = i e^{−2px} χ^+(−ip, x), χ^−(ip, x) = −i e^{−2px} χ^−(−ip, x), p ∈ [k_1, k_2].
Our goal is to find the function χ satisfying (68). This can in principle be done using the inductive procedure described in Chapter 4 with R = 1 and α = 1/4. However, we will need only the first Taylor coefficient. Indeed, if we set x = 0, then

f(p, 0) = f_0(p) = sin(πα) a(s) = (1/√2) ((s − k_1^2)/(k_2^2 − s))^{1/4}.
Hence we find that the function

ξ(k) = χ(k, 0) = 1 + (i/π) ∫_{k_1}^{k_2} f(q, 0)/(k − iq) dq − (i/π) ∫_{k_1}^{k_2} f(q, 0)/(k + iq) dq = ((k^2 + k_1^2)/(k^2 + k_2^2))^{1/4}
satisfies equation (68) with x = 0:
(69) ξ + (ip) = iξ + (−ip), ξ − (ip) = −iξ − (−ip), p ∈ [k 1 , k 2 ].
We now look for a solution of (68) in the form χ(k, x) = ξ(k) χ_1(k, x), where χ_1(k, x) satisfies the condition

(70) χ_1^+(ip, x) = e^{−2px} χ_1^+(−ip, x), χ_1^−(ip, x) = e^{−2px} χ_1^−(−ip, x), p ∈ [k_1, k_2].

Such a function has already been found in [2, 3]. Let e_1, e_2, e_3 be defined by the equations k_1^2 = e_2 − e_3, k_2^2 = e_1 − e_3, e_1 + e_2 + e_3 = 0. Let ℘(z) = ℘(z | ω, iω′) be the Weierstrass function with half-periods ω and iω′, where ω and ω′ are real, such that e_1 = ℘(ω), e_2 = ℘(ω + iω′), e_3 = ℘(iω′).
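The linear system for e_1, e_2, e_3 solves in closed form: e_3 = −(k_1^2 + k_2^2)/3, e_2 = e_3 + k_1^2, e_1 = e_3 + k_2^2. A quick check of these conventions (our sketch):

```python
def branch_points(k1, k2):
    """Solve k1^2 = e2 - e3, k2^2 = e1 - e3, e1 + e2 + e3 = 0 for (e1, e2, e3)."""
    e3 = -(k1 ** 2 + k2 ** 2) / 3.0
    return e3 + k2 ** 2, e3 + k1 ** 2, e3

e1, e2, e3 = branch_points(1.0, 2.0)   # e1 = 7/3, e2 = -2/3, e3 = -5/3
```

For k_2 > k_1 > 0 this automatically gives the usual ordering e_1 > e_2 > e_3.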
We introduce, as in [2, 3], the variable z via the relation

(71) k^2 = e_3 − ℘(z).

This relation expresses the complex plane C with cuts [ik_1, ik_2] and [−ik_1, −ik_2] along the imaginary axis as a double cover of the period rectangle of ℘. The Schrödinger equation (1) with potential given by (66) is the Lamé equation

(72) ϕ′′ − [2℘(x − ω − iω′) + ℘(z)] ϕ = 0.

The Lamé equation has a solution

(73) ϕ(x, z) = [σ(x − ω − iω′ + z) σ(ω + iω′)] / [σ(x − ω − iω′) σ(ω + iω′ − z)] e^{−ζ(z)x},

which has an essential singularity ϕ(x, z) ∼ e^{−x/z} near the point z = 0 (corresponding to k = ∞). Therefore the function

(74) χ_1(k, x) = ϕ(x, z) e^{−ikx} = ϕ(x, z) e^{−ix/ sn z}

tends to 1 as k → ∞. It is easy to check that χ_1(k, x) satisfies the contour problem (70). Putting everything together, we obtain the following result.
Proposition 13. Let k_2 > k_1 > 0. Then the function χ(k, x) = ξ(k) ϕ(x, z) e^{−ikx}, with k^2 = e_3 − ℘(z), satisfies conditions (33)–(37) with R_1 = R_2 = 1 and t = 0, and the potential u(x) defined by (38) is the elliptic one-gap potential (66).

In Section 2.5, we observed that an N-soliton potential is described using the dressing method in 2^N different ways. Since primitive potentials are limits of N-soliton potentials, it is also true that a primitive potential can be described using the dressing method in multiple ways, in other words by different pairs of functions R_1 and R_2. Here we observe an example of this behavior: the elliptic one-gap potential can be constructed using constant dressing functions R_1 = R_2 = 1, or using the dressing (67).
Acknowledgments
The first and third authors gratefully acknowledge the support of NSF grant DMS-1715323. The second author gratefully acknowledges the support of NSF grant DMS-1716822.
2.4. Symmetric N-soliton solutions. In this section, we consider what happens if we try to impose by hand symmetry with respect to the spatial involution x → −x at t = 0. We recall that an N-soliton solution of the KdV equation (26) is determined by N distinct positive parameters κ_1, ..., κ_N and N additional positive parameters q_1, ..., q_N.

Proposition 6. Let κ_1, ..., κ_N be distinct positive numbers, and let

(30) q_n = Π_{m ≠ n} (κ_n + κ_m)/(κ_n − κ_m), n = 1, ..., N.

Then the N-soliton solution u(x, t) of the KdV equation given by (26) is symmetric at time t = 0:

(31) u(−x, 0) = u(x, 0).
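Proposition 6 can be probed numerically through the standard determinant representation of a reflectionless potential, u = −2 (d^2/dx^2) log det(I + A) with A_mn = c_m c_n e^{−(κ_m+κ_n)x}/(κ_m + κ_n). The identification c_n^2 = 2κ_n q_n, with absolute values inside the product (30), is our assumption about the normalization behind (26); with it, the two-soliton below comes out symmetric at t = 0:

```python
import math

def symmetric_two_soliton(kap1, kap2):
    """u(x) = -2 (log tau)'' for N = 2, with norming constants chosen as
    c_n^2 = 2*kap_n * prod_{m != n} |(kap_n + kap_m)/(kap_n - kap_m)|
    (our reading of (30); the product is taken with absolute values)."""
    ks = (kap1, kap2)
    c = []
    for n in range(2):
        prod = 1.0
        for m in range(2):
            if m != n:
                prod *= abs((ks[n] + ks[m]) / (ks[n] - ks[m]))
        c.append(math.sqrt(2.0 * ks[n] * prod))

    def log_tau(x):
        A = [[c[i] * c[j] * math.exp(-(ks[i] + ks[j]) * x) / (ks[i] + ks[j])
              for j in range(2)] for i in range(2)]
        det = (1.0 + A[0][0]) * (1.0 + A[1][1]) - A[0][1] * A[1][0]
        return math.log(det)

    def u(x, h=1e-3):                  # second central difference of log tau
        return -2.0 * (log_tau(x + h) - 2.0 * log_tau(x) + log_tau(x - h)) / h ** 2

    return u

u = symmetric_two_soliton(1.0, 0.5)
# For these kappas, tau(x) = (1 + e^{-x})^3, so u(x) = -(3/2) sech^2(x/2).
```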
(2) χ has the asymptotic behavior (20) as |k| → ∞.

The function u(x, t) given by Eqs. (25)–(26) is a solution of the KdV equation (2).
Proposition 8. Let 0 < k_1 < k_2, and let R_1 and R_2 be positive, Hölder-continuous functions on the interval [k_1, k_2]. Suppose that there exists a unique function χ(k, x, t) satisfying the following properties:
In this case g(p, x, t) = −f(p, −x, −t), and Eqs. (40)–(41) reduce to the single equation (44), valid for all p ∈ [k_1, k_2].
The function Ξ(s) satisfies the jump condition

Ξ^+(s) − Ξ^−(s) = −2i cos(πα(s)) e^{−iπα(s)} ϕ(s) / Φ^+(s) = −2i cos(πα(s)) e^{πH[α(s)]} ϕ(s)

for s ∈ [k_1^2, k_2^2] and has the asymptotic behavior Ξ(s) → 0 as s → ∞. By the Plemelj formula, Ξ(s) is given by

Ξ(s) = (1/π) ∫_{k_1^2}^{k_2^2} cos(πα(r)) e^{πH[α(r)]} ϕ(r) / (s − r) dr,

whose average on the cut equals −H[cos(πα(s)) e^{πH[α(s)]} ϕ(s)]. The boundary values of Ξ are

(62) Ξ_±(s) = −H[cos(πα(s)) e^{πH[α(s)]} ϕ(s)] ∓ i cos(πα(s)) e^{πH[α(s)]} ϕ(s)

for s ∈ [k_1^2, k_2^2]. We now evaluate ψ(s) using (59), (61) and (62):

ψ(s) = (i/2)(Ψ^+(s) − Ψ^−(s)) = (i/2)(Φ^+(s)Ξ^+(s) − Φ^−(s)Ξ^−(s)) = cos^2(πα(s)) ϕ(s) − sin(πα(s)) e^{−πH[α(s)]} H[cos(πα(s)) e^{πH[α(s)]} ϕ(s)],

proving the proposition. The result for constant α comes from the well-known fact that

(63) πH[1] = log|s − k_2^2| − log|s − k_1^2|.
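The constant-α inversion formula can also be spot-checked numerically: applying L_α to cos(πα) a(s) must return the constant function 1. The sketch below is our illustration; the interval, α = 1/4, grid size, and test point are arbitrary choices:

```python
import math

K1SQ, K2SQ, ALPHA = 1.0, 2.0, 0.25      # interval [k1^2, k2^2], constant alpha

def a_weight(s):
    return ((s - K1SQ) / (K2SQ - s)) ** ALPHA

def hilbert_pv(psi, s, n=100000):
    """(1/pi) PV int psi(r)/(r - s) dr over [K1SQ, K2SQ], midpoint rule with
    singularity subtraction (psi's endpoint singularity is integrable)."""
    h = (K2SQ - K1SQ) / n
    total = 0.0
    for i in range(n):
        r = K1SQ + (i + 0.5) * h
        total += (psi(r) - psi(s)) * h / (r - s)
    return (total + psi(s) * math.log((K2SQ - s) / (s - K1SQ))) / math.pi

# L_alpha[psi] = psi + tan(pi*alpha) H[psi]; if (57) holds, applying L_alpha
# to psi(s) = cos(pi*alpha) a(s) gives back the constant 1.
s0 = 1.5
psi = lambda r: math.cos(math.pi * ALPHA) * a_weight(r)
value = psi(s0) + math.tan(math.pi * ALPHA) * hilbert_pv(psi, s0)
```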
By Prop. 12, the operators L_{±α}[ψ(s)] = ψ(s) ± tan(πα) H[ψ(s)] are inverted by

L_{±α}^{−1}[ϕ(s)] = cos^2(πα) ϕ(s) ∓ sin(πα) cos(πα) a^{±1}(s) H[a^{∓1}(s) ϕ(s)],

where a(s) = ((s − k_1^2)/(k_2^2 − s))^α vanishes at s = k_1^2 and has an integrable singularity at s = k_2^2. The equations (51)–(52) determining f_0, h_0, f_1, h_1 give

f_0(s) = tan(πα) L_α^{−1}[1] = sin(πα) a(s),
h_0(s) = −2 sin(πα) L_{−α}^{−1}[a(s)] = −sin(πα)(a(s) + a^{−1}(s)),
f_1(s) = 4 sin(πα) L_α^{−1}[s a^{−1}(s)] = 2 sin(πα) s (a(s) + a^{−1}(s)) − 4α sin(πα)(k_2^2 − k_1^2) a(s).
References

[1] V. Zakharov, S. Manakov, Construction of higher-dimensional nonlinear integrable systems and their solutions, Funct. Anal. Appl. 19 (2) (1985) 89-101.
[2] S. Dyachenko, D. Zakharov, V. Zakharov, Primitive potentials and bounded solutions of the KdV equation, Phys. D 333 (2016) 148-156.
[3] D. Zakharov, S. Dyachenko, V. Zakharov, Bounded solutions of KdV and non-periodic one-gap potentials in quantum mechanics, Lett. Math. Phys. 106 (2016) no. 6, 731-740.
[4] D. Zakharov, V. Zakharov, S. Dyachenko, Non-periodic one-dimensional ideal conductors and integrable turbulence, Phys. Lett. A 380 (2016) no. 46, 3881-3885.
[5] I. Krichever, private communication.
[6] S. Novikov, S. Manakov, L. Pitaevskii, V. Zakharov, Theory of solitons. The inverse scattering method, Contemporary Soviet Mathematics, 1984.
[7] K. Grunert, G. Teschl, Long-time asymptotics for the Korteweg-de Vries equation via nonlinear steepest descent, Math. Phys. Anal. Geom. 12 (2009) no. 3, 287-324.
[8] M. Girotti, T. Grava, K. McLaughlin, Rigorous asymptotics of a KdV soliton gas, arXiv:1807.00608.
Improving the Survivability of Clustered Interdependent Networks by Restructuring Dependencies

Genya Ishigaki, Student Member, IEEE, Riti Gour, Student Member, IEEE, and Jason P. Jue, Senior Member, IEEE

IEEE Transactions on Communications. DOI: 10.1109/TCOMM.2018.2889983. arXiv:1903.01583v1 [cs.NI], 4 Mar 2019.

Index Terms: interdependent networks, network survivability, cascading failure, network function virtualization, cyber physical systems.

Abstract: The interdependency between different network layers is commonly observed in Cyber Physical Systems and communication networks adopting the dissociation of logic and hardware implementation, such as Software Defined Networking and Network Function Virtualization. This paper formulates an optimization problem to improve the survivability of interdependent networks by restructuring the provisioning relations. A characteristic of the proposed algorithm is that the continuous availability of the entire system is guaranteed during the restructuring of dependencies by the preservation of certain structures in the original networks. Our simulation results demonstrate that the proposed restructuring algorithm can substantially enhance the survivability of interdependent networks, and provide insights into the ideal allocation of dependencies.
I. INTRODUCTION
Many network systems encompass layering and integration of the layers in both explicit and implicit manners. For example, Software Defined Networking (SDN) decouples the control logic from forwarding functions to realize the flexibility and agility of communication networks. Also, Network Function Virtualization (NFV) involves separation of network function logic from hardware. The concept of separating logic from hardware implementations is also commonly adopted in Cyber Physical Systems (CPS), such as smart grids, in which computing capability manages physical entities.
The dissociation of logic and functions, which is effective for system flexibility, has accelerated the amount of layering and obscure dependencies in network systems. The work [1] on software defined optical networks points out the dependency of logical nodes on physical nodes that provide physical paths for connections among logical nodes, as well as the dependency of physical nodes on the logical nodes through SDN control messages, which define the operations of the physical nodes. Similarly, it is revealed that NFV embraces the interdependency between Virtual Network Functions (VNF) and physical servers hosting the VNFs, when a virtualization orchestrator is recognized as one of the VNFs [2]. Furthermore, the integration of a control information network and an electricity network seen in smart grids is a typical example of the interdependency of two different layers in CPSs [3]. This tendency of layering and collaborative functionality of layered networks is likely to be more evident for next-generation network systems.

(An earlier version of this paper has been presented at IEEE International Conference on Communications (ICC) 2018.)

Fig. 1. (a) An interdependent network with two constituent graphs representing a physical and a logical network. (b) Initial failure at a physical server v1. (c) Cascading failure affecting a logical node v2. (d) Cascading failure affecting a physical server v1'. The entire network becomes nonfunctional.

However, it has been revealed that certain types of dependencies between different layers of networks can deteriorate the robustness of the entire interdependent system [4]. Consecutive multiple failure phenomena called cascading failures exemplify the unique fragility of such network systems. In networks without interdependencies, a failure would influence a certain part of a network. Nonetheless, in networks with interdependencies, some nodes that are not directly connected to the failed portion can become nonfunctional due to the loss of service provisioning from nodes in other layers, which are directly influenced by the initial failure. Fig. 1 shows an example of such a cascading failure, which starts as a single node failure of v1 and results in the entire network failure. Suppose that a network G1 consists of physical servers v1 and v1', and G2 represents logical computing nodes v2 and v2' hosting VNFs. The orchestrator, which coordinates the mapping between the physical and logical layer, is realized as one of the VNFs on v2. The arcs from G1 to G2 ((v1, v2), (v1', v2')) illustrate the dependency of VNFs or computing nodes on the physical servers, while the arcs from G2 to G1 ((v2, v1), (v2, v1')) indicate the dependency of physical servers on a logical node in terms of the flow of coordination messages from the orchestrator to the physical servers.
When the physical server v1 fails, the logical node v2 hosting the orchestrator loses the physical node v1 on which it depends, and becomes nonfunctional. This in turn deprives v1' of its supporting node, and eventually the single node failure causes a failure of the whole network.
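The cascade just described is a fixed-point iteration: repeatedly delete every node that no longer has a functional supporting node. A minimal sketch (ours; the node name v1p stands for v1'):

```python
def cascade(nodes, arcs, initially_failed):
    """Return the set of nodes still functional after the cascade settles.
    A node survives iff it has at least one surviving in-neighbor."""
    alive = set(nodes) - set(initially_failed)
    changed = True
    while changed:
        changed = False
        for v in list(alive):
            if not any(u in alive for (u, w) in arcs if w == v):
                alive.discard(v)
                changed = True
    return alive

# Fig. 1: logical nodes depend on their physical servers, and both physical
# servers depend on the orchestrator node v2.
nodes = ["v1", "v1p", "v2", "v2p"]
arcs = [("v1", "v2"), ("v1p", "v2p"),   # G1 -> G2 dependencies
        ("v2", "v1"), ("v2", "v1p")]    # G2 -> G1 dependencies
```

With no initial failure every node keeps a supporter; failing v1 alone empties the network, matching the narrative above.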
Cascading failures can also lead to the malfunctioning of CPSs. In fact, it has been reported that some major electricity outages in smart grids, such as the 2003 nation-wide blackout in Italy [5], and the 2004 blackout over 8 states in US and 2 provinces in Canada [6], were due to cascading failures induced from poorly designed dependencies between the electricity network and control information network.
Many contributions have been made since the first theoretical proposal on the cascading failure model by Buldyrev et al. in 2010 [7]. The pioneering works [7], [8] focus on analyzing the behavior of cascading failures rather than proposing design strategies. In contrast, some following works identify vulnerable topologies in interdependent networks to avoid such fragile structures in the design phase by investigating the relation between node degree and failure impacts [9], or evaluating the importance of nodes exploiting the algebraic expression of dependencies [10]. Furthermore, other works propose design strategies in more realistic models to consider the impact of failures caused by a single component [11], integrated factors within and between layers [12], or the heterogeneity of nodes in each layer [13].
This paper discusses a design problem for interdependent networks to improve their survivability, which is a measure of the robustness against a whole network failure, by modifying an existing network topology. The contribution that contrasts our work with other related works is the consideration of existing network facilities. Our method is aimed at redesigning a relatively small part of the existing network to enhance the survivability so that the entire network remains operational even during the restructuring process. In order to realize this continuous availability, a special type of dependency, whose removal does not influence the functionality of the entire system, is identified in the first step of our restructuring method. Our heuristic algorithm increases the survivability of entire systems by the relocations of these dependencies. While our previous work [14] allows a node to have dependencies with any nodes in the other layer, this paper extends the model by considering geographical, economic, or logical accessibility of provisioning by nodes. These constraints are represented as clusters of nodes, and an interdependent network is modeled as a directed graph consisting of multiple clusters. The membership of a node in a specific cluster imposes restrictions on the nodes to which the node can provide support, and the nodes from which the node can receive support. Hence, possible modifications to the dependencies between nodes would vary, depending on the cluster to which a node belongs. Finally, our method is evaluated by simulations in different pseudo interdependent networks.
II. RELATED WORKS
Most of the preceding works on interdependent networks attempt to analyze the behavior of cascading failures in wellknown random graphs, which have certain characteristics in degree distributions and underlying topology [7], [8]. Those works analyze the propagation of failures based on percolation theory developed in the field of random networks. Following the directions shown by a seminal work by Buldyrev et al. in [7], more general models are discussed in [8].
The works [9]- [13] focus on the design aspect of interdependent networks. The relation between the impact of failures and interdependencies is empirically demonstrated to decide appropriate dependency allocations in [9]. A method to evaluate the importance of nodes in terms of network robustness is proposed in [10] by introducing a novel representation of interdependencies based on boolean algebra. This evaluation enables network operators to prioritize the protection of the nodes that contribute more to the robustness of the network. In [12], the authors consider dependency relations not only between layers but also within a single-layer. Combining multiple factors that make a node nonfunctional, their method adjusts the dependency of a node on the other nodes. The work in [13] also considers the influence within a singlelayer, supposing the heterogeneity of nodes. In this model, a network can have different types of nodes such as generating and relay nodes. Zhao et al. [11] formulate an optimization problem enhancing the system robustness, defining Shared Failure Group (SFG), a group of nodes that can simultaneously fail due to a cascading failure initiated by the same component.
Another branch of interdependent network research is recovery after failures [15]- [20]. The works in [15]- [17] analyze the behaviors of failure propagations when each node performs local healing, where a functioning node substitutes for the failed node by establishing new connections with its neighbors. The speed of further cascades and resulting network states are revealed by percolation theory [15], [16] or steady state analysis in the belief propagation algorithm [17]. Also, resource allocation problems, which consider the different roles of network nodes are discussed in [18]- [20]. The order of assigning repairing resources is a critical problem during the recovery phase when the amount of available resources is limited. The works in [18], [19] propose node evaluation measurements to decide the allocation, while an equivalent problem in the phase diagram is discussed in [20].
Our work proposes a method to improve the survivability of interdependent networks, following the survivability definition in [21]. Our work would be classified into the category of protection design methods before failures. Specifically, the proposed method is exploited in a redesign process of an existing network to enhance the survivability, while the existing works [9]- [13] discuss the initial design of an entire network. Our protection method, considering the functionality during the redesign, would reduce the cost of survivability improvement in contrast to the entire reconstruction of the systems.
III. MODELING AND MOTIVATING EXAMPLE
In this section, we present a mathematical model for describing interdependent networks, and we present a motivating example of our method. Section III-B summarizes related work [21] defining the survivability for interdependent networks, which we adopt to evaluate the networks.
A. Network Model
An interdependent network consists of k constituent graphs G_i = (V_i, E_ii) (1 ≤ i ≤ k) and their interdependency relationships, which are defined by sets of (directed) arcs A_ij (1 ≤ i, j ≤ k, i ≠ j) representing the provisioning between a pair of nodes in different graphs. Edges in E_ii ⊆ V_i × V_i are called intra-edges because they connect pairs of nodes in the same network. In contrast, arcs in A_ij ⊆ V_i × V_j (i ≠ j) are called inter- or dependency arcs. If there exists an arc (v_i, v_j) ∈ A_ij (v_i ∈ V_i, v_j ∈ V_j), it means that a node v_j has dependency on a node v_i. The node v_i is called the supporting node, and v_j is a supported node. A node v is said to be functional if and only if it has at least one functional supporting node.
When an interdependent network is logically partitioned, each constituent graph G_i has a clustering function κ_i : V_i → {1, 2, ..., γ_i}, where γ_i ∈ N is the number of clusters in G_i = (V_i, E_ii). Then, the subgraph I_i^x = (W_i^x ⊆ V_i, E_ii(W_i^x)) induced by the node set W_i^x = {v | κ_i(v) = x} (1 ≤ x ≤ γ_i) is called a cluster. Note that this definition insists that a node is in exactly one cluster.
In order to emphasize the dependency between constituent graphs, an interdependent network can be represented as a single-layer directed graph G = (V, A), where V := ∪_i V_i and A := ∪_{i ≠ j} A_ij, by abbreviating intra-edges. With this notation, a node v is said to be functional if and only if deg_in(v) ≥ 1. Note that all the discussions in the rest of this paper follow this single-layer graph representation.

Additionally, we introduce a different notation of arcs with respect to their source nodes. Let A(v) ⊆ A represent the set of arcs whose source node is v ∈ V. To identify each arc during the restructuring process, where some arc temporarily loses its destination, each arc is denoted as (v, ·)_m (m = 1, ..., deg_out(v)). The index m is a given, fixed identification number for each arc in A(v). Hence, every arc in A can be specified by providing the source node v and its identification number m.
A set of constituent graphs is totally ordered by the number of nodes that are the source of at least one dependency arc: |V_i^out|, where V_i^out := {v ∈ V_i : |A(v)| > 0}. A constituent graph that has the least number of nodes with outgoing arcs is named the minimum supporting constituent graph G_i: |V_i^out| ≤ min_j |V_j^out|.
B. Survivability of Interdependent Networks
Parandehgheibi et al. [21] propose an index that quantifies the survivability of interdependent networks against cascading failures exploiting the cycle hitting set, and they prove that the computation of the survivability is NP-complete. They show that a graph needs to have at least one directed cycle in order to maintain some functional nodes; in other words, the existence of one cycle prevents an interdependent network from failing entirely. Thus, the survivability of interdependent networks is defined as the cardinality of the minimum cycle hitting set whose removal brings non-functionality for the entire network. Note that a cycle hitting set S is a set of nodes such that any cycle C = (V(C), E(C)) in a given graph G = (V, A) has at least one node in the hitting set: S ∩ V(C) ≠ ∅, ∀C ∈ C(G), where C(G) is the set of all cycles in the given graph. This definition implies that the entire failure of an interdependent network occurs when the corresponding graph becomes acyclic. Let H(G) denote a cycle hitting set with the minimum cardinality: |H(G)| := min_{S ∈ S} |S|, where S is the set of all the cycle hitting sets in G. Formally, the survivability of an interdependent network G is the cardinality of the minimum cycle hitting set, |H(G)|.

Fig. 2. Graph G with dependency arc (v1, v9). Fig. 3. Graph G' with dependency arc (v1, v6).
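For small instances |H(G)| can be computed by brute force, trying node subsets of increasing size until removing one leaves the graph acyclic (exponential, consistent with the NP-completeness noted above; purely illustrative). On the four-node network of Fig. 1 it returns 1, since removing v2 breaks the only cycle:

```python
from itertools import combinations

def has_cycle(nodes, arcs):
    """Directed-cycle test by topological peeling (Kahn's algorithm)."""
    nodes = set(nodes)
    indeg = {v: 0 for v in nodes}
    for (u, v) in arcs:
        if u in nodes and v in nodes:
            indeg[v] += 1
    stack = [v for v in nodes if indeg[v] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for (a, b) in arcs:
            if a == u and b in nodes:
                indeg[b] -= 1
                if indeg[b] == 0:
                    stack.append(b)
    return seen < len(nodes)       # some nodes never peeled => a cycle exists

def survivability(nodes, arcs):
    """|H(G)|: size of a smallest node set whose removal makes G acyclic."""
    if not has_cycle(nodes, arcs):
        return 0
    for size in range(1, len(nodes) + 1):
        for removed in combinations(nodes, size):
            rest = [v for v in nodes if v not in removed]
            sub = [(u, v) for (u, v) in arcs if u in rest and v in rest]
            if not has_cycle(rest, sub):
                return size
    return len(nodes)

# The Fig. 1 network has a single cycle v1 <-> v2, so |H(G)| = 1.
nodes = ["v1", "v1p", "v2", "v2p"]
arcs = [("v1", "v2"), ("v1p", "v2p"), ("v2", "v1"), ("v2", "v1p")]
```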
C. Motivating Example
Adopting the survivability definition shown above, improving survivability would be equivalent to increasing the number of disjoint cycles in a graph. Figs. 2 and 3 show an example comparing two similar interdependent networks.
In graph G in Fig. 2, there exist two cycles: C1 and C2. If v2, which is in both V(C1) and V(C2), becomes nonfunctional because of a failure, all the nodes in G eventually lose their supporting nodes and become nonfunctional: H(G) = {v2}.

On the other hand, no single node failure can destroy all three cycles in G' in Fig. 3, while a two-node failure can make it acyclic (e.g. H(G') = {v2, v7}). Therefore, the graph G' is more survivable than G, since 1 = |H(G)| < |H(G')| = 2, although they differ only in the destination node of one dependency arc ((v1, v9) in G or (v1, v6) in G').
Supposing that G is an existing topology of a network, a method that relocates (v 1 , v 9 ) to (v 1 , v 6 ) can achieve an enhancement of the survivability.
IV. PROBLEM FORMULATION

A. Assumptions
This paper deals with the case in which interdependent networks have two types of homogeneous constituent networks with identical dependencies (k = 2). However, our discussion under this restriction on k can be easily extended to more general cases. In more advanced network models, each constituent network can have different types of nodes, such as independently functional generating nodes and relay nodes, which need provisioning from a generating node via paths of intra-edges [13]. Nevertheless, for simplicity, this work follows the assumption in [21] that each node in a constituent network is directly connected to a reliable conceptual generating node by a reliable edge (homogeneous constituent graphs). Moreover, it is assumed that each supporting node provides a unit amount of support that is enough for a supported node to be operational (identical dependencies), following the same model in [21].
Additionally, this paper presumes that each cluster x receives some support from at least one of the clusters that are supported by cluster x. In other words, this presumption excludes the case that a cluster does not receive provisions from any of the clusters that the cluster is supporting.
B. Requirement Specification
One aspect contrasting our scheme with other works is the consideration of improving the survivability of existing interdependent networks by changing parts of their topological structure. Because all nodes need to remain functional even during the relocations of dependency relations, it is necessary to avoid the loss of all supporting nodes for any node at any stage of the restructuring. In other words, each node needs to be survivable against a cascading failure, which requires direct or indirect support by the nodes in directed cycles. This constraint is formally represented as the following rule for live restructuring.
1) Every node remains reachable from a node in a directed cycle via at least one directed path at any stage of the restructuring.

In addition to guaranteeing continuous availability, the amount of provisioning provided by each supporting node should remain the same after the restructuring in order to respect the capability of each node. The capability could be, for example, the limit on electricity generation, computation performance, or the number of available ports.
2) The number of supports that a node provides must remain less than or equal to its original provisioning capability.

Furthermore, depending on which cluster a node in graph G_i belongs to, the node has a constraint on the clusters in G_j that it can support. The constraint is given by a supportability function σ_ij : V_i → 2^{γ_j}, where 2^{γ_j} is the power set of the cluster indices in a constituent network G_j. This means that a node v (∈ V_i) can provide its support to the nodes in the clusters of G_j given by the supportability function. This specification corresponds to the geographical, economic, or logical constraints on the accessibility of supports from a node to specific groups of nodes. For example, it is impossible for an information control node v to have electricity supply from node u if v and u are geographically far apart or managed by different administrative institutions. The geographical or administrative domain is shown as a cluster in each constituent graph, and dependency relations of the nodes should be closed within a set of permitted nodes, which are geographically close, or managed by the same company or allied companies, since each cluster should be independent from outsiders. This constraint relating to network clustering is simply expressed as follows.
3) All the provisionings from a node u are directed towards the nodes in the clusters that u can support, as designated by the supportability function σ i j .
C. Clustered ∆H Problem
This section formulates the clustered ∆H problem, which is aimed at enhancing the survivability of a given interdependent network with clusters by restructuring dependency relationships, considering the continuous availability, supporting capability, and clustering constraint of each node.
Considering the continuous availability of an existing network during restructuring leads to the formulation of a gradual reconstruction problem, where no relocation of two or more different arcs is conducted at a time. Each phase relocating one arc is named a step. Let G s = (V, A s ) denote the graph representing the interdependent network topology at step s. The improved interdependent network G s+1 after step s consists of a node set V, which is the same node set as in graph G s , and an arc set A s+1 amended by the relocation of
an arc (u, v) ∈ A_s to (u, v′), where v′ ∈ V is the new destination for the arc (u, v).
The clustered ∆H problem is to maximize the difference in survivability between a given interdependent network, which is recognized as G 0 , and the resulting network after a sequence of consecutive improvements. The resulting network is represented as G f , where f denotes the step at which the last arc relocation is completed. Formally, the objective is to maximize the difference between |H(G 0 )| and |H(G f )|, which is defined as ∆H.
Problem (Clustered ∆H Problem). For a given G_0 = (V = ∪_i V_i, A_0), the number of clusters γ_i ∈ ℕ in each constituent graph G_i, a clustering function κ_i : V_i → {1, 2, ..., γ_i} for each constituent graph G_i, and supportability functions σ_ij : V_i → 2^{γ_j}, maximize ∆H := |H(G_f)| − |H(G_0)|, where G_{s+1} = (V, A_{s+1}) (0 ≤ s ≤ f − 1) is obtained by relocating the destination of a single arc in A_s, A_{s+1} = (A_s \ {(u, v)}) ∪ {(u, v′)}, satisfying
1) deg_in(v)_{G_s} ≥ 1 ∀v ∈ V,
2) deg_out(v)_{G_{s+1}} = deg_out(v)_{G_s} ∀v ∈ V,
3) κ_j(v ∈ V_j) ∈ σ_ij(u ∈ V_i) ∀(u, v) ∈ A_s.
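A single relocation step and its three constraints can be sketched as follows (a hedged illustration, not the paper's formal machinery; `cluster_of` and `supportable` are hypothetical stand-ins for the functions κ and σ):

```python
def valid_relocation(arcs, u, v, v_new, cluster_of, supportable):
    """Check one step A -> (A \\ {(u, v)}) | {(u, v_new)} against the
    three constraints of the clustered Delta-H problem.
    cluster_of: node -> cluster index (stand-in for kappa);
    supportable: node -> set of cluster indices it may support (sigma)."""
    if (u, v) not in arcs or (u, v_new) in arcs or v_new == v:
        return False
    new_arcs = (set(arcs) - {(u, v)}) | {(u, v_new)}
    nodes = {a for arc in arcs for a in arc}
    # 1) every node keeps at least one incoming arc (deg_in >= 1)
    heads = {b for _, b in new_arcs}
    if any(n not in heads for n in nodes):
        return False
    # 2) out-degrees are preserved by construction: the relocated arc
    #    keeps its source u, so deg_out is unchanged for every node.
    # 3) the new destination's cluster must be supportable by u
    return cluster_of[v_new] in supportable[u]
```

A relocation is rejected whenever it would leave some node without any supporting node or point into a cluster the source may not support.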
These three conditions correspond to the three rules described in Section IV-B. The second and third conditions are easily derived from the corresponding rules. Lemma 1 shows the equivalence of condition 1 and Rule 1.

Lemma 1. When deg_in(v)_G ≥ 1 (∀v ∈ V) in a connected directed graph G = (V, A), (a) G has at least one directed cycle, and (b) any node v ∈ V is reachable from a node u ∈ V that is contained in a directed cycle.

Proof. deg_in(v)_G ≥ 1 (∀v ∈ V) implies that any node v has at least one parent v′. The path v ← v′ ← ... composed by repeatedly tracing parents can remain acyclic only while its length is at most |V| − 1; the |V|-th node must again have a parent, so the pigeonhole principle forces the path to contain a directed cycle, proving (a). Moreover, since this backward trace from an arbitrary node v enters such a cycle, there exists a directed path from a cycle node to v, proving (b).
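The parent-tracing argument of the proof can be run directly; a small sketch, where `parent` is a hypothetical map sending each node to one of its parents (such a map exists because deg_in ≥ 1):

```python
def find_cycle_by_parents(parent):
    """Walk parent pointers from an arbitrary node. After at most |V|
    steps a node repeats (pigeonhole), and the repeated segment of
    the walk is a directed cycle."""
    v = next(iter(parent))
    first_seen = {}
    walk = []
    while v not in first_seen:
        first_seen[v] = len(walk)
        walk.append(v)
        v = parent[v]            # exists because deg_in(v) >= 1
    return walk[first_seen[v]:]  # the nodes on the detected cycle
```

The returned segment starts at the first repeated node, so every node visited after it lies on the cycle.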
Fig. 4. Original dependencies, where (v′, u′) is missing. Note that this figure only shows A_ji. The symmetric discussion can be done for A_ij.

Fig. 5. Relocation steps: (1) to maintain the functionality of u′′, and (2) to form a length-2 cycle with v′ and u′.
D. Problem Analysis
This section provides an analysis of the trivial optimal case of the clustered ∆H problem in a special setting, where each constituent graph consists of only one cluster. Let ρ((u, ·)_m) denote the number of relocations that arc (u, ·)_m ∈ A experiences during the restructuring process. Note that Σ_{u∈V} Σ_{m=1}^{deg_out(u)} ρ((u, ·)_m) = f.
From the definition, the optimum survivability cannot exceed the number of supporting nodes, each of which has at least one outgoing arc, in the minimum supporting constituent graph G_i. This is because a set of such nodes covers all the directed cycles in an interdependent network G. This observation implies that the optimum survivability is achieved when every node v_i ∈ V_i of G_i has an injective mapping to a node in V_j (j ≠ i). In other words, for each node v_i in G_i, there exists at least one unique disjoint cycle of length 2 with some v_j in G_j. The following lemma gives a sufficient condition for reaching this ideal state by repeated relocations while preserving the problem constraints.
Lemma 2. When the number of relocations ρ((u, ·)_m) for each arc is not upper bounded, in order to obtain the optimum restructuring, it is sufficient that the minimum supporting constituent graph G_i satisfies |V_j| < Σ_{u∈V_i} |A(u)| and Σ_{v∈V_j} |A(v)| > |V_i| (j ≠ i). Then, the optimum survivability becomes |V_i^out|.

Proof. The maximum survivability achievable by restructuring is equal to the number of nodes that have at least one outgoing arc, |V_i^out|, in the minimum supporting constituent graph G_i = (V_i, E_ii)
, because the removal of such nodes from G_i must destroy all the cycles between G_i and another constituent graph. In order to achieve the maximum survivability via the restructuring process, it is necessary that each node u ∈ V_i^out belongs to a cycle of length 2. Otherwise, the cycle contains another node w ∈ V_i^out, and the removal of such w's makes u lose all incoming arcs. Note that a node in V_i \ V_i^out is never part of a directed cycle, since it has no outgoing arc.
Suppose that we have the minimum supporting constituent graph G_i and another constituent graph G_j that satisfy the two conditions in the lemma. From the definition of the minimum supporting constituent graph, we can form |V_i^out| pairs of nodes u′ ∈ V_i^out, v′ ∈ V_j^out, which are expected to form a length-2 cycle together after restructuring, so that no two nodes in V_i are paired with the same node in V_j^out. Figs. 4 and 5 illustrate a general example of a restructuring process that forms such a length-2 cycle by dependency arc relocations. Note that the figures only show A_ji, but the symmetric argument can be made for A_ij. Let u′ ∈ V_i^out, v′ ∈ V_j^out be a pair such that (v′, u′) ∉ A_ji. In order to make a length-2 cycle between v′ and u′, the arc (v′, u′′) should be relocated to (v′, u′). However, the relocation makes u′′ lose all of its incoming arcs. The loss of the incoming arc of u′′ is always avoided by relocating one of the arcs incoming to u′′′ to u′′ (see Figs. 4 and 5 (1)); the supposition in the lemma and the pigeonhole principle guarantee the existence of at least one node u′′′ ∈ V_i that has two incoming arcs. After this adjustment of the provisioning for u′′, the arc (v′, u′′) can be relocated to (v′, u′) (see Figs. 4 and 5 (2)).

For a pair u′ ∈ V_i^out, v′ ∈ V_j^out such that (u′, v′) ∈ A_ij, similar relocations are always possible, because |V_j| < Σ_{u∈V_i} |A(u)|. Thus, these relocations eventually achieve the maximum survivability by forming |V_i^out| length-2 cycles, each consisting of a pair u′ ∈ V_i^out, v′ ∈ V_j^out.
Some propositions similar to Lemma 2 appear in related literature [11], [22]. The sufficient condition provided in Lemma 2 allows the entire restructuring of inter-arcs by repeated relocations of each arc. Therefore, the ∆H problem is recognized as a design problem of an entire interdependent network discussed in [11] under these assumptions. Also, the work [22] claims that such a one-to-one provisioning relation realizes the robustness, while assuming certain structural characteristics of random graphs.
However, it is unrealistic to relocate a dependency arc many times, considering the overhead of changing provisioning relations in network systems. Therefore, the remainder of this paper discusses the case where the number of relocations is strictly restricted: ρ((u, ·)_m) ≤ 1 (1 ≤ m ≤ deg_out(u), ∀u ∈ V). Under this condition, obtaining the optimum survivability cannot be guaranteed even when the sufficient condition above holds.
V. HEURISTIC ALGORITHM FOR ∆H PROBLEM

This section proposes a heuristic algorithm for the clustered ∆H problem. Before providing the details of our heuristic algorithm, we first define special types of arcs named Marginal Arcs (MAs), which are candidates for relocation, in Section V-A. Then, the heuristic algorithm, which consists of two algorithms, Find-MAs and ∆H, is described. The Find-MAs algorithm enumerates all the arcs that match the definition of MAs. With the set of MAs found by the Find-MAs algorithm, the ∆H algorithm decides appropriate relocations of the dependency arcs in the set, considering the disjointness of newly formed cycles, so that it can improve the survivability of a given network.
After the discussion for a simple case with only one cluster in each constituent graph in Sections V-B to V-C, Section V-D explains how the other cases with multiple clusters are broken down into the simple case.
A. Restructuring of Dependencies
In order to guarantee continuous availability, it is necessary to classify the dependency arcs into either changeable or fixed arcs. However, it is computationally difficult to know this classification beforehand under the condition ρ((u, ·)_m) ≤ 1 (∀u ∈ V), because the process involves enumerating all permutations of arc relocations and their combinations of destinations. Thus, in this paper, the classification is simplified by using a sufficient condition, while this enumeration is likely to become another optimization problem for further investigation.

Fig. 6. Original graph G with Marginal Arcs (v_2, v_3) and (v_5, v_3).

Fig. 7. Modified graph G′ with a new arc (v_5, v_1).

Fig. 8. Modified graph G′′ with a new arc (v_2, v_4).
As observed in Section III-C, increasing the number of disjoint cycles in a given network could be an important factor in enhancing overall survivability. Hence, our method maintains all existing cycles, which is sufficient to avoid cascading failures, and tries to reallocate the destinations of the arcs that do not belong to directed cycles and that do not make their descendant nodes nonfunctional. Let the arcs that are not in any cycle in a given directed graph G = (V, A) be called Marginal Arcs (MAs). Formally, the set M of MAs is defined as

M := {(u, v) | (u, v) ∉ A(C) ∀C ∈ C(G)}.    (1)

Moreover, appropriate relocations of the removed MAs could improve the survivability of interdependent networks, assuring operability during the relocation process and maintaining the provisioning capability of each node. Let us analyze the effect of dependency relocations using the simple examples in Figs. 6-8. The given graph G in Fig. 6 has two marginal arcs: M = {(v_2, v_3), (v_5, v_3)}.
In order to maintain at least one supporting node for v_3, one of the MAs has to remain, and the other can be relocated. Fig. 7 shows the case of relocating (v_5, v_3) to (v_5, v_1); on the other hand, Fig. 8 indicates the case of relocating (v_2, v_3) to (v_2, v_4). Even though one new cycle (C_3 and C_3′, respectively) is formed by each relocation, the modified graphs G′ and G′′ have different survivability: |H(G′)| = 1 (= |H(G)|), and |H(G′′)| = 2. This is because the cycles in G′ are not disjoint from each other, V(C_1) ∩ V(C_2) ∩ V(C_3) ≠ ∅; in contrast, V(C_1) ∩ V(C_2) ∩ V(C_3′) = ∅ in G′′. Therefore, it can be said that the appropriate relocation for improving survivability is the one that forms disjoint cycles.
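The MA set of Eq. (1) can be computed without enumerating cycles: a (non-loop) arc lies on some directed cycle iff its endpoints share a strongly connected component. A minimal Kosaraju-based sketch (an illustrative alternative, not the authors' Find-MAs implementation; recursive, so small graphs only):

```python
def find_marginal_arcs(nodes, arcs):
    """Arcs on no directed cycle: (u, v) lies on a cycle iff u and v
    are in the same strongly connected component (SCC)."""
    adj, radj = {v: [] for v in nodes}, {v: [] for v in nodes}
    for a, b in arcs:
        adj[a].append(b)
        radj[b].append(a)
    # Kosaraju: order nodes by DFS finish time on G ...
    order, seen = [], set()
    def dfs(v):
        seen.add(v)
        for w in adj[v]:
            if w not in seen:
                dfs(w)
        order.append(v)
    for v in nodes:
        if v not in seen:
            dfs(v)
    # ... then sweep the transpose graph in reverse finish order.
    comp = {}
    def rdfs(v, c):
        comp[v] = c
        for w in radj[v]:
            if w not in comp:
                rdfs(w, c)
    for v in reversed(order):
        if v not in comp:
            rdfs(v, v)
    return [(a, b) for a, b in arcs if comp[a] != comp[b]]
```

On the Fig. 6 pattern (two nodes forming a cycle, plus arcs into a third node), exactly the arcs outside the cycle are reported as MAs.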
Algorithm 1 ∆H-algorithm(G, l)
Input: subgraph (directed graph) G = (V, A), maximum hop l ∈ ℕ (odd)
 1: M ← find-MAs(G)   # M ⊂ A
 2: for each (v, w) ∈ M do
 3:   if deg_in(w) ≥ 1 after A \ {(v, w)} then
 4:     while True do
 5:       pick C ∈ C(v) (randomly)
 6:       for i ← l; i > 0; i ← i − 2 do
 7:         pick u ∈ V(C) : d_C(v, u) = i
 8:         if u ∉ U then
 9:           A ← A \ {(v, w)} ∪ {(v, u)}
10:           U ← U ∪ {n | d_C(v, n) ≤ i}

Fig. 9. A given graph G with M = {(v_1, v_5), (v_2, v_6), (v_3, v_7), (v_6, v_7), (v_1, v_7), (v_3, v_5), (v_5, v_6)}.

Fig. 10. A modified graph G′ with new arcs (v_1, v_4), (v_3, v_2), (v_2, v_1), (v_6, v_5). Legend: Minimal-add, regular relocation.
If w still has a supporting node after the removal of (v, w), the next step is determining a new destination for (v, ·). Our algorithm randomly selects one of the cycles that contain the source v, denoted by C ∈ C(v) (line 5). There may be multiple candidate nodes for a new destination in the cycle C. Thus, the new destination is decided by the size of the newly formed cycle that results from the relocation (lines 6, 7). To represent the size of the newly formed cycle, the distance from a node v to a node u in an (existing) cycle C in the counter direction is denoted as d_C(v, u) in our pseudocode. When the maximum hop is designated by l, the algorithm tries to make a new cycle of size l + 1 using a node u such that d_C(v, u) = l as the destination of the MA. If it fails to form the cycle, it attempts to compose a smaller cycle using a node u′ such that d_C(v, u′) = l − 2. Because of the definition of a dependency, an arc must span two different layers or constituent networks. Since the node at d_C(v, u) = l − 1 in C is in the same constituent network as the source node v, it cannot be a new destination.
Consider an example using the given graph G shown in Fig. 2 and the restructured graph in Fig. 3. Since the removal of (v_1, v_9) does not make v_9 lose all its incoming dependency arcs, our algorithm tries to relocate the destination of this arc to one of the nodes in the cycle C_1, i.e., v_2, v_6, or v_8. For instance, in the case l = 3, a new cycle C_3 is formed as depicted in Fig. 3 by choosing v_6, which satisfies d_{C_1}(v_1, v_6) = l (= 3). Similarly, if l is initialized to 1, a new cycle C_3′ is formed using {v_1, v_8}.
After selecting a destination candidate u in line 7, our algorithm checks whether u is already used to create a new cycle (line 8). This is confirmed by a set of nodes U storing all the nodes that are in newly formed cycles, {n | d_C(v, n) ≤ i} (line 10). For instance, in Fig. 3, U ← U ∪ {v_1, v_6, v_7, v_8}. When another MA tries to form a new cycle using one of the nodes in U, the new cycle and C_3 would share some nodes, which means that those cycles are not disjoint. Also, the arc set A is updated when the new destination is finally fixed (line 9).

If there exists no possible destination for an MA (v, w) that satisfies all the conditions, the relocation of the MA is conducted by randomly selecting an incoming arc (u, v) of v and relocating (v, w) to (v, u), so that it composes a cycle of length 2 (lines 15, 16). This random selection is named the Minimal-add process.
The MAs relocated by the Minimal-add process satisfy either of the following cases: 1) the node v does not belong to any cycle, C(v) = ∅, or 2) all the nodes in the cycles of C(v) are already used to compose new cycles by other MAs. Figs. 9 and 10 show examples of these two conditions (dashed arcs). The given graph G has the MA set M = {(v_1, v_5), (v_2, v_6), (v_3, v_7), (v_6, v_7), (v_1, v_7), (v_3, v_5), (v_5, v_6)}. Eventually, the ∆H algorithm relocates (v_1, v_5) and (v_3, v_7) to (v_1, v_4) and (v_3, v_2), respectively. Because v_6 is not in any cycle in G (case 1), the Minimal-add process picks the source of one of its current incoming arcs in A_in(v), v_5, as the new destination. Also, (v_2, v_6) does not have any possible destination outside the set U (case 2), and it is relocated to (v_2, v_1) by the Minimal-add process.
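The destination-selection step of the ∆H-algorithm (lines 5-10) can be sketched as follows; a hedged sketch assuming the cycle is given as a node list in arc order, so that the node at counter-direction distance i from v is i steps backwards in the list (not the paper's exact code):

```python
def pick_destination(cycle, v, l, used):
    """Try counter-direction distance l, then l-2, ..., skipping
    nodes already consumed for other new cycles (the set `used`)."""
    n = len(cycle)
    i_v = cycle.index(v)
    for i in range(l, 0, -2):
        u = cycle[(i_v - i) % n]          # node with d_C(v, u) = i
        if u not in used:
            # reserve all nodes with d_C(v, n) <= i for the new cycle
            used.update(cycle[(i_v - j) % n] for j in range(i + 1))
            return u
    return None   # caller falls back to the Minimal-add process
```

When no distance yields an unused node, the function returns None and the caller would perform the Minimal-add relocation to an incoming neighbor of v.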
D. Application to Clustered Networks
Our heuristic algorithm employs another algorithm, named Decompose-cluster, to form subgraphs indicating candidate destinations for the MAs in each cluster from a given interdependent network. When interdependent networks are clustered, the modification of the destinations of MAs needs to be conducted under the additional constraints given by the supportability functions σ_ij: κ_j(v ∈ V_j) ∈ σ_ij(u ∈ V_i) ∀(u, v) ∈ A.

The Decompose-cluster algorithm selects each cluster (node set W^x_i, 1 ≤ i ≤ k, 1 ≤ x ≤ γ_i) and collects the MAs (u, v) whose sources are in the cluster (u ∈ W^x_i), or whose destinations are in the cluster with sources in a cluster in σ_ij(v) (v ∈ W^x_i & κ_j(u ∈ V_j) ∈ σ_ij(v)). Using the collected MAs and their endpoints, a subgraph Y for the reallocation of the MAs in W^x_i is composed. Each subgraph for each cluster is given to the ∆H-algorithm so that it can improve survivability by restructuring dependencies within the subgraph.
Note that no directed cycle exists in a subgraph if no MA matches the condition v ∈ W^x_i & κ_j(u ∈ V_j) ∈ σ_ij(v). However, this does not happen in our work due to the assumption mentioned in Section IV-A; the absence of such MAs would mean that the nodes in a cluster x are not provided any support by the nodes that receive support from the nodes in cluster x.
E. Complexity Analysis
The Decompose-cluster algorithm extracts Σ_{i=1}^{k} γ_i subgraphs from a given graph G = (V, A). The number of clusters γ_i in each constituent graph tends to be much smaller than the number of nodes; thus, Σ_{i=1}^{k} γ_i can be considered a constant. In order to compose each subgraph, the algorithm needs to check the source and destination of each arc in A. However, each arc appears in exactly one subgraph because of the used-arc set D. Therefore, the total complexity of the Decompose-cluster algorithm is O(|V| + |A|).
The complexity of the ∆H-algorithm is sensitive to the number of cycles in the interdependent network. It is known that Johnson's algorithm finds all elementary cycles within O((|V| + |E|)(|C(G)| + 1)).

Algorithm 2 Decompose-cluster(G)
Input: interdependent network (directed graph) G = (V = ∪_{i=1}^{k} V_i, A), clustering functions κ_i
 1: D ← ∅
 2: for each node set W^x_i (1 ≤ i ≤ k, 1 ≤ x ≤ γ_i) do
 3:   P ← ∅, R ← ∅
 4:   for each (u, v) ∈ A \ D do
 5:     if u ∈ W^x_i or (v ∈ W^x_i & κ_j(u ∈ V_j) ∈ σ_ij(v)) then
 6:       P ← P ∪ {u, v}
 7:       R ← R ∪ (u, v)
 8:       D ← D ∪ (u, v)
 9:     end if
10:   end for
11:   compose graph Y = (P, R)
12:   ∆H-algorithm(Y, l)
13: end for

The ∆H-algorithm determines a new destination after at most ⌈l/2⌉ × |C(G)| searches for each MA in the worst case. When only one cycle of size 2 exists in the input, and the other nodes are supported by the cycle, the size of the set M becomes |E| − 2. The complexity of the Minimal-add process is obviously O(1), so the worst-case analysis assumes that all MAs are reallocated by the ∆H-algorithm. Thus, its complexity is O((|V| + |E|)(|C(G)| + 1)) + O((|E| − 2)(⌈l/2⌉ × |C(G)|)). Assuming the maximum hop l is small enough to be considered a constant, the overall complexity of our heuristic algorithm becomes O((|V| + |E|)|C(G)|). Note that the assumption on l is valid with our strategy, which tries to increase the number of disjoint directed cycles in a given graph.
F. Optimality in Special Graphs
In order to analyze the performance of our heuristic algorithm, we consider the survivability improvement in special graphs where either an exhaustive search gives us the optimum survivability, or some special properties allow us to compute the optimum.
In the analysis, the upper bound of the survivability improvement, which is used as a benchmark for the rest of this paper, is calculated based on the number of MAs satisfying the following two conditions. First, let V_s be the set of nodes that hold more than one MA, and M_s be the set of MAs whose source nodes are in V_s. Even when the MAs from v ∈ V_s form more than one new cycle, the removal of such a source node v can destroy all the newly formed cycles. This indicates that restructuring increases the survivability by at most |V_s| when relocating the MAs in M_s. Second, let V_d be the set of nodes whose incoming arcs are all MAs, and M_d be the set of MAs whose destination nodes are in V_d. If all the MAs incident to v ∈ V_d are relocated, v loses its functionality during the restructuring. Therefore, at least one MA should remain as an incoming arc to v. This implies that the number of cycles newly formed by the MAs in M_d is at most |M_d| − |V_d|. Thus, the upper bound U is obtained as |M| − |M_s| + |V_s| − |V_d|.

Fig. 11 illustrates a comparison of our algorithm with the optimum solution in a small interdependent network in which each constituent graph has 15 nodes and the number of dependency arcs is 84, including 5 MAs. The optimum solution is obtained by an exhaustive search of 759,375 combinations of reallocations. This numerical example shows that the solutions given by the ∆H algorithm are not exceptionally divergent from the optimum. It also suggests that the upper bound is not tight in general.

Fig. 12 indicates that the survivability obtained by our restructuring heuristic matches the optimum in a special class of graphs named MA-saturated Path-Sunlet graphs ζ_2(G), G ∈ L. The optimum value of survivability for these graphs is always computable based on the following discussion.

Definition 1. Path-Sunlet Graphs L: A set of graphs satisfying the following conditions are named Path-Sunlet graphs. Let L denote the set of Path-Sunlet graphs.
• G ∈ L has only one cycle C.
• The arcs that are not in the cycle C form a set of disjoint paths whose initial nodes are in C: P = {P_i = (v_{i1}, v_{i2}, ..., v_{ik_i}) | v_{i1} ∈ C and P_i ∩ P_j = ∅ (∀P_{j≠i} ∈ P)}.
Definition 2. MA-saturation ζ δ (G) of a graph G: The MAsaturation is an operation of adding additional arcs to a given graph until any addition of an arc makes the graph non-simple, maintaining the out-degree constraint that the out-degree of any node does not exceed a given constant δ ∈ N.
Remark. The optimal restructuring of MAs in MA-saturated Path-Sunlet graphs ζ 2 (G), G ∈ L consists of forming length-2 cycles using an MA and an edge in either P i ∈ P or C.
We consider the cases where |P| ≥ 1, because the survivability in the case of |P| = 0 is obviously ⌈|V(C)|/2⌉.

Lemma 4. By removing the arcs that are not in any cycle, the optimally restructured MA-saturated Path-Sunlet graph ζ_2(G) is decomposed into sequences of cycles.

Proof. Three or more cycles cannot meet at the same node, since δ = 2. Therefore, the only possible topology with multiple length-2 cycles is a chain of cycles, in which two adjacent cycles share exactly one node.

Lemma 5. The survivability of the optimally restructured MA-saturated Path-Sunlet graphs ζ_2(G), G ∈ L is Σ_{q∈Q} ⌈q/2⌉, where Q is the set of all the sequences of cycles obtained by removing the arcs that are not in any cycle, and q denotes the number of cycles in a sequence.

Proof. A removal of one node that is shared by two cycles breaks both cycles. When q is even, this process gives a survivability of q/2. If q is odd, one additional removal is needed to destroy the remaining cycle. Thus, the survivability of a sequence of q cycles is ⌈q/2⌉. Since each sequence in Q is disjoint from the others, the survivability of the entire graph is obtained by summing up the survivability of each sequence.
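Lemma 5 reduces the optimum for these graphs to simple arithmetic over the chain lengths; a one-line sketch (with q the number of cycles in each disjoint sequence):

```python
from math import ceil

def sunlet_survivability(chain_lengths):
    """Optimum survivability of a restructured MA-saturated
    Path-Sunlet graph: each disjoint sequence of q chained
    length-2 cycles needs ceil(q/2) node removals (Lemma 5)."""
    return sum(ceil(q / 2) for q in chain_lengths)
```

For example, sequences of 2, 3, and 4 chained cycles require 1 + 2 + 2 = 5 removals in total.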
VI. SIMULATION
In order to understand the performance of the proposed algorithm, our simulations are conducted in both non-clustered and clustered interdependent network models of different sizes. The results from the simplest cases where each constituent network only consists of one cluster (non-clustered) are first described, and the clustered cases follow.
A. Network Topology
The performance of the proposed algorithm is analyzed in random directed bipartite graphs that contain at least one directed cycle. Assuming the situation in which a current interdependent network is working normally, each node is either a member of some cycle or reachable from a node in a cycle through some directed path in the input graph. Because our algorithm only concerns the dependency arcs between 2 constituent graphs (k = 2), any interdependent network is represented as a directed bipartite graph whose arcs connect a pair of different types of nodes.
Each random bipartite graph is generated by specifying the following parameters: |V_i|, max_{v∈V} deg_in(v), and min_{v∈V} deg_in(v).
In order to observe the performance under different conditions, experiments are conducted on symmetric and asymmetric interdependent networks. A symmetric interdependent network has constituent networks with an identical number of nodes, |V_1| = |V_2|, while the constituent networks of an asymmetric interdependent network have different numbers of nodes, |V_1| = |V_2|/q (q ∈ ℕ). The in-degree of each node is determined by a uniform distribution between the given maximum and minimum in-degrees.
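The graph generation described above can be sketched as follows (an assumed reconstruction of the setup, not the authors' generator): each node draws an in-degree uniformly from the given range and receives arcs from distinct nodes of the other constituent network.

```python
import random

def random_interdependent(n1, n2, dmin, dmax, seed=0):
    """Random directed bipartite dependency graph. Each node's
    in-degree is drawn uniformly from [dmin, dmax]; its supporting
    nodes are sampled without replacement from the other side.
    (The paper additionally keeps only graphs with a directed cycle.)"""
    rng = random.Random(seed)
    side1 = [("a", i) for i in range(n1)]
    side2 = [("b", i) for i in range(n2)]
    arcs = set()
    for v, others in [(v, side2) for v in side1] + [(v, side1) for v in side2]:
        k = rng.randint(dmin, min(dmax, len(others)))
        for u in rng.sample(others, k):
            arcs.add((u, v))   # u supports v
    return side1 + side2, arcs
```

Every generated arc crosses between the two constituent networks, and each node's in-degree lies within the specified bounds by construction.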
B. Clustering Settings
As the non-clustered cases have symmetric and asymmetric constituent graphs, clustered interdependent networks are also examined in three patterns of topology configurations. In our simulations, each constituent graph has three clusters: W^1_i, W^2_i, and W^3_i (i = 1, 2) (see Fig. 13). In symmetric cases, a pair of corresponding clusters in different constituent graphs have the same number of nodes, |W^x_1| = |W^x_2|, while in asymmetric models a cluster is half the size of the corresponding cluster in the other constituent graph, |W^x_1| = |W^x_2|/2. Also, Fig. 13 illustrates the three models that have different dependency relationships, indicated as arrows. Note that when an arrow is drawn from W^x_i to W^x_j, it means that the nodes in cluster W^x_j can have supports from the nodes in W^x_i. Model 1 consists only of the solid arrows, which means that each pair of corresponding clusters has dependency relationships. Model 2 has the dependencies illustrated by the solid and dashed arrows, while Model 3 has all the arrows (solid, dashed, and dotted). A major difference between these models is the possibility for a network to have directed cycles over three or more clusters. In Models 1 and 2, directed cycles can exist only in a subgraph consisting of W^1_1 and W^1_2, W^2_1 and W^2_2, or W^3_1 and W^3_2, while a directed cycle can lie over the entire graph containing all the clusters in Model 3.
C. Metrics
The survivability of the given graphs, restructured graphs, randomly reassigned graphs, and the upper bound of the improvement are illustrated in our results. The random reassignments of MAs are conducted with a uniform distribution over all the nodes in the other constituent graph from the constituent graph that includes the source of an MA.
Computing the size of the cycle hitting set is known to be NP-complete even in bipartite graphs, so the exact value cannot be obtained in larger graphs. Our evaluation is conducted using a well-known approximation algorithm whose approximation factor is ln |V | + 1 [24].
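The approximation used here is the classic greedy hitting-set heuristic; a minimal sketch over an explicit cycle list (cycle enumeration, e.g. by Johnson's algorithm, is assumed to be done beforehand):

```python
def greedy_cycle_hitting(cycles):
    """Greedy hitting-set approximation (factor ln|V| + 1):
    repeatedly pick the node contained in the most still-unhit
    cycles. `cycles` is a list of node sets."""
    remaining = [set(c) for c in cycles]
    hitting_set = set()
    while remaining:
        counts = {}
        for c in remaining:
            for v in c:
                counts[v] = counts.get(v, 0) + 1
        best = max(counts, key=counts.get)
        hitting_set.add(best)
        remaining = [c for c in remaining if best not in c]
    return hitting_set
```

The returned set hits every listed cycle; its size overestimates |H(G)| by at most the stated logarithmic factor.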
Furthermore, the density of a given graph G = (V, A), defined by |A| / Σ_i |V_i|, is used to examine the relationship between the survivability improvement and the maximum and minimum degrees.
D. Results
1) Non-clustered Cases: Figs. 14 and 15 illustrate the survivability of the given and restructured graphs with identical-size and halved-size constituent graphs, respectively. In both cases, our method demonstrates a greater improvement of the survivability compared to the random reassignment. The survivability of the original graphs |H(G)| maintains a similar value regardless of the size of the graphs, though the survivability of the graphs restructured by our method |H(G′)| increases steeply with the size of the graph. Since arcs are randomly added in the original graph G, it could be difficult to form larger directed cycles. Therefore, it is reasonable that the number of disjoint cycles tends to stay within a similar range of values. On the other hand, more MAs exist in larger graphs, because these graphs have more arcs that are not in directed cycles. This results in a dramatic enhancement of the survivability in larger graphs. The difference caused by the given maximum hop l for our algorithm remains small over all sizes of graphs.

Fig. 16. The relationship between graph density and ∆H.

Fig. 16 indicates the relationship between the density of graphs and ∆H, the amount of survivability improvement. We compare our method to the random reassignment. The result shows that, in graphs with lower density, our method has greater success in increasing the survivability. An observed general trend of our method is the gradual decrease in ∆H in accordance with the density. This trend seems to be induced by the fact that graphs with more arcs have a higher possibility of composing cycles even in the original topology. This implies that graphs with higher density have fewer MAs that can form new disjoint cycles. On the other hand, the random reassignment does not demonstrate effectiveness for the improvement in graphs of any density, in line with the results in Figs. 14 and 15.
Moreover, the random reassignment sometimes decreases the survivability (∆H < 0). It is conceivable that such a reassignment connects two (or more) cycles and makes it possible to destroy all of these cycles by the removal of a single node. This result implies that imprudent restructuring of the dependencies may cause more fragility in interdependent networks.
2) Clustered Cases: The results for clustered interdependent networks whose dependency relationships follow Model 2 are shown in Figs. 17 and 18. Trends similar to the non-clustered cases are observed for both the symmetric and the asymmetric case: the proposed method succeeds in increasing the survivability for all tested network sizes. Fig. 19 illustrates the difference in survivability after restructuring among the three dependency models of symmetric networks. The value of "Additive" is obtained by simply adding the non-clustered cases that jointly compose a clustered case. For instance, the clustered network consisting of clusters of 20, 40, and 20 nodes is compared with the sum of the survivability of the non-clustered networks of 20, 40, and 20 nodes shown in Fig. 14. The dependency relations among clusters increase from Model 1 to Model 3 (see Fig. 13).
Model 1 gives survivability similar to the simple addition of the non-clustered cases, since a pair of corresponding clusters in the two constituent graphs is independent of the other pairs in this model. In Model 2, the survivability of the entire network increases, because the nodes in cluster W_i^2 can receive more supports from clusters whose cycles are disjoint from the cycles in W_i^2. Although even more supports exist among the clusters in Model 3, its survivability is lower than in the other models: in Model 3, a cycle can span more clusters because of the bidirectional dependencies among all the clusters. This topological characteristic is likely to increase the overlapping of multiple cycles, and it results in the decline of survivability in this model. These results cast doubt on the naive claim that additional dependencies always make general interdependent networks more fragile.
VII. DISCUSSION: IMPACT ALLEVIATION VS SURVIVABILITY
Although it is not the primary focus of this paper, in this section we evaluate the behavior of the proposed algorithms in terms of their effect on the size, or impact, of a cascading failure. Fig. 20 illustrates the influence of our dependency modifications on the size of cascading failures induced by a single node. In this experiment, the impact of a single node failure at a node v is defined as the number of nodes θ_v that become nonfunctional after the cascading failure initiated by the failure of v. The results are analyzed in terms of the following two metrics:
• Worst (non-filled points): the size of the largest cascading failure, max_{v∈V} θ_v.
• Average (filled points): the average size of all possible cascading failures, (Σ_{v∈V} θ_v) / |V|.

The robustness of the restructured networks against a single node failure always declines in comparison with the original topology. The decline is most remarkable in the case of |V_1| = |V_2| = 50 in our simulation: there, the size of the largest cascading failure increases by 1 node after the restructuring.
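The two metrics can be reproduced by exhaustively simulating the cascade triggered by each single node. The sketch below is a minimal illustration, not the authors' implementation; it assumes, consistent with the paper's deg_in(v) ≥ 1 functionality condition, that a node fails exactly when all of its supports (in-neighbours) have failed, and the graph encoding and helper names are ours:

```python
from collections import deque

def cascade_size(succ, pred, v0):
    """theta_{v0}: nodes nonfunctional after the cascade started by failing v0.
    Assumed rule: a node fails once every one of its supports has failed."""
    failed = {v0}
    queue = deque([v0])
    while queue:
        u = queue.popleft()
        for w in succ.get(u, ()):  # nodes that u was supporting
            if w not in failed and all(p in failed for p in pred[w]):
                failed.add(w)
                queue.append(w)
    return len(failed)

def impact_metrics(nodes, arcs):
    """Worst-case and average cascading-failure size over all single failures."""
    succ, pred = {}, {v: [] for v in nodes}
    for u, v in arcs:
        succ.setdefault(u, []).append(v)
        pred[v].append(u)
    sizes = [cascade_size(succ, pred, v) for v in nodes]
    return max(sizes), sum(sizes) / len(nodes)

# Toy example: a <-> x support each other; y is supported by x only.
worst, average = impact_metrics(["a", "x", "y"],
                                [("a", "x"), ("x", "a"), ("x", "y")])
```

Failing either node of the mutual cycle brings down all three nodes, while failing the leaf y affects only itself, so the worst case is 3 and the average is 7/3.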
In general, concentrating provisioning on a certain portion of a network can improve the survivability, though it can make the other portions more fragile. In contrast, an appropriate distribution of provisioning is necessary in order to alleviate the impact of any possible single node failure. This tension between robustness against single node failures and system survivability could be a reason for the decline.
However, when examining the average size of cascading failures, we observe that the increase in the average number of failed nodes is kept within 0.1 nodes over all network sizes. Thus, it can be said that our method does not substantially deteriorate the robustness against single node failures.
VIII. CONCLUSION
This paper addresses the design problem of survivable clustered interdependent networks under constraints arising from the existence of legacy systems during restructuring. Based on the definition of survivability proposed in a related work, it is argued that increasing the number of disjoint cycles can enhance the survivability. The proposed heuristic algorithm tries to compose new disjoint cycles by gradual relocation of certain dependencies (Marginal Arcs) in order to guarantee the functionality of existing systems. Our simulations indicate that the algorithm succeeds in increasing the survivability, especially in networks with fewer dependencies. Moreover, the empirical results imply that the number of dependencies is, in general, not the root cause of the vulnerability to cascading failures; rather, appropriate additions of dependencies can improve the overall survivability, while poorly designed dependencies make networks more fragile. Redesigning the interdependency between control and functional entities in SDN, NFV, or CPSs based on the proposed algorithm would therefore decrease the possibility of experiencing catastrophic cascading failures.
Manuscript submitted March 6, 2019. Genya Ishigaki, Riti Gour, and Jason P. Jue are with the Department of Computer Science at The University of Texas at Dallas, Richardson Texas 75080, USA (Email: {gishigaki, rgour, jjue}@utdallas.edu).
Fig. 1. An example of cascading failure in an interdependent network representing the dependency between physical servers and NFVs.
Lemma 3. A removal of any marginal arc never decreases the survivability of an interdependent network: |H(G)| ≤ |H(G′)|, where G is a given graph, and G′ is the graph obtained by the removal.

Proof. Let M be a set of marginal arcs. From the definition of MAs (Eq. (1)), the removal of MAs neither destroys nor connects any existing cycles in G = (V, A). Therefore, |H(G′)| = |H(G)|, where G′ = (V, A \ M).
The Find-MAs algorithm first distinguishes the MAs M, which are the candidate arcs for relocation, from the arcs lying on directed cycles in a given graph G = (V, A), by employing Johnson's algorithm [23]. Johnson's algorithm enumerates all elementary cycles in a directed graph within O((|V| + |E|)(|C(G)| + 1)). Enumerating elementary cycles is sufficient for distinguishing MAs, because any non-elementary cycle can be divided into multiple elementary cycles within which the dependency relationships are closed. After the enumeration of the cycles C(G) by Johnson's algorithm, the set of MAs is obtained as M ← A \ ∪_{C∈C(G)} A(C).

C. ∆H Algorithm

With the set of MAs obtained by Johnson's algorithm, the ∆H algorithm (shown as pseudo code in Algorithm 1) relocates the destinations of MAs, considering the disjointness of newly created cycles (see the discussion in Section V-A). For each MA (v, w), the algorithm first checks whether the relocation of this MA would cause the loss of all supports for the current destination w, i.e. whether deg_in(w) ≥ 1 still holds in G = (V, A \ {(v, w)}) (line 3).
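The Find-MAs step can be reproduced compactly without a full cycle enumeration by using the equivalent characterisation that an arc (u, v) lies on some directed cycle iff v can reach u. The sketch below is our own illustration (plain adjacency lists and hypothetical helper names, with a per-arc DFS standing in for Johnson's enumeration), extracting M = A \ ∪_C A(C) and applying the line-3 support check:

```python
def find_marginal_arcs(nodes, arcs):
    """M = A minus every arc that lies on a directed cycle.
    (u, v) is on a cycle iff v can reach u, so one DFS per arc suffices;
    Johnson's algorithm is needed only when the cycles themselves matter."""
    succ = {v: [] for v in nodes}
    for u, v in arcs:
        succ[u].append(v)

    def reaches(src, dst):
        seen, stack = {src}, [src]
        while stack:
            x = stack.pop()
            if x == dst:
                return True
            for y in succ[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return False

    return {(u, v) for (u, v) in arcs if not reaches(v, u)}

def safe_to_relocate(arc, arcs):
    """Line-3 check: the old destination w must keep at least one support."""
    v, w = arc
    return sum(1 for (_, d) in arcs if d == w) - 1 >= 1

nodes, arcs = [1, 2, 3], [(1, 2), (2, 1), (2, 3), (1, 3)]
mas = find_marginal_arcs(nodes, arcs)   # only 1 <-> 2 lies on a cycle
```

In the toy graph, the arcs into node 3 are marginal and node 3 keeps a second support after either relocation, so both MAs pass the safety check.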
Fig. 11. Numerical comparison with the optimum solution in a small interdependent network.
Fig. 12. Survivability of MA-saturated Path-Sunlet graphs ζ_2(G ∈ L) with two length-3 paths: |P| = 2, k_i = 3 (∀P_i ∈ P).
Fig. 13. Dependency models of clustered interdependent networks. Arrows show the dependency relationships between clusters. Model 1: solid. Model 2: solid and dashed. Model 3: solid, dashed, and dotted.
Fig. 14. Survivability of interdependent networks before and after the improvement under |V_1| = |V_2|, max_{v∈V} deg_in(v) = 4, min_{v∈V} deg_in(v) = 2, and l = 1, 3.
Fig. 15. Survivability of interdependent networks before and after the improvement under |V_1| = |V_2|/2, max_{v∈V} deg_in(v) = 4, min_{v∈V} deg_in(v) = 2, and l = 1, 3.
Fig. 17. Survivability of clustered interdependent networks (Model 2) before/after the improvement under |W_1^1| = |W_1^3| = |W_2^1| = |W_2^3|, |W_1^2| = |W_2^2|, max_{v∈V} deg_in(v) = 4, min_{v∈V} deg_in(v) = 2, and l = 1.

Fig. 18. Survivability of clustered interdependent networks (Model 2) before/after the improvement under |W_1^1| = |W_1^3| = |W_2^1|/2 = |W_2^3|/2, |W_1^2| = |W_2^2|/2, max_{v∈V} deg_in(v) = 4, min_{v∈V} deg_in(v) = 2, and l = 1.
Fig. 19. Comparison of survivability among different dependency models.
Fig. 20. The number of failed nodes (worst case and average) after a single node failure under |V_1| = |V_2|, max_{v∈V} deg_in(v) = 4, min_{v∈V} deg_in(v) = 2, and l = 1.
REFERENCES

[1] H. Rastegarfar, D. C. Kilper, M. Glick, and N. Peyghambarian, "Cyber-physical interdependency in dynamic software-defined optical transmission networks," IEEE/OSA Journal of Optical Communications and Networking, vol. 7, pp. 1126-1134, Dec. 2015.
[2] J. Liu, Z. Jiang, N. Kato, O. Akashi, and A. Takahara, "Reliability evaluation for NFV deployment of future mobile broadband networks," IEEE Wireless Communications, vol. 23, pp. 90-96, June 2016.
[3] M. Ouyang, "Review on modeling and simulation of interdependent critical infrastructure systems," Reliability Engineering & System Safety, vol. 121, pp. 43-60, 2014.
[4] D. H. Shin, D. Qian, and J. Zhang, "Cascading effects in interdependent networks," IEEE Network, vol. 28, pp. 82-87, July 2014.
[5] A. Berizzi, "The Italian 2003 blackout," in IEEE Power Engineering Society General Meeting, 2004, pp. 1673-1679, vol. 2, June 2004.
[6] G. Andersson, P. Donalek, R. Farmer, N. Hatziargyriou, I. Kamwa, P. Kundur, N. Martins, J. Paserba, P. Pourbeik, J. Sanchez-Gasca, R. Schulz, A. Stankovic, C. Taylor, and V. Vittal, "Causes of the 2003 major grid blackouts in North America and Europe, and recommended means to improve system dynamic performance," IEEE Transactions on Power Systems, vol. 20, pp. 1922-1928, Nov. 2005.
[7] S. V. Buldyrev, R. Parshani, G. Paul, H. E. Stanley, and S. Havlin, "Catastrophic cascade of failures in interdependent networks," Nature, vol. 464, pp. 1025-1028, Apr. 2010.
[8] J. Gao, S. V. Buldyrev, H. E. Stanley, and S. Havlin, "Networks formed from interdependent networks," Nature Physics, vol. 8, no. 1, pp. 40-48, 2012.
[9] S. Tauch, W. Liu, and R. Pears, "Measuring cascade effects in interdependent networks by using effective graph resistance," in 2015 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), pp. 683-688, Apr. 2015.
[10] A. Sen, A. Mazumder, J. Banerjee, A. Das, and R. Compton, "Identification of K most vulnerable nodes in multi-layered network using a new model of interdependency," in 2014 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), pp. 831-836, Apr. 2014.
[11] Y. Zhao and C. Qiao, "Enhancing the robustness of interdependent cyber-physical systems by designing the interdependency relationship," in 2017 IEEE International Conference on Communications (ICC), pp. 1-6, May 2017.
[12] M. Rahnamay-Naeini, "Designing cascade-resilient interdependent networks by optimum allocation of interdependencies," in 2016 International Conference on Computing, Networking and Communications (ICNC), pp. 1-7, Feb. 2016.
[13] A. Sturaro, S. Silvestri, M. Conti, and S. K. Das, "Towards a realistic model for failure propagation in interdependent networks," in 2016 International Conference on Computing, Networking and Communications (ICNC), pp. 1-7, Feb. 2016.
[14] G. Ishigaki, R. Gour, and J. P. Jue, "Improving the survivability of interdependent networks by restructuring dependencies," in 2018 IEEE International Conference on Communications (ICC), pp. 1-6, May 2018.
[15] M. Stippinger and J. Kertész, "Enhancing resilience of interdependent networks by healing," Physica A: Statistical Mechanics and its Applications, vol. 416, pp. 481-487, 2014.
[16] L. K. Gallos and N. H. Fefferman, "Simple and efficient self-healing strategy for damaged complex networks," Phys. Rev. E, vol. 92, p. 052806, Nov. 2015.
[17] A. Behfarnia and A. Eslami, "Error correction coding meets cyber-physical systems: Message-passing analysis of self-healing interdependent networks," IEEE Transactions on Communications, vol. 65, pp. 2753-2768, July 2017.
[18] M. Pourvali, K. Liang, F. Gu, H. Bai, K. Shaban, S. Khan, and N. Ghani, "Progressive recovery for network virtualization after large-scale disasters," in 2016 International Conference on Computing, Networking and Communications (ICNC), pp. 1-5, Feb. 2016.
[19] Y. Zhao, M. Pithapur, and C. Qiao, "On progressive recovery in interdependent cyber physical systems," in 2016 IEEE Global Communications Conference (GLOBECOM), pp. 1-6, Dec. 2016.
[20] A. Majdandzic, L. A. Braunstein, C. Curme, I. Vodenska, S. Levy-Carciente, H. E. Stanley, and S. Havlin, "Multiple tipping points and optimal repairing in interacting networks," Nature Communications, vol. 7, p. 10850, Mar. 2016.
[21] M. Parandehgheibi and E. Modiano, "Robustness of interdependent networks: The case of communication networks and the power grid," in 2013 IEEE Global Communications Conference (GLOBECOM), pp. 2164-2169, Dec. 2013.
[22] S. Chattopadhyay, H. Dai, D. Y. Eun, and S. Hosseinalipour, "Designing optimal interlink patterns to maximize robustness of interdependent networks against cascading failures," IEEE Transactions on Communications, vol. 65, pp. 3847-3862, Sept. 2017.
[23] D. B. Johnson, "Finding all the elementary circuits of a directed graph," SIAM Journal on Computing, vol. 4, no. 1, pp. 77-84, 1975.
[24] V. Chvátal, "A greedy heuristic for the set-covering problem," Mathematics of Operations Research, vol. 4, no. 3, pp. 233-235, 1979.
Abstract: We propose a strategy for automated trading, outline theoretical justification of the profitability of this strategy and overview the hypothetical results in application to currency pairs trading. The proposed methodology relies on the assumption that processes reflecting the dynamics of currency exchange rates are in a certain sense similar to the class of Ornstein-Uhlenbeck processes and exhibit the mean-reverting property. In order to describe the quantitative characteristics of the projected return of the strategy, we derive the explicit expression for the running maximum of the Ornstein-Uhlenbeck (OU) process stopped at maximum drawdown and look at the correspondence between derived characteristics and the observed ones.
arXiv:1507.01610 (https://arxiv.org/pdf/1507.01610v1.pdf)
Analysis of Ornstein-Uhlenbeck process stopped at maximum drawdown and application to trading strategies with trailing stops
G. Temnov
Department of Probability and Statistics, MFF, Charles University, Prague 818675, Czech Republic
1 Structure of the work and description of the general idea
This paper is structured as follows. The current chapter outlines the general idea and the setting of the strategy as well as the motivation behind the underlying research.
Chapter 2 describes the direct intuitive scheme for the optimization of the key parameters of the strategy from historical data, quotes the results of the strategy's returns and poses questions for the further analysis.
Chapter 3 highlights the main analytical point of this work, the explicit formula for the distribution of the running maximum of the OU process stopped at maximum drawdown, and discusses how it relates to the strategy and which characteristics of the strategy's returns can be derived from the running-maximum distribution.
Chapter 4 overviews the parameters' estimation methods for OU process and refers to the results of the estimation on the basis of the historical data that we use for testing the strategy's efficiency.
Chapter 5 summarizes the correspondence between the actual results and analytical estimates, and provides the outlook on the further optimization.
General idea of the strategy

As mentioned, the proposed methodology relies on the assumption that processes reflecting the dynamics of currency exchange rates are in a certain sense similar to the class of OU processes and exhibit the mean-reverting property.
In other words, as such a process deviates from its current mean, a certain "force" tends to revert it back to its mean value. This property can be exploited in the context of ForEx market dynamics: due to occasional sparks of increased volatility, usually caused by economic factors, the exchange rate may burst out, creating a potential force that tends to drive the process back to its mean trend level. Opening a long or short position contrary to the direction of the outburst may allow one to take advantage of this driving force.

Observations of the weekly EUR/USD dynamics confirm that the profile indicating the configuration "outburst followed by a movement in the opposite direction" (in technical terms usually interpreted as a correction or consolidation) is frequently observed and agrees with the above-described intuition.
The strategy that relies on the above observation can loosely be outlined as follows:
• At the start of each week of trading, set up an exchange rate level that can serve as the "zero-level" (usually the weekend rate or the opening rate of the week) and pre-set the triggers for position opening (described next).

• The position will open if the rate rises above a predetermined level U (short position opening) or drops below the level D (long position opening), where U and D are measured from the "zero-level". The position opens automatically depending on which of the levels, U or D, is hit first.
• As soon as the position is opened, the trailing stop (TS) and the profit call (PC) levels are attached to the position, so that it will close automatically as soon as either of these stops is hit (or it will close at the weekly trading closure if neither is).
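For concreteness, the weekly rules above can be sketched in code. This is a hypothetical simplification, not the implementation behind the reported results: the function name, the price-path representation, and the convention that the profit call is checked before the trailing stop on the same quote are all our assumptions.

```python
def weekly_return(path, U, D, TS, PC):
    """P&L of one trading week, in price units.
    path[0] is the 'zero-level'; U, D are the entry triggers measured from it;
    TS and PC are the trailing-stop and profit-call offsets of the position."""
    base = path[0]
    pos = 0                                     # +1 long, -1 short
    for i, p in enumerate(path):
        if p >= base + U:
            pos, entry = -1, i                  # outburst up   -> open short
            break
        if p <= base - D:
            pos, entry = +1, i                  # outburst down -> open long
            break
    if pos == 0:
        return 0.0                              # neither trigger hit this week
    open_price, best = path[entry], 0.0
    for p in path[entry:]:
        pnl = pos * (p - open_price)            # unrealised P&L
        best = max(best, pnl)
        if pnl >= PC:
            return PC                           # profit call fires
        if best - pnl >= TS:
            return best - TS                    # trailing stop fires
    return pnl                                  # closed at the weekly closure
```

For example, on the path [1.300, 1.306, 1.303, 1.298] with U = D = 0.005, TS = 0.004 and PC = 0.010, the outburst up opens a short at 1.306 and the week closes with a gain of about 0.008.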
To summarize, the above strategy is designed to take advantage of the correction that often follows the initial outburst, usually near the start of the trading week, thereby using the driving force that reverts the process back to its long-term mean in favour of the trader. Of course, it is possible that the initial price movement that triggers the opening of the position is actually a reflection of the trend rather than an "outburst", so that the opened position would in fact be held against the trend and would therefore be a potential loss.
However, a long history of observations on the EUR/USD trading pair shows that configurations with a drawdown/drawup following an outburst up/down occur within almost every trading week, while the pure-trend configuration is a relatively rare situation.
The above argument is nevertheless purely intuitive, and we will, of course, need more solid probabilistic and statistical reasoning to justify the potential profitability of the strategy. We address the probability context of the model in the following chapters, where we consider the analytical representation of the distribution of the return of the strategy, as well as the estimation of the parameters of the underlying process, under the hypothesis that this process is of OU type.
Taking another look at the idea of the strategy and its implementation, we note that the key problem in this context is optimizing the parameters U, D, TS, PC: we will be looking for the set (U, D, TS, PC) that produces the maximum aggregated profit for the strategy.
2 Practical realization of the strategy and empirical scheme for parameters optimization

Before proceeding to the analytical study in the following chapters, we start with a straightforward implementation of the strategy, as the practical results are likely to highlight the strategy's potential and indicate directions for further analysis. At first instance, we estimate the parameters in a simplistic way, via an "ad hoc" rule: assuming that the historical data revealed over time can be used to optimize the parameters (U, D, TS, PC) directly, by maximizing the "what if" returns of the strategy over the past years, the scheme can roughly be described as follows:
• At the start of year N, use the available data up to and including year N − 1 to estimate the set (U, D, TS, PC) of parameters for which the aggregate return over the past period (say, over year N − 1 only, if no earlier data is available) would be the largest possible.

• When implementing the strategy during year N, use the set (U, D, TS, PC) of parameters estimated at the previous step.

• By the start of year N + 1, use the actual result of the strategy's return over year N, together with the data of year N that is now available along with the previous history, to simulate possible returns with other choices of parameters, i.e. to perform the "what if" analysis and obtain an updated estimate of the set (U, D, TS, PC) for which the aggregate return over year N (or, alternatively, over years N − 1 and N together) would have been the largest.

• At the start of each following year N + i (i > 1), repeat the above updating scheme, using different selective data sets (e.g. the data of the previous year N + i − 1 only, or two or more years of the history revealed by year N + i).
In relation to this scheme, note that the parameters' updating can, of course, be performed on a more frequent basis than yearly.
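Under the assumptions stated so far, the yearly re-optimization step can be sketched as a brute-force grid search over candidate parameter sets. The weekly rules are restated compactly here so the sketch stands on its own, and the grid values and names are illustrative, not the ones actually used:

```python
from itertools import product

def weekly_pnl(path, U, D, TS, PC):
    """Compact restatement of the weekly rules: entry trigger at path[0] +/- U, D,
    then trailing stop TS and profit call PC on the open position (assumed order)."""
    base, pos = path[0], 0
    for j, p in enumerate(path):
        if p >= base + U:
            pos = -1
            break
        if p <= base - D:
            pos = +1
            break
    if pos == 0:
        return 0.0
    open_p, best = path[j], 0.0
    for p in path[j:]:
        pnl = pos * (p - open_p)
        best = max(best, pnl)
        if pnl >= PC:
            return PC
        if best - pnl >= TS:
            return best - TS
    return pnl

def calibrate(weeks, grid_U, grid_D, grid_TS, grid_PC):
    """Parameter set with the largest aggregate return over the given weeks."""
    return max(product(grid_U, grid_D, grid_TS, grid_PC),
               key=lambda prm: sum(weekly_pnl(w, *prm) for w in weeks))

# One toy 'historical week': outburst up, then a 100-pip correction.
weeks = [[1.300, 1.306, 1.296]]
best = calibrate(weeks, [0.005, 0.999], [0.005], [0.05], [0.008, 0.05])
```

On this single toy week, the loose profit call (0.05) beats the tight one (0.008) because it lets the full correction of 100 pips accrue, so the search selects (0.005, 0.005, 0.05, 0.05).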
At first instance of the strategy implementation, the optimization of the parameter set was performed approximately (without automated optimization techniques, which would be appropriate in this case but also computationally expensive, considering the set of 4 possibly cross-dependent parameters). This approximate scheme allowed us to draw first conclusions about preferable ways of updating the parameters. Figure 1 shows the histogram of the weekly returns resulting from the implementation of the strategy with parameters optimized on the basis of the most recent year of data only, using the historical eur/usd data over the 4.5 years from 2011 to the middle of 2015. We quote the results of the different implementations of the scheme in Table 1 below, which is structured as follows: lines with the year number followed by "e", such as 2011e, indicate estimation results, meaning that the corresponding year's data were used to optimize the parameters to obtain the largest possible value of the mean weekly return µ.
Lines with the year number followed by "A", on the other hand, indicate the "actual" results: if the parameters estimated from previous years were used during the current year, the actual mean weekly return would be as indicated.
Note that the weekly mean, µ, is expressed in euros. It is calculated under the assumption that each single position is 1000 euro with leverage 1:200, and that the commission for holding the asset overnight is 0.14%. The P&L processes corresponding to these three schemes demonstrate closely related results, as indicated in Figure 2.

Figure 2: Dynamics of P&L according to three schemes of parameters updating, from the scheme in which the parameters are estimated from the previous year of historical data only to the one that uses the 3 previous years.

As follows from the above, the updating scheme that uses only the most recent year of history appears to lead to the best results.
That might have to do with the "memory" capacity of the underlying diffusion process, an issue that we will address in the following chapters.
The difference is not too large, though, with the following resulting average weekly returns (in euros):

• 127 per week when using only the most recent year of history;

• 118 per week when using the 2 most recent years of history;

• 107 per week when using 3 years of history.
As indicated in Figure 2, negative dynamics were observed in the second half of 2014. That period corresponds to a fast decrease of the eur/usd rate, and the number of weeks in which the rate dropped significantly without being preceded by a considerable drawup led to the decline of the strategy's returns in that period.
In Figure 3, four weeks taken at random from the second half of 2014 illustrate that effect.
In Chapter 4, we will address how the estimated parameters of the underlying OU process react to periods of such unusual activity, and in Chapter 5 we discuss how the dynamics of the underlying parameters can be used to improve the strategy's performance.

3 Distribution of the running maximum of the process stopped at maximum drawdown

The general assumption on the dynamics of most financial indices is that the underlying processes are of diffusion type and can be described by stochastic differential equations (SDE) of the generic form
dX_t = µ(X_t) dt + σ(X_t) dW_t,   X_0 = x.
In the context of the probabilistic analysis of the proposed strategy and its returns, the key problem is describing the running maximum of such a diffusion process stopped at a given level of drawdown.
Denote the running maximum of the process {X_t} by M_t = sup_{s∈[0,t]} X_s, and the drawdown process by DD_t = M_t − X_t.
The common way to obtain the distribution of the running maximum of the process stopped at a fixed drawdown level is to represent this problem as an escape problem (i.e., a first passage problem).
Denote by T_D(a) the first passage time of the process {DD_t} through the level a (a > 0):

T_D(a) = inf{ t ≥ 0 ; DD_t = a }.
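On a discretely sampled path these quantities are straightforward to compute. The helper below is our own sketch, with the caveat that on a grid the drawdown slightly overshoots the level a, a bias that vanishes as the time step shrinks:

```python
def max_at_drawdown(path, a):
    """Running maximum of a discrete path, stopped the first time the
    drawdown M_t - X_t reaches a; None if the level is never reached."""
    M = path[0]
    for x in path:
        M = max(M, x)
        if M - x >= a:
            return M            # value of M at the stopping time T_D(a)
    return None                 # T_D(a) did not occur on this path

# Example: the running maximum attained before the first drawdown of size 0.5
m = max_at_drawdown([0.0, 1.0, 0.8, 1.2, 0.6], 0.5)
```

Here the dip from 1.0 to 0.8 is only a 0.2 drawdown, so the path continues to the new maximum 1.2 before the drop to 0.6 triggers the stop, and the function returns 1.2.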
The distribution function (cdf) of the random variable M_{T_D(a)}, i.e. the maximum of the process X_t stopped at the drawdown level a, given the initial starting point x, is

P_x( M_{T_D(a)} ≤ v ) = 1 − exp( − ∫_x^v [ Ψ(x, z) / ∫_{z−a}^z Ψ(x, y) dy ] dz ),    (3.1)
where the function Ψ is defined as
Ψ(u, z) = exp( −2 ∫_u^z γ(y) dy ),  and  γ(y) = µ(y) / σ²(y).    (3.2)
The formula (3.1) is a classical expression originally derived in [1] and used in many later sources including [2].
In the context of the strategy, the random variable M_{T_D(a)} − a corresponds to the final balance of a long position stopped at the trailing stop (with a being the trailing-stop threshold), so that formulae (3.1) and (3.2) allow us to obtain the distribution of the strategy's returns, provided that the diffusion model is calibrated, the functions µ(y) and σ(y) are properly chosen and all the parameters are estimated.
Let us also note that the minimum of the process stopped at a maximum drawup corresponds to a short position stopped at the trailing stop. As the cases of long and short positions are symmetric, the formulas for their running max/min differ only by the sign of the underlying variable, which allows us to focus primarily on the long-position case (we call it the "D-configuration", meaning that the lower level D is hit first); the corresponding expressions for the short-position case are quite similar and can be obtained as a "mirror reflection", i.e. by changing the relevant signs.
For the case of the Ornstein-Uhlenbeck (OU) process, the functions µ and σ are given by

µ(y) = λ(θ − y) and σ(y) = σ,
where λ is the mean reversion rate, θ is the (long-term) mean and σ the volatility parameter.
Using these, the function Ψ from (3.2) becomes

Ψ(u, z) = exp( (λ/σ²) [ (z − θ)² − (u − θ)² ] ).   (3.3)
Substituting (3.3) into (3.1), the distribution function of the maximum of the OU process stopped at the drawdown (i.e. by the trailing stop) can be obtained by numerical integration, and from it an estimate of the probability density function (pdf).
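As a quick cross-check (ours, not from the text), the closed form (3.3) can be compared against the defining integral (3.2) with the OU drift plugged in; both helpers below are illustrative sketches:

```python
import numpy as np

def psi_ou(u, z, lam, theta, sigma):
    """Closed form (3.3): Psi(u,z) = exp((lam/sigma^2)((z-theta)^2 - (u-theta)^2))."""
    return np.exp(lam / sigma**2 * ((z - theta)**2 - (u - theta)**2))

def psi_direct(u, z, lam, theta, sigma, n=2001):
    """Definition (3.2) with gamma(y) = lam*(theta - y)/sigma^2, trapezoid rule."""
    y = np.linspace(u, z, n)
    g = lam * (theta - y) / sigma**2
    dy = np.diff(y)
    return np.exp(-2.0 * float(np.sum((g[1:] + g[:-1]) * dy) / 2.0))
```

Since the OU drift is linear in y, the trapezoid rule integrates it exactly, so the two evaluations agree to machine precision.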
A few examples of the pdf calculated as above, for different combinations of parameters, are given in Figure 4.
In each of these examples, the starting point of the process is 1.3, and the maximum drawdown level is set to be 0.0055 (which corresponds to the trailing stop of 50 pips, in the context of the trading strategy).

Figure 4: Examples of the pdf of the maximum of the process stopped at the drawdown. The second parameter in each set (the one changing from 485 to 7450 and down to 2850) is the ratio λ/σ², and the third parameter is the long-term mean θ.
Returning to the realization of the strategy, recall that, apart from the trailing stop, a profit call is also applied to the open position, so that the profile of the weekly strategy return is actually a composition of the profit-call probability impact with the truncated and shifted probability density of the running maximum of the process stopped at the drawdown.
The resulting distribution is a semi-continuous density with a step at the point PC; for the case of the OU process it can be calculated explicitly and has the type of profile illustrated in Figure 5. For comparison, the histogram of returns obtained from simulated paths of an OU process (with parameters similar to the theoretical example above) is given in Figure 6.
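The composition of trailing stop and profit call described above is easy to reproduce by Monte Carlo over simulated intraweek paths (this is how a histogram like the one in Figure 6 can be produced). The helper below is our own minimal sketch for a single long position, with prices and thresholds in absolute units; note that checking the profit call before the trailing stop within one time step is a modelling choice:

```python
def weekly_return(path, ts, pc):
    """Return (in price units) of a long position opened at path[0],
    closed by a profit call pc, a trailing stop ts, or at week end."""
    entry, peak = path[0], path[0]
    for p in path[1:]:
        if p - entry >= pc:        # profit call hit first (modelling choice)
            return pc
        peak = max(peak, p)
        if peak - p >= ts:         # trailing stop hit
            return (peak - entry) - ts
    return path[-1] - entry        # still open at week end
```

Feeding many simulated OU paths through this function and histogramming the outputs reproduces the step at PC and the truncated, shifted bulk of the density.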
From the analytic form of the pdf, the required characteristics of the distribution of weekly returns can be estimated. Among the most important are the probability of a profit call and the expected value of the weekly returns. As the expected weekly return is the crucial indicator of the strategy's profitability, it is useful to take a closer look at borderline cases, such as when the expectation changes its sign.
By fixing the starting point of the process (say, x_0 = 1.3) and the drawdown level (a = 0.005), let us trace how the expectation changes with the change of the long-term mean θ (assuming that the parameters λ and σ are also fixed, entering through the ratio λ/σ²). Table 2 shows the change of the expected value with the shift of θ, near the edge of the strategy's profitability (near negative expected value), considering separately the case when the profit call threshold equals the trailing stop threshold (PC = TS = 0.005), and the case when the profit call level is slightly larger, PC = 0.0055.

Table 2. Columns: θ; then P(PC) and E[W_R] for PC = 0.005; then P(PC) and E[W_R] for PC = 0.0055.
Clearly, as the long-term mean drifts away from the process' starting point to the negative side, the probability of hitting PC decreases, as does the mean of the entire profit distribution.
Looking slightly ahead to Chapters 4 and 5, where we consider the estimation of the underlying parameters θ, λ and σ and their implementation in the strategy, we can already make the following point: if the estimation of the parameters from historical data confirms that their current values lead to positive expected returns, we stick to the position-opening scheme described in Chapter 2.
If, however, the associated expectation is estimated as negative (which, roughly speaking, happens when the long-term mean is on the "wrong side" of the position-opening rate), we might choose not to open the position and skip the week, or wait for the price to hit the opposite threshold opening level ("U" rather than "D", or "D" rather than "U"), ignoring the one that was hit first if it is anticipated to be a "wrong-way" configuration.
4 Calibration of the OU model
The estimation of the parameters of an OU process is well established; several methods, such as maximum likelihood and least-squares estimation, can be applied. We only give a brief account of the estimation method here.
It is well known that the exact discrete-time solution of the OU process (analogous to an AR(1) model) is given by

S_{i+1} = S_i e^{−λδ} + θ (1 − e^{−λδ}) + σ √( (1 − e^{−2λδ}) / (2λ) ) · Z,

where the parameters λ and θ are as defined in the previous chapter, Z is a standard Normal random variable, and δ is the discrete time step.
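This recursion translates directly into an exact path simulator (no discretization bias for the OU process); a minimal sketch with our own naming:

```python
import numpy as np

def simulate_ou(s0, lam, theta, sigma, delta, n_steps, rng):
    """Simulate an OU path using the exact discrete-time recursion above."""
    a = np.exp(-lam * delta)                                   # e^{-lambda*delta}
    sd = sigma * np.sqrt((1.0 - np.exp(-2.0 * lam * delta)) / (2.0 * lam))
    s = np.empty(n_steps + 1)
    s[0] = s0
    for i in range(n_steps):
        s[i + 1] = s[i] * a + theta * (1.0 - a) + sd * rng.standard_normal()
    return s
```

For a long path started at θ, the sample mean should be close to θ and the sample standard deviation close to the stationary value σ/√(2λ).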
Using the above recursive formula, the maximum likelihood estimates of the parameters can be obtained quite straightforwardly (multiple sources can be cited); they result in

θ = ( S_y S_xx − S_x S_xy ) / ( n(S_xx − S_xy) − (S_x² − S_x S_y) ),

λ = −(1/δ) ln[ ( S_xy − θ S_x − θ S_y + n θ² ) / ( S_xx − 2θ S_x + n θ² ) ],

σ² = σ̃² · 2λ / (1 − α²),  where  σ̃² = (1/n) [ S_yy − 2α S_xy + α² S_xx − 2θ(1 − α)(S_y − α S_x) + n θ²(1 − α)² ],

with α = e^{−λδ}, and the sums S•• given by

S_x = Σ_{i=1}^{n} S_{i−1},  S_y = Σ_{i=1}^{n} S_i,  S_xx = Σ_{i=1}^{n} S_{i−1}²,  S_xy = Σ_{i=1}^{n} S_{i−1} S_i,  S_yy = Σ_{i=1}^{n} S_i².
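The closed-form estimates above translate directly into code. The sketch below is our own implementation of these standard formulas, assuming equally spaced observations s with step δ:

```python
import numpy as np

def ou_mle(s, delta):
    """Closed-form ML estimates (theta, lam, sigma) for an OU path s."""
    x, y = s[:-1], s[1:]
    n = len(y)
    sx, sy = x.sum(), y.sum()
    sxx, sxy, syy = (x * x).sum(), (x * y).sum(), (y * y).sum()
    theta = (sy * sxx - sx * sxy) / (n * (sxx - sxy) - (sx * sx - sx * sy))
    lam = -np.log((sxy - theta * (sx + sy) + n * theta**2)
                  / (sxx - 2.0 * theta * sx + n * theta**2)) / delta
    a = np.exp(-lam * delta)
    s2 = (syy - 2.0 * a * sxy + a * a * sxx
          - 2.0 * theta * (1.0 - a) * (sy - a * sx)
          + n * theta**2 * (1.0 - a)**2) / n
    sigma = np.sqrt(2.0 * lam * s2 / (1.0 - a * a))
    return theta, lam, sigma
```

On a long path simulated with the exact recursion, the estimates recover the true parameters to within sampling error.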
Returning to the historical data that we test the strategy with, one of the key questions is: what history period should be used for estimating the parameters at each time instance, as the strategy horizon evolves with time?
There is no doubt that the parameters of the underlying OU process keep changing with time, but which length of the period is relevant?
As we have four and a half years, or about 235 weeks at our disposal, we can consider several schemes to approach the dynamic estimation of the parameters, such as:
• At the start of each week of trading, use the data that covers the most recent time period of a fixed length, say, 22 weeks (about 5 months).
Use this recent data to estimate the set of parameters of the OU model: θ, λ and σ.
By the end of the trading week, the current week's data will be added to the historical dataset, and the data pack to estimate the parameters at the start of the following week will include the past week's data plus 21 previous weeks.
• Alternatively, we may use all the history accumulated up to the most recent available week, so that the gradually increasing dataset would be used for OU parameters' estimation at the start of each new week.
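Both updating schemes can be sketched in a few lines. For illustration we use a plain least-squares AR(1) fit instead of the full ML formulas (an assumption of ours, adequate for a sketch); `window=22` gives the rolling scheme, `window=None` the expanding all-history scheme:

```python
import numpy as np

def ar1_to_ou(s, delta):
    """Fit S_{i+1} = a*S_i + b + eps by least squares; map to (theta, lam, sigma).
    Note: on short or irregular windows the fitted a may exceed 1, giving a
    negative lam -- the same pathology observed in the text for 2014."""
    x, y = s[:-1], s[1:]
    a, b = np.polyfit(x, y, 1)
    lam = -np.log(a) / delta
    theta = b / (1.0 - a)
    resid = y - (a * x + b)
    sigma = resid.std(ddof=2) * np.sqrt(2.0 * lam / (1.0 - a * a))
    return theta, lam, sigma

def weekly_estimates(series, delta, window=None, warmup=30):
    """Parameter track: window=k -> rolling scheme, window=None -> all history."""
    series = np.asarray(series, float)
    out = []
    for t in range(warmup, len(series) + 1):
        lo = 0 if window is None else max(0, t - window)
        out.append(ar1_to_ou(series[lo:t], delta))
    return out
```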
The scheme drafted above is just one possible way of dynamically estimating the parameters. Certainly, there are alternative approaches, such as full Bayesian inference, which might also be suitable in this context (the most recent week could then be viewed, for example, as newly arrived data used to update the parameters' estimation, resulting in a posterior distribution of the parameter vector).
The impact of the choice of the history period on the estimates of the long-term mean θ is shown in Figure 7.

Figure 7: Dynamics of the estimates of the long-term mean of the underlying OU process, calculated under different schemes of parameters' updating: 1) using the recent history (22 recent weeks) only, and 2) using all the available history to date, plotted against the actual close-of-the-week rate dynamics.
As reflected in Figure 7, the most irregular time period is the second half of 2014, which was marked by the dramatic fall of the euro.
The extreme fluctuations of the estimates of θ (which start earlier in 2014 for the recent-history updating scheme, and only later, towards 2015, when using the all-history scheme) may be an indication that during that period the estimates are simply not reliable, and that the OU process is no longer a good model for the underlying dynamics in such volatile markets.
Next, we also look at the dynamics of the parameter ratio λ/σ², the rate of convergence to the long-term mean over the current variance (the model distribution depends on this ratio rather than on each of the parameters alone).
The behavior of the estimates of this ratio, shown in Figure 8, provides an additional indication of the irregular character of the underlying process towards the second half of 2014.
At some point, the rate λ of reversion to the long-term mean, as estimated from the recent 22 weeks of data, even becomes negative, which means that the model is no longer viable on that time interval.
Towards the start of 2015, the estimates seem to converge, and the usage of the OU model might become valid again. However, the edge of 2014/2015 looks like a clear warning that models relying on a particular form of the underlying diffusion process (the OU process in this case) should be used with extreme care. During such time intervals, the strategy should probably not be used, or only used with additional control measures to exclude openings that could potentially result in negative returns, using indicators such as the ones we discuss in Chapter 5.
As already mentioned, the schemes for parameters' updating drafted above are clearly quite simplistic.
Also, we do not discuss goodness-of-fit at this point, and do not provide a justification that the OU process is a reliable fit for the data (excluding the second half of 2014), as opposed to other models. Clearly, goodness-of-fit is a separate issue that should be addressed in more detail, which would be beyond the scope of this review paper.
Here we only note that the strategy outlined in this work can, of course, be implemented under different assumptions about the dynamics of the underlying process, such as jump-diffusion processes and related models; for many special cases of these, the weekly-returns profile can also be calculated analytically, in a way similar to the one used in Chapter 3.
5 Discussion and outlook
Proposed usage of the analytical distribution

As discussed in Chapter 2, for the first test of the strategy implementation, we use a simple ad-hoc technique to optimize the parameters directly from historical data.
Ideally, of course, the parameters (U, D, T S, P C) should be estimated from the theoretical distribution of weekly returns.
A proper scheme for updating the (U, D, T S, P C) set could look like:
• Estimate the OU parameters in a way such as outlined in Chapter 4.
• Use the OU parameter estimates to calculate the analytical distribution of weekly returns, as described in Chapter 3, for a range of the strategy parameters (U, D, TS, PC).
• Pre-set a desirable level of the probability of a profit call (say, P(PC) ≥ 40%) and/or a desired expected value of the weekly returns, and optimize the set (U, D, TS, PC) so that the analytically predicted values of the PC probability and the weekly returns fit the desired levels.
That way, we would be solving a sort of inverse problem: find the (U, D, TS, PC) that optimize the expected profit.
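In its simplest form, this inverse problem amounts to a constrained grid search. The code below is a hypothetical sketch of ours: `predict` stands in for the analytical weekly-return distribution of Chapter 3 and is assumed to return the pair (P(PC), E[W_R]):

```python
import itertools

def optimize_params(grid, predict, min_pc_prob=0.40):
    """Pick (U, D, TS, PC) maximizing predicted E[W_R] s.t. P(PC) >= min_pc_prob.

    grid: tuple of candidate lists (U_values, D_values, TS_values, PC_values).
    predict(u, d, ts, pc) -> (p_pc, expected_return), assumed to come from
    the analytical distribution of Chapter 3.
    """
    best, best_val = None, float("-inf")
    for u, d, ts, pc in itertools.product(*grid):
        p_pc, exp_ret = predict(u, d, ts, pc)
        if p_pc >= min_pc_prob and exp_ret > best_val:
            best, best_val = (u, d, ts, pc), exp_ret
    return best, best_val
```

This brute-force loop is exactly why the problem is computationally heavy in practice: each `predict` call itself requires the nested integrals of Chapter 3.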
Of course, this optimization problem, despite being quite straightforward to state, is of significant computational complexity.
However, even if we stick to the simple optimization scheme described in Chapter 2 at this stage, we could still use similar ideas to help identify the false position-opening cases (the ones with negative predicted expected returns and a relatively small probability of PC) and exclude them.
That would increase the efficiency of the strategy even under the simple scheme for parameters' updating and optimization.
To make a first rough check of whether the predicted expected value of weekly returns and the probability of a profit call correspond to the actual results of the strategy implemented under the simple optimization scheme, we can compare these indicators directly.
We select a time interval from the history, the year 2013, and plot the actual returns of the test run of the strategy as in Chapter 2 (here we use the first of the parameter-optimization schemes described in Chapter 2, with optimization w.r.t. the last year of history only, and the first of the OU parameter-updating schemes from Chapter 4, using the 22 recent weeks only) against the expected values estimated via the OU estimates. The results are given in Figure 9.
For simplicity, we only consider the "D" configuration, when the price hits the "D" threshold first within the week and a long position opens. Combining "D" and "U" in one figure would complicate the visual analysis by mixing upward with downward trends. Of course, the "U" configuration can be analyzed in the same way.
As observed in Figure 9, there is a certain correlation between the expected (predicted) values and the actual returns (although this can only be used to get a general impression; a more accurate analysis would be needed for practical implementation).
Some further visual analysis (again quite loose, but useful as an initial approach) for the whole available history is shown in Figure 10. Again, we compare the expected values (calculated at the start of each week) with the actual return by the end of the corresponding week (again, only for the open positions corresponding to the "D" configuration).
Here, we use both of the schemes for parameters updating as described in Chapter 4: the first one uses the recent 22 weeks of data only, the other uses all the history available by the moment of parameters' estimation (on the weekly basis).
As an initial visual analysis shows, before 2014 both schemes seem to be in good correspondence with each other, and there also appears to be a certain match with the tendency of the actual returns. From 2014, however, the discrepancy between the mean values obtained using the two schemes becomes very large. The estimate that comes from the scheme using the whole history at least seems to predict the significant decline in the expected values around the first half of 2014, which in reality would be a considerable warning sign for a user of the strategy. The estimate coming from the parameter-updating scheme that uses only the 22 recent weeks of data, on the other hand, appears to completely fail to predict the behavior of the actual returns throughout the 2014 crisis.
Towards the start of 2015, both schemes seem to converge again and revert to a better prediction of the actual returns dynamics (although the "prediction" still seems questionable in 2015, probably due to the continuing volatile and rough behavior of the market).
Comparison of the calculated probability of the Profit Call with its actual frequency

Finally, we look at the behavior of such a crucial indicator as the probability of the process hitting the profit call threshold.
This probability can be estimated using the analytical formula for the maximum of the diffusion process (provided that the PC level is hit before the TS threshold), as described in Chapter 3. If we now introduce a simple Bernoulli r.v. that takes value 1 in case of a profit call and 0 otherwise, we can consider the sequence of such r.v.'s, one for each successive week of trading (certainly, the probability of the PC, say p_i, keeps varying from week i to the next). Summing these Bernoulli r.v.'s over a time period, the resulting sum is a Poisson Binomial r.v. with expected value Σ_i p_i and variance Σ_i (1 − p_i) p_i. As each probability p_i is pre-calculated at the start of the corresponding week, we can therefore calculate the resulting (theoretical) expected value of this Poisson Binomial r.v. (effectively, an "average theoretical probability of a profit call" over the period) and compare it with the actual frequency of weeks with profit calls (simply dividing the number of weeks in which the PC was hit by the total number of weeks in the period).
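The Poisson Binomial quantities used here are one-liners; the exact pmf is also cheap to get by dynamic programming (the function names are ours):

```python
import numpy as np

def poisson_binomial_moments(p):
    """Mean sum(p_i) and variance sum(p_i (1 - p_i)) of a sum of
    independent Bernoulli(p_i) indicators (one per trading week)."""
    p = np.asarray(p, float)
    return float(p.sum()), float((p * (1.0 - p)).sum())

def poisson_binomial_pmf(p):
    """Exact pmf of the number of profit-call weeks, by dynamic programming:
    new_pmf[k] = pmf[k]*(1-q) + pmf[k-1]*q for each week's probability q."""
    pmf = np.array([1.0])
    for q in np.asarray(p, float):
        pmf = np.append(pmf, 0.0) * (1.0 - q) + np.append(0.0, pmf) * q
    return pmf
```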
The results are given in Table 3 (again, we consider only the "D" configuration here, for simplicity and consistency).

Table 3. Relative number of actual profit calls vs. theoretical values.

Again, the PC estimate that comes from the scheme of parameters' updating that uses all the history available by the corresponding week is much closer to the observed relative number of actual profit calls, with the respective variances (of the corresponding Poisson Binomial r.v.'s) also in good agreement.
Figure 1: Weekly returns (in euros) of the test run of the strategy (the actual period covered is from the start of 2012 to mid-2015, around 182 weeks).

Figure 2: Dynamics of P&L according to three schemes of parameters' updating: from the scheme where the parameters are estimated from the previous year of historical data only, to the one that uses the 3 previous years.

Figure 3: Four selected trading weeks of EUR/USD from 2014, marking the persistent downward trend.
Figure 5: Density profile of the weekly returns. The profit call and the trailing stop thresholds are selected to be, respectively, 55 and 50.

Figure 6: Histogram of simulated weekly returns. Parameters are fixed at the same levels as in the previous example. Expressed in absolute price-change units (pips), the expected value has the form

E[W_R] = ∫ y f_{M_TS − TS}(y) dy + PC · P(M_TS ≥ PC),

where TS and PC are the sizes of the trailing stop and profit call thresholds (expressed in the selected units), M_TS is the running maximum of the process stopped at the trailing stop, and f_{M_TS − TS} is the pdf of the running maximum shifted down by the value of the trailing stop (effectively, M_TS − TS is the final balance of a position closed by the trailing stop).
Figure 8: Dynamics of the estimates of λ/σ², estimated using: 1) the recent history (22 recent weeks) only, and 2) all the available history to date.
Figure 9: Dynamics of actual weekly returns against the calculated expected value for a selected time interval.
Figure 10: Dynamics of actual weekly returns against the calculated expected value for the entire time interval (legend: actual; expected, recent history used; expected, all history used).
Table 1. Results of different schemes of parameters' updating. Each cell: optimized (U, D, TS, PC) set, followed by the weekly mean µ.

Period | previous year only    | 2 recent years        | 3 recent years        | 4 yrs
2011e  | (19, 20, 51, 58), 171 |                       |                       |
2012A  | (19, 20, 51, 58), 122 |                       |                       |
2012e  | (45, 33, 52, 61), 307 | (45, 34, 52, 61), 192 |                       |
2013A  | (45, 33, 52, 61), 224 | (45, 34, 52, 61), 199 |                       |
2013e  | (39, 32, 52, 61), 251 | (44, 33, 52, 61), 263 | (44, 33, 52, 61), 198 |
2014A  | (39, 32, 52, 61), 31  | (44, 33, 52, 61), 5   | (44, 33, 52, 61), 5   |
2014e  | (20, 22, 51, 59), 157 | (39, 32, 52, 61), 144 | (45, 33, 52, 61), 184 | to be filled
2015A  | (20, 22, 51, 59), 128 | (39, 32, 52, 61), 155 | (45, 33, 52, 61), 81  | to be filled
2015e  | (23, 21, 55, 63), 239 | to be filled          | to be filled          | to be filled
Acknowledgments

We acknowledge and appreciate the collaboration with the Research and Development team of Quantum Brains Capital, who provided feedback that allowed for useful and motivating discussions.
[1] I. I. Gihman and A. V. Skorokhod (1972). Stochastic Differential Equations. Springer-Verlag, New York.

[2] L. Pospisil, J. Vecer and O. Hadjiliadis (2009). Formulas for stopped diffusion processes with stopping times based on drawdowns and drawups. Stochastic Processes and their Applications, 119 (8), 2563-2578.
A growth model for RNA secondary structures

Francois David (Service de Physique Théorique, CEA Saclay, 91191 Gif-sur-Yvette Cedex, France)
Christian Hagendorf (Laboratoire de Physique Théorique, CNRS, Ecole Normale Supérieure, 75231 Paris cedex 05, France)
Kay Jörg Wiese (Laboratoire de Physique Théorique, CNRS, Ecole Normale Supérieure, 75231 Paris cedex 05, France)

21 Nov 2007. arXiv:0711.3421 (https://arxiv.org/pdf/0711.3421v1.pdf). doi:10.1088/1742-5468/2008/04/p04008

Abstract. A hierarchical model for the growth of planar arch structures for RNA secondary structures is presented, and shown to be equivalent to a tree-growth model. Both models can be solved analytically, giving access to scaling functions for large molecules, and corrections to scaling, checked by numerical simulations of up to 6500 bases. The equivalence of both models should be helpful in understanding more general tree-growth processes.
Introduction
RNA molecules play an important role in all living organisms [1]. They are usually found in an at least partially folded state, due to the pairing of a base with at most one other base. A given configuration is thus characterised by the set of base pairings, see figure 1. These pairings are mostly planar [2,3,4] (see [5] for non-planar corrections), which is what we will suppose from now on. At high temperatures, in the so-called "molten phase", energetic considerations only play a minor role, and the probability P_ij of two RNA bases to pair is [6]

P_ij ∼ |i − j|^{−ρ},  ρ = 3/2,   (1)
where i and j are the labels of the bases counted along the backbone/strand, and n is the overall size of the RNA molecule, i.e. its total number of bases. At low temperature, the RNA molecule will settle into the optimally paired (or folded) configuration, i.e. the minimal-energy state, as long as this state is reachable on the available time scales. The optimal fold for a given molecule is a question to be answered by biology. Since biological sequences are rather specific, much effort has been invested in understanding the properties of a random sequence, termed "random RNA". The idea is that either the folding properties of random RNA are close to those of biological sequences, or, if not, they must be characterised in order to understand the deviations present for biological RNA, eventually giving a hint why nature organises itself in a certain way. In solution, a single RNA molecule bends back onto itself and folds into a configuration of loops, stems and terminating bonds, due to pair formation between nucleotides located on different parts of the polymer strand. The set of base pairs, Watson-Crick pairs A-U, G-C and the less favorable wobble pair G-U, defines the secondary structure.

Figure 1. Illustration of RNA secondary structures: (a) an RNA molecule with a given base sequence folds into a base pair configuration (b). In the absence of pseudo-knots the secondary structure may be represented as a diagram of non-intersecting arches (c).
Characterising random RNA has proven a challenge so far: numerical work [2,7,8,9] is restricted to relatively small molecules, with up to maximally 2000 bases [10], despite the fact that rather efficient polynomial algorithms (∼ n³) exist. Analytical work was pioneered by Bundschuh and Hwa [2,7]. From their numerical work, they claim that for large molecules a random-base model is indistinguishable from a random pairing-energy model, in which the pairing energy ǫ_ij between bases i and j is a random Gaussian variable; this was confirmed in [8]. Bundschuh and Hwa then conjectured that a phase transition separates the high-temperature molten phase from a low-temperature frozen phase. Using an RG treatment, Lässig and Wiese [11] showed analytically that this phase transition exists and is of second order. They also calculated the exponents characterising the transition and, using a locking argument, extended their findings to the low-temperature (glass) phase. David and Wiese [12] substantiated these findings by constructing the field theory, showing its renormalisability to all orders, and performing an explicit 2-loop calculation, yielding

ρ_transition = ρ_frozen ≈ 1.36.   (2)
The field theory makes some definite predictions about the transition, which are hard to verify numerically. A major problem is that the systems are not large enough to analyze the asymptotic behavior. Under these circumstances, the knowledge of a scaling function would be very helpful, as would be the knowledge of the form of corrections to scaling. We therefore propose a simple hierarchical model, where all this can be calculated analytically ‡. This is based on the observation that, if the n(n − 1)/2 possible pairing energies ǫ_ij are ordered hierarchically,

ǫ_{i_1 j_1} ≫ ǫ_{i_2 j_2} ≫ . . . ≫ ǫ_{i_{n(n−1)/2} j_{n(n−1)/2}},   (3)
then the construction of the minimal-energy configuration is much simplified: first take the largest pairing energy ǫ_{i_1 j_1}, and pair bases i_1 and j_1. Among the remaining pairings, consider only those allowed by planarity. Among those, choose the one with the largest pairing energy, and pair the corresponding bases. Repeat this procedure until no more pairable bases can be found. The same idea underlies the dynamics of greedy algorithms for RNA folding: at each time step, choose the most favorable base pairing and fold it.
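The greedy dynamics above can be sketched directly. With i.i.d. continuous energies only the energy ranking matters, so a uniformly shuffled ordering of the candidate pairs is statistically equivalent (a simplification of ours):

```python
import random

def greedy_fold(n, rng):
    """Greedy hierarchical folding: visit candidate pairs in order of
    decreasing (i.i.d. random) pairing energy, accepting a pair iff both
    bases are still free and the new arch keeps the diagram planar."""
    pairs = [(i, j) for i in range(1, n) for j in range(i + 1, n + 1)]
    rng.shuffle(pairs)                     # stands in for a random energy order
    arches, paired = [], set()
    for (i, j) in pairs:
        if i in paired or j in paired:
            continue
        # planarity: no existing arch may have exactly one endpoint inside (i, j)
        if all((i < a < j) == (i < b < j) for (a, b) in arches):
            arches.append((i, j))
            paired.update((i, j))
    return arches
```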
In this article, we systematically analyse the statistical properties of the structures built in the hierarchical model. In particular, we compute exactly its properties as n → ∞; for example, we prove that in this limit the pairing probability reads

P(i, j) ∼ |i − j|^{−ρ},  ρ = (7 − √17)/2 ≈ 1.44.   (4)
We then calculate scaling functions for higher moments of the "height function" (which encodes the pairings), and their finite-size corrections. This is achieved with two complimentary approaches: Generating functions for the arch-deposition model ‡ While working on this project, we learned from Markus Müller that he had considered this model in his PhD-thesis [13], but not published elsewhere. He also found the scaling exponent ζ to be discussed below, but did not consider the scaling-functions and corrections-to-scaling which are the main purpose of this article.
introduced above, and a dual tree-growth process. The advantage is that quantities which can easily be calculated in one model, are difficult to obtain in the other, and vice versa. This idea may be interesting for more general tree-growth processes, since if the dual model can be constructed there, it would allow to calculate otherwise inaccessible quantities. For examples of tree growth processes, we refer the reader to [14,15,16,17] among the vast existing literature.
The presentation is organised as follows. In section 2, we provide a general framework for the hierarchical model in terms of recursion relations for finite n. The recursion relations are analysed by means of generating functions. In the limit n → ∞ we extract the scaling behaviour of various quantities and compute sub-leading finitesize corrections in sections 3, 4 and 5. We compare our results to numerical simulations in section 6. In section 7 we present an alternative tree-growth model which we show to yield equivalent structures even though the dynamics of their construction is quite different. Several technical points and extensions are relegated to three appendices.
Arch deposition model
Arch systems and height functions
We consider a strand with n bases labeled by indices i = 1, . . . , n. Similarly, we use the same index i to label the segments between consecutive bases i and i + 1. A secondary structure C is a set of base pairs (i, j) with 1 ≤ i < j ≤ n. C is called planar if any two (i, j), (k, l) ∈ C are either independent i < j < k < l or nested i < k < l < j. In what follows, the structures are supposed to be planar. Thus, we may represent a given structure by a diagram of non-intersecting arches (see figure 2a). Given some structure C, it is natural to ask whether it contains an arch a = (i, j). This is answered by the contact operator Φ C defined by
Φ_C(i, j) := 1 if a ∈ C, and 0 otherwise.   (5)
For our investigations the so-called height function for the segment i will play a central role. It is defined as
h_C(i, n) := Σ_{j=1}^{i} Σ_{k=i+1}^{n} Φ_C(j, k).   (6)
It counts the number of arches above a given segment [i, i + 1], and thus has boundary conditions h C (0, n) = h C (n, n) = 0. Therefore, the height function h C (i, n) provides a one-to-one correspondence between C and mountain reliefs (Dyck-like paths) on the interval [0, n] subject to vanishing boundary conditions and |h C (i + 1, n) − h C (i, n)| = 1 or 0 (see figure 2b). We define the average height by
h̄_C(n) := (1/n) Σ_{i=1}^{n−1} h_C(i, n).   (7)
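Definitions (6) and (7) translate directly into code; a small sketch with our own naming:

```python
def height_profile(arches, n):
    """h_C(i, n) for i = 0..n: the number of arches (j, k) with j <= i < k,
    i.e. arches covering segment [i, i+1] -- eq. (6).
    Boundary values h[0] = h[n] = 0 hold automatically."""
    h = [0] * (n + 1)
    for (j, k) in arches:
        for i in range(j, k):
            h[i] += 1
    return h

def mean_height(arches, n):
    """Average height, eq. (7): (1/n) * sum_{i=1}^{n-1} h_C(i, n)."""
    h = height_profile(arches, n)
    return sum(h[1:n]) / n
```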
The random arch deposition process
Definition of the model (model A). The structures C are built up in the following way: At initial time step t = 0, we start with n unoccupied points on the line. At each time step t, we deposit a new arch as follows. At time step t − 1, we have already a planar system of t − 1 arches linking 2(t − 1) points. We have m = n − 2(t − 1) free points left, and we may build m(m − 1)/2 different arches. We now consider the subsets of these arches (i, j) which keep the arch system planar, when added to the present structure. We choose at random, and with equal probability, one of these arches, and add it to the system at time t (as depicted on figure 3). The process is stopped as soon as no more planar deposition is possible. The stopping time (t stop = number of arches of the final configuration) will vary from configuration to configuration, since not all points get paired.
We call this arch deposition process "model A".

Hierarchy and recursion for probabilities. Our construction is "hierarchical" in the sense that each deposition partitions the strand into two non-connected substrands. Since this procedure is performed at random, it naturally induces a probability measure P_A(C) on the set of structures C with a given number of points n. Although it turns out to be a quite tedious exercise to compute the probabilities of structures, even with only n = 4, 5, 6, . . . points, we can write a formal yet powerful recursion relation for P_A(C). Given C, any arch a ∈ C may have been the first arch in the construction process (at t = 1). Since the deposition of a = (i, j), 1 ≤ i < j ≤ n, is not constrained by the presence of any other arches, its probability is uniform and simply given by 2/[n(n − 1)]. This first step leads to a separation of the strand into an "interior" part with n_1 = j − i − 2 points and an "exterior" part with n_2 = n − j + i. The deposition process then grows structures C_1 inside and C_2 outside the first arch (see figure 4). The key observation is that these structures grow independently. Therefore, their joint probability factorises:

P_A(C) = Σ_{arch a ∈ C} [2/(n(n − 1))] P_A(C_1) P_A(C_2).   (8)

Figure 4. Decomposition of a configuration C in model A.

This recursion relation, together with the initial condition P_A = 1 for the n = 0 and n = 1 configurations (no point and a single free point), is sufficient to obtain all probabilities. In fact, it is this relation that renders the arch-deposition model amenable to exact analytic calculations. With the help of P_A(C) we can compute averages, i.e. expectation values of an observable F_C, via

⟨F⟩ = ⟨F_C⟩ = Σ_C P_A(C) F_C,   (9)
where the sum runs over all possible structures with a fixed number n of points. Throughout this article, we follow the convention of denoting objects that depend on an individual structure C by a subscript C.
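Equation (8) also suggests a direct sampling scheme: draw the first arch uniformly among all n(n − 1)/2 pairs, then recurse independently on the interior and exterior substrands. A minimal Python sketch (function names are ours) for estimating averages of the form (9):

```python
import random

def sample_arches(n, rng):
    """Sample a maximal planar arch system on points 1..n from P_A,
    using the hierarchical recursion (8): the first arch is uniform
    among all point pairs, then interior and exterior grow independently.
    Returns a list of arches (i, j) in absolute 1-based positions."""
    def grow(points):
        m = len(points)
        if m < 2:
            return []
        a, b = sorted(rng.sample(range(m), 2))
        arch = (points[a], points[b])
        inside = grow(points[a + 1:b])                 # interior substrand
        outside = grow(points[:a] + points[b + 1:])    # exterior substrand
        return [arch] + inside + outside
    return grow(list(range(1, n + 1)))

def estimate_paired_prob(n, point, samples, seed=0):
    """Monte Carlo estimate of the probability that `point` is paired."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        arches = sample_arches(n, rng)
        if any(point in arch for arch in arches):
            hits += 1
    return hits / samples

print(estimate_paired_prob(4, 1, 20000))   # ≈ 5/6 for n = 4
```

For n = 2 the estimate is exactly 1 (two points always pair), and for n = 3 it converges to 2/3, since the first (and only) arch is uniform among the three possible pairs.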
Summary of results
In this article, we focus on the mean height at a given point, h(i, n) = ⟨h_C(i, n)⟩, and the probability P(i, j) that two bases located at i and j are paired. The latter is the expectation value of the contact operator Φ_C(i, j): P(i, j) := ⟨Φ_C(i, j)⟩. Before embarking on calculations, let us briefly summarise some important properties of these quantities as well as our main results. First, note that the construction of the height function (6) implies that h(1, n) is the probability that point i = 1 is paired to any other point 2 ≤ j ≤ n on the strand. Since averaging over structures leads to translational invariance, ⟨Φ_C(i, j)⟩ = ⟨Φ_C(i + m, j + m)⟩ for all m, we can interpret h(1, n) as the probability that an arbitrary point 1 ≤ i ≤ n is involved in a pair. We compute h(1, n) for any n and show that it converges to
lim n→∞ h(1, n) = 1 − e −2 = 0.864665 . . .(10)
The full information about all possible structures is contained both in the height profiles as in the pairing probabilities. In the scaling limit n → ∞, we show that the height function and the pairing probabilities take the scaling forms
h(i, n) ∼ n→∞ n ζ H 1 i n and P (i, j) ∼ n→∞ n −ρ P |i − j| n(11)
with scaling functions H₁ and P, which we compute exactly, as well as the scaling exponents ζ and ρ. From eqn. (6), we immediately deduce the scaling relation ζ + ρ = 2.
The exponent ζ is also related to the intrinsic Hausdorff dimension of the tree structure dual to the arch system by d h = 1/ζ. Therefore it is sufficient to determine the exponent ζ which we show to be
ζ = √ 17 − 3 2 ≈ 0.561553 , ρ = 7 − √ 17 2 ≈ 1.43845 .(12)
This agrees with [13]. It follows that the average mean height h(n) = h C (n) grows like n ζ for large n. We determine its exact generating function, which allows us to compute sub-leading corrections to the scaling limit to any desired accuracy. The analysis of higher moments h(i, n) k naturally raises the question of multifractality of the arch structures/height profiles. We show that
h C (i, n) k ∼ n→∞ n ζ k H k i n with ζ k = kζ(13)
with scaling functions H k that we can in principle compute. Using this result we are able to prove the absence of multifractality.
Recurrence relations and generating functions
We now exploit the recursion relation (8) to compute the moments of the height function h C (i, n) k , k = 0, 1, 2, . . . Our general strategy is to extract their properties by analyzing the behavior of their corresponding generating functions.
Recurrence relation for the height function: the principle
We want to evaluate the height h_C(i, n) for a given structure C. The first arch a = (j, k) splits C into the two independent substructures C₁ and C₂, with lengths n₁ = n − k + j − 1 and n₂ = k − j − 1 respectively. We now consider the height over the segment [i, i + 1]. With respect to the first arch a, this segment may have three different locations, as indicated in figure 5. (a) If i < j, the segment is situated on the part of the strand which belongs to C₁, and thus the height is given by h_{C₁}(i, n − k + j − 1). (b) The case i ≥ k is similar, but we must shift the position i → i − k + j − 1; we thus find the height h_{C₁}(i − k + j − 1, n − k + j − 1).
(c) Finally, if j ≤ i < k, we have to count the height of the structure C₂ at the shifted position i → i − j, the arches of C₁ passing over the collapsed interior region (which sits at position j − 1 in C₁), and the contribution from a itself. These three terms together are h_{C₁}(j − 1, n − k + j − 1) + h_{C₂}(i − j, k − j − 1) + 1. Upon averaging and using (8) we obtain the recursion relation for the average height function
$$\frac{n(n-1)}{2}\, h(i,n) = \sum_{i<j<k<n} h(i,\, n-k+j-1) + \sum_{0<j<k\le i} h(i-k+j-1,\, n-k+j-1) + \sum_{0<j\le i<k<n} \big[\, h(j-1,\, n-k+j-1) + h(i-j,\, k-j-1) + 1 \,\big] \qquad (14)$$
In the scaling limit n → ∞, we may insert the scaling ansatz h(i, n) ∼ n^ζ H(i/n) from (11) and replace sums by integrals, which yields, after a few manipulations, a Volterra-type integral equation for the scaling function H. In the sequel, we shall develop a more systematic approach to extract the scaling behaviour, based on recursion relations like (14). Furthermore, this allows us to compute sub-leading corrections to the scaling limit and therefore to quantify finite-size contributions exactly.
Generating functions for the local height moments
Since the relations for the height h are additive in h, it is convenient to deal with the exponential function as a generating function of the moments. We thus consider the generating function for the height h at site i for a strand of length n
$$E(i,n;z) = \big\langle \exp\!\big(z\, h_C(i,n)\big) \big\rangle = \sum_{k=0}^{\infty} \frac{z^k}{k!}\, \langle h_C(i,n)^k \rangle. \qquad (15)$$
We obtain the recurrence relation for E:

$$\frac{n(n-1)}{2}\, E(i,n;z) = \sum_{i<j<k} E(i,\, n-k+j-1;\, z) + \sum_{j<k\le i} E(i-k+j-1,\, n-k+j-1;\, z) + e^z \sum_{j\le i<k} E(j-1,\, n-k+j-1;\, z)\, E(i-j,\, k-j-1;\, z)\,. \qquad (16)$$
Note the crucial factorisation in the last term due to the independence of the substructures C 1 and C 2 inside C once the first arch a is chosen. It is convenient to introduce the "grand-canonical" generating function
G(u, v; z) = ∞ n=0 n i=0 e zh C (i,n) u i v n−i = ∞ n=0 n i=0 E(i, n; z)u i v n−i ,(17)
which contains the contribution of strands with arbitrary length n, and which is left/right symmetric G(u, v; z) = G(v, u; z). The discrete recursion relation for E becomes the non-linear partial differential equation
$$\left[\frac{u^2}{2}\frac{\partial^2}{\partial u^2} + \frac{v^2}{2}\frac{\partial^2}{\partial v^2} + uv\,\frac{\partial^2}{\partial u\,\partial v}\right] G(u,v;z) = \left[\frac{u^2}{1-u}\left(u\frac{\partial}{\partial u}+1\right) + \frac{v^2}{1-v}\left(v\frac{\partial}{\partial v}+1\right)\right] G(u,v;z) + uv\, e^z\, G(u,v;z)^2 \qquad (18)$$
We directly derive initial conditions for G(u, v; z) at u = 0 (or v = 0) from the series development (17). Since by definition the height function vanishes at the ends of the strand, we have
G(0, v; z) = ∞ n=0 e zh C (0,n) v n = ∞ n=0 v n = 1 1 − v .(19)
For z = 0 we find G(u, v; 0) = (1 − u)^{−1}(1 − v)^{−1}.
2.4.3. Generating functions for the local height h(i, n) and h(n)

From G we obtain the generating function for the height h(i, n) itself:
F (u, v) = ∞ n=0 n i=0 h C (i, n) u i v n−i = ∂ ∂z G(u, v; z) z=0 .(20)
Using (18) and G(u, v; 0) = (1 − u)^{−1}(1 − v)^{−1}, we conclude that F satisfies the linear partial differential equation

$$\left[\frac{u^2}{2}\frac{\partial^2}{\partial u^2} + \frac{v^2}{2}\frac{\partial^2}{\partial v^2} + uv\,\frac{\partial^2}{\partial u\,\partial v}\right] F(u,v) = \left[\frac{u^2}{1-u}\left(u\frac{\partial}{\partial u}+1\right) + \frac{v^2}{1-v}\left(v\frac{\partial}{\partial v}+1\right) + \frac{2uv}{(1-u)(1-v)}\right] F(u,v) + \frac{uv}{(1-u)^2(1-v)^2} \qquad (21)$$
with initial conditions F(0, v) = F(u, 0) = 0. It is straightforward to obtain the generating function of the sum of the heights n h(n) (the total area below the height curve h(i, n), 0 ≤ i ≤ n) from F(u, v) by setting u = v:

$$K(v) := F(v,v) = \sum_{n=0}^{\infty} \sum_{i=0}^{n} \langle h_C(i,n) \rangle\, v^n = \sum_{n=0}^{\infty} n\, h(n)\, v^n \qquad (22)$$
Equation (21) implies that K solves the ordinary differential equation
(1 − v) 2 K ′′ (v) − 2v(1 − v)K ′ (v) − 4(2 − v)K(v) = 2 (1 − v) 2(23)
From h(0) = h(1) = 0 we infer the intial conditions K(0) = K ′ (0) = 0. Analysis of (21) and (23) in the limit u, v → 1 will give access to the scaling limits of the height function as well as its average h(n).
Mean height and the scaling exponent ζ
In this section, we derive the exact scaling form of the mean height h(n) from the differential equation (23) for K(v). In order to get an idea of the scaling limit, let us suppose that h(n) scales like
h(n) ∼ n→∞ c n ζ .(24)
Since ζ > 0, insertion of this ansatz into (22) implies that the generating function K(v) is analytic in the vicinity of v = 0, with convergence radius 1. Its closest singularity is situated at v = 1, with a power-like divergence
K(v) ∼ v→1 − c Γ(ζ + 2) (1 − v) 2+ζ .(25)
Inserting this ansatz into (23), the coefficient of the most singular term, p(ζ)(1 − v)^{−2−ζ} with p(ζ) = ζ² + 3ζ − 2, must vanish. The roughness exponent ζ is thus a solution of p(ζ) = 0, i.e.

$$\zeta_\pm = \frac{-3 \pm \sqrt{17}}{2} \qquad (26)$$
We thus identify the roughness exponent with the larger solution
ζ = ζ + = √ 17 − 3 2 = 0.561552 . . . .(27)
This is the value obtained (using a different argument) by Markus Müller in [13]. Let us recall that the roughness exponent ζ is related to the pairing-probability exponent ρ by ζ + ρ = 2. Moreover, its inverse gives the intrinsic fractal dimension (intrinsic Hausdorff dimension) d_f = 1/ζ. Thus for our model
ρ = 7 − √ 17 2 = 1.438447 . . . , d f = √ 17 + 3 4 = 1.78077 . . . . (28)
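As a quick numerical cross-check of (26)–(28) (a trivial computation, included for completeness):

```python
import math

zeta = (math.sqrt(17) - 3) / 2     # positive root of p(zeta) = zeta^2 + 3*zeta - 2
rho = 2 - zeta                     # from the scaling relation zeta + rho = 2
d_f = 1 / zeta                     # intrinsic Hausdorff dimension

assert abs(zeta**2 + 3*zeta - 2) < 1e-12
assert abs(d_f - (math.sqrt(17) + 3) / 4) < 1e-12   # 1/zeta = (sqrt(17)+3)/4
print(zeta, rho, d_f)   # 0.56155..., 1.43844..., 1.78077...
```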
This exact value for the roughness exponent ζ is larger than the one for generic arch systems (with weight factors given by the Catalan statistics, i.e. generic trees or branched polymers in the dual picture), which corresponds to RNA in the homopolymer phase (no disorder), where
ζ 0 = 1 2 , ρ 0 = 3 2 , d f 0 = 2 .(29)
However, it is smaller than the value ζ_random RNA ≈ 0.66 observed in numerical simulations for random RNA [8, 7], and than the 2-loop RG estimate ζ = 0.64 for random RNA [12].
We now solve equation (23) exactly. First, note that a particular solution of the full equation is given by
K 0 (v) = − 1 (1 − v) 2 .(31)
Consequently, we need an appropriate solution K 1 (v) of the homogeneous version of (23). Performing the transformations
K 1 (v) = e −2v (v − 1) ζ+1 u(z) , z = 2(1 − v) , ζ = √ 17 − 3 2 (32)
the equation for K 1 is changed to a confluent hypergeometric equation for u(z)
zu ′′ (z) + [2(ζ + 2) − z] u ′ (z) − (ζ + 1)u(z) = 0 .(33)
After a few manipulations, the (appropriate) general solution of this differential equation
for K(v) = K 0 (v) + K 1 (v) is of the form K(v) = C + (1 − v) ζ+2 M(−ζ, −2 − 2ζ; 2 − 2v) + C − (1 − v) 1+ζ M(ζ + 3, 2ζ + 4; 2 − 2v) − 1 (1 − v) 2(34)
where M(a, b, z) is the confluent hypergeometric function [18]. The coefficients C + and C − are fixed by the constraint that K(v) be analytic at v = 0 and that its Taylor expansion start at order v 2 . Hence C + and C − are given by complicated and not especially enlightening combinations of confluent hypergeometric functions at z = 2.
Numerically we find
C + = 0.713263 . . . C − = 0.519299 . . . .(35)
The first terms of the Taylor expansion of K are rationals
K(v) = v 2 + 4 3 v 3 + 8 3 v 4 + 56 15 v 5 + . . . .(36)
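These coefficients can be generated directly from the differential equation (23): matching powers of v on both sides yields a linear recursion for the coefficients k_n of K(v) = Σ k_n vⁿ. A short exact-arithmetic sketch (the rearrangement of (23) into a recursion is ours):

```python
from fractions import Fraction

def K_series(nmax):
    """Taylor coefficients k_n of K(v) = sum_n k_n v^n from eq. (23),
    with initial conditions k_0 = k_1 = 0.  Matching [v^n] in
    (1-v)^2 K'' - 2v(1-v)K' - 4(2-v)K = 2(1-v)^{-2} gives
    (n+2)(n+1) k_{n+2} = 2(n+1) + 2n(n+1) k_{n+1}
                         + (-n(n-1) + 2n + 8) k_n - (2(n-1) + 4) k_{n-1}."""
    k = [Fraction(0), Fraction(0)]
    for n in range(nmax - 1):
        km1 = k[n - 1] if n >= 1 else Fraction(0)
        rhs = (2 * (n + 1)
               + 2 * n * (n + 1) * k[n + 1]
               + (-n * (n - 1) + 2 * n + 8) * k[n]
               - (2 * (n - 1) + 4) * km1)
        k.append(rhs / ((n + 2) * (n + 1)))
    return k

print(K_series(5)[2:])   # k_2..k_5 = 1, 4/3, 8/3, 56/15, as in eq. (36)
```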
The asymptotic limit n → ∞ is equivalent to v → 1 − . In this case, K(v) has a powerlaw divergence and its most singular terms contribute to the leading orders of h(n). A Taylor expansion of (34) yields
K(v) = C + (1 − v) 2+ζ 1 + ζ 1 + ζ (1 − v) − ζ(1 − ζ) (1 + ζ)(1 + 2ζ) (1 − v) 2 − 1 (1 − v) 2 + . . . (37)
where we have omitted the terms which remains finite v → 1 − . This expression allows to compute the scaling behaviour for the average height h(n) by inversion of the transformation. After some algebra, we find for n ≫ 1
h(n) = C + Γ(ζ + 2) n Γ(n + 1) Γ(2 + ζ + n) + ζ Γ(1 + ζ + n) − ζ(1 − ζ)(1 + ζ) (1 + 2ζ) Γ(ζ + n) − 1 − n −1 + . . . = 0.51334 n ζ − 1 + 1.31498 n ζ−1 − n −1 + 0.41413 n ζ−2 + O(n ζ−3 )(38)
Therefore at leading order we indeed find the scaling law h(n) ∼ c n ζ with c = C + /Γ(ζ + 2) = 0.51334 . . .. Note that in principle, all amplitudes in (38) as well as subsequent corrections may be computed exactly in terms of hypergeometric functions and the gamma function. However, we shall omit these rather lengthy expressions and content ourselves with numerical values. This explicit solution will be useful to test numerical simulations and the domain of validity for the scaling ansatz (see section 6).
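The leading amplitude in (38) can be checked against the exact coefficients k_n = n h(n) of K(v): running the recursion implied by (23) (our rearrangement, in floating point) out to n = 1000, the combination (h(n) + 1)/n^ζ should be close to C₊/Γ(ζ + 2) ≈ 0.51334, up to O(1/n) corrections:

```python
import math

zeta = (math.sqrt(17) - 3) / 2
c = 0.713263 / math.gamma(zeta + 2)    # C_+ / Gamma(zeta + 2), eq. (38)

def mean_heights(nmax):
    """k_n = n*h(n): Taylor coefficients of K(v), from eq. (23)."""
    k = [0.0, 0.0]
    for n in range(nmax - 1):
        km1 = k[n - 1] if n >= 1 else 0.0
        rhs = (2*(n + 1) + 2*n*(n + 1)*k[n + 1]
               + (-n*(n - 1) + 2*n + 8)*k[n] - (2*(n - 1) + 4)*km1)
        k.append(rhs / ((n + 2)*(n + 1)))
    return k

k = mean_heights(1000)
h1000 = k[1000] / 1000
assert abs(c - 0.51334) < 1e-4
assert abs((h1000 + 1) / 1000**zeta - c) < 0.003   # subleading terms are O(1/n)
```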
Scaling behaviour and scaling functions
Scaling form for the F function
In this section we show that the average height function h(i, n) = ⟨h_C(i, n)⟩ takes the following scaling form in the limit of long strands:

$$\langle h_C(i,n)\rangle \underset{n\to\infty}{=} n^{\zeta}\, H_1(x)\,, \qquad x = \frac{i}{n}. \qquad (39)$$
This is in fact a particular case of the general scaling form for the moments of h:

$$\langle h_C(i,n)^k\rangle \underset{n\to\infty}{=} n^{k\zeta}\, H_k(x)\,, \qquad x = \frac{i}{n}. \qquad (40)$$
The partial differential equation (21) indicates that the generating function F (u, v) has in R 2 singular lines at u = 1 and at v = 1. These singularities govern the long-strand limit n → ∞ with respectively n − i = O(1) and i = O(1). The scaling limit n → ∞, i/n = O(1) is governed by the singularity at u = v = 1.
To prove validity of the scaling (39), it is sufficient to show that in the limit u, v → 1 the generating function F (u, v) scales as
F (u, v) = u,v→1 τ −2−ζ F 1 (ω) (41) with τ = 1 − u + v 2 , σ = v − u 2 , ω = σ 2 /τ 2 .(42)
In terms of the new variables σ and τ , the scaling limit is τ and σ → 0, ω = σ 2 /τ 2 = O(1) fixed. In this scaling limit the transformation (20) h C → F becomes a double Laplace transform. The corresponding transformation
H 1 (x) → F 1 (ω) is F 1 (ω) ≈ ∞ 0 dn 1 0 dx n ζ+1 (1 − √ ωτ − τ ) nx (1 + √ ωτ − τ ) n−nx H 1 (x) ≈ ∞ 0 dn 1 0 dx n ζ+1 e −( √ ω+1)τ nx e −(1− √ ω)τ n(1−x) H 1 (x) = Γ(2 + ζ) 1 0 dx H 1 (x) 1 − √ ω + 2x √ ω −(2+ζ) .(43)
To obtain the equation for F₁(ω), we keep the most singular terms in (21) when u, v → 1. This gives

$$\left[\frac{1}{2}\frac{\partial^2}{\partial u^2} + \frac{1}{2}\frac{\partial^2}{\partial v^2} + \frac{\partial^2}{\partial u\,\partial v} - \frac{1}{1-u}\frac{\partial}{\partial u} - \frac{1}{1-v}\frac{\partial}{\partial v} - \frac{2}{(1-u)(1-v)}\right] F(u,v) \simeq 0\,. \qquad (44)$$
Using the ansatz (41), we obtain a hypergeometric differential equation for F₁(ω):

$$\omega(1-\omega)\, F_1''(\omega) + \left[\frac{3+2\zeta}{2} - \frac{7+2\zeta}{2}\,\omega\right] F_1'(\omega) - \frac{\zeta+4}{2}\, F_1(\omega) = 0\,. \qquad (45)$$
Thus the scaling function F 1 (ω) is a hypergeometric function
F 1 (ω) = D 2 F 1 (1 + ζ/2, (3 + ζ)/2, ζ + 3/2, ω) .(46)
where D denotes some constant. We explicitly compute D from the constants C₊ and C₋ obtained in (34) and (35): since K(v) = F(v, v) ≃ (1 − v)^{−(2+ζ)} F₁(0) = τ^{−(2+ζ)} F₁(0) as v → 1, one obtains

$$D = C_+ = 0.713263\cdots \qquad (47)$$
As we shall see in the next subsection, the form (46) for F₁(ω) implies a very simple scaling function for the average height.
Scaling form for the average height function h(i, n)
Proposition: The scaling limit H 1 of the average height distribution h(i, n), as defined in (39), is given by a simple "beta-law" with exponent ζ
H 1 (x) = E x ζ (1 − x) ζ , ζ = √ 17 − 3 2(48)
and the amplitude

$$E = \frac{\Gamma(2+2\zeta)}{(1+\zeta)\,\Gamma(1+\zeta)^3}\; C_+ = 1.45717\ldots \qquad (49)$$
where C₊ is given in (47). Discussion and proof: The fact that the average height H₁(x) scales as x^ζ for small x was already known to Müller [13]. The simple exact form for H₁(x) is quite remarkable and unexpected. Our first hints for (48) came from the numerical simulations that we describe in section 6.
To prove (48) it is simpler to start from (48) and to show that it implies the form (46) for F₁(ω) (the transformation H₁ → F₁ is linear and one-to-one). Inserting (48) into the definition of F(u, v) when u, v → 1, we have (this is equivalent to using (43))
$$F(u,v) \simeq E \sum_{i,j} \frac{i^{\zeta} j^{\zeta}}{(i+j)^{\zeta}}\, u^i v^j \simeq E \int_0^\infty \! di \int_0^\infty \! dj\; \frac{i^{\zeta} j^{\zeta}}{(i+j)^{\zeta}}\, e^{-i(1-u)}\, e^{-j(1-v)} = \frac{\sqrt{\pi}\,E}{2^{1+2\zeta}}\, \frac{\Gamma(1+\zeta)\Gamma(2+\zeta)}{\Gamma(3/2+\zeta)}\, (1-v)^{-(2+\zeta)}\; {}_2F_1\!\left(1+\zeta,\, 2+\zeta,\, 2+2\zeta;\, \frac{u-v}{1-v}\right) \qquad (50)$$
We now use the quadratic identity for hypergeometric functions [18]
2 F 1 (a, b, 2b, z) = 1 − z 2 −a 2 F 1 a 2 , 1 + a 2 , 2b + 1 2 , z 2 − z 2(51)
in the special case a = 2 + ζ, b = 1 + ζ. We obtain
$$F(u,v) = \frac{\sqrt{\pi}\,E}{2^{1+2\zeta}}\, \frac{\Gamma(1+\zeta)\Gamma(2+\zeta)}{\Gamma(3/2+\zeta)} \left(1-\frac{u+v}{2}\right)^{-2-\zeta} {}_2F_1\!\left(\frac{2+\zeta}{2},\, \frac{3+\zeta}{2},\, \frac{3}{2}+\zeta;\, \left(\frac{v-u}{2-u-v}\right)^{2}\right). \qquad (52,\,53)$$
Upon identification with (41) (and using the duplication formula for the Γ-function, [18]) we recover the scaling solution (46) for F 1 . Q.E.D.
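The proposition can also be checked numerically: evaluate the integral transform (43) of the beta law (48) by quadrature and compare with C₊ ₂F₁(1 + ζ/2, (3 + ζ)/2, ζ + 3/2; ω) from (46), (47). The sketch below implements the Gauss series for ₂F₁ directly, so no special-function library is assumed:

```python
import math

zeta = (math.sqrt(17) - 3) / 2
Cplus = 0.713263                                    # eq. (35)
E = math.gamma(2 + 2*zeta) / ((1 + zeta) * math.gamma(1 + zeta)**3) * Cplus  # eq. (49)

def hyp2f1(a, b, c, x, terms=200):
    """Gauss hypergeometric series, valid for |x| < 1."""
    s, t = 0.0, 1.0
    for n in range(terms):
        s += t
        t *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
    return s

def F1_from_H1(omega, npts=20000):
    """Midpoint-rule evaluation of eq. (43) with H1(x) = E [x(1-x)]^zeta."""
    sq = math.sqrt(omega)
    h = 1.0 / npts
    total = 0.0
    for i in range(npts):
        x = (i + 0.5) * h
        H1 = E * (x * (1 - x))**zeta
        total += H1 * (1 - sq + 2*x*sq)**(-(2 + zeta)) * h
    return math.gamma(2 + zeta) * total

omega = 0.3
lhs = F1_from_H1(omega)
rhs = Cplus * hyp2f1(1 + zeta/2, (3 + zeta)/2, zeta + 1.5, omega)
assert abs(lhs - rhs) < 1e-3 * abs(rhs)
```

At ω = 0 the transform reduces to Γ(2 + ζ) E B(ζ + 1, ζ + 1) = C₊, recovering (47).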
Scaling for higher moments of the height function
In this section, we study the higher moments of the local height h C (i, n) at site i for a strand of length n, h C (i, n) k . Once again, the starting point is a generating function
G k (u, v) = ∞ n=0 n i=0 u i v n−i h C (i, n) k = ∂ k ∂z k G(u, v; z) z=0 .(54)
G(u, v; z) denotes the solution of (18). From this equation, we are able to recursively determine a generating function G k by a partial differential equation involving functions G k ′ , k ′ < k. For example, G 2 is a solution of the linear equations
$$\left[\frac{u^2}{2}\frac{\partial^2}{\partial u^2} + \frac{v^2}{2}\frac{\partial^2}{\partial v^2} + uv\,\frac{\partial^2}{\partial u\,\partial v} - \frac{u^2}{1-u}\left(u\frac{\partial}{\partial u}+1\right) - \frac{v^2}{1-v}\left(v\frac{\partial}{\partial v}+1\right)\right] G_2(u,v) = uv\left[\frac{1}{(1-u)^2(1-v)^2} + \frac{4\,F(u,v)}{(1-u)(1-v)} + 2\,F(u,v)^2 + \frac{2\,G_2(u,v)}{(1-u)(1-v)}\right] \qquad (55)$$
where the right-hand side involves the k = 0 and k = 1 moments. F (u, v) = G 1 (u, v) is the generating function for the average height h C (i, n) studied previously.
Averaged k-moments
Let us first consider the generating function for the "integral" of the averaged k-moment
K(t; z) = G(t, t; z) = ∞ k=0 z k k! K k (t) (56) with K k (t) = ∞ n=0 t n n i=0 h C (i, n) k(57)
We already know K 0 (t) = (1 − t) −2 and K 1 (t) from (34). K(t; z) satisfies the non-linear differential equation
$$\left[\frac{1}{2}\frac{\partial^2}{\partial t^2} - \frac{t}{1-t}\frac{\partial}{\partial t} - \frac{2}{1-t}\right] K(t;z) = e^z\, K(t;z)^2 \qquad (58)$$
The scaling limit corresponds to t → 1 − . In this limit, we expect the functions K k (t) to scale as
K k (t) ≃ b k (1 − t) −2−ζ k , ζ k = k ζ(59)
This implies that the average of the k-th moment of the local height scales with the length n of the strand as

$$\frac{1}{n}\sum_{i=0}^{n} \langle h_C(i,n)^k\rangle \simeq a_k\, n^{k\zeta}\,, \qquad a_k = \frac{b_k}{\Gamma(2+k\zeta)} \qquad (60)$$
The coefficients a k can be computed recursively from the first non-trivial one a 1 ≃ 0.51334, that we computed previously, see (38). Indeed from (59) it follows that K(t, z) takes the scaling form
K(t, z) = t→1 − 1 (1 − t) 2 K(u) = 1 (1 − t) 2 ∞ k=0 b k k! u k ,(61)
with the scaling variable
u = z(1 − t) −ζ . Corrections are of order (1 − t) 1−ζ ; considering e z K(t, z), they would be of order (1 − t).
Using (58) and inserting the scaling function K(u), we obtain up to terms of order (1 − t) the equation
K(u) + uK ′ (u) + 1 2 ζ 2 u 2 K ′′ (u) = K(u) 2 .(62)
We obtain
b 2 = b 2 1 5 + √ 17 6 , b 3 = b 3 1 92 + 22 √ 17 59 , · · ·(63)
Numerically, we find
K(u) = 1 + 0.713243u + 0.386756u 2 + 0.18727u 3 + 0.0851827u 4 + 0.0372364u 5 + 0.015835u 6 + 0.00659914u 7 + 0.00270789u 8 + . . . (64)
As an application, let us evaluate the average height fluctuations by considering the quantity
$$\Delta^2 = \frac{1}{n}\sum_{k=1}^{n}\Big(\langle h_C(k,n)^2\rangle - \langle h_C(k,n)\rangle^2\Big) \approx \left[a_2 - E^2\, \frac{\Gamma(2\zeta+1)^2}{\Gamma(4\zeta+2)}\right] n^{2\zeta} \approx 0.055658\; n^{2\zeta} \qquad (65)$$
We thus conclude that the fluctuations of the height function remain large in the scaling limit (see section 6).
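The numerical amplitude can be reproduced from the quantities computed above: a₂ = b₂/Γ(2 + 2ζ) from (60) and (63), while (1/n)Σₖ⟨h_C(k, n)⟩² ≈ n^{2ζ} E² ∫₀¹ x^{2ζ}(1 − x)^{2ζ} dx = n^{2ζ} E² Γ(2ζ + 1)²/Γ(4ζ + 2). A sketch (C₊ taken from (35)):

```python
import math

zeta = (math.sqrt(17) - 3) / 2
b1 = 0.713263                                   # b_1 = C_+, eqs. (35), (61)
b2 = b1**2 * (5 + math.sqrt(17)) / 6            # eq. (63)
a2 = b2 / math.gamma(2 + 2*zeta)                # eq. (60)
E = math.gamma(2 + 2*zeta) / ((1 + zeta) * math.gamma(1 + zeta)**3) * b1  # eq. (49)

# amplitude of Delta^2: a_2 - E^2 * B(2*zeta+1, 2*zeta+1)
amp = a2 - E**2 * math.gamma(2*zeta + 1)**2 / math.gamma(4*zeta + 2)
assert abs(amp - 0.055658) < 1e-3
```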
General scaling function
We now consider the general scaling limit of the generating function G(u, v; z) for the moments of h_C(i, n). The correct ansatz is

$$G(u,v;z) \underset{u,v\to1}{=} \tau^{-2}\, G(\bar z, \omega) \qquad (66)$$

with

$$\bar z = z\,\tau^{-\zeta}\,, \qquad \tau = 1-\frac{u+v}{2}\,, \qquad \omega = \left(\frac{v-u}{2-u-v}\right)^{2} \qquad (67)$$
In the scaling limit we obtain for G(z̄, ω) the equation

$$\left[2\omega^2\frac{\partial^2}{\partial\omega^2} + \frac{\zeta^2}{2}\,\bar z^2\frac{\partial^2}{\partial\bar z^2} + 2\zeta\,\omega\,\bar z\,\frac{\partial^2}{\partial\omega\,\partial\bar z} + \frac{3-7\omega}{1-\omega}\,\omega\frac{\partial}{\partial\omega} + \frac{1-(1+\zeta)\omega}{1-\omega}\,\bar z\frac{\partial}{\partial\bar z} + \frac{1-3\omega}{1-\omega}\right] G(\bar z,\omega) = G(\bar z,\omega)^2 \qquad (68)$$
Expanding in z̄, we find the scaling limit of the generating functions of the moments of h_C(i, n) via a Taylor expansion

$$G(\bar z,\omega) = \sum_{k=0}^{\infty} \frac{\bar z^k}{k!}\, G_k(\omega) \qquad (69)$$
At order k = 0 and k = 1 we recover our previous results (46)
G 0 (ω) = 1 1 − ω , G 1 (ω) = F 1 (ω)(70)
and for k ≥ 2 recursive second-order linear differential equations for the G k (ω) with coefficients and second members depending on the previous G k ′ (ω) (k ′ < k). From G k (ω) we obtain the scaling form for the moments
h C (i, n) k ∼ n→∞ n kζ H k (x) , x = i n(71)
The scaling function H k (x) is related to G k (ω) by the integral transformation
G k (ω) = Γ(2 + kζ) 1 0 dx H k (x) 1 − √ ω + 2x √ ω −(2+kζ) .(72)
which generalises (43).
Simple scaling or multifractality?
Studying the roughness properties of the height function in the scaling limit, we are naturally led to the question of multifractality. We shall argue that within our model the height-profile statistics is solely governed by the scaling exponent ζ. This excludes the strong fluctuations which might lead to multifractal scaling. To this end, let us consider the moments of the local height variations
∆h k = |h C (i, n) − h C (j, n)| k(73)
In the scaling limit n → ∞, we expect a relation of type
|h C (i, n) − h C (j, n)| k ∝ |i − j| ζ k(74)
in the regime 1 ≪ |i − j| ≪ n. If ζ_k = kζ there is simple scaling, whereas ζ_k > kζ (at least for large enough k) implies multifractal behaviour. We now argue that we are in the first case, and that there is no evidence for multifractality. Indeed, it is easy to show (using the height picture, and using translation invariance to move the point i to the origin of the strand) that the following general inequality holds
|h C (i, n) − h C (j, n)| k ≤ |h C (ℓ, n)| k , ℓ = |j − i|(75)
We know from (71) that for ℓ ≪ n this scales as
|h C (ℓ, n)| k ∝ n kζ H k (ℓ/n)(76)
In the limit n → ∞, ℓ finite, |h C (ℓ, n)| k remains finite, since it is bounded by |ℓ| k . This implies that H k (x) should behave for small x as
H k (x) ≃ x→0 x kζ .(77)
This can be shown more rigorously using (68) for the generating function G(z, ω) of the G k and the integral relation (72) between the G k and the H k . The small-x behavior of H k (x) is related to the ω → 1 behavior of G k (ω). One can check from (68) that the function G(z, ω) must behave when ω → 1 as
G(z, ω) ∼ ω→1 Ω(z) 1 − ω + O(log(1 − ω))(78)
Using (72), this implies that
G k (ω) ∼ ω→1 Ω k 1 − ω ⇒ H k (x) ≃ x→0 x kζ(79)
We conclude that
|h C (i, n) − h C (j, n)| k ≤ const. |i − j| kζ , for 1 ≪ ℓ ≪ n.(80)
which implies that ζ_k ≤ kζ. However, we know that ζ_k ≥ kζ from general correlation inequalities. Hence it follows
ζ k = k ζ(81)
which proves the absence of multifractal behaviour, at least for the moments of |h_C(i, n) − h_C(j, n)|.
Corrections to scaling
We can study the corrections to scaling for the height function ⟨h_C(i, n)⟩. Let us come back to equation (21) for the generating function F(u, v) defined in (20). A particular solution of (21) is
F 0 (u, v) = − 1 (1 − u)(1 − v)(82)
Thus the general solution of (21) is of the form
F (u, v) = F 0 (u, v) + C + F + (u, v) + C − F − (u, v)(83)
where F₊(u, v) and F₋(u, v) are two linearly independent solutions of the homogeneous linear equation

$$\left[\frac{u^2}{2}\frac{\partial^2}{\partial u^2} + \frac{v^2}{2}\frac{\partial^2}{\partial v^2} + uv\,\frac{\partial^2}{\partial u\,\partial v} - \frac{u^2}{1-u}\left(u\frac{\partial}{\partial u}+1\right) - \frac{v^2}{1-v}\left(v\frac{\partial}{\partial v}+1\right) - \frac{2uv}{(1-u)(1-v)}\right] F(u,v) = 0 \qquad (84)$$
It is possible to go over to the scaling variables τ and ω used in equations (41) and (66):
u = 1 − τ (1 + y) , v = 1 − τ (1 − y) , ω = y 2(85)
and to take for F₊ and F₋ the solutions which can be written respectively as

$$F_+(u,v) = \tau^{-2-\zeta_+}\, \tilde F_+(\tau,\omega)\,, \qquad F_-(u,v) = \tau^{-2-\zeta_-}\, \tilde F_-(\tau,\omega)\,, \qquad \zeta_\pm = \frac{\pm\sqrt{17}-3}{2} \qquad (86)$$
(ζ₊ = ζ is the roughness exponent), such that F̃₊(τ, ω) and F̃₋(τ, ω) have an asymptotic expansion in powers of τ in the scaling limit τ → 0, and are regular in the domain ω ∈ [0, 1[. Indeed, (84) becomes for F̃± the linear equation

$$\Big[-2(\tau-1)^2\tau\,\zeta_\pm + 2\tau^3\omega^2(2+\zeta_\pm) - 2(\tau-1)\,\omega\,\big(4+2\tau^2+\zeta_\pm+2\tau(1+\zeta_\pm)\big)\Big]\,\tilde F_\pm(\tau,\omega) + 2\Big[-3+2\tau^3(\omega-1)^2-2\zeta_\pm-2\tau(\omega-1)\zeta_\pm+\omega(7+2\zeta_\pm)\Big]\,\omega\,\frac{\partial}{\partial\omega}\tilde F_\pm(\tau,\omega) + \Big[-2\tau^3\omega^2 - 2(\tau-1)\,\omega\,\big(-2+\tau(\zeta_\pm-1)-\zeta_\pm\big) + 2(\tau-1)^2(1+\tau+\zeta_\pm)\Big]\,\tau\,\frac{\partial}{\partial\tau}\tilde F_\pm(\tau,\omega) + 4(\omega-1)\,\omega^2\,\frac{\partial^2}{\partial\omega^2}\tilde F_\pm(\tau,\omega) + 4(\tau-1)\,\tau\,\omega\,\frac{\partial^2}{\partial\tau\,\partial\omega}\tilde F_\pm(\tau,\omega) + (\tau-1)^2\tau^2\,\frac{\partial^2}{\partial\tau^2}\tilde F_\pm(\tau,\omega) = 0 \qquad (87)$$

Note that in the scaling variables the particular solution reads
F 0 (u, v) = −τ −2 (1 − ω) −1 .(88)
Although not simple, (87) implies that its solutions can be expanded in powers of τ. Indeed, let us expand the functions F̃±(τ, ω) in τ:

$$\tilde F_\pm(\tau,\omega) = \sum_{k=0}^{\infty} \frac{\tau^k}{k!}\, \tilde F^{(k)}_\pm(\omega) \qquad (89)$$
Setting τ = 0 in (87) fixes the equation for the dominant term to

$$2\omega(\zeta_\pm+4)\,\tilde F^{(0)}_\pm(\omega) + 2\big((7+2\zeta_\pm)\omega-(3+2\zeta_\pm)\big)\,\omega\,\frac{\partial}{\partial\omega}\tilde F^{(0)}_\pm(\omega) + 4(\omega-1)\,\omega^2\,\frac{\partial^2}{\partial\omega^2}\tilde F^{(0)}_\pm(\omega) = 0 \qquad (90)$$
This is nothing but the hypergeometric differential equation (45) for the scaling function F (ω) obtained previously. Its solution is thus
$$\tilde F^{(0)}_\pm(\omega) = {}_2F_1\big(1+\zeta_\pm/2,\, (3+\zeta_\pm)/2,\, \zeta_\pm+3/2;\, \omega\big) \qquad (91)$$
The expansion in τ gives a hierarchy of hypergeometric-like differential equations for the correction-to-scaling functions F̃⁽ᵏ⁾±(ω), with a non-zero right-hand side involving the previous scaling functions F̃⁽ᵏ′⁾±(ω), 0 ≤ k′ < k. It is easy to check that these equations admit a unique solution F̃⁽ᵏ⁾±(ω) that is regular in the domain ω ∈ [0, 1[.
The important result is that, using the inverse transformation F (u, v) → h C (i, n) , the average height function takes the general form
$$\langle h_C(i,n)\rangle = n^{\zeta_+} \sum_{k=0}^{\infty} n^{-k}\, H^{(k)}_+(x) + n^{\zeta_-} \sum_{k=0}^{\infty} n^{-k}\, H^{(k)}_-(x) \;-\; 1 \qquad (93)$$
where the dominant term is the scaling function obtained in (48)
H (0) + (x) = H 1 (x) = E x ζ (1 − x) ζ(94)
the leading subdominant term is the last term, −1, in (93). In fact, since the leading order scales like n^ζ and therefore grows only slowly with n, the correction −1 turns out to be important even at n = O(10³), a case that we shall consider below. The subleading corrections H⁽ᵏ⁾±(x) follow from the correction-to-scaling functions F̃⁽ᵏ⁾±(ω).
Pairing probabilities
Up to now we have focused on the height function and its scaling laws. However, for the original RNA problem, pairing probabilities constitute the more natural objects. In this section, we compute the single-base pairing probability as well as the scaling limit of the pairing probability P(i, j).
Single-base pairing probability
As mentioned above, h(1, n) is the probability that a given base is involved in a pair. We obtain the generating function g(v) of h(1, n) from F(u, v), introduced in (20), through differentiation:

$$g(v) := v\, \frac{\partial F(u,v)}{\partial u}\bigg|_{u=0} = \sum_{n=0}^{\infty} h(1,n)\, v^n \qquad (95)$$
According to (21), it is solution of the ordinary differential equation
(1 − v)g ′′ (v) − 2vg ′ (v) = 2 1 − v , g(0) = g ′ (0) = 0.(96)
The initial conditions are due to the fact that h(1, 0) = h(1, 1) = 0. For our purposes, it is sufficient to solve for g ′ (v) and compare to the derivative of the series expansion (95):
g ′ (v) = 1 − e −2v (1 − v) 2 = ∞ n=1 nh(1, n)v n−1(97)
Comparison of the series development on both sides leads us to the explicit expression

$$h(1,n) = -\sum_{k=0}^{n-1} \frac{(-2)^{k+1}}{(k+1)!} + \frac{1}{n} \sum_{k=0}^{n-1} \frac{(-2)^{k+1}}{k!} \qquad (98)$$
In the limit of large strands n → ∞, the series converges and we obtain
lim n→∞ h(1, n) = 1 − 1 e 2(99)
For large n ≫ 1, the corrections to this result can be determined as follows:
h(1, n) = 1 − 1 e 2 − 2 ne 2 + r(n) , |r(n)| ≤ 2 n n! ln n ,(100)
where the bound is obtained by approximating the remaining terms in the sum by an integral. We shall reconsider this probability later when comparing the arch deposition model to a tree-growth model.
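Direct evaluation of the closed form (98) confirms both the limit (99) and the 1/n correction in (100); the running-term recursion below avoids factorial overflow:

```python
import math

def h1(n):
    """Single-base pairing probability h(1, n), eq. (98)."""
    s1 = s2 = 0.0
    t = 1.0                        # t = (-2)^k / k!
    for k in range(n):
        s2 += -2.0 * t             # accumulates (-2)^(k+1) / k!
        t *= -2.0 / (k + 1)        # t -> (-2)^(k+1) / (k+1)!
        s1 += t                    # accumulates (-2)^(k+1) / (k+1)!
    return -s1 + s2 / n

print(h1(2), h1(3))                # 1.0 and 2/3: small strands check out
limit = 1 - math.exp(-2)           # eq. (99) = 0.864664...
assert abs(h1(1000) - (limit - 2 / (1000 * math.e**2))) < 1e-6
```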
Scaling law for the pairing probability P (i, j)
In this section we compute the scaling function P as defined in (11). First of all, P (i, j) is indeed only a function of the distance -despite the fact that we have singled out an origin. Therefore we expect for large n the scaling form (11),
P (i, j) ∼ n→∞ n −ρ P |i − j| n(101)
With this ansatz, relation (6) in terms of the scaling functions for the height field and the pairing probability turns into the integral equation
n ζ H 1 (z) = n 2−ρ z−ǫ 0 ds 1 z+ǫ dt P(t − s)(102)
This identifies
ρ = 2 − ζ .(103)
The dimensionless scaling functions for height and pairing probability are then related by:
H 1 (z) = z−ǫ 0 ds 1 z+ǫ dt P(t − s)(104)
Note that we have introduced a small ultraviolet cutoff ǫ in order to circumvent possible singularities as t − s → 0. Upon differentiating twice, we find the harmless expression
H ′′ 1 (z) = −(P(1 − z + ǫ) + P(z + ǫ)) = ǫ→0 −(P(1 − z) + P(z))(105)
It is clear that the limit ǫ → 0 will not lead to any problems since potentially divergent terms have canceled out. Since P(z) is invariant with respect to the transformation z → 1 − z, it is possible to deduce its exact form
$$\mathcal{P}(z) = -H_1''(z)/2 = E\, \frac{\zeta}{2}\, z^{\zeta-2}(1-z)^{\zeta-2}\, \big[\, 1-\zeta + 2(2\zeta-1)\, z(1-z) \,\big]. \qquad (106)$$
We see that P(z) factorises into a beta law with characteristic exponent ρ = 2 − ζ and a polynomial correction. Note that a pure beta law is obtained if and only if ζ = 1/2; this corresponds to the RNA homopolymer roughness exponent. Since the amplitude E is known from (49), we have entirely characterised the scaling law.
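As a sanity check, the relation P(z) = −H₁″(z)/2 can be verified numerically with a central second difference (the step ε and the test point are arbitrary choices of ours):

```python
import math

zeta = (math.sqrt(17) - 3) / 2
E = 1.45717                          # amplitude, eq. (49)

def H1(x):
    return E * (x * (1 - x))**zeta   # eq. (48)

def P(z):
    # eq. (106); symmetric under z -> 1 - z
    return (E * zeta / 2 * (z * (1 - z))**(zeta - 2)
            * (1 - zeta + 2*(2*zeta - 1)*z*(1 - z)))

z, eps = 0.37, 1e-4
H1pp = (H1(z + eps) - 2*H1(z) + H1(z - eps)) / eps**2   # central 2nd difference
assert abs(-H1pp/2 - P(z)) < 1e-4 * abs(P(z))
```

Note that the amplitude E cancels in the comparison, so the check is independent of its numerical value.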
Numerical simulations
Numerical simulations not only provide a verification of our analytical results in the scaling limit n → ∞ but are useful in order to quantify finite-size corrections. In this section, we present a simple algorithm for random generation of hierarchical arch structures. Furthermore, we compare the statistics obtained from random sampling to the exact solutions.
Outline of the algorithm
We give a description of the algorithm which we have used to generate hierarchical structures C. Information about arches is stored in the "adjacency matrix" Φ_C(i, j). In order to take into account the planarity condition, we label each base i = 1, . . . , n with a "colour" c(i) ∈ Z. During the construction process, two bases i, j may be linked by an arch if and only if c(i) = c(j). Furthermore, we introduce two special colours: if a structure C contains an arch (i, j), we colour its endpoints with c(i) = 1 and c(j) = −1 (which turns out to be convenient).
The deposition of arches is carried out in the following way. Initially all colours are set to c(k) = 0, k = 1, . . . , n, and all entries of the adjacency matrix to Φ_C(i, j) = 0, i, j = 1, . . . , n. First, we randomly choose a base i among all unpaired bases. Next we collect all bases k which may be paired to i without violating the planarity constraint, i.e. with the same colour c(k) = c(i), k ≠ i, in an ordered list ℓ. If ℓ is empty, the point may be removed from the set of unpaired points and the procedure restarted. From the list ℓ of compatible bases we randomly choose a second base j. For simplicity, let us suppose that i < j (the converse case is similar). We store information about the newly created arch (i, j) by setting Φ_C(i, j) = 1. Moreover, we label the starting point and the endpoint of the arch with colours c(i) = 1 and c(j) = −1. Finally, in order to mark the new substructure due to the insertion of this arch, we set c(k) = i + 1 for all i < k < j. We repeat the procedure until no more points can be paired without violating planarity. See figure 6 for an illustration of a single cycle. Once this procedure is finished, the matrix Φ_C(i, j) contains all information about the structure. To compute the height field, the colours c(i) may be used as well: if we set c(i) = 0 for all i such that c(i) > 1, then we obtain a sequence {c(i)}, i = 1, . . . , n, with entries 0, ±1. It precisely corresponds to the discrete derivative of the height function, c(i) = h_C(i, n) − h_C(i − 1, n). Therefore, we can reconstruct

$$h_C(i,n) = \sum_{k=1}^{i} c(k) \qquad (107)$$
For a given strand of length n, we perform this construction N times in order to average over the samples. This algorithm is a variant of the point process to be discussed in section 7. Though not dynamically equivalent to the arch deposition model, the key feature of partitioning into independent substructures leads to the same final probability law in configuration space.
Results
We have constructed structures with up to n = 6500 bases in order to test our theoretical predictions. For n ≤ 200 bases, we have sampled 10 6 structures whereas for n > 200 bases, 10 5 structures per data point were sampled.
Single-base pairing probability h(1, n): Results for the probability that a base is paired, which equals the height h(1, n), are presented in figure 7a. We find agreement with the theoretical prediction from (98) within error bars. In order to compare our data to the results of section 4.3.1 on averaged k-moments, we evaluate the height fluctuations Δ² = Σ_{k=1}^{n} (⟨h_C(k, n)²⟩ − ⟨h_C(k, n)⟩²)/n in figure 8b. The data shows good agreement with the prediction from (65). We examine the deviation of the height function h(k, n) from the scaling limit by evaluating its value at k = n/2. Figure 10 shows the rescaled deviation n^{−ζ}(h(n/2, n) − n^ζ H₁(1/2) + 1). Numerically, we find

$$n^{-\zeta}\,\big[\, h(n/2,\, n) - n^{\zeta} H_1(1/2) + 1 \,\big] = O(n^{-1})\,, \qquad (108)$$
in agreement with the scaling form (93).
Growth model
The model considered in the previous sections (model A) is a deposition model. The size of the system is fixed; to study the folding of a strand with n bases, we start from a set of n unoccupied points {1, · · · , n} on the line and successively deposit arches in a planar way until the system is full (no deposition possible). Systems with different size n and n ′ are a priori different. In sections 7.1 and 7.2, we show that this arch deposition model A is equivalent to a stochastic growth model G for arch systems, where we start from a system with no points. At each time step t we deposit a new point according to a simple stochastic process, and create a new arch whenever it is possible. We show that in the growth model G the statistics for the arches at time t is the same as the statistics of arches of the deposition model A for a system of N = t points.
In section 7.3, we shall also show that this stochastic growth process can be reformulated (by a simple geometric duality) as a tree growth process T. In section 7.4, as an application, we compute in a simple way local observables of these models, such as the asymptotic (at large time) distribution of the number of branches for a vertex of the growing tree, which is related to the asymptotic distribution of substructures (maximal arches) in the arch model. Finally, in section 7.5, we study the dynamics of this growth model, and compute time-dependent pairing correlation functions.

At time t = 0 we start from a closed line (a circle) with no point. At time t = 1 we deposit a point on the circle. Assume that at time t we have already deposited t points and constructed a maximal planar arch system between these points. Namely, there are n_a(t) arches and n_f(t) = t − 2n_a(t) free points, such that it is impossible to construct a new arch linking 2 free points without crossing an already constructed arch (planarity condition).
At time t + 1 we deposit a (t + 1)th point, with equiprobability 1/t on the t intervals separating the t already deposited points. If it is possible to draw a planar arch between this last point and one of the free points (i.e. an arch which does not intersect one of the existing arches) we add this arch (it is clear that this arch is unique, otherwise the existing planar arch system at time t would not be maximal). Otherwise the new point stays free.

In the open-strand model G' the points are deposited on an open line instead. Assuming that t points have already been deposited and form a maximal planar arch system, we then deposit a (t + 1)th point, with equiprobability 1/(t + 1) on the t + 1 intervals separated by the t already deposited points. If it is possible to draw a planar arch between this last point and one of the free points we add this arch, otherwise the new point stays free.
Note that model G' can also be viewed as model G with an additional inactive point, marking the cut.
Relation between model G and model G'
It is clear that a configuration C̄ of model G can be obtained from a configuration C of model G' by closing the line, and that all configurations C that are equivalent under a discrete rotation give the same C̄ (see figure 14). In other words, the configurations C̄ of model G are the Z_n-orbits of the configuration space of G' under the action of discrete rotations.
Equivalence between the growth model G' and the deposition model A
It is clear that the arch configurations C of model G' are the same as the arch configurations of model A. It is less obvious that the probability of each configuration is the same in both models.
Theorem: The probability P(C) of any configuration (i.e. class of diagrams) C is the same in models G' and A:
$$P_A(C) = P_{G'}(C). \tag{115}$$
To prove the theorem we start from the recursion equation (8) for the configuration probabilities in the arch-deposition model A, which we obtained in section 2.1 and rewrite here for completeness:
$$P_A(C) = \sum_{\mathrm{arch}\ a \in C} \frac{2}{n(n-1)}\, P_A(C_1)\, P_A(C_2). \tag{116}$$
This recursion relation, together with the initial condition P = 1 for the n = 0 and the n = 1 configurations (no point and a single free point), is sufficient to obtain all probabilities.

Any process is equiprobable, therefore the probability for any x is p(x) = 1/n!. It is equivalent to successively create the arches as soon as this is possible, or to create all the arches at time n, with the constraint that any point x(i) can only be connected to the points x(j) with j < i. To any permutation x is associated a unique arch system C, and the probability for C is
$$P_{G'}(C) = \frac{1}{n!} \times (\text{number of } x \to C) = \frac{1}{n!}\, \mathrm{card}\{x : x \to C\}. \tag{118}$$
Two configurations C and D of model G' are equivalent in model G if they are related by some Z_n rotation r:
$$C \equiv D \iff \bar{C} = \bar{D}. \tag{119}$$
This means that if x is a permutation for C, then y = r ∘ x is a permutation for D. Hence there are as many permutations for C as for D:
$$C \equiv D \implies P_{G'}(C) = P_{G'}(D). \tag{120}$$
We now count the number of configurations C which are equivalent by rotation and give C̄. This is obviously
$$\mathrm{card}\{C : C \to \bar{C}\} = \frac{n}{s(C)}. \tag{121}$$
Now we go back to the model G. Any point deposition process on the circle is also in bijection with a permutation, but now with one point fixed, for instance x(1) = 1. Therefore
$$P_G(\bar{C}) = \sum_{C \to \bar{C}} P_{G'}(C) = \frac{n}{s(C)}\, P_{G'}(C). \tag{122}$$

We now go back to the proof of the theorem. In model G any configuration C̄ (with n points) can be constructed by first depositing a couple of points (1, 2), which form a first arch a₁, and then by depositing n₁ points to the right of a₁ and n₂ points to the left, with of course n₁ + n₂ = n − 2. Let us denote by C₁ and C₂ the arch configurations to the right and to the left of a₁ in C̄. These configurations are arch configurations of model G', not of model G, since the arch a₁ cuts the circle into two segments. Once the first two points are deposited, amongst the (n − 1)! possible ways to deposit successively the last n − 2 points, each either to the left or to the right of a₁, there are (n − 2)! possible ways to deposit n₁ points to the right and n₂ points to the left, independently of (n₁, n₂). In other words, the distribution for (n₁, n₂) is uniform:
$$\mathrm{prob}(n_1, n_2) = \frac{1}{n-1}, \qquad n_1 + n_2 = n - 2. \tag{123}$$
This can be shown easily by using the recursion relation
$$\mathrm{prob}(n_1, n_2) = \mathrm{prob}(n_1 - 1, n_2)\, \frac{n_1}{n_1 + n_2 + 1} + \mathrm{prob}(n_1, n_2 - 1)\, \frac{n_2}{n_1 + n_2 + 1}, \tag{124}$$
with initial condition prob(0, 0) = 1. Once this is done, the conditional probabilities to obtain C 1 and C 2 are independent, and given by P G ′ (C 1 ) and P G ′ (C 2 ). The total probability to obtain a configuration C in model G is therefore given by a sum over all (first) arches a 1 in C. Each term of the sum is the probability that a 1 is the first deposited arch, and that one obtains C 1 and C 2 in process G'. There is a counting factor 2/s(C) associated to each initial arch a 1 , where the factor of 2 accounts for the two possible choices for the first point 1 on a 1 , and the symmetry factor 1/s(C) is there to avoid multiple counting when several arches are equivalent. Therefore we have finally
$$P_G(\bar{C}) = \sum_{\mathrm{arches}\ a \in \bar{C}} \frac{2}{s(C)}\, \frac{1}{n-1}\, P_{G'}(C_1)\, P_{G'}(C_2). \tag{125}$$
Using Lemma (117) the symmetry factor disappears and we obtain for the probability in model G' the recurrence equation
$$P_{G'}(C) = \sum_{\mathrm{arch}\ a \in C} \frac{2}{n(n-1)}\, P_{G'}(C_1)\, P_{G'}(C_2). \tag{126}$$
This is exactly the same recurrence relation as for model A. The initial conditions are the same for n = 0 and n = 1, which proves the theorem.
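For small n the theorem can also be verified by brute force. The sketch below (illustrative code, not from the paper; all names are ours) enumerates all n! deposition orders of model G' for n = 4, pairs each newly deposited point with the unique free point reachable by a non-crossing arch when one exists, and compares the resulting exact configuration probabilities with the recursion (116) of model A.

```python
import math
from fractions import Fraction
from itertools import combinations, permutations

def crosses(a1, a2):
    (a, b), (c, d) = sorted(a1), sorted(a2)
    return a < c < b < d or c < a < d < b

def run_G_prime(order):
    """Deposit points (identified by their final positions) in the given
    order; each new point pairs with the unique free point reachable by a
    non-crossing arch, if there is one."""
    free, arches = set(), []
    for p in order:
        cand = [f for f in free if not any(crosses((f, p), a) for a in arches)]
        assert len(cand) <= 1          # uniqueness, as claimed in the text
        if cand:
            free.discard(cand[0])
            arches.append(tuple(sorted((cand[0], p))))
        else:
            free.add(p)
    return frozenset(arches)

def P_A(points):
    """Exact configuration probabilities of model A via recursion (116)."""
    n = len(points)
    if n < 2:
        return {frozenset(): Fraction(1)}
    out = {}
    for ii, jj in combinations(range(n), 2):
        arch = (points[ii], points[jj])
        w = Fraction(2, n * (n - 1))
        for c1, p1 in P_A(points[ii + 1:jj]).items():
            for c2, p2 in P_A(points[:ii] + points[jj + 1:]).items():
                cfg = c1 | c2 | {arch}
                out[cfg] = out.get(cfg, Fraction(0)) + w * p1 * p2
    return out

n = 4
counts = {}
for order in permutations(range(n)):
    cfg = run_G_prime(order)
    counts[cfg] = counts.get(cfg, 0) + 1
P_Gp = {c: Fraction(m, math.factorial(n)) for c, m in counts.items()}
print(P_Gp == P_A(list(range(n))))   # True: the two laws coincide
```

For n = 4 there are four maximal configurations: the two fully paired ones, each with probability 1/3, and the two with a single arch and two unpairable free points, each with probability 1/6.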
Equivalent tree growth processes
Duality with trees
There is a well-known dual description of planar arch systems in terms of planar trees. Represent faces by vertices, and arches by links between two vertices. In our model, we also have free points deposited on the external circle but not yet linked to another point by an arch. Every face of the planar arch system has at most one such free point. We represent such a face by a white vertex ◦ with a white arrow pointing towards the free point. Every face with no free point is represented by a black vertex •. We thus obtain a dual description in terms of decorated planar trees with at most one arrow per vertex (see figure 17). Within this dual description, the model G is a planar tree growth model T defined as follows.
7.3.2. Tree growth processes T

At t = 0 we start from the tree with a single black vertex and no link. We define the tree-growth process as follows. As illustrated in figure 18, at each time step we either add an arrow to any black vertex, so that it becomes a white vertex (for a black vertex with k links, i.e. a k-vertex, there are k different ways to add an arrow);
or add a second arrow to a white vertex (for a white vertex with k legs there are k + 1 different possibilities), and then transform this white vertex into two black vertices with a new link orthogonal to the two arrows.
See figure 18 for an illustration. This internal budding process is a specific feature of our growth model.
Another tree growth process T'
A similar growth process is obtained if we forget about the arrow position for white vertices. One considers trees with black vertices • and white vertices ◦ if there is an arrow (see figure 19). Indeed it is easy to see that at each step the position of the arrow around a white vertex is equiprobable, i.e. there is a probability 1/k for an arrow to be at a given position on a white k-vertex.

Figure 19. The position of an arrow around a white vertex is uniformly distributed.

With this property, we consider undecorated trees made out of •- and ◦-vertices. We start from a single • vertex at time t = 0. At each time step we either transform a black k-vertex into a white vertex with probability weight w_{•→◦} = k (where k is the coordination number of the black vertex);
or transform a white k-vertex into a pair of black vertices, one k₁-vertex and one k₂-vertex, with k₁ + k₂ = k + 2 (and k₁, k₂ > 0), with a uniform weight w_{◦→••} = 2/k for each occurrence (since for each ordered pair (k₁, k₂) there are k/2 possible ways to split ◦ → ••, and there are k + 1 such ordered pairs).
Let us denote the number of •- and ◦-vertices at time t by n_•(t) and n_◦(t) respectively. For any tree created through this growth process up to time t, the Euler relation holds:
$$2\, n_\bullet(t) + 3\, n_\circ(t) = t + 2. \tag{128}$$
This relation is proven by induction. For t = 0, n_◦ = 0 and n_• = 1. During a time step t → t + 1 we either have (n_•, n_◦) → (n_• − 1, n_◦ + 1), or (n_•, n_◦) → (n_• + 2, n_◦ − 1). In both cases 2n_• + 3n_◦ increases by 1, as does t + 2.

The transition probability p_{C→C'} to go from C to C' is defined from the probability weights w_{C→C'} by
$$p_{C \to C'} = \frac{w_{C \to C'}}{\sum_{C''} w_{C \to C''}}. \tag{129}$$
Thus the probability P(C, t) at time t to be in a configuration C is obtained recursively by
$$P(C, t) = \sum_{C'} p_{C' \to C}\, P(C', t-1) = \sum_{C'} \frac{w_{C' \to C}}{\sum_{C''} w_{C' \to C''}}\, P(C', t-1). \tag{130}$$
Note also that the dual of an arch configuration C in model G' is a rooted tree T rooted (decorated with white arrows, or with black and white vertices as explained above). Thus model G' is dual to a growth model for rooted trees (see figure 21).
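The growth rules of model T' are straightforward to simulate. The following sketch (an illustrative implementation of the stated weights, not the authors' code; the names are ours) tracks the list of (colour, coordination) pairs, checks the Euler relation (128) on the final tree, and measures the fraction of white vertices, which the mean-field analysis of the next section predicts to be e⁻² ≈ 0.135 (see (144)).

```python
import random

def grow(t_max, rng):
    """Simulate the tree growth process T'. Vertices are (colour, k) pairs,
    colour 'b' (black) or 'w' (white), k the coordination number. A black
    k-vertex carries weight k, a white k-vertex weight k + 1; a selected
    black vertex turns white, a selected white k-vertex splits into two
    black vertices of coordinations k1 and k2 = k + 2 - k1, k1 uniform."""
    verts = [('w', 0)]  # the step t = 0 -> 1 is forced: the initial black
                        # 0-vertex acquires an arrow and becomes white
    for t in range(1, t_max):
        weights = [k if c == 'b' else k + 1 for (c, k) in verts]
        i = rng.choices(range(len(verts)), weights=weights)[0]
        c, k = verts[i]
        if c == 'b':
            verts[i] = ('w', k)
        else:
            k1 = rng.randint(1, k + 1)
            verts[i] = ('b', k1)
            verts.append(('b', k + 2 - k1))
    n_b = sum(1 for (c, _) in verts if c == 'b')
    assert 2 * n_b + 3 * (len(verts) - n_b) == t_max + 2   # Euler relation (128)
    return verts

rng = random.Random(42)
T, RUNS = 2000, 4
omega = sum(sum(1 for (c, _) in grow(T, rng) if c == 'w')
            for _ in range(RUNS)) / (RUNS * T)
print(omega)   # mean-field prediction (144): e**-2 ~ 0.1353
```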
Mean-field calculation of local observables
In this section we present a mean-field theory for the point deposition model G. It allows to compute local observables defined on the tree structures such as average vertex densities, coordination numbers etc. in the large-size or long-time limit. More precisely, the mean-field approximation amounts to neglect fluctuations in the limit t → ∞ which are of order 1/t. This allows to transform the problem into a stationary process.
Vertex densities
The simplest observables are local ones, such as the average number of vertices of a given type. Let us denote by n_•(k, T) and n_◦(k, T) the total number of k-vertices of type • and ◦ in a tree configuration T obtained at time t, starting from • at time t = 0. We have shown above in (128) that at any time t and for any tree configuration T
$$\sum_{k=1}^{\infty} \big( 2\, n_\bullet(k, T) + 3\, n_\circ(k, T) \big) = t + 2. \tag{131}$$
Starting from T at time t, the total number of weighted moves t → t + 1 (i.e. of ways to add a new point in the dual configuration) is
$$\Sigma(t) = \sum_{k=1}^{\infty} \big( k\, n_\bullet(k, T) + (k+1)\, n_\circ(k, T) \big) = t. \tag{132}$$
For simplicity we denote by b_k(t) = ⟨n_•(k, T)⟩_t and w_k(t) = ⟨n_◦(k, T)⟩_t the average numbers of black and white k-vertices at time t. We write down a master equation for their time evolution during a step t → t + 1. To this end, we have to evaluate the transition probabilities for transformations of black and white vertices. During the time step t → t + 1 the probability for a given black k-vertex to become white is
$$p(\bullet \to \circ) = \frac{k}{\Sigma(t)} = \frac{k}{t}. \tag{133}$$
Similarly, the probability for a given white k-vertex to split into a pair of black k₁- and k₂-vertices (with k₁ + k₂ = k + 2) is
$$p(\circ_k \to \bullet_{k_1} \bullet_{k_2}) = \frac{k+1}{\Sigma(t)} = \frac{k+1}{t}. \tag{134}$$
Hence the master equations for the vertex numbers are given by
$$b_k(t+1) = b_k(t) + \frac{1}{t} \Big( -k\, b_k(t) + 2 \sum_{q \ge k-1} w_q(t) \Big), \tag{135}$$
$$w_k(t+1) = w_k(t) + \frac{1}{t} \Big( k\, b_k(t) - (k+1)\, w_k(t) \Big). \tag{136}$$
In the large time limit t → ∞ we expect the vertex numbers b_k(t) and w_k(t) to be extensive, i.e. proportional to t. Therefore, we define (assuming that the limit exists) the densities of black and white vertices as
$$\beta_k = \lim_{t \to \infty} \frac{b_k(t)}{t} \qquad \text{and} \qquad \omega_k = \lim_{t \to \infty} \frac{w_k(t)}{t}. \tag{137}$$
Consequently, from the master equations we find two coupled recurrence equations
$$\beta_k = \frac{2}{k+1} \sum_{q \ge k-1} \omega_q, \qquad \omega_k = \frac{k}{k+2}\, \beta_k, \tag{138}$$
whereas relation (132) implies
$$\sum_{k} \big( k\, \beta_k + (k+1)\, \omega_k \big) = 1. \tag{139}$$
The solution of these equations for the densities is (see Appendix A for the derivation)
$$\beta_k = \frac{1}{e^2}\, \frac{2^k}{(k+1)!}, \qquad \omega_k = \frac{1}{e^2}\, \frac{k\, 2^k}{(k+2)!}. \tag{140}$$
Results for vertices and related local observables
The explicit expressions for the vertex densities allow us to determine some interesting quantities. For a system with t bases, the average numbers of black and white vertices are
$$n_\bullet(t) = t \sum_{k>0} \beta_k = t\, \frac{e^2 - 3}{2 e^2} \tag{141}$$
and
$$n_\circ(t) = t \sum_{k>0} \omega_k = t\, \frac{1}{e^2}. \tag{142}$$
Therefore, the average number of vertices is given by
$$n_\bullet(t) + n_\circ(t) = t\, \frac{1 - e^{-2}}{2}. \tag{143}$$
We are already familiar with this expression: because of the duality between trees and arch diagrams, the number of vertices equals (up to one) the number of arches, so (143) is the average number of arches in the large strand limit. Twice this, t(1 − e^{-2}), is nothing but the number of bases (here t) times the single base pairing probability (99). In fact, this observation is consistent with the value for the fraction of white vertices
$$\omega = \frac{n_\circ}{t} = \sum_{k>0} \omega_k = e^{-2} = 0.135335\ldots \tag{144}$$
because of the relation lim_{n→∞} h(1, n) = 1 − ω.
In order to learn more about the average tree structure, we compute the average coordination numbers in mean-field theory:
$$\langle k \rangle_\bullet = \frac{\sum_{k>0} k\, \beta_k}{\sum_{k>0} \beta_k} = \frac{e^2 + 1}{e^2 - 3} = 1.91136\ldots, \qquad \langle k \rangle_\circ = \frac{\sum_{k>0} k\, \omega_k}{\sum_{k>0} \omega_k} = \frac{e^2 - 3}{2} = 2.19453\ldots \tag{145}$$
On average vertices have two legs. The probability for a branching, i.e. the probability to have a vertex with at least 3 legs, is
$$\frac{\sum_{k=3}^{\infty} (\omega_k + \beta_k)}{\sum_{k=1}^{\infty} (\omega_k + \beta_k)} = \frac{3 e^2 - 17}{3\, (e^2 - 1)} = 0.269584\ldots \tag{146}$$
More specifically, the probabilities p(k) for a vertex (black or white) to have coordination number k are: p(1) = 0.41738, p(2) = 0.313035, p(3) = 0.166952, p(4) = 0.0695634, p(5) = 0.0238503, . . . We thus conclude that branchings (i.e. vertices with at least three legs) are not rare.
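These closed-form values are easy to recompute directly from the densities (140); a small verification sketch (ours, not from the paper):

```python
import math

e2 = math.exp(2)
K = 120
beta  = {k: 2**k / (e2 * math.factorial(k + 1)) for k in range(1, K)}
omega = {k: k * 2**k / (e2 * math.factorial(k + 2)) for k in range(1, K)}

frac_white = sum(omega.values())                               # (144): e^-2
k_black = sum(k * b for k, b in beta.items()) / sum(beta.values())
k_white = sum(k * w for k, w in omega.items()) / sum(omega.values())
total = sum(beta.values()) + sum(omega.values())
branch = sum(beta[k] + omega[k] for k in range(3, K)) / total  # (146)
p = {k: (beta[k] + omega[k]) / total for k in range(1, 6)}     # coordination distribution
print(frac_white, k_black, k_white, branch, p[1])
```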
Substructures and exterior arch statistics
The arch-tree duality allows us to use the tree growth model to analyse the number of substructures of arch diagrams. A substructure is defined as a maximal (or exterior) arch which has no further arch above itself (see figure 22). We characterise an arch diagram by (k, σ), where k denotes the number of substructures (number of maximal arches) and σ = • or ◦ according to whether the root vertex is black or white. We are interested in the large-time probability distribution p(k, σ) that the "state" of the arch system is (k, σ). Consequently, the probability that the arch diagram has k substructures is given by p(k) = p(k, •) + p(k, ◦). A (k, •) state is dual to a tree with a black k-vertex with a marked leg, and similarly for (k, ◦) states. Taking into account the combinatorial factor of k for marking a black k-vertex (or equivalently cutting open a circle at a k-vertex), and a similar factor of k + 1 for a white vertex (remembering that the additional white point, see figure 22, allows for one more option to cut the circle open), the probabilities p(k, •) and p(k, ◦) are proportional to the fractions of black or white vertices respectively:
$$p(k, \bullet) = \frac{k\, \beta_k}{\sum_k \big( k\, \beta_k + (k+1)\, \omega_k \big)} = k\, \beta_k, \tag{147}$$
$$p(k, \circ) = \frac{(k+1)\, \omega_k}{\sum_k \big( k\, \beta_k + (k+1)\, \omega_k \big)} = (k+1)\, \omega_k, \tag{148}$$
where we have used the Euler relation Σ_k (k β_k + (k+1) ω_k) = 1 to simplify the results. Using (140) we obtain
$$p(k, \bullet) = \frac{2^k}{e^2\, (k+1)\, (k-1)!} \qquad \text{and} \qquad p(k, \circ) = \frac{2^k}{e^2\, (k+2)\, (k-1)!}. \tag{149}$$
With the probability distribution for the number k of substructures,
$$p(k) = p(k, \bullet) + p(k, \circ) = \frac{2^k\, k\, (2k+3)}{e^2\, (k+2)!}, \tag{150}$$
we are able to evaluate its moments ⟨k^m⟩ = Σ_k k^m p(k) in order to characterise the arch diagrams. The average number of substructures is ⟨k⟩ = (5e² + 1)/(2e²) ≈ 2.56767, to be compared with the result ⟨k⟩_Catalan = 3 found for Catalan structures [19]. For its variance we find ⟨k²⟩ − ⟨k⟩² = (9e⁴ − 16e² − 1)/(4e⁴) ≈ 1.70408, which is smaller than the corresponding value ⟨k²⟩_Catalan − ⟨k⟩²_Catalan = 4. We therefore conclude that hierarchically constructed structures fluctuate less than generic, equiprobable Catalan structures.
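The quoted moments follow from (150) by straightforward summation; a short numerical check (ours, not from the paper):

```python
import math

e2 = math.exp(2)
K = 80
# substructure-number distribution (150)
p = {k: 2**k * k * (2 * k + 3) / (e2 * math.factorial(k + 2)) for k in range(1, K)}

norm = sum(p.values())                                   # should be 1
k_mean = sum(k * pk for k, pk in p.items())              # (5 e^2 + 1)/(2 e^2)
k_var = sum(k * k * pk for k, pk in p.items()) - k_mean**2
print(norm, k_mean, k_var)
```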
Dynamical correlations
We can also compute dynamical quantities in the tree growth model G (and G'). The dynamics of this model is interesting in its own right, but note that it differs from the arch deposition dynamics of model A. Let us give a few examples.

7.5.1. Probability of non-immediate pairing

Consider process G. Having deposited a point at time t, one might ask for the probability that it does not get paired immediately with a free point already present. This equals the probability that we add an arrow to a black vertex, not to a white one. At large t it is
$$\sum_{k \ge 1} k\, \beta_k = \frac{1 + e^2}{2\, e^2} = 0.567668\ldots \tag{151}$$
Time-dependent pairing probabilities
What is, in model G, the probability Ψ(i, j) that the point deposited at time t₁ = i is paired with the point deposited at time t₂ = j > i, as a function of i and j? This amounts to the following event. At time t = t₁ = i a point is deposited on the circle so that no arch is formed, i.e. a certain black k-node is converted to a white k-node. The probability of this event reads b_k(i) k/i, see (133). This particular node then remains white up to time t = t₂ = j, where it is converted to a pair of black nodes. In the timestep t → t + 1 ≤ j the probability of keeping the white k-node unchanged is 1 − (k + 1)/t, the probability of splitting it is (k + 1)/t, see (134). Thus, if we start from a k-node at time t₁, the probability is
$$\Psi_k(i, j) = b_k(i)\, \frac{k}{i} \left[ \prod_{t=i+1}^{j-1} \left( 1 - \frac{k+1}{t} \right) \right] \frac{k+1}{j}, \tag{152}$$
and the total probability is
$$\Psi(i, j) = \sum_k b_k(i)\, \frac{k}{i} \left[ \prod_{t=i+1}^{j-1} \left( 1 - \frac{k+1}{t} \right) \right] \frac{k+1}{j}. \tag{153}$$
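A useful sanity check on (152)-(153) (our illustrative computation, using the mean-field input b_k(i) ≃ i β_k) is that summing Ψ(i, j) over all later times j recovers the non-immediate pairing probability (151), i.e. a point that is not paired immediately still gets paired eventually with probability one:

```python
import math

e2 = math.exp(-2)
beta = {k: e2 * 2**k / math.factorial(k + 1) for k in range(1, 16)}

i, J = 200, 100000
total = 0.0
for k, bk in beta.items():
    surv = 1.0            # prod_{t=i+1}^{j-1} (1 - (k+1)/t), built up incrementally
    s = 0.0
    for j in range(i + 1, J):
        s += surv * (k + 1) / j       # the white node splits exactly at time j
        surv *= 1.0 - (k + 1) / j
    total += (i * bk) * (k / i) * s   # mean-field input b_k(i) ~ i * beta_k
print(total)   # ~ (1 + e^2)/(2 e^2): the non-immediate pairing probability (151)
```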
It is interesting to consider the large-time, i.e. large-size limit t → ∞ with i, j → ∞, i/j = O(1). Indeed, using b_k(t) ≃ t β_k and
$$\prod_{t=i+1}^{j-1} \left( 1 - \frac{k+1}{t} \right) \approx \left( \frac{i}{j} \right)^{k+1}, \tag{154}$$
the pairing probability takes the scaling form
$$\Psi(i, j) \approx \sum_k \beta_k\, k\, (k+1)\, \frac{i^{k+1}}{j^{k+2}}.$$
Note also that in this limit a point deposited at a finite time i gets paired with probability one. Indeed,
$$\int_i^{\infty} \mathrm{d}j\; \Psi(i, j) = \sum_k k\, \beta_k$$
is the probability (151) of non-immediate pairing, so a point that escapes immediate pairing still gets paired eventually with probability one.
Conclusions and outlook
To summarise, inspired from the subject of RNA folding, we have introduced and studied a growth model of planar arch structures (which can be viewed as an arch deposition process). The construction of arch structures is similar to processes generated by greedy algorithms. The arch-growth model turns out to be amenable to analytical calculations. We have calculated the generating functions for the local height, and their moments. This allowed us to obtain the scaling exponent ζ for the height, the exponent ρ for the pairing probability, the corresponding scaling functions in the limit of long strands n → ∞, as well as finite-size corrections. We also proved the absence of multicriticality. These results were then confirmed by numerical simulations for systems of sizes up to n = 6500.
In a second step, we have defined an equivalent tree-growth model. This model involves growth by vertex splitting as well as by vertex attachment. This growth process allows one to generate RNA configurations with arbitrarily large strands (numbers of bases), and to obtain quantities such as the probability that a point gets paired analytically.
This work leaves open many interesting questions:
-Some properties (e.g. distances on the tree, fractal dimension) are easy to study in the arch-deposition formulation, while some other properties (e.g. substructure statistics) are easier in the tree-growth formulation. It would be interesting to have a better understanding of this fact.
-The equivalence between the arch-deposition process and the tree-growth process is very specific to models A and G. We have not been able to find a tree-growth process which is equivalent to the compact arch-deposition model Ā, although this model Ā is in the same universality class as the non-compact arch-deposition model A.
-Is there a tree-growth process which gives the statistics of planar arches in the high temperature phase where all arch structures have the same probability (i.e. the statistics of the so-called "generic trees" or mean-field branched polymers)?
-Arch structures and trees appear in many problems in physics, mathematical physics, combinatorics, computer sciences, etc., in particular in integrable systems (Razumov-Stroganov conjecture, loop models), random permutations, random matrix models, and interface growth. Are the kinds of models introduced in this article related to these problems?
Finally, since our scaling exponent ζ = ( √ 17 − 3)/2 deviates from the value found for random RNA ζ ≈ 0.66, we conclude that the low-temperature phase of random RNA is governed by rules which are more complicated than the greedy algorithm. It would be interesting to find a refined scheme that yields statistics closer to random RNA in order to comprehend the nature of the glassy phase of random RNA.
Figure 1. RNA molecules, like DNA, are long chain heteropolymers built from four types of nucleotides: adenine (A), uracil (U), guanine (G) and cytosine (C).

Figure 2. (a) Arch diagram for a planar structure. (b) Corresponding height relief, defined in (6).

Figure 3. Building up planar structures via successive arch deposition.

Figure 5. The decomposition used in order to derive the recurrence relation for the average height function h(i, n).
-like double integral equation for the scaling function H(x). It is possible, though tedious, to show that the integral equation allows a solution H(x) ∝ x ζ (1 − x) ζ with the scaling exponent ζ = ( √ 17 − 3)/2. Besides the quite complicated treatment of the integral equation we have found evidence for this scaling form from numerical simulations (see section 6).
F̃_±(ω) is analytic at ω = 0 and regular in the domain ω ∈ [0, 1[ (with a singularity at ω = 1). From this analysis, the coefficients C₊ and C₋ in the full scaling expansion (83) are those already calculated in sect. 3, (35): C₊ = 0.713263 . . . and C₋ = 0.519299 . . .

The functions F_±(x) are distributions on [0, 1] and may in principle be computed from the functions F̃_±(ω) via inverse Laplace transforms.
Figure 6. A single step of the construction of hierarchical structures: (a) random choice of the point i. Point j is chosen amongst all points of the same colour as i (light gray). (b) Once the arch (i, j) is determined, all the points i < k < j are re-coloured in order to mark the new substructure (dark gray).
Figure 7. (a) Deviation of the single base pairing probability h(1, n) from 1 − e^{-2} as a function of n. The dashed line is the theoretical prediction (98). (b) Log-log plot of the average mean height h(n) as a function of the number of bases for 5 ≤ n ≤ 6500. The straight line corresponds to the theoretical prediction from (38), the dashed line indicates the scaling limit. (The error bar is of point size.)

Averaged mean height h(n), and its fluctuations: In figure 7b we compare results for the averaged mean height h(n) to the theoretical prediction in the limit n → ∞. Taking into account all terms of (38) is sufficient to show that the difference between numerical results and theory is of the order of the statistical error, see figure 8a.
Figure 8. (a) Deviations of the averaged mean height h(n) from the theoretical prediction, normalised by the result of (38). Clearly, the deviations are smaller than the errorbars. (b) Log-log plot of the averaged second moment Δ₂. (The error bar is of point size.) The straight line presents the scaling limit.
Averaged height function h(k, n): Results for the averaged height functions are shown in figures 9a,b. In order to point out universal behaviour we plot h(k, n)/n^ζ as a function of x = k/n. In figure 9b, we compare the data to the first-order corrected scaling limit n^{-ζ}(n^ζ H(k/n) − 1), where H(x) denotes the scaling function from (48). The deviations Δh(k, n) = h(k, n) − n^ζ H(k/n) + 1 are large at the ends k = 1 and k = n.

Figure 9. (a) Scaling plot for the height function h(k, n) for two strand lengths n = 100 and n = 6000. The dashed lines correspond to the scaling function plus the first finite-size corrections to the scaling limit n^{-ζ}(n^ζ H(k/n) − 1). (b) Residual scaled deviations Δh(k, n)/n^ζ for different strand lengths n = 50, 100, 200.
Figure 10. Log-log plot of the deviations of the height function h(k, n) at k = n/2 as a function of n. The straight line corresponds to the function n^{-1}.
7.1. Arch growth models via point deposition

7.1.1. Closed-strand growth model G

Let us first define the model G (closed model), illustrated in figure 11.

Figure 11. Model G: successive deposition of points on a circle. Unlinked points are marked in white, linked points in black.
Figure 13. Model G': successive deposition of points on an open line. Unlinked points are marked in white, linked points in black.

Figure 14. From a configuration C of G' to a configuration C̄ of G.
Figure 15. Probability recursion (116) as decomposition of a configuration C in model A.

We now prove that the probabilities in model G' obey the same recursion relation. For this we first need to relate the probabilities in model G to those in model G'.

Lemma: Let C be a configuration with n points in model G' (successive deposition of points on a line) and C̄ its equivalent configuration in model G (successive point depositions on a circle). Let s(C) be the symmetry factor of the configuration C̄, i.e. the number of cyclic rotations that leave C̄ invariant. Then
$$P_G(\bar{C}) = \frac{n}{s(C)}\, P_{G'}(C). \tag{117}$$

Proof of the lemma: It is clear that in model G' any deposition process of n points on the line is uniquely specified by the bijection i → x(i), where x(i) is the position at time t = n of the point deposited at time i; x is a bijection on {1, . . . , n}, i.e. a permutation.
Figure 16. The decomposition of a configuration of model G used in the proof. To be compared with figure 15.
Figure 17. First planar arch configurations and their dual decorated tree configurations (here n = 0 to n = 6).

Figure 18. Elementary growth steps for the decorated tree model: at each step we add an arrow to some vertex; if there is already an arrow the vertex splits in two.
Figure 20. Growth processes for a k = 4 vertex in the undecorated vertex model. For k = 4 the process • → ◦ has probability weight w(k) = k = 4, each process ◦ → • • has w(k) = 2/k = 1/2.

The corresponding weights are
$$w_{\bullet \to \circ}(k) = k \qquad \text{and} \qquad w_{\circ \to \bullet\bullet}(k) = \frac{2}{k}. \tag{127}$$
Figure 21. Open planar arch systems are dual to rooted trees. The root is indicated by the barred line pointing to the top.
Figure 22. Example of a configuration with k = 3 substructures and white-coloured root.
Acknowledgements

This work is supported by the EU ENRAGE network (MRTN-CT-2004-005616) and the Agence Nationale de la Recherche (ANR-05-BLAN-0029-01 & ANR-05-BLAN-0099-01). The authors thank the KITP (NSF PHY99-07949), where this work was started, for its hospitality. We are very grateful to M. Müller for stimulating discussions, and providing us with a copy of his PhD thesis. We also thank R. Bundschuh, P. Di Francesco, T. Jonsson, L. Tang and A. Rosso for useful discussions.

Here we give all possible configurations from n = 2 up to n = 9 vertices, with their probabilities. Furthermore, figure 12 shows a sample for n = 600 points.

We assume that we have already deposited t points on the line and constructed a maximal planar arch system between these points. Namely, there are n_a(t) arches and n_f(t) = t − 2n_a(t) free points such that any link between two of these free points necessarily intersects one of the n_a(t) existing arches. At time t + 1

Appendix A. Mean field equation for the vertex densities

This appendix contains a detailed presentation of the computation of the vertex densities within the framework of mean field theory. We start from the Euler relation
and
Taking these two last relations together, we find
This leads to the following differential equation for the generating function
Using the fact that β₂ = 2β₁/3, one finds
whose solution is
The simple pole at z = 0 is removed via setting C = C₁ = −C₂. Furthermore, consistency requires B(z = 0) = 0, which yields C = β₁/2. Thus, the generating function is determined up to a factor:
The overall factor β₁ is obtained by insertion of these expressions into (A.1). The result is
so that β₁ = e^{-2}. Therefore, we obtain the vertex densities

Appendix B. The compact arch deposition model Ā

In the compact arch deposition model one deals with strands with an even number of bases ℓ = 2n, and the arches are always between an even and an odd base, a = (even, odd) or (odd, even).
At the end of the deposition process there are no free bases and there are always n = ℓ/2 arches. The recursion relation (8) for the probabilities P_Ā(C) of the configurations C becomes in this model
The recursion relation for the generating function of the height
is easily derived and reads
(to be compared with the equation (21) obtained for the non-compact model A). The scaling limit ℓ → ∞ is still given by the singularity at u, v → 1. In this limit the dominant (most singular) terms are
This is the same equation as equation (44) for the scaling limit of the generating function F(u, v) for the non-compact model A. Therefore the scaling limits of the non-compact model A and the compact model Ā are the same. The same result holds for the higher moment correlation functions and the N-point correlators.

Finally, let us mention that, although the deposition models A and Ā are very similar, we have not been able to construct a growth model (i.e. a point deposition model) which would be equivalent to the compact arch deposition model Ā.

Appendix C. Multicorrelators

We can extend the recurrence equations (14) and (15) to compute correlation functions for heights at several points of the strand. Let us consider the 2-point correlators. They are the expectation values, at two points i and j, of
A generating function for these correlators is
The recurrence equation is obtained by considering all the possible positions for the first arch (k, l) with respect to the two points i and j (see figure C1). Denoting by L the length operator
and G(u, v; z) the 1-point function studied in Sect. 4.3, we obtain the linear PDE for G₂:
+ uv e^{z₁} G(u, v; z₁) + vw e^{z₂} G(v, w; z₂) + uw e^{z₁+z₂} G(u, w; z₁ + z₂) G₂(u, v, w; z₁, z₂) (C.
R F Gesteland, T R Cech, J F Atkins, The RNA world. Cold Spring Harbor Laboratory PressR.F. Gesteland, T.R. Cech and J.F. Atkins, editors, The RNA world, Cold Spring Harbor Laboratory Press, 2005.
RNA secondary structure formation: a solvable model of heteropolymer folding. R Bundschuh, T Hwa, Phys. Rev. Lett. 83R. Bundschuh and T. Hwa, RNA secondary structure formation: a solvable model of heteropolymer folding, Phys. Rev. Lett. 83 (1999) 1479-82.
RNA secondary structure: physical and computational aspects. P G Higgs, Q. Rev. Biophys. 33199P.G. Higgs, RNA secondary structure: physical and computational aspects, Q. Rev. Biophys. 33 (2000) 199.
Glassy transition in a disordered model for the rna secondary structure. A Pagnani, G Parisi, F Ricci-Tersenghi, Phys. Rev. Lett. 84A. Pagnani, G. Parisi and F. Ricci-Tersenghi, Glassy transition in a disordered model for the rna secondary structure, Phys. Rev. Lett. 84 (2000) 2026-9.
RNA folding and large N matrix theory. H Orland, A Zee, Nucl. Phys. B. 620H. Orland and A. Zee, RNA folding and large N matrix theory, Nucl. Phys. B B620 (2002) 456-76.
Statistics of branching and hairpin helices for the dat copolymer. P.-G De Gennes, Biopolymers. 6P.-G. de Gennes, Statistics of branching and hairpin helices for the dat copolymer, Biopolymers 6 (1968) 715-729.
Statistical mechanics of secondary structures formed by random RNA sequences. R Bundschuh, T Hwa, Phys. Rev. E. 65R. Bundschuh and T. Hwa, Statistical mechanics of secondary structures formed by random RNA sequences, Phys. Rev. E 65 (2002) 031903/1-22.
Nature of the glassy phase of RNA secondary structure. F Krzakala, M Mezard, M Müller, Europhys. Lett. 57F. Krzakala, M. Mezard and M. Müller, Nature of the glassy phase of RNA secondary structure, Europhys. Lett. 57 (2002) 752-8.
Freezing transition of the random bond rna model: Statistical properties of the pairing weights. C Monthus, T Garel, Phys. Rev. E. 7531103C. Monthus and T. Garel, Freezing transition of the random bond rna model: Statistical properties of the pairing weights, Phys. Rev. E 75 (2007) 031103.
Ground state and glass transition of the rna secondary structure. S Hui, L.-H Tang, Eur. Phys. J. B. 53S. Hui and L.-H. Tang, Ground state and glass transition of the rna secondary structure, Eur. Phys. J. B 53 (2006) 77-84.
The freezing of random RNA. M Lässig, K J Wiese, cond-mat/0511032Phys. Rev. Lett. 96M. Lässig and K.J. Wiese, The freezing of random RNA, Phys. Rev. Lett. 96 (2006) 228101, cond-mat/0511032.
Systematic field theory of the RNA glass transition. F David, K J Wiese, q-bio.BM/0607044Phys. Rev. Lett. 98F. David and K.J. Wiese, Systematic field theory of the RNA glass transition, Phys. Rev. Lett. 98 (2007) 128102, q-bio.BM/0607044.
Markus Müller, Repliement d'hétéropolymères. Université Paris-SudPhD thesisMarkus Müller, Repliement d'hétéropolymères, PhD thesis, Université Paris-Sud, 2003. http://www.physics.harvard.edu/˜markusm/PhDThesis.pdf
Asymptotical growth of a class of random trees. B Pittel, Annals Of Probability. 13B. Pittel, Asymptotical growth of a class of random trees, Annals Of Probability 13 (1985) 414-427.
Random forests. L Breiman, Machine Learning. 45L. Breiman, Random forests, Machine Learning 45 (2001) 5-32.
The degree sequence of a scale-free random graph process. B Bollobas, O Riordan, J Spencer, G Tusnady, Random Structures & Algorithms. 18B. Bollobas, O. Riordan, J. Spencer and G. Tusnady, The degree sequence of a scale-free random graph process, Random Structures & Algorithms 18 (2001) 279-290.
Rd, W D Mauldin, S C Sudderth, Williams, Polya trees and random distributions. 20RD. Mauldin, WD. Sudderth and SC. Williams, Polya trees and random distributions, Annals Of Statistics 20 (1992) 1203-1221.
M Abramowitz, A Stegun, Pocketbook of Mathematical Functions. Harri-Deutsch-VerlagM. Abramowitz and A. Stegun, Pocketbook of Mathematical Functions, Harri-Deutsch-Verlag, 1984.
Meander, folding and arch statistics. P Di Francesco, O Golinelli, E Guitter, Math. Comput. Modelling. 26P. Di Francesco, O. Golinelli and E. Guitter, Meander, folding and arch statistics, Math. Comput. Modelling 26 N8 (1997) 97-147.
|
[] |
[
"PROFILES OF DARK MATTER VELOCITY ANISOTROPY IN SIMULATED CLUSTERS",
"PROFILES OF DARK MATTER VELOCITY ANISOTROPY IN SIMULATED CLUSTERS"
] |
[
"Doron Lemze ",
"Rick Wagner ",
"Yoel Rephaeli ",
"Sharon Sadeh ",
"Michael L Norman ",
"Rennan Barkana ",
"Tom Broadhurst ",
"Holland Ford ",
"Marc Postman "
] |
[] |
[] |
Interest in the spatial distribution of dark matter (DM) velocities in galaxy clusters has grown recently in light of the improved capability of determining its degree of anisotropy, β, from (either separate or joint) analysis of several different sets of measurements. Since cluster evolution is a highly non-linear hierarchical process, detailed theoretical expectations for β and its profile can only be obtained from numerical simulations. We report statistical results for β from a sample of some 6000 cluster-size halos (at redshift zero) identified in a ΛCDM hydrodynamical adaptive mesh refinement simulation done with the Enzo code. These include profiles of β in clusters with different masses, relaxation states, and at several redshifts, modeled both as spherical and triaxial DM configurations. Specifically, although we find a large scatter in the DM velocity anisotropy profiles of different halos (across elliptical shells extending to at least ∼ 1.5r vir ), universal patterns are found when these are averaged over halo masses, redshifts, and relaxation stages. These are characterized by a very small velocity anisotropy at the halo center, increasing outward and leveling off at ∼ 0.1 − 0.2 of the virial radius in lower mass and redshift halos. We also find that at radii larger than about 0.2r vir , β tends to be lower in spherical than in elliptical halos. This finding may help in sharpening the contrast between cluster shape measurements (e.g., by the CLASH project) and results from simulations. Our analysis does not indicate that there is significant correlation (found in some previous studies) between the radial density slope, γ, and β at large radii, 0.3 r vir < r < r vir .
|
10.1088/0004-637x/752/2/141
|
[
"https://arxiv.org/pdf/1106.6048v2.pdf"
] | 118,378,574 |
1106.6048
|
45405b921600e94a1f81e59dae65af79f671b736
|
PROFILES OF DARK MATTER VELOCITY ANISOTROPY IN SIMULATED CLUSTERS
29 Jun 2011 July 1, 2011 Draft version July 1, 2011
Doron Lemze
Rick Wagner
Yoel Rephaeli
Sharon Sadeh
Michael L Norman
Rennan Barkana
Tom Broadhurst
Holland Ford
Marc Postman
PROFILES OF DARK MATTER VELOCITY ANISOTROPY IN SIMULATED CLUSTERS
arXiv:1106.6048v1 [astro-ph.CO]. Subject headings: Methods: Numerical - Galaxies: clusters: general
Interest in the spatial distribution of dark matter (DM) velocities in galaxy clusters has grown recently in light of the improved capability of determining its degree of anisotropy, β, from (either separate or joint) analysis of several different sets of measurements. Since cluster evolution is a highly non-linear hierarchical process, detailed theoretical expectations for β and its profile can only be obtained from numerical simulations. We report statistical results for β from a sample of some 6000 cluster-size halos (at redshift zero) identified in a ΛCDM hydrodynamical adaptive mesh refinement simulation done with the Enzo code. These include profiles of β in clusters with different masses, relaxation states, and at several redshifts, modeled both as spherical and triaxial DM configurations. Specifically, although we find a large scatter in the DM velocity anisotropy profiles of different halos (across elliptical shells extending to at least ∼ 1.5r vir ), universal patterns are found when these are averaged over halo masses, redshifts, and relaxation stages. These are characterized by a very small velocity anisotropy at the halo center, increasing outward and leveling off at ∼ 0.1 − 0.2 of the virial radius in lower mass and redshift halos. We also find that at radii larger than about 0.2r vir , β tends to be lower in spherical than in elliptical halos. This finding may help in sharpening the contrast between cluster shape measurements (e.g., by the CLASH project) and results from simulations. Our analysis does not indicate that there is significant correlation (found in some previous studies) between the radial density slope, γ, and β at large radii, 0.3 r vir < r < r vir .
INTRODUCTION
Dark matter (DM), the main mass constituent of galaxy clusters, dominates the dynamics of intracluster (IC) gas and member galaxies. The DM mass density profile was until recently the only cluster property that could be inferred from simulations and tested against observational data. Some effort is now devoted to determine also the DM velocity anisotropy either by using the gas temperature as a tracer of the DM velocity anisotropy, a method which is applicable at intermediate radii (Host et al. 2009), or by examining galaxy velocities, as has recently been demonstrated in the analysis of A1689 measurements (Lemze et al. 2011). It is in fact our plan to apply the latter procedure to 14 additional relaxed X-ray clusters in the CLASH program (Postman et al. 2011).
N-body simulations (for various cosmological models) suggest a nearly universal velocity anisotropy profile (Cole & Lacey 1996; Carlberg et al. 1997; Colin, Klypin, & Kravtsov 2000; Diemand, Moore, & Stadel 2004; Rasia et al. 2004; Wojtak et al. 2005), similar to the universal DM density profile deduced from simulations (Navarro, Frenk, & White 1997, hereafter NFW; Moore et al. 1998) and various observations (X-ray: e.g., Pointecouteau et al. 2005; Vikhlinin et al. 2006; Schmidt & Allen 2007; Arnaud, Pointecouteau, & Pratt 2008; galaxy velocity distributions: Diaferio, Geller, & Rines 2005; SZ measurements: Atrio-Barandela et al. 2008; strong and weak lensing measurements: Broadhurst et al. 2005a, hereafter B05a; Broadhurst et al. 2005b, hereafter B05b; Limousin et al. 2007; Medezinski et al. 2007; Lemze et al. 2008, hereafter L08; Broadhurst et al. 2008; Zitrin et al. 2009, 2010, 2011; Umetsu et al. 2010). If both the density and velocity anisotropy profiles are indeed universal, they must be correlated. Hansen & Moore (2006, hereafter HM06) have recently argued for a universal relation between the DM radial density slope γ(r) and the velocity anisotropy β(r) for structures in virial equilibrium. Their deduced relation was claimed to hold for various systems, including disk galaxy mergers, simulated halos undergoing spherical collapse, and CDM halos both with and without cooling.
However, while an analysis of 6 high-resolution simulated galactic halos from the Aquarius project, carried out by Navarro et al. (2010), exhibited a reasonably good fit to the HM06 relation in the inner regions, large deviations were reported outside r −2, the radius at which the profile slope reaches −2. Analogous results were obtained in a study conducted by Tissera et al. (2010), in which they resimulated 6 (Aquarius) galactic halos, constructed so as to include metal-dependent cooling, star formation, and supernova feedback. In 3 of the halos a rather good match to the HM06 relation was found at small radii, 2 kpc · h −1 < r < r −2, but no corresponding match was found in the other 3 halos. No evidence is seen for the HM06 relation at large radii, r > r −2, in any of the six halos. Such a relation between the DM density and velocity anisotropy is of interest for both fundamental (HM06) and practical reasons, since the latter quantity is not easily measurable, whereas the density profile can be determined in several different ways based on different sets of measurements.
We report the results of an analysis of a large number of cluster-size halos drawn from an Adaptive Mesh Refinement (AMR) cosmological simulation. The large number of halos at different redshifts allows us to address the dependence of the DM velocity anisotropy profile on redshift, halo mass, degree of relaxation, modeled both as spherical and triaxial DM configurations, and to address also the γ-β relation. The outline of the paper is as follows. In § 2 we describe the simulation dataset, and in § 3 we describe how we infer the radial profiles of the density and velocity anisotropy in spherical and elliptical shells. In § 4 we specify our criteria for relaxed halos, and in § 5 we present the β profiles for different halo mass, redshift, and relaxation stages, and the deduced γ − β relation. We conclude with a summary in § 6.
2. THE SIMULATION

Clusters of galaxies were drawn from a cosmological AMR simulation performed with the hydrodynamical ENZO code developed by Bryan & Norman (1997; see also Norman & Bryan 1999; Norman et al. 2007), assuming a spatially flat ΛCDM model with the parameters Ω m = 0.3, Ω b = 0.04, Ω CDM = 0.26, Ω Λ = 0.7, h = 0.7 (in units of 100 km/s/Mpc), and σ 8 = 0.9. The hydrodynamics in the AMR simulation used an ideal gas equation of state (i.e., no radiative heating, cooling, star formation, or feedback was included), with a comoving box size of 512 h −1 Mpc on a side, 512^3 DM particles, and a DM mass resolution of about 10 11 h −1 0.7 M ⊙. The root grid contained 512^3 grid cells, and the grid was refined by a factor of two, up to seven levels, providing a maximum possible spatial resolution of 7.8 (1 + z) −1 h −1 kpc (this resolution depends on the criteria for refinement of the adaptive mesh, and we used the actual resolution when analyzing the halos). For more details on the simulation setup and analysis, see Hallman et al. (2007), in particular Section 2.2. Work reported here is based on analysis of halos found using the HOP halo-finding algorithm (Eisenstein & Hut 1998).
To find the desired halo DM properties we extracted particle positions and velocities from the raw data. Particles within a cube with comoving side of 16 h −1 Mpc were extracted. This ensured that both the halo and a sufficiently large surrounding region was available for examination.
3. RADIAL PROFILES

Radial profiles were extracted in both spherical and triaxial shells. The mass distribution is described in terms of the axial ratios of the density surface contours. Assuming that the density distribution is stratified in similar ellipsoids, it is possible to determine the axial ratios without knowledge of the radial density distribution (Dubinski & Carlberg 1991; but see also other works, e.g. Katz 1991; Warren et al. 1992; Jing et al. 1995; Jing & Suto 2002; Allgood et al. 2006, and references therein). The mass density in a triaxial configuration, ρ ≡ ρ(r e), is specified in terms of the elliptical distance in the eigenvector coordinate system of the halo particles, r e,
r_e = ( x^2 + y^2/q^2 + z^2/s^2 )^{1/2}    (1)
where q and s are the normalized axial ratios with s q 1. These ratios can be derived from the tensor
M_{ij} = Σ x_i x_j / r_e^2    (2)

through

q = (M_{yy}/M_{xx})^{1/2}  and  s = (M_{zz}/M_{xx})^{1/2}    (3)
where the sum is over all the particles, and M xx , M yy , and M zz are the principal components of the diagonalized tensor, with M xx M yy M zz . An advantage of this scheme is the equal weighting given to each particle irrespective of its radial position. The large number of particles in each halo allows accurate determination of the axial ratios. In practice, the value of r e in M ij is not known in advance, due to its dependence on q and s (which we want to determine) through eq. 1. The axial ratios are therefore determined iteratively. M ij is initially calculated assuming that the contours are spherical, so that q = s = 1. Particle positions are first rotated into the diagonalized frame of M ij , where only particles inside of the ellipse volume were taken (a sphere, in the first iteration). The values of q and s are determined from M ij and then used to recalculate r e in this new frame and fed back into the M ij relation to determine iterated values of q and s. When the input values match the output values within a certain tolerance, convergence to the true axial ratios is achieved.
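The iterative scheme just described can be sketched as follows. This is our own NumPy illustration of eqs. (1)-(3), not the authors' code; the function name, tolerance, and iteration cap are assumptions.

```python
import numpy as np

def axis_ratios(pos, r_max, tol=1e-3, max_iter=100):
    """Iterate the reduced shape tensor M_ij = sum x_i x_j / r_e^2 (eqs. 1-3)
    to estimate the normalized axial ratios q and s of a halo, keeping the
    major axis of the selection ellipsoid fixed at r_max.

    pos : (N, 3) particle positions relative to the halo center.
    """
    q, s = 1.0, 1.0
    rot = np.eye(3)                        # accumulated rotation into the eigenframe
    for _ in range(max_iter):
        x = pos @ rot                      # coordinates in the current eigenframe
        r_e = np.sqrt(x[:, 0]**2 + (x[:, 1] / q)**2 + (x[:, 2] / s)**2)  # eq. (1)
        keep = (r_e > 0) & (r_e < r_max)   # particles inside the current ellipsoid
        xs, w = x[keep], 1.0 / r_e[keep]**2
        M = (xs * w[:, None]).T @ xs       # eq. (2): equal weight per particle
        eigval, eigvec = np.linalg.eigh(M)
        order = np.argsort(eigval)[::-1]   # sort so that M_xx >= M_yy >= M_zz
        eigval, eigvec = eigval[order], eigvec[:, order]
        q_new = np.sqrt(eigval[1] / eigval[0])   # eq. (3)
        s_new = np.sqrt(eigval[2] / eigval[0])
        rot = rot @ eigvec
        if abs(q_new - q) < tol and abs(s_new - s) < tol:
            return q_new, s_new
        q, s = q_new, s_new
    return q, s
```

For a density stratified on similar ellipsoids the fixed point of this map is the true (q, s), which is why no assumption about the radial density profile is needed.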
In each iteration new values for q and s are determined, so the halo volume is deformed. We kept the magnitude of the longest axis equal to 2r vir of the original spherical radius. This radius was taken since, on the one hand, its volume contains a large number of particles, ≳ 10 3, and on the other hand at these radii the ellipticity is quite constant (see fig. 1). Thus, during the volume deformation only the two smaller axes were changed. In this figure we plot the q and s values, averaged over all halos at z = 0, at different portions of the virial radius for the main axis. The halo ellipticity first decreases by a small amount, until ∼ (1.5 − 2)r vir, in agreement with Allgood et al. (2006), who found that halos become more spherical up to r = r vir; beyond that, the ellipticity increases. In other words, on average the halos are more elliptical at small radii, become more spherical with increasing radius, and then become elliptical again. However, the change is small, and over the radius range (0.3 − 3)r vir the ellipticity is quite constant, ⟨q⟩ ≈ 0.66 and ⟨s⟩ ≈ 0.5, especially compared to the large scatter. In addition, part of the increasing ellipticity at large radii is due to the presence of infalling halos.
The DM velocity anisotropy profile for each halo was determined as follows: we first identified the halo center with the peak of the surrounding 3D density distribution, and then determined the proper (non-comoving) velocities of the DM particles with respect to the cluster center by subtracting the velocity of the halo center. This procedure was carried out for 15 equally spaced shells within the virial radius, a division that yields DM particle counts of the same order of magnitude in each bin. Logarithmic spacing was impractical due to the low spatial resolution. The DM velocity anisotropy in each shell was calculated as
β = 1 − (σ_θ^2 + σ_φ^2) / (2σ_r^2),    (4)
where σ r, σ θ, and σ φ denote the radial, polar, and azimuthal velocity dispersions, respectively. Shells containing fewer than 10 DM particles were excluded owing to their statistical insignificance. For example, using spherical shells, a total of 58 such shells were present over all halos at z = 0. We only considered halos containing at least 10 3 particles, so as to obtain robust results independent of numerical artifacts (as has also been done by Neto et al. 2007). Since our DM mass resolution is approximately 10 11 h −1 0.7 M ⊙, we examined all halos having M vir ≳ 10 14 h −1 0.7 M ⊙. For constructing the DM density profile we used the same binning and halo-center definition as in the DM velocity anisotropy profile, and averaged the DM density over spherical shells. The radial density slope is defined as
γ(r) = d log[ρ_DM(r)] / d log[r].    (5)
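A minimal sketch of the shell-wise anisotropy estimate of eq. (4) might look as follows. This is our own NumPy illustration, not the code used in the paper; the function name and argument conventions are assumptions.

```python
import numpy as np

def beta_profile(pos, vel, r_vir, n_shells=15, min_part=10):
    """Velocity anisotropy (eq. 4) in equally spaced spherical shells.

    pos, vel : (N, 3) positions and proper velocities relative to the halo
    center (halo bulk velocity already subtracted). Shells with fewer than
    min_part particles are returned as NaN.
    """
    r = np.linalg.norm(pos, axis=1)
    er = pos / r[:, None]                          # radial unit vectors
    cth = np.clip(pos[:, 2] / r, -1.0, 1.0)        # cos(theta)
    sth = np.sqrt(1.0 - cth**2)
    phi = np.arctan2(pos[:, 1], pos[:, 0])
    et = np.stack([cth * np.cos(phi), cth * np.sin(phi), -sth], axis=1)      # e_theta
    ep = np.stack([-np.sin(phi), np.cos(phi), np.zeros_like(phi)], axis=1)   # e_phi
    vr = np.sum(vel * er, axis=1)
    vt = np.sum(vel * et, axis=1)
    vp = np.sum(vel * ep, axis=1)

    edges = np.linspace(0.0, r_vir, n_shells + 1)
    beta = np.full(n_shells, np.nan)
    for i in range(n_shells):
        m = (r >= edges[i]) & (r < edges[i + 1])
        if m.sum() < min_part:
            continue                               # statistically insignificant shell
        beta[i] = 1.0 - (vt[m].var() + vp[m].var()) / (2.0 * vr[m].var())
    return 0.5 * (edges[:-1] + edges[1:]), beta
```

Isotropic orbits give β ≈ 0 in every well-populated shell, and purely radial orbits give β = 1, matching the limits of eq. (4).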
For comparison with the radial density slope derived from a fit to an NFW profile, we fitted the resulting distribution to an NFW profile, ρ_i^{NFW} = 4ρ_s / [ (r_i/r_s)(1 + r_i/r_s)^2 ], where r_s and ρ_s are a scale radius and the density at this radius, respectively, both of which were treated as free parameters. The best fit was found by minimizing
χ^2 = Σ_{i=1}^{N_bins} [ log(ρ_i) − log(ρ_i^{NFW}(ρ_s, r_s)) ]^2,    (6)
where each bin was assigned equal weight.
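The slope of eq. (5) and the equal-weight log-space fit of eq. (6) can be sketched as below. This is a NumPy illustration under our own choices for the minimization (an analytic normalization plus a 1-D scan over r_s), which need not match the optimizer the authors actually used.

```python
import numpy as np

def log_slope(r, rho):
    """Discrete radial density slope gamma(r) = d log rho / d log r (eq. 5)."""
    return np.gradient(np.log(rho), np.log(r))

def fit_nfw(r, rho, r_s_grid=None):
    """Least-squares fit of a binned density profile to the NFW form
    rho_NFW(r) = 4 rho_s / [(r/r_s)(1 + r/r_s)^2], minimizing eq. (6) with
    equal weight per bin. For each trial r_s the best-fit log(rho_s) is
    analytic (the model is linear in log rho_s), so only a 1-D scan over
    r_s is needed.
    """
    if r_s_grid is None:
        r_s_grid = np.geomspace(r.min() / 3.0, r.max() * 3.0, 400)
    y = np.log10(rho)
    best_chi2, best_rho_s, best_r_s = np.inf, None, None
    for r_s in r_s_grid:
        x = r / r_s
        shape = np.log10(4.0) - np.log10(x * (1.0 + x) ** 2)  # model with rho_s = 1
        log_rho_s = np.mean(y - shape)                        # analytic normalization
        chi2 = np.sum((y - shape - log_rho_s) ** 2)
        if chi2 < best_chi2:
            best_chi2, best_rho_s, best_r_s = chi2, 10.0 ** log_rho_s, r_s
    return best_rho_s, best_r_s
```

On noiseless NFW input the scan recovers (ρ_s, r_s) to within the grid spacing, and on a pure power law ρ ∝ r^−2 the discrete slope is −2 in every bin.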
4. CRITERIA FOR RELAXED CLUSTERS

The distinction between relaxed and unrelaxed clusters was made according to criteria laid down by Thomas et al. (2001) and Neto et al. (2007). These address (i) the displacement between the center of mass r cm and the potential minimum r p, and (ii) the virial ratio 2T/|U|. For the first criterion we defined a normalized offset, s offset = |r p − r cm|/r vir (see also Duffy et al. 2008); for the second criterion we computed the total kinetic and gravitational energies of the halo particles within r vir. When halos were modeled as triaxial, their major axes were set equal to the virial radii of the respective spherical configurations. All relevant calculations were performed for particles lying within the virial radius. For the estimation of T we subtracted the motion of the halo center, whereas U was calculated using a random sample of 1000 particles. We controlled the precision level of this method by (a) repeating the calculation 10 times for the most massive halo (the one containing the largest number of DM particles), which generated a relative difference of (1.4 ± 1)%, and (b) calculating U in a single halo using 10 4 particles. The relative average difference produced by this method was 0.8%.
In equilibrium s offset would be expected to vanish, and the virial ratio would approach a value slightly higher than unity, since even in relaxed systems there always is some infalling matter. While the two criteria are related to the degree of relaxation in a straightforward manner, the boundary levels between the two phases are quite arbitrary. For example, Neto et al. (2007) adopted s offset = 0.07 and 2T /|U | = 1.35.
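The two diagnostics can be sketched as follows. This is an illustrative implementation with hypothetical function and argument names; G is left in arbitrary units, and the potential-energy subsampling follows the text's use of 1000 random particles (the pair-count rescaling is exact only for equal particle masses).

```python
import numpy as np

def relaxation_diagnostics(pos, vel, mass, r_p, r_vir, G=1.0, n_sample=1000, seed=0):
    """Compute s_offset = |r_p - r_cm| / r_vir and the virial ratio 2T/|U|
    for particles inside r_vir around the potential-minimum position r_p.
    U is estimated from a random subsample and rescaled by the ratio of
    pair counts.
    """
    r = np.linalg.norm(pos - r_p, axis=1)
    inside = r < r_vir
    p, v, m = pos[inside], vel[inside], mass[inside]

    r_cm = np.average(p, axis=0, weights=m)
    s_offset = np.linalg.norm(r_p - r_cm) / r_vir

    v0 = np.average(v, axis=0, weights=m)          # subtract the halo bulk motion
    T = 0.5 * np.sum(m * np.sum((v - v0) ** 2, axis=1))

    rng = np.random.default_rng(seed)
    n = len(p)
    k = min(n_sample, n)
    idx = rng.choice(n, size=k, replace=False)
    ps, ms = p[idx], m[idx]
    U_s = 0.0
    for i in range(k - 1):                         # direct pair sum over the subsample
        d = np.linalg.norm(ps[i + 1:] - ps[i], axis=1)
        U_s -= G * np.sum(ms[i] * ms[i + 1:] / d)
    U = U_s * (n * (n - 1)) / (k * (k - 1))        # rescale to all pairs
    return s_offset, 2.0 * T / abs(U)
```

A halo whose particles are symmetric about r_p gives s_offset ≈ 0, and displacing the particle distribution raises it, as the criterion intends.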
A possible third criterion to distinguish between relaxed and unrelaxed systems is the substructure level, defined here as the displacement between the density peak r d and the center of mass r cm, with the latter quantity calculated using particles within the virial radius. The displacement was normalized with respect to the virial radius, s sub = |r d − r cm|/r vir. Since s sub and s offset turned out to be strongly correlated (R = 0.8), this additional criterion was employed only once, for comparison.

The β profiles at different redshifts (figure 2) are essentially similar at small radii, r ≲ 0.3r vir, roughly independent of redshift. However, at larger radii, r ∼ 0.7r vir, values of β are somewhat higher in high-redshift halos. At even larger radii, r ≳ r vir, β is lower at higher redshifts. As the redshift increases, the scatter in β increases as a function of radius from ∼ 0.3r vir onward. In figure 3 we illustrate the DM velocity anisotropy profiles for two mass ranges at two redshifts: the 100 most and least massive halos are compared at redshift z = 0, and a similar comparison is made at redshift z = 2 for the 10 most and least massive halos. As is apparent from the plot, at z = 0 the β profile of the high-mass halos increases with radius; no such tendency is visible in the low-mass range. This result is consistent with the behavior seen in figure 2, and essentially reflects the late formation of high-mass clusters in ΛCDM (e.g., Sadeh & Rephaeli 2008). At z = 0 the mean β profile of high-mass halos appears to be somewhat steeper at r ≳ 0.4r vir than that of low-mass halos, in agreement with the fact that at the same radii it tends to higher levels at high redshifts, as illustrated in the upper panel of figure 3.
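The stacked mean ± 1σ curves shown in the figures amount to the following simple operation (a sketch with assumed array conventions; NaN marks shells excluded by the 10-particle cut):

```python
import numpy as np

def stacked_profile(beta_profiles):
    """Mean beta(r) over a set of halos sampled on a common radial grid,
    with the +/- 1-sigma scatter band shown in the figures. NaN entries
    (shells excluded by the particle-count cut) are ignored.

    beta_profiles : (n_halos, n_shells) array.
    """
    mean = np.nanmean(beta_profiles, axis=0)
    sigma = np.nanstd(beta_profiles, axis=0)
    return mean, mean - sigma, mean + sigma
```
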
As mentioned in § 4, the criteria according to which halos are classified as relaxed or unrelaxed are quite arbitrary. In our analysis we chose to compare β profiles between two subsamples of comparable size, corresponding to the most and the least relaxed halos. In figure 4 we plot the velocity anisotropy profile of relaxed versus unrelaxed halos using spherical shells, with the distinction made according to the s offset criterion. Using the virial ratio to distinguish between relaxed and unrelaxed halos gave very similar β profiles, and these are therefore not shown here. At radii smaller than the virial radius, applying the s offset criterion results in flatter velocity anisotropy profiles for the unrelaxed halos than for the relaxed halos.
To assess the impact of an aspherical halo configuration, we plot in figure 5 the velocity anisotropy profiles in both spherical and elliptical shells for all high-mass halos (M vir ≳ 10 14 h −1 0.7 M ⊙). It is interesting to note that the average β profile is almost the same in spherical and elliptical shells up to ∼ (0.7 − 1)r vir. Beyond this region, however, the scatter of β using elliptical shells is much smaller.
In figure 6 we used elliptical shells. We derived the average β profile of spherical halos, q > 0.95 (blue solid curves), and compared it to the one derived for elliptical halos, 0.5 < q < 0.517 (red dashed curves). The β values of spherical halos are lower than those of elliptical halos. In addition, the scatter in β is larger for spherical halos.
In figure 7 we plot the connection between halo ellipticity and relaxedness. We took only halos with q > 0.4, since more elliptical halos are very few and therefore give poor statistics. In addition, for halos with q < 0.4 the values of both relaxation criteria are strongly dependent on the length of the ellipse major axis, which is likely due to the fact that many of them are in the process of a major merger and highly unrelaxed. The thresholds were chosen so that the two criteria have about the same normalization.
The β profiles of relaxed versus unrelaxed halos in elliptical shells are not appreciably different from those obtained using spherical shells. In figure 8 we plot the velocity anisotropy profile of relaxed versus unrelaxed halos using elliptical shells, with the distinction made according to the s offset criterion. The profiles are similar to those for spherical shells, except for a smaller decline and a smaller scatter at large radii.
5.2. γ-β ratio

As was mentioned in § 1, the question of whether γ and β are correlated is of both theoretical and practical interest. In figure 9 we show the velocity anisotropy vs. the radial density slope for all shells in all halos (left panel), all shells of relaxed halos according to the virial-ratio criterion 2T/|U| < 1.35 (middle panel), and all shells of highly relaxed halos with 2T/|U| < 1.35 and s offset < 0.025 (right panel). For each halo we checked the maximum grid level and determined the minimum spatial resolution of the halo; unresolved shells were not included in the analysis. The black curve reproduces the Hansen & Moore relation for the −4 < γ < 0 range. In figure 10 we show the same quantities for all shells in all halos, assuming NFW-distributed density profiles. Note that in this plot the γ range is (∼ −2.9, −1), not (−3, −1), due to the finite binned values of the radius. Finally, figure 11 shows the velocity anisotropy against the radial density slope for the four inner shells (0 < r < 0.3r vir, top panel) and all the other shells (0.3r vir < r < r vir, bottom panel). This boundary value was chosen since shells within this radius, 0.3r vir, display the strongest γ-β correlation.
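A correlation measure of the kind discussed here can be sketched as follows (our own illustration; the split radius and the pooling of all shell-halo pairs are assumptions):

```python
import numpy as np

def gamma_beta_correlation(gamma, beta, r, r_vir, r_split=0.3):
    """Pearson correlation between the local density slope gamma and the
    velocity anisotropy beta, computed separately for inner shells
    (r < r_split * r_vir) and outer shells (r_split * r_vir <= r < r_vir),
    pooling all finite (shell, halo) pairs."""
    gamma, beta, r = map(np.ravel, (gamma, beta, r))
    good = np.isfinite(gamma) & np.isfinite(beta)
    inner = good & (r < r_split * r_vir)
    outer = good & (r >= r_split * r_vir) & (r < r_vir)
    r_in = np.corrcoef(gamma[inner], beta[inner])[0, 1]
    r_out = np.corrcoef(gamma[outer], beta[outer])[0, 1]
    return r_in, r_out
```
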
We repeated the above analysis with triaxial halos; the results were essentially the same as those obtained for spherical halos.
6. SUMMARY

Significant progress has recently been made in the ability to deduce the kinematic properties of DM in clusters from galactic dynamics and X-ray measurements. Comparison of these properties with results from numerical simulations can clearly test analysis methods and add new insights on DM phase-space occupation. We presented results from an analysis of DM velocities in 6019 halos with masses M ≳ 10 14 h −1 0.7 M ⊙ at redshift z = 0, drawn from one of the largest hydrodynamic cosmological AMR simulations performed to date.

Our study indicates that the profiles of cluster DM velocity dispersions have a similar pattern for all halo masses, redshifts, and relaxation stages, even though there is considerable scatter in magnitude due to the large differences between the β profiles of individual halos. A typical behavior is a β profile rising from a nearly vanishing central value, leveling off at r ∼ 0.2r vir, out to large radii of at least (1.5 − 2)r vir. Lower-mass halos at lower redshifts have, on average, lower β levels at r = r vir, and therefore even shallower profiles. For example, low-mass halos of M ≈ 10 14 h −1 0.7 M ⊙ at z = 0 attain levels of β(r = r vir) ≈ 0.2, while high-mass halos, i.e. M ≳ 10 15 h −1 0.7 M ⊙, reach slightly higher values, β(r = r vir) ≈ 0.4. This behavior could possibly be due to the ability to reach higher accretion velocities at lower redshifts, when the background density is much lower. Lau, Nagai, & Kravtsov (2010) showed that the inclusion of radiative cooling and star formation in the simulation slightly lowers the β values at z = 0. At higher redshifts, e.g. z = 2, we found no significant difference between the β profiles of high- and low-mass halos, even though the scatter is lower than at z = 0. This is also in agreement with the trend found by Lau, Nagai, & Kravtsov (2010). When gauged by s offset, very relaxed halos have β levels practically similar to those of low-mass halos, i.e. β(r = r vir) ≈ 0.2 − 0.3 at z = 0. However, when gauged by the virial ratio, the very relaxed and unrelaxed halos have similar β profiles.
The shapes of cluster halos are of considerable interest, which has recently been enhanced by new observational capabilities (e.g., the CLASH project). Observed samples may, however, not be large enough for a conclusive examination. We show here that cluster shape measurements can be combined with β values at large radii for stronger constraints. It is important to measure β at large radii, r ≳ 0.3r vir, since on the one hand it can be derived there with a high level of confidence (Lemze et al. 2011), and on the other hand at these radii the β values tend to be different in spherical (lower) and elliptical (higher) halos. Note that this difference depends on the value set for the major axis. It is also important to analyze β profiles in elliptical shells, as this significantly reduces their scatter at large radii, r ≳ r vir.
We have employed two different relaxation criteria and found that more spherical halos tend to be more relaxed. This can be explained by the evolution of clusters from highly aligned and elongated systems at early times to lower alignment and elongation at present, which reflects the hierarchical and filamentary nature of structure formation (Hopkins, Bahcall, & Bode 2005). Indeed, Vera-Ciro et al. (2011), who analyzed Aquarius data, found that q increases with time.
Lastly, we find that there is some correlation between γ and β at small radii, r < 0.3r vir, and that such a correlation can be induced at all radii merely by assuming a prescribed DM density profile. The level of γ-β correlation is very low at large radii, r > 0.3r vir, even for very relaxed halos. Repeating the same analysis with elliptical shells led to the same result.

Fig. 11.-Velocity anisotropy vs. radial density slope at z = 0. Top panel: plotted are values for the four inner shells, 0 < r < 0.3r vir. Bottom panel: values for the outer shells, 0.3r vir < r < r vir. The HM06 relation is depicted in the −4 < γ < 0 range (black solid curve). The vertical line at γ ≃ −2 is an artifact due to the fact that γ here is a discrete slope profile calculated between two bins with a low number of particles. This artifact can also be seen in fig. 9, though it is more prominent here.
Fig. 1.-Profiles of the average axes ratio at z = 0. Plotted are the average values of q (solid curve) and s (dashed curve) over all halos, for volumes with different main-axis values. The uncertainty is calculated such that ∆q i = q i / √ N i and ∆s i = s i / √ N i, where q i and s i are the axes ratios of halo i, and N i is the particle number inside the relevant ellipse. We checked for the most massive halo and found that, in the particle-number range of 10 2 − 10 4, the fractional error in the derived axes ratios is indeed ∝ 1/ √ N i.
5.1. The DM velocity anisotropy profile

Average DM velocity anisotropy profiles of all high-mass halos, M vir ≳ 10 14 h −1 0.7 M ⊙, are plotted in figure 2 (including 1σ uncertainty regions) for different redshifts.
Fig. 2.-Velocity anisotropy profiles at different redshifts. The relatively small variation of the anisotropy in the region 0−1 r vir is shown separately in the upper panel. The central curves represent the mean value, with the ± 1-σ uncertainty around the mean marked by the upper and lower curves.
Fig. 3.-Velocity anisotropy profiles of halos with different masses. Top panel: profiles of the 100 most (solid curve) and least (dashed curve) massive halos at z = 0. Bottom panel: profiles of the 10 most (solid) and least (dashed) massive halos at z = 2.
Fig. 4.-Velocity anisotropy profiles of relaxed (blue solid curves) and unrelaxed (red dashed curves) halos analyzed in spherical shells. Relaxation gauged by the s offset criterion: relaxed and unrelaxed halos have s offset < 0.015 (264 halos) and s offset > 0.17 (267 halos), respectively. As in the previous plots, the central curve represents the mean value, whereas the upper and lower curves describe the ± 1-σ uncertainty range.
Fig. 5.-Velocity anisotropy profiles when the shells are described as spherical (red dashed curves) or elliptical (blue solid curves). Note that the radius in the elliptical case is the semi-major axis.
Fig. 6.-Velocity anisotropy profiles in elliptical shells. The average β profile of spherical halos, q > 0.95 (blue solid curves), is compared to that of elliptical halos, 0.5 < q < 0.517 (red dashed curves).
Fig. 7.- The fraction of relaxed halos at different ellipticities and for different relaxation criteria, 2T/|U| = 1.5 (blue circles) and s offset = 0.05 (red asterisks). The uncertainties were taken to be Poisson statistics, i.e.
Fig. 8.- Velocity anisotropy profiles of relaxed (blue solid curves)
Velocity anisotropy vs. radial density slope at z = 0. Plotted are values corresponding to all shells and halos (left panel), all shells of relaxed halos according to the virial relation criterion 2T /|U | < 1.35 (middle panel), and all shells of highly relaxed halos, specified by 2T /|U | < 1.35 and s offset < 0.025 (right panel). The HM06 relation is plotted in the −4 < γ < 0 range (black solid curve).
Fig. 10.- Velocity anisotropy vs. radial density slope at z = 0.
ACKNOWLEDGMENTWe thank Brain O'shea and Steen Hansen for helpful discussions. This research is supported in part by NASA grant HST-GO-12065.01-A. Work at Tel Aviv University is supported by US-IL Binational Science foundation grant 2008452. The simulations were performed on the DataStar system at the San Diego Supercomputer Center using LRAC allocation TG-MCA98N020.
Allgood, B., Flores, R. A., Primack, J. R., Kravtsov, A. V., Wechsler, R. H., Faltenbacher, A., & Bullock, J. S. 2006, MNRAS, 367, 1781
Arnaud, M., Pointecouteau, E., & Pratt, G. W. 2005, A&A, 441, 893
Atrio-Barandela, F., Kashlinsky, A., Kocevski, D., & Ebeling, H. 2008, ApJ, 675, L57
Barnes, J., & Efstathiou, G. 1987, ApJ, 319, 575
Broadhurst, T., et al. 2005, ApJ, 621, 53 (B05a)
Broadhurst, T., Takada, M., Umetsu, K., Kong, X., Arimoto, N., Chiba, M., & Futamase, T. 2005, ApJ, 619, L143 (B05b)
Broadhurst, T., Umetsu, K., Medezinski, E., Oguri, M., & Rephaeli, Y. 2008, ApJ, 685, L9
Bryan, G. L., & Norman, M. L. 1997, Computational Astrophysics; 12th Kingston Meeting on Theoretical Astrophysics, 123, 363
Carlberg, R. G., et al. 1997, ApJ, 485, L13
Cole, S., & Lacey, C. 1996, MNRAS, 281, 716
Colín, P., Klypin, A. A., & Kravtsov, A. V. 2000, ApJ, 539, 561
Diaferio, A., Geller, M. J., & Rines, K. J. 2005, ApJ, 628, L97
Diemand, J., Moore, B., & Stadel, J. 2004, MNRAS, 352, 535
Dubinski, J., & Carlberg, R. G. 1991, ApJ, 378, 496
Duffy, A. R., Schaye, J., Kay, S. T., & Dalla Vecchia, C. 2008, MNRAS, 390, L64
Eisenstein, D. J., & Hut, P. 1998, ApJ, 498, 137
Frenk, C. S., White, S. D. M., Davis, M., & Efstathiou, G. 1988, ApJ, 327, 507
Frenk, C. S., et al. 1999, ApJ, 525, 554
Hallman, E. J., O'Shea, B. W., Burns, J. O., Norman, M. L., Harkness, R., & Wagner, R. 2007, ApJ, 671, 27
Hansen, S. H., & Moore, B. 2006, New Astronomy, 11, 333
Host, O., Hansen, S. H., Piffaretti, R., Morandi, A., Ettori, S., Kay, S. T., & Valdarnini, R. 2009, ApJ, 690, 358
Jing, Y. P., Mo, H. J., Borner, G., & Fang, L. Z. 1995, MNRAS, 276, 417
Jing, Y. P., & Suto, Y. 2002, ApJ, 574, 538
Katz, N. 1991, ApJ, 368, 325
Lau, E. T., Nagai, D., & Kravtsov, A. V. 2010, ApJ, 708, 1419
Lemze, D., Barkana, R., Broadhurst, T. J., & Rephaeli, Y. 2008, MNRAS, 386, 1092
Lemze, D., Broadhurst, T., Rephaeli, Y., Barkana, R., & Umetsu, K. 2009, ApJ, 701, 1336
Lemze, D., Rephaeli, Y., Barkana, R., Broadhurst, T., Wagner, R., & Norman, M. L. 2011, ApJ, 728, 40
Limousin, M., et al. 2007, ApJ, 668, 643
Medezinski, E., et al. 2007, ApJ, 663, 717
Molnar, S. M., Chiu, I.-N., Umetsu, K., Chen, P., Hearn, N., Broadhurst, T., Bryan, G., & Shang, C. 2010, ApJ, 724, L1
Moore, B., Governato, F., Quinn, T., Stadel, J., & Lake, G. 1998, ApJ, 499, L5
Navarro, J. F., Frenk, C. S., & White, S. D. M. 1997, ApJ, 490, 493
Navarro, J. F., et al. 2010, MNRAS, 402, 21
Neto, A. F., et al. 2007, MNRAS, 381, 1450
Norman, M. L., & Bryan, G. L. 1999, Numerical Astrophysics, 240, 19
Norman, M. L., Bryan, G. L., Harkness, R., Bordner, J., Reynolds, D., O'Shea, B., & Wagner, R. 2007, arXiv:0705.1556
Pointecouteau, E., Arnaud, M., & Pratt, G. W. 2005, A&A, 435, 1
Postman, M., et al. 2011, arXiv:1106.3328
Rasia, E., Tormen, G., & Moscardini, L. 2004, MNRAS, 351, 237
Sadeh, S., & Rephaeli, Y. 2008, MNRAS, 388, 1759
Schmidt, R. W., & Allen, S. W. 2007, MNRAS, 379, 209
Tissera, P. B., White, S. D. M., Pedrosa, S., & Scannapieco, C. 2010, MNRAS, 786
Thomas, P. A., Muanwong, O., Pearce, F. R., Couchman, H. M. P., Edge, A. C., Jenkins, A., & Onuora, L. 2001, MNRAS, 324, 450
Umetsu, K., Medezinski, E., Broadhurst, T., Zitrin, A., Okabe, N., Hsieh, B.-C., & Molnar, S. M. 2010, ApJ, 714, 1470
Vera-Ciro, C. A., Sales, L. V., Helmi, A., Frenk, C. S., Navarro, J. F., Springel, V., Vogelsberger, M., & White, S. D. M. 2011, arXiv:1104.1566
Vikhlinin, A., Kravtsov, A., Forman, W., Jones, C., Markevitch, M., Murray, S. S., & Van Speybroeck, L. 2006, ApJ, 640, 691
Warren, M. S., Quinn, P. J., Salmon, J. K., & Zurek, W. H. 1992, ApJ, 399, 405
Wojtak, R., Lokas, E. L., Gottlöber, S., & Mamon, G. A. 2005, MNRAS, 361, L1
Zitrin, A., et al. 2009, MNRAS, 396, 1985
Zitrin, A., et al. 2010, MNRAS, 408, 1916
Zitrin, A., et al. 2011, arXiv:1103.5618
|
[] |
[
"A counterexample to the simple loop conjecture for PSL(2, R)",
"A counterexample to the simple loop conjecture for PSL(2, R)"
] |
[
"Kathryn Mann "
] |
[] |
[] |
In this note, we give an explicit counterexample to the simple loop conjecture for representations of surface groups into PSL(2, R). Specifically, we show that for any surface with negative Euler characteristic and genus at least 1, there are uncountably many non-conjugate, non-injective homomorphisms of its fundamental group into PSL(2, R) that kill no simple closed curve (nor any power of a simple closed curve). This result is not new -work of Louder and Calegari for representations of surface groups into SL(2, C) applies to the PSL(2, R) case, but our approach here is explicit and elementary.
|
10.2140/pjm.2014.269.425
|
[
"https://arxiv.org/pdf/1210.3203v1.pdf"
] | 5,753,656 |
1210.3203
|
03b24240f2c3b6d33b331e69ab032c84b601b30c
|
A counterexample to the simple loop conjecture for PSL(2, R)
Kathryn Mann
A counterexample to the simple loop conjecture for PSL(2, R)
In this note, we give an explicit counterexample to the simple loop conjecture for representations of surface groups into PSL(2, R). Specifically, we show that for any surface with negative Euler characteristic and genus at least 1, there are uncountably many non-conjugate, non-injective homomorphisms of its fundamental group into PSL(2, R) that kill no simple closed curve (nor any power of a simple closed curve). This result is not new -work of Louder and Calegari for representations of surface groups into SL(2, C) applies to the PSL(2, R) case, but our approach here is explicit and elementary.
Introduction
The simple loop conjecture, proved by Gabai in [5], states that any non-injective homomorphism from a closed surface group to another closed surface group has an element represented by a simple closed curve in the kernel. It has been conjectured that the result still holds if the target is replaced by the fundamental group of an orientable 3-manifold (see Kirby's problem list in [8]). Although special cases have been proved (e.g. [6], [10]), the general hyperbolic case is still open.
Recently, Cooper and Manning showed that if instead of a 3-manifold group the target group is SL(2, C), then the conjecture is false. Precisely, they show: Theorem 1.1 (Cooper-Manning [3]). Let Σ be a closed orientable surface of genus g ≥ 4. Then there is a homomorphism ρ : π 1 (Σ) → SL(2, C) such that 1. ρ is not injective 2. If ρ(α) = ±I, then α is not represented by a simple closed curve 3. If ρ(α) has finite order, then ρ(α) = I
The third condition implies in particular that no power of a simple closed curve lies in the kernel.
Inspired by this, we asked whether a similar result holds for PSL(2, R), this being an intermediate case between Gabai's result for surface groups and Cooper and Manning's for SL(2, C). Cooper and Manning's proof uses a dimension count on the SL(2, C) character variety and a proof that a specific subvariety is irreducible and smooth on a dense subset, much of which does not carry over to the PSL(2, R) case. In general, complex varieties and their real points can behave quite differently. However, we show here with different methods that an analogous result does hold.
While this note was in progress, we learned of work of Louder and Calegari (independently in [9] and [2]) that can also be applied to answer our question in the affirmative. Louder shows the simple loop conjecture is false for representations into limit groups, and Calegari gives a practical way of verifying no simple closed curves lie in the kernel of a non-injective representation using stable commutator length and the Gromov norm.
The difference here is that our construction is entirely elementary. We use an explicit representation from DeBlois and Kent in [4] and verify that this representation is non-injective and kills no simple closed curve by elementary means. Our end result parallels that of Cooper and Manning but also includes surfaces with boundary and all genera at least 1:
Theorem 1.2.
Let Σ be a surface of negative Euler characteristic and of genus g ≥ 1, possibly with boundary. Then there is a homomorphism ρ : π 1 (Σ) → PSL(2, R) such that 1. ρ is not injective 2. If ρ(α) = ±I, then α is not represented by a simple closed curve 3. In fact, if α is represented by a simple closed curve, then ρ(α^k) ≠ 1 for any nonzero k ∈ Z.
Moreover, there are uncountably many non-conjugate representations satisfying 1. through 3.
Proof of theorem 1.2
We first present a construction of a (non-injective) representation from DeBlois and Kent in [4], and then show that no power of a simple closed curve lies in the kernel of this representation. The full construction appears in [4]; we describe it here for convenience.
Let Σ be a surface of genus g ≥ 1 and negative Euler characteristic, possibly with boundary. Assume for the moment that Σ is not the once-punctured torus -Theorem 1.2 for this case will follow easily later on.
Let c ⊂ Σ be a simple closed curve separating Σ into a genus 1 subsurface with single boundary component c, and a genus (g − 1) subsurface with one or more boundary components. Let Σ A denote the genus (g − 1) subsurface and Σ B the genus 1 subsurface. See Figure 1 below. Finally, we let A = π 1 (Σ A ) and B = π 1 (Σ B ), so that π 1 (Σ) = A * C B, where C is the Z-subgroup generated by the element [c] represented by the curve c. We assume that the basepoint for π 1 (Σ) lies on c. Let x ∈ B and y ∈ B be generators such that B = x, y , and that c represents the commutator [x, y]. Fix α and β in R \ {0, ±1}, and following [4]
define φ_B : B → SL(2, R) by φ_B(x) = ( α 0 ; 0 α^{-1} ) and φ_B(y) = ( β 1 ; 0 β^{-1} ), where ( a b ; c d ) denotes the 2×2 matrix with rows (a b) and (c d). We then have φ_B([x, y]) = ( 1 β(α^2 − 1) ; 0 1 ), so that φ_B([x, y]) is invariant under conjugation by the matrix λ_t := ( 1 t ; 0 1 ).
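The commutator computation and the conjugation invariance can be checked numerically; the values of α, β, t below are arbitrary choices satisfying the stated constraints:

```python
import numpy as np

alpha, beta = 2.0, 3.0      # arbitrary values in R \ {0, +-1}
X = np.array([[alpha, 0.0], [0.0, 1.0 / alpha]])   # phi_B(x)
Y = np.array([[beta, 1.0], [0.0, 1.0 / beta]])     # phi_B(y)

# phi_B([x, y]) = X Y X^-1 Y^-1 is unipotent upper-triangular
comm = X @ Y @ np.linalg.inv(X) @ np.linalg.inv(Y)
expected = np.array([[1.0, beta * (alpha**2 - 1.0)], [0.0, 1.0]])
assert np.allclose(comm, expected)

# invariance under conjugation by lambda_t (upper unipotent matrices commute)
t = 0.7
L = np.array([[1.0, t], [0.0, 1.0]])
assert np.allclose(L @ comm @ np.linalg.inv(L), comm)
```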
Projecting this representation to PSL(2, R) gives a representation which is upper triangular, hence solvable and therefore non-injective. Abusing notation, let φ B denote the representation to PSL(2, R). Now let φ A : A → PSL(2, R) be Fuchsian and such that the image of the boundary curve c under φ A agrees with φ B ([x, y]). Such a representation exists for the following reasons. First, if Σ has negative Euler characteristic, genus g > 1, and is not the once punctured torus, then Σ A will have negative Euler characteristic as well and admit a hyperbolic structure. Secondly, the Fuchsian representation coming from the hyperbolic structure will send the element [c] representing the boundary curve to a parabolic, so after conjugation we may assume that it is equal to
φ B ([x, y]), provided φ B ([x, y]) is parabolic, i.e. β(α^2 − 1) ≠ 0.
Finally, combine φ A and φ B to get a one-parameter family of representations φ t of π 1 (Σ) = A * C B to PSL(2, R) as follows. For t ∈ R and g ∈ A * C B, let
φ_t(g) = φ_A(g) if g ∈ A, and φ_t(g) = λ_t · φ_B(g) · (λ_t)^{-1} if g ∈ B. This representation is well defined because φ_B([x, y]) = φ_A([x, y]) and φ_B([x, y]) is invariant under conjugation by λ_t.
Our next goal is to show that for appropriate choice of α, β and t, the representation φ t satisfies the criteria in Theorem 1.2. The main difficulty will be checking that no element representing a simple closed curve is of finite order. To do so, we employ a stronger form of Lemma 2 from [4]. This is:
Lemma 2.1. Suppose w ∈ A * C B is a word of the form w = a 1 b 1 a 2 b 2 ...a l b l with a i ∈ A and b i ∈ B for 1 ≤ i ≤ l.
Assume that for each i, the matrix φ t (a i ) has a nonzero 2,1 entry and φ t (b i ) is hyperbolic. If t is transcendental over the entry field of φ 0 (A * C B), then φ t (w) is not finite order.
By entry field of a group Γ of matrices, we mean the field generated over Q by the collection of all entries of matrices in Γ.
Remark 2.2. Lemma 2 of [4] is a proof that φ t (w) is not the identity, under the assumptions of Lemma 2.1. We use some of their work in our proof.
Proof of Lemma 2.1. In [4], DeBlois and Kent show by a straightforward induction that the entries of φ_t(w) are polynomials in t, where the degree of the 2,2 entry is l, the degree of the 1,2 entry is at most l, and the other entries have degree at most l − 1. Now suppose that φ_t(w) is finite order. Then it is conjugate to a matrix of the form ( u v ; −v u ), where u = cos(θ) and v = sin(θ) for some rational angle θ. In particular, it follows from the de Moivre formula for sine and cosine that u and v are algebraic. Now suppose that the matrix conjugating φ_t(w) to ( u v ; −v u ) has entries a_ij. Then we have

φ_t(w) = ( u − (a_12 a_22 + a_11 a_21)v   (a_11^2 + a_12^2)v ; −(a_21^2 + a_22^2)v   u + (a_12 a_22 + a_11 a_21)v ).

Looking at the 2,2 entry we see that a_12 a_22 + a_11 a_21 must be a polynomial in t of degree l. But this means that the 1,1 entry is also a polynomial in t of degree l, contradicting DeBlois and Kent's calculation. This proves the lemma.
To complete our construction, choose t to be transcendental over the entry field of φ 0 (A * C B). We want to show that no power of an element representing a simple closed curve lies in the kernel of φ t . To this end, consider any word w in A * C B that has a simple closed curve as a representative. There are three cases to check. First, if w is a word in A alone, then φ t (w) is not finite order,
1. w = x^{±1} or w = y^{±1}
2. w = [x^{±1}, y^{±1}]
3. Up to replacing x with x^{-1}, y with y^{-1} and interchanging x and y, there is some n ∈ Z^+ such that w = x^{n_1} y x^{n_2} y ... x^{n_s} y where n_i ∈ {n, n + 1}.
We leave this as an exercise for the reader. This classification of words representing simple closed curves in Σ B also follows from a much more general theorem in [1]. By construction, no word of type 1, 2 or 3 is finite order provided that α^s β^k ≠ 1 for any integers s and k other than zero -indeed, we only need to check words of type 3, and these necessarily have trace α^s β^k + α^{-s} β^{-k} for some s, k ≠ 0. Note that in particular, under the condition that α^s β^k ≠ 1 for s, k ≠ 0, all type 3 words are hyperbolic. We will use this fact again later on.
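Since φ_B takes values in upper-triangular matrices, the diagonal of the image of a type 3 word is just the product of the diagonals, which is where the trace α^s β^k + α^{-s} β^{-k} comes from. A small numerical check (the particular word is an arbitrary example):

```python
import numpy as np

alpha, beta = 2.0, 3.0
X = np.array([[alpha, 0.0], [0.0, 1.0 / alpha]])
Y = np.array([[beta, 1.0], [0.0, 1.0 / beta]])

# type 3 word  w = x^2 y x^3 y   (n_1 = 2, n_2 = 3, two occurrences of y)
W = np.linalg.matrix_power(X, 2) @ Y @ np.linalg.matrix_power(X, 3) @ Y
N, s = 2 + 3, 2                       # total x-exponent and y-count
expected_trace = alpha**N * beta**s + alpha**(-N) * beta**(-s)

assert np.isclose(np.trace(W), expected_trace)
assert abs(np.trace(W)) > 2           # |trace| > 2: the image is hyperbolic
```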
For the remaining case where w is a word with letters in both A and B, we claim that it can be written in a form where Lemma 2.1 applies. To write it this way, use the following procedure: First take a simple representative γ for w and apply an isotopy so that each crossing of γ with c occurs in some small neighborhood of the basepoint p. This gives us a well defined shortest path along c to p from each crossing. After further isotopy, we may assume additionally that no segment of γ together with the shortest path along c from its endpoints to p bounds a disc, and that γ is transverse to c. All this can be done without introducing any self-crossings in γ. Now γ is of the form γ 1 δ 1 γ 2 δ 2 ...γ l δ l where γ i is a simple arc in Σ A and δ i a simple arc in Σ B . Close each γ i and δ i into a simple loop by connecting its endpoints to p using the shortest path along c and let a i ∈ A (respectively b i ∈ B) be the corresponding element of the fundamental group. See Figure 2. This gives us a word a 1 b 1 a 2 b 2 ...a l b l equivalent to w after cyclic reduction, and each a i is represented by a simple closed curve in Σ A and each b i by a simple closed curve in Σ B . The elimination of discs bounded between segments of γ and short segments of c ensures that each a i and b i is nontrivial.
We can also show that each a i either has a non-zero 2,1 entry or is represented by the curve c or its inverse. This is because φ A is Fuchsian, so the only elements fixing infinity -that is, with 2,1 entry equal to zero -are powers of c, and no powers of c other than c ±1 have a simple closed curve representative. Similarly, the classification of words representing simple closed curves in Σ B shows that each b i is either hyperbolic or represented by c or c −1 . We claim that we may now rewrite w to eliminate all appearances of c, keeping each a i with a non-zero 2,1 entry and each b i hyperbolic. After doing so, we will have w in a form where we can apply Lemma 2.1.
To rewrite w in the desired form, first note that all γ i such that a i is represented by c may be homotoped (simultaneously) into Σ B without introducing any self intersections of γ. Thus, we can replace each such δ i−1 γ i δ i with a simple loop δ i in Σ B alone, and rewrite w = a 1 b 1 ...a i−1 b i a i+1 ...a l b l . Reindex so that w = a 1 b 1 a 2 b 2 ...a k b k for k < l, and reindex the corresponding δ i and γ i as well. Now repeat the procedure on this new word with each b i : homotope all δ i such that b i is represented by c over to Σ A without introducing any self intersections of γ, and then replace each such γ i δ i γ i+1 with a simple loop γ i in Σ A alone. Then rewrite w so that, after reindexing, w = a 1 b 1 a 2 b 2 ...a m b m with m < k and each a i and b i is a simple closed curve. Repeat the process again with the a i of this new word. The procedure ends when either no a i or b i is represented by c, or when w is a word in A or B alone, represented by a simple loop in Σ A or Σ B . In the first case, Lemma 2.1 applies to show that φ t (w) is not finite order. In the second case, we have already shown that a word in A or B represented by a simple loop in Σ A or Σ B cannot be finite order.
It remains only to remark that the representation φ t is non-injective and that, by choosing appropriate parameters, we can produce uncountably many nonconjugate representations. Noninjectivity follows immediately since φ t (B) is solvable so the restriction of φ t to B is non-injective. Now for any fixed α and β (satisfying the requirement that α^s β^k ≠ 1 for all nonzero integers s, k), varying t among transcendentals over the entry field of φ 0 (A * C B) produces uncountably many non-conjugate representations that are all non-injective, but have no power of a simple closed curve in the kernel. This concludes the proof of Theorem 1.2, assuming that the surface was not the punctured torus.
The punctured torus case is now immediate: any representation of the form of φ B where α^s β^k ≠ 1 for any nonzero integers s and k is non-injective, and our work above shows that no element represented by a simple closed curve has finite order. Fixing α and varying β produces uncountably many non-conjugate representations.
Figure 1: The setup: decomposition of Σ and generators x and y for B

Figure 2: a 1 and b 1 in w, represented by γ i and δ i joined to p

since φ t (A) is Fuchsian and therefore injective. Secondly, if w is a word in B, then an elementary geometric argument shows that w can only be represented by a simple closed curve if it has one of the following forms:
J. Birman, C. Series, An algorithm for simple closed curves on surfaces. J. London Math. Soc. (2) 29 (1984) 331-342.
D. Calegari, Certifying incompressibility of non-injective surfaces with SCL. Preprint. arXiv:1112.1791v1
D. Cooper, J. F. Manning, Non-faithful representations of surface groups into SL(2, C) which kill no simple closed curve. Preprint. arXiv:1104.4492v1
J. DeBlois, R. Kent, Surface groups are frequently faithful. Duke Math. J. 131 no. 2 (2006) 351-362.
D. Gabai, The simple loop conjecture. J. Differential Geom. 21 no. 1 (1985) 143-149.
J. Hass, Minimal surfaces in manifolds with S^1 actions and the simple loop conjecture for Seifert fibered spaces. Proc. Amer. Math. Soc. 99 no. 2 (1987) 383-388.
M. Kapovich, Hyperbolic Manifolds and Discrete Groups. Birkhäuser, Boston, 2001.
R. Kirby, ed., Problems in low-dimensional topology. In Geometric Topology. Athens, GA, 1993.
L. Louder, Simple loop conjecture for limit groups. Preprint. arXiv:1106.1350v1
J. H. Rubenstein, S. Wang, π 1 injective surfaces in graph manifolds. Comment. Math. Helv. 73 no. 4 (1998) 499-515.
|
[] |
[
"S-DOD-CNN: DOUBLY INJECTING SPATIALLY-PRESERVED OBJECT INFORMATION FOR EVENT RECOGNITION",
"S-DOD-CNN: DOUBLY INJECTING SPATIALLY-PRESERVED OBJECT INFORMATION FOR EVENT RECOGNITION"
] |
[
"Hyungtae Lee \nBooz Allen Hamilton Inc\n\n",
"Sungmin Eum \nBooz Allen Hamilton Inc\n\n",
"Heesung Kwon ",
"\nArmy Research Laboratory\nUS\n\n"
] |
[
"Booz Allen Hamilton Inc\n",
"Booz Allen Hamilton Inc\n",
"Army Research Laboratory\nUS\n"
] |
[] |
We present a novel event recognition approach called Spatially-preserved Doubly-injected Object Detection CNN (S-DOD-CNN), which incorporates the spatially preserved object detection information in both a direct and an indirect way. Indirect injection is carried out by simply sharing the weights between the object detection modules and the event recognition module. Meanwhile, our novelty lies in the fact that we have preserved the spatial information for the direct injection. Once multiple regions-of-interest (RoIs) are acquired, their feature maps are computed and then projected onto a spatially-preserving combined feature map using one of the four RoI Projection approaches we present. In our architecture, combined feature maps are generated for object detection which are directly injected to the event recognition module. Our method provides the state-of-the-art accuracy for malicious event recognition.
|
10.1109/icassp40776.2020.9052953
|
[
"https://arxiv.org/pdf/1902.04051v2.pdf"
] | 60,440,551 |
1902.04051
|
5042b13bcb805b5d7b1bf7d82ae61ce5346ae643
|
S-DOD-CNN: DOUBLY INJECTING SPATIALLY-PRESERVED OBJECT INFORMATION FOR EVENT RECOGNITION
Hyungtae Lee
Booz Allen Hamilton Inc
Sungmin Eum
Booz Allen Hamilton Inc
Heesung Kwon
Army Research Laboratory
US
S-DOD-CNN: DOUBLY INJECTING SPATIALLY-PRESERVED OBJECT INFORMATION FOR EVENT RECOGNITION
Index Terms- IOD-CNN, DOD-CNN, malicious crowd dataset, malicious event classification, multi-task CNN
We present a novel event recognition approach called Spatially-preserved Doubly-injected Object Detection CNN (S-DOD-CNN), which incorporates the spatially preserved object detection information in both a direct and an indirect way. Indirect injection is carried out by simply sharing the weights between the object detection modules and the event recognition module. Meanwhile, our novelty lies in the fact that we have preserved the spatial information for the direct injection. Once multiple regions-of-interest (RoIs) are acquired, their feature maps are computed and then projected onto a spatially-preserving combined feature map using one of the four RoI Projection approaches we present. In our architecture, combined feature maps are generated for object detection which are directly injected to the event recognition module. Our method provides the state-of-the-art accuracy for malicious event recognition.
Fig. 1: S-DOD-CNN Framework. Red, blue, and green arrows indicate the computational flow responsible for event recognition (e), rigid object detection (r), and non-rigid object detection (n), respectively. For rigid and non-rigid object detection, a combined feature map is constructed by combining per-RoI feature maps while preserving the spatial locations of the RoIs within the original image.
DOD-CNN consists of three connected networks responsible for event recognition, rigid object detection, and non-rigid object detection. Three networks are co-trained while object detection information is indirectly passed onto event recognition via the shared portion of the architecture.
DOD-CNN achieves further performance improvement by directly passing intermediate output of the rigid and non-rigid object detection onto the event recognition module. More specifically, each of the two feature maps from rigid and non-rigid object detection is generated by pooling multiple per-RoI feature maps (i.e., feature maps for each region-of-interest) via batch pooling. The two feature maps are then directly injected into the event recognition module at the end of the last convolutional layer. Note that the batch pooling simply aggregates multiple feature maps along the batch direction without considering their spatial coordinates in the original image.
In this paper, we present an approach to generate a single combined feature map which safely preserves the original spatial location of the per-RoI feature maps provided by the object detection process. Per-RoI feature maps are first projected onto separate projected feature maps using a novel RoI Projection which are then aggregated into a single combined feature map. In the RoI projection, each per-RoI feature map is weighted by its object detection probability. Although our approach follows the spirit of DOD-CNN by incorporating the object detection information in two-ways (i.e., doubly injecting), the rigid and non-rigid object detection information is used in a different way by preserving the spatial context for each of the per-RoI feature map. Therefore, we call our new architecture Spatially-Preserved and Doubly-injected Object Detection CNN (S-DOD-CNN) which is depicted in Figure 1.
When projecting the per-RoI feature maps into one single projected feature map, we adopt two interpolation methods which are MAX interpolation and Linear interpolation. In MAX interpolation, a maximum value among the input points is projected into the output point. In Linear interpolation, a linearly interpolated value of the four nearest input points is projected into the output. These interpolation methods can be applied with either class-specific or class-agnostic RoI selection. While class-specific selection carries out the RoI projection for each set of object class, the class-agnostic selection considers only a small subset of RoIs among all the RoIs disregarding the object classes. Therefore, the RoI projection can be conducted in four different combinations.
In order to prove the effectiveness of using spatially-preserved object detection feature maps for event recognition, we conducted several experiments on malicious event classification [12]. We have validated that all four combinations of the novel RoI projection within S-DOD-CNN provide higher accuracy than all the baselines.

DOD-CNN. DOD-CNN [14] consists of five shared convolutional layers (C 1 , · · · , C 5 ), one RoI pooling layer, and three separate modules, each responsible for event recognition, rigid object detection, and non-rigid object detection, respectively. Each module consists of two convolutional layers (C 6 , C 7 ), one average pooling layer (AV G), and one fully connected layer (F C), where the output dimension of the last layer is set to match the number of events or objects.
DOD-CNN takes one image and multiple RoIs (approximately 2000 for rigid objects and 5 for non-rigid objects per image) as input. Selective search [15] and multi-scale sliding windows [16,17,18,19] are used to generate the RoIs for rigid and non-rigid objects, respectively. For each RoI, a per-RoI feature map is computed via RoI pooling and then fed into its corresponding task-specific module.
For rigid or non-rigid object detection, the output of the last convolutional layer (denoted as per-RoI C 7 feature map) is pooled into a single map along the batch direction, which is referred to as a batch pooling. The two single feature maps are then concatenated with the output of the last convolution layer of the event recognition. The concatenated map is fed into the remaining event recognition layers which are the average pooling and fully connected layer.
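Batch pooling collapses the stack of per-RoI feature maps along the batch axis. A minimal numpy sketch (max is assumed as the pooling operator here; the exact operator is an implementation choice, not stated in this section):

```python
import numpy as np

def batch_pool(per_roi_maps):
    """Pool a stack of per-RoI feature maps into one map along the batch
    axis: (num_rois, C, H, W) -> (C, H, W).  Position (i, j) in different
    per-RoI maps corresponds to different image locations, so this step
    discards the RoIs' spatial coordinates in the original image."""
    return per_roi_maps.max(axis=0)
```

Usage: pooling five C7-style maps of shape (256, 7, 7) yields one (256, 7, 7) map that can be concatenated with the event recognition feature map.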
Batch pooling does not preserve the spatial information of the feature maps, since these maps are aligned and pooled without consideration of their spatial coordinates in the original input image. For instance, consider selecting feature points at the same location from two different feature maps which are aligned for batch pooling. These points do not necessarily correspond to the same location in the input image, as each feature map is tied to a different RoI.

Fig. 2: The combined feature map is max-pooled with multiple projected feature maps that are projected from the original feature maps (2×2 bins in this example) w.r.t. their original spatial coordinates in the image.
S-DOD-CNN. We introduce a novel method that aggregates multiple feature maps which come from different regions in the input image while preserving the spatial information. The spatial information of each per-RoI C7 feature map is preserved by projecting the feature map onto a location on a projected feature map which corresponds to its original spatial location within the input image. Figure 2 illustrates how per-RoI C7 feature maps are processed through RoI Projection (RoIProj) to generate the corresponding projected feature maps. Note that before the per-RoI C7 feature maps are fed into RoIProj, they are multiplied by their detection probabilities to incorporate the reliability of each detection result. The projected feature maps are then max-pooled to build a combined feature map. In our experiment, the five per-RoI C7 feature maps with the highest probability values are chosen to build the combined feature map.
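A minimal NumPy sketch of this build step, under simplifying assumptions: single-channel maps, a nearest-neighbor stand-in for RoIProj (`project` is a hypothetical helper, not the paper's interpolation), probability weighting, and element-wise max-pooling of the top-k projected maps.

```python
import numpy as np

def project(feat, roi, out_shape):
    """Nearest-neighbor stand-in for RoIProj: place a resized copy of a
    per-RoI map at its RoI location (x0, y0, x1, y1) on a zero map."""
    x0, y0, x1, y1 = roi
    h, w = y1 - y0, x1 - x0
    ih, iw = feat.shape
    rows = np.arange(h) * ih // h
    cols = np.arange(w) * iw // w
    out = np.zeros(out_shape)
    out[y0:y1, x0:x1] = feat[np.ix_(rows, cols)]
    return out

def build_combined_map(per_roi_maps, rois, probs, out_shape, top_k=5):
    """Weight each per-RoI map by its detection probability, project the
    top_k highest-probability maps, and max-pool them element-wise."""
    order = np.argsort(probs)[::-1][:top_k]
    projected = [project(per_roi_maps[i] * probs[i], rois[i], out_shape)
                 for i in order]
    return np.maximum.reduce(projected)

# Toy example: two single-channel 2x2 per-RoI maps on a 4x4 output grid.
per_roi_maps = [np.ones((2, 2)), 2 * np.ones((2, 2))]
rois = [(0, 0, 2, 2), (1, 1, 3, 3)]
probs = [0.9, 0.5]
combined = build_combined_map(per_roi_maps, rois, probs, (4, 4), top_k=2)
```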
Our network generates two separate combined feature maps, one for rigid and another for non-rigid object detection. These two combined feature maps are concatenated with the event recognition feature map as in Figure 1. The two combined feature maps share a same-sized and aligned receptive field with the event recognition feature map, and thus they are 'spatially-preserved'. The event recognition feature map is the output of the C5 layer right before RoI pooling. The event recognition module takes this concatenated map as input to compute the event recognition probability. As we are constructing our network based on DOD-CNN, but with 'spatially-preserved' object detection information for event recognition, we call it Spatially-preserved DOD-CNN (S-DOD-CNN).
RoI Projection. When projecting the per-RoI C7 feature maps into one projected feature map (denoted as RoIProj in Figure 2), we adopt one of two interpolation methods: MAX interpolation or Linear interpolation. Examples of the two interpolations are shown in Figure 3. When multiple points on an input map are projected onto a single point on an output map, the point is filled with the maximum (MAX) or a linearly interpolated value of the four nearest input points (Linear).
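The two interpolation schemes can be sketched as follows; `resize_max` and `resize_linear` are illustrative single-channel implementations (the zero fill of untouched bins in `resize_max` is an assumption for the upscaling case, which the description leaves open).

```python
import numpy as np

def resize_max(feat, out_h, out_w):
    """MAX interpolation: each input point falls into an output bin;
    the bin keeps the maximum value among the input points it receives."""
    in_h, in_w = feat.shape
    out = np.full((out_h, out_w), -np.inf)
    for i in range(in_h):
        for j in range(in_w):
            oi = min(i * out_h // in_h, out_h - 1)
            oj = min(j * out_w // in_w, out_w - 1)
            out[oi, oj] = max(out[oi, oj], feat[i, j])
    out[out == -np.inf] = 0.0  # bins with no input point (assumption)
    return out

def resize_linear(feat, out_h, out_w):
    """Linear interpolation: each output point takes a bilinear blend of
    its four nearest input points."""
    in_h, in_w = feat.shape
    out = np.zeros((out_h, out_w))
    for oi in range(out_h):
        for oj in range(out_w):
            y = oi * (in_h - 1) / max(out_h - 1, 1)
            x = oj * (in_w - 1) / max(out_w - 1, 1)
            y0, x0 = int(np.floor(y)), int(np.floor(x))
            y1, x1 = min(y0 + 1, in_h - 1), min(x0 + 1, in_w - 1)
            dy, dx = y - y0, x - x0
            out[oi, oj] = (feat[y0, x0] * (1 - dy) * (1 - dx)
                           + feat[y0, x1] * (1 - dy) * dx
                           + feat[y1, x0] * dy * (1 - dx)
                           + feat[y1, x1] * dy * dx)
    return out

feat = np.arange(16, dtype=float).reshape(4, 4)
coarse_max = resize_max(feat, 2, 2)     # [[5, 7], [13, 15]]
coarse_lin = resize_linear(feat, 2, 2)  # [[0, 3], [12, 15]]
```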
The RoI projection can be performed in two different ways, which differ in how the subset of RoIs actually used for projection is selected from the overall set of RoIs. Both selection methods utilize the N probability scores generated for each RoI after AVG & FC (see Figure 2), where N is the number of classes. For class-specific selection, the 5 RoIs with the highest probability scores are chosen for each class. For class-agnostic selection, the 5 RoIs with the highest probability scores are chosen from all the RoIs, without regard to which classes they come from. Therefore, the RoI projection is executed either N times or just once, depending on which selection method is chosen. In addition, if a per-RoI C7 feature map has k channels, the channel dimension of the combined map under class-specific selection becomes k × N, while it remains k under class-agnostic selection. Overall, the RoI projection can be performed in one of four combinations, as there are two interpolation methods (MAX/Linear) and two RoI selection methods (class-specific/agnostic).
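A small sketch of the two selection strategies, assuming a `(num_rois, N)` array of per-RoI class probability scores (variable names are hypothetical).

```python
import numpy as np

def select_rois(probs, k=5, class_specific=True):
    """probs: (num_rois, N) class probability scores per RoI (after AVG & FC).
    Class-specific: top-k RoIs per class (projection then runs N times).
    Class-agnostic: top-k RoIs overall by best class score (runs once)."""
    if class_specific:
        return {c: np.argsort(probs[:, c])[::-1][:k]
                for c in range(probs.shape[1])}
    best = probs.max(axis=1)
    return np.argsort(best)[::-1][:k]

# Toy scores for 3 RoIs and N = 2 classes.
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8],
                  [0.6, 0.5]])
```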
Training
S-DOD-CNN is trained using a mini-batch stochastic gradient descent (SGD) optimization approach. The event recognition and rigid object detection modules are optimized by minimizing a softmax loss, while a cross-entropy loss is used for non-rigid object detection optimization. Each batch contains two images: one malicious image and one benign image. For event recognition and non-rigid object detection, 1 and 5 RoIs are generated per image, respectively. For rigid object detection, a batch takes 64 RoIs randomly selected from the approximately 2000 RoIs generated by selective search. Accordingly, we need to prepare a large number of batches to cover the entire RoI set when training rigid object detection. A batch (which contains 2 images) thus consists of 2, 128, and 10 RoIs for event recognition, rigid object detection, and non-rigid object detection, respectively.
In preparing the positive and negative samples for training, we have used 0.5 and 0.1 as the rigid and non-rigid object detection thresholds for the intersection-over-union (IOU) metric, respectively. Any RoI whose IOU with respect to the ground truth bounding box is larger than the threshold is treated as a positive example. RoIs whose IOU is lower than 0.1 are treated as negative examples.
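The IoU-based labeling rule can be sketched as follows (treating RoIs whose IoU falls between 0.1 and the positive threshold as ignored is our assumption; the text leaves this case unstated).

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(ix1 - ix0, 0) * max(iy1 - iy0, 0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def label_roi(roi, gt, pos_thresh):
    """pos_thresh is 0.5 for rigid and 0.1 for non-rigid object detection.
    Returns +1 (positive), -1 (negative, IoU < 0.1), or 0 (ignored;
    the in-between case is our assumption)."""
    v = iou(roi, gt)
    if v > pos_thresh:
        return 1
    if v < 0.1:
        return -1
    return 0
```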
The weights in C1, ..., C5 are initially inherited from the AlexNet [20] pre-trained on the large-scale Places dataset [21], and the remaining layers (C6, C7, and the FC layers for all three modules) are initialized from a Gaussian distribution with zero mean and 0.01 standard deviation.
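The Gaussian initialization for the non-inherited layers can be sketched as follows (the weight shape is hypothetical).

```python
import numpy as np

def gaussian_init(shape, mean=0.0, std=0.01, seed=0):
    """Draw weights from N(mean, std^2); used here for the layers that are
    not inherited from the pre-trained AlexNet."""
    return mean + std * np.random.default_rng(seed).standard_normal(shape)

# Hypothetical 3x3 convolution kernel with 64 input and 64 output channels.
w = gaussian_init((64, 64, 3, 3))
```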
Two-stage Cascaded Optimization. To allow more batches for training rigid object detection, we use a two-stage cascaded optimization strategy. In the first stage, only the layers used to perform rigid object detection are trained. Then, in the second stage, all three tasks are jointly optimized in an end-to-end fashion. Figure 4 shows the second stage of the training process. For each training iteration in the second stage, two processes ((a) and (b) in Figure 4) are executed in order. In process (a), all the layers of the two object detection modules are optimized with a batch containing 128 RoIs of rigid objects and 10 RoIs of non-rigid objects. After process (a) is done, the full set of RoIs (i.e., approximately 4000 RoIs for rigid objects and 10 RoIs for non-rigid objects) is fed into the object detection modules. The resulting combined feature maps are injected into the event recognition module for optimization. We set a learning rate of 0.001, 50k iterations, and a step size of 30k for the first stage, and a learning rate of 0.0001, 20k iterations, and a step size of 12k for the second stage.

Table 1: Event recognition average precision (AP). All networks use the same backbone consisting of five per-image convolutional layers and three sets of two convolutional layers and one fully connected layer corresponding to the three tasks.
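The two-stage schedule can be sketched as follows; the ×0.1 step decay is an assumption (only the base learning rates, iteration counts, and step sizes are specified), and the two step functions are placeholders for the actual optimization steps.

```python
def lr_schedule(base_lr, step_size, iteration):
    """Step decay: multiply the base learning rate by 0.1 every step_size
    iterations (the 0.1 factor is an assumption)."""
    return base_lr * (0.1 ** (iteration // step_size))

def train(step_detection, step_all_tasks):
    # Stage 1: optimize only the rigid object detection layers.
    for it in range(50_000):
        step_detection(lr_schedule(0.001, 30_000, it))
    # Stage 2: jointly optimize all three tasks end-to-end; each call
    # stands for one iteration running process (a) then process (b).
    for it in range(20_000):
        step_all_tasks(lr_schedule(0.0001, 12_000, it))
```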
EXPERIMENTS
Dataset
The Malicious Crowd Dataset [12, 22] is selected as it provides the appropriate components to evaluate the effects of using object information for event recognition. It contains 1133 images and is equally divided into malicious and benign classes. Half of the dataset is used for training and the rest for testing. In addition to the event class label, bounding box annotations of three rigid objects (police, helmet, car) and two non-rigid objects (fire, smoke) are provided. [12] provides details on how these objects were selected.
Performance Evaluation
To demonstrate the effectiveness of our approach, we compared the event recognition accuracy of S-DOD-CNN with two baselines: DOD-CNN without direct injection and DOD-CNN incorporating both direct and indirect injection. Accuracy is measured with average precision (AP), as shown in Table 1. S-DOD-CNN, with any of the four RoI projections, provides at least 1.1% higher accuracy than both baselines. This verifies the effectiveness of using object detection information spatially preserved via RoI projection. RoI projection using linear interpolation and class-specific RoI selection shows the highest accuracy among all the methods, but the differences are marginal.
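For reference, average precision for a binary event label can be computed with the standard rank-based formulation (a generic sketch, not necessarily the exact evaluation script used here).

```python
def average_precision(scores, labels):
    """AP over a ranked list: mean of the precision measured at the rank of
    each positive example (a common rank-based AP formulation)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i]:
            tp += 1
            ap += tp / rank  # precision at this positive's rank
    return ap / sum(labels)
```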
In Table 2, we also analyze how each task performs when it is individually optimized (Single-task) or co-optimized with the others. For the No Direct Injection and DOD-CNN cases, non-rigid object detection performs better when optimized simultaneously with the other tasks. However, in S-DOD-CNN, the performance of the two sub-tasks (rigid and non-rigid object detection) is degraded. This indicates that the two tasks are sacrificed to solely improve event recognition performance.
Ablation Study: Location of Building and Injecting Combined Feature Map
Applying a convolutional layer after the concatenation may not be effective if the combined feature maps (coming from object detection) are not aligned properly with the event recognition feature map. One advantage of constructing combined feature maps with our approach is that each map can be injected at any position in the event recognition module. Table 3 shows how performance varies with the location at which the combined feature map is built and injected. DOD-CNN, which loses the RoIs' spatial information while building a feature map, shows performance degradation when the injection location is placed before any convolutional layer (i.e., C6 in Table 3). In contrast, S-DOD-CNN does not lose any performance regardless of the injection position. The performance of S-DOD-CNN depends greatly on the location at which the combined feature map is built; the best accuracy is achieved when it is constructed after C7. Letting the input image go through a larger number of convolutional layers before building the combined feature maps may provide a richer representation.
CONCLUSION
We have devised an event recognition approach referred to as S-DOD-CNN, in which object detection is exploited while preserving spatial information. Multiple per-RoI feature maps within an object detection module are projected onto a combined feature map using one of the newly presented RoI Projections, preserving the spatial location of each RoI with respect to the original image. These maps are then injected into the event recognition module. Our approach provides state-of-the-art accuracy for malicious event recognition.
Fig. 2: Overall process of building a combined feature map.

Fig. 3: RoI projection.

Fig. 4: The second stage of the training process. (Legend: forward-backward for rigid object detection; forward-backward for non-rigid object detection; forward-backward for per-image convolution; forward only for rigid object detection; forward only for non-rigid object detection; shared layers.)
(Architecture diagram labels: per-image convolution C1, ..., C5; RoI pooling; rigid object batch sampler (|Br| = 128); selective search, |Rr| ≈ 2000 RoIs per image, for rigid objects; multi-scale sliding window, |Rn| = 5 RoIs per image, for non-rigid objects; task-specific layers C6, C7, AVG, and FC for event recognition (e), rigid object detection (r), and non-rigid object detection (n); map build and concatenation of combined feature maps; rigid object softmax probabilities; non-rigid object sigmoid probabilities; event, rigid object, and non-rigid object classification losses.)
Table 2: Single-task versus multi-task performance. Tasks: E: event recognition, R: rigid object detection, N: non-rigid object detection. For S-DOD-CNN, RoI projection with linear interpolation and class-specific selection was chosen.

Task | Single-task | No Direct Injection | DOD-CNN | S-DOD-CNN
E    | 89.9        | 90.7                | 94.6    | 95.8
R    | 8.1         | 7.8                 | 7.8     | 7.7
N    | 30.4        | 37.2                | 37.2    | 22.5

Table 3: Performance comparison w.r.t. the location of building and injecting the combined feature maps. RoI projection with linear interpolation and class-specific selection is used for S-DOD-CNN.

Method       | Build   | Inject: C5 | C6   | C7
DOD-CNN [14] | C7      | ·          | 91.4 | 94.6
S-DOD-CNN    | RoIPool | 90.5       | 90.6 | 90.5
S-DOD-CNN    | C6      | 94.8       | 94.8 | 94.7
S-DOD-CNN    | C7      | 95.8       | 95.7 | 95.5
REFERENCES

[1] Limin Wang, Zhe Wang, Wenbin Du, and Yu Qiao, "Object-scene convolutional neural networks for event recognition in images," in CVPRW, 2015.
[2] Limin Wang, Zhe Wang, Sheng Guo, and Yu Qiao, "Better exploiting OS-CNNs for better event recognition in image," in ICCVW, 2015.
[3] Li-Jia Li and Li Fei-Fei, "What, where and who? Classifying events by scene and object recognition," in ICCV, 2007.
[4] Tim Althoff, Hyun Oh Song, and Trevor Darrell, "Detection bank: An object detection based video representation for multimedia event recognition," in ACM MM, 2012.
[5] Ryan M. Robinson, Hyungtae Lee, Michael McCourt, Amar R. Marathe, Heesung Kwon, Chau Ton, and William D. Nothwang, "Human-autonomy sensor fusion for rapid object detection," in IROS, 2015.
[6] Mihir Jain, Jan C. van Gemert, and Cees G. M. Snoek, "What do 15,000 object categories tell us about classifying and localizing actions?," in CVPR, 2015.
[7] Hyungtae Lee, Heesung Kwon, Ryan M. Robinson, Daniel Donavanik, William D. Nothwang, and Amar R. Marathe, "Task-conversions for integrating human and machine perception in a unified task," in IROS, 2016.
[8] Hyungtae Lee, Heesung Kwon, Ryan M. Robinson, William D. Nothwang, and Amar R. Marathe, "An efficient fusion approach for combining human and machine decisions," in SPIE Defense+Commercial Sensing (DCS), 2016.
[9] Hyungtae Lee, Heesung Kwon, Ryan M. Robinson, William D. Nothwang, and Amar R. Marathe, "Dynamic belief fusion for object detection," in WACV, 2016.
[10] Sungmin Eum*, Hyungtae Lee*, Heesung Kwon, and David Doermann, "IOD-CNN: Integrating object detection networks for event recognition," in ICIP, 2017. (* indicates an equal contribution.)
[11] Yilun Cao*, Hyungtae Lee*, and Heesung Kwon, "Enhanced object detection via fusion with prior beliefs from image classification," in ICIP, 2017. (* indicates an equal contribution.)
[12] Hyungtae Lee*, Sungmin Eum*, Joel Levis*, Heesung Kwon, James Michaelis, and Michael Kolodny, "Exploitation of semantic keywords for malicious event classification," in ICASSP, 2018. (* indicates an equal contribution.)
[13] Hyungtae Lee and Heesung Kwon, "DBF: Dynamic belief fusion for combining multiple object detectors," IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2019.
[14] Hyungtae Lee, Sungmin Eum, and Heesung Kwon, "DOD-CNN: Doubly-injecting object information for event recognition," in ICASSP, 2019.
[15] J. R. R. Uijlings, K. E. A. van de Sande, T. Gevers, and A. W. M. Smeulders, "Selective search for object recognition," International Journal of Computer Vision (IJCV), vol. 104, no. 2, pp. 154-171, February 2013.
[16] Paul Viola and Michael Jones, "Rapid object detection using a boosted cascade of simple features," in CVPR, 2001.
[17] Navneet Dalal and Bill Triggs, "Histograms of oriented gradients for human detection," in CVPR, 2005.
[18] Pedro Felzenszwalb, Ross Girshick, David McAllester, and Deva Ramanan, "Object detection with discriminatively trained part based models," IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 32, no. 9, pp. 1627-1645, September 2010.
[19] Hyungtae Lee, Vlad I. Morariu, and Larry S. Davis, "Qualitative pose estimation by discriminative deformable part models," in ACCV, 2012.
[20] Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, "ImageNet classification with deep convolutional neural networks," in NIPS, 2012.
[21] Bolei Zhou, Agata Lapedriza, Jianxiong Xiao, Antonio Torralba, and Aude Oliva, "Learning deep features for scene recognition using Places database," in NIPS, 2014.
[22] Sungmin Eum, Hyungtae Lee, and Heesung Kwon, "Going deeper with CNN in malicious crowd event classification," in SPIE Defense+Commercial Sensing (DCS), 2018.
MulViMotion: Shape-aware 3D Myocardial Motion Tracking from Multi-View Cardiac MRI

Qingjie Meng, Chen Qin, Wenjia Bai, Tianrui Liu, Antonio De Marvao, Declan P. O'Regan, and Daniel Rueckert, Fellow, IEEE

Accepted by IEEE Transactions on Medical Imaging. DOI: 10.1109/tmi.2022.3154599. arXiv: 2208.00034 (https://export.arxiv.org/pdf/2208.00034v1.pdf). Corpus ID: 247108049.

Abstract: Recovering the 3D motion of the heart from cine cardiac magnetic resonance (CMR) imaging enables the assessment of regional myocardial function and is important for understanding and analyzing cardiovascular disease. However, 3D cardiac motion estimation is challenging because the acquired cine CMR images are usually 2D slices which limit the accurate estimation of through-plane motion. To address this problem, we propose a novel multi-view motion estimation network (MulViMotion), which integrates 2D cine CMR images acquired in short-axis and long-axis planes to learn a consistent 3D motion field of the heart. In the proposed method, a hybrid 2D/3D network is built to generate dense 3D motion fields by learning fused representations from multi-view images. To ensure that the motion estimation is consistent in 3D, a shape regularization module is introduced during training, where shape information from multi-view images is exploited to provide weak supervision to 3D motion estimation. We extensively evaluate the proposed method on 2D cine CMR images from 580 subjects of the UK Biobank study for 3D motion tracking of the left ventricular myocardium. Experimental results show that the proposed method quantitatively and qualitatively outperforms competing methods.

Index Terms: multi-view, 3D motion tracking, shape regularization, cine CMR, deep neural networks.
Cine cardiac magnetic resonance (CMR) imaging supports motion analysis by acquiring sequences of 2D images in different views. Each image sequence covers the complete cardiac cycle, containing the end-diastolic (ED) and end-systolic (ES) phases [24]. Two types of anatomical views are acquired: (1) the short-axis (SAX) view and (2) long-axis (LAX) views such as the 2-chamber (2CH) and 4-chamber (4CH) views (Fig. 1). The SAX sequences typically contain a stack of 2D slices sampled from base to apex in each frame (e.g., 9-12 slices). The LAX sequences contain a single 2D slice, approximately orthogonal to the SAX plane, in each frame. The acquired images have high temporal resolution, high signal-to-noise ratio, and high contrast between the blood pool and the myocardium. With these properties, cine CMR imaging has been utilized in recent works for 2D myocardial motion estimation, e.g., [5, 39, 40, 56, 58].
2D myocardial motion estimation only considers motion in either the SAX plane or the LAX plane and does not provide complete 3D motion information for the heart, which may lead to inaccurate assessment of cardiac function. Therefore, 3D motion estimation that recovers myocardial deformation in the X, Y, and Z directions is important. However, estimating 3D motion fields from cine CMR images remains challenging because (1) SAX stacks have much lower through-plane resolution (typically 8 mm slice thickness) than in-plane resolution (typically 1.8 × 1.8 mm), (2) image quality can be negatively affected by slice misalignment in SAX stacks, as only one or two slices are acquired during a single breath-hold, and (3) high-resolution 2CH and 4CH view images are too spatially sparse to estimate 3D motion fields on their own.
In this work, we take full advantage of both SAX and LAX (2CH and 4CH) view images, and propose a multi-view motion estimation network for 3D myocardial motion tracking from cine CMR images. In the proposed method, a hybrid 2D/3D network is developed for 3D motion estimation. This hybrid network learns combined representations from multi-view images to estimate a 3D motion field from the ED frame to any t-th frame in the cardiac cycle. To guarantee accurate motion estimation, especially along the longitudinal direction (i.e., the Z direction), a shape regularization module is introduced to leverage anatomical shape information during training. This module encourages the estimated 3D motion field to correctly transform the 3D shape of the myocardial wall from the ED frame to the t-th frame. Here, anatomical shape is represented by edge maps that show the contour of the cardiac anatomy. During inference, the hybrid network generates a sequence of 3D motion fields between paired frames (the ED and t-th frames), which represents the myocardial motion across the cardiac cycle. The main contributions of this paper are summarized as follows:
• We develop a solution to a challenging cardiac motion tracking problem: learning 3D motion fields from a set of 2D SAX and LAX cine CMR images. We propose an end-to-end trainable multi-view motion estimation network (MulViMotion) for 3D myocardial motion tracking.
• The proposed method enables accurate 3D motion tracking by combining multi-view images using both latent information and shape information: (1) the representations of multi-view images are combined in the latent space for the generation of 3D motion fields; (2) the complementary shape information from multi-view images is exploited in a shape regularization module to provide an explicit constraint on the estimated 3D motion fields.
• The proposed method is trained in a weakly supervised manner, which only requires sparsely annotated data in different 2D SAX and LAX views and requires no ground truth 3D motion fields. The 2D edge maps from the corresponding SAX and LAX planes provide weak supervision to the estimated 3D edge maps for guiding 3D motion estimation in the shape regularization module.
• We perform extensive evaluations of the proposed method on 580 subjects from the UK Biobank study. We further present qualitative analysis on CMR images with severe slice misalignment, and we explore the applicability of our method for wall thickening measurement.
II. RELATED WORK

1) Conventional motion estimation methods: A common method for quantifying cardiac motion is to track non-invasive markers. CMR myocardial tagging provides tissue markers (stripe-like darker tags) in the myocardium which deform with myocardial motion [57]. By tracking the deformation of the markers, dense displacement fields can be retrieved in the imaging plane. The harmonic phase (HARP) technique is the most representative approach for motion tracking in tagged images [14, 30, 33]. Several other methods have been proposed to compute dense displacement fields from dynamic myocardial contours or surfaces using geometrical and biomechanical modeling [34, 53]. For example, Papademetris et al. [34] proposed a Bayesian estimation framework for myocardial motion tracking from 3D echocardiography. In addition, image registration has been applied to cardiac motion estimation in previous works. Craene et al. [19] introduced continuous spatio-temporal B-spline kernels for computing a 4D velocity field, which enforces temporal consistency in motion recovery. Rueckert et al. [43] proposed a free-form deformation (FFD) method for general non-rigid image registration, which has been used for cardiac motion estimation in many recent works, e.g., [5, 7, 12, 37, 38, 44, 45, 50]. Thirion [49] proposed the demons algorithm, which utilizes diffusion models for image matching and, further, cardiac motion tracking. Based on this work, Vercauteren et al. [52] adapted the demons algorithm to provide non-parametric diffeomorphic transformations, and McLeod et al. [32] introduced an elastic-like regularizer to improve the incompressibility of the recovered deformations.
2) Deep learning-based motion estimation methods: In recent years, deep convolutional neural networks (CNNs) have been successfully applied to medical image analysis, which has inspired the exploration of deep learning-based cardiac motion estimation approaches. Qin et al. [39] proposed a multi-task framework for joint estimation of segmentation and motion. This multi-task framework contains a shared feature encoder which enables a weakly-supervised segmentation. Zheng et al. [58] proposed a method for cardiac pathology classification based on cardiac motion. Their method utilizes a modified U-Net [42] to generate flow maps between ED frame and any other frame. For cardiac motion tracking in multiple datasets, Yu et al. [56] considered the distribution mismatch problem and proposed a meta-learning-based online model adaption framework. Different from these methods which estimate motion in cine CMR, Ye et al. [55] proposed a deep learning model for tagged image motion tracking. In their work, the motion field between any two consecutive frames is first computed, followed by estimating the Lagrangian motion field between ED frame and any other frame. Most of these existing deep learning-based methods aim at 2D motion tracking by only using SAX stacks. In contrast, our method focuses on 3D motion tracking by fully combining multiple anatomical views (i.e., SAX, 2CH and 4CH), which is able to estimate both in-plane and through-plane myocardial motion.
3) Multi-view based cardiac analysis: Different anatomical scan views usually contain complementary information, and the combination of multiple views can be more descriptive than a single view.

Fig. 2: An overview of MulViMotion. We use a hybrid 2D/3D network to estimate a 3D motion field Φ_t from the input multi-view images. In the hybrid network, FeatureNet learns a multi-view motion feature F_M and a multi-view shape feature F_S from the input, followed by MotionNet, which generates Φ_t based on F_M. A shape regularization module leverages anatomical shape information for 3D motion estimation. It encourages the predicted 3D edge maps of the myocardial wall Ê_0/Ê_t (predicted from F_S using ShapeNet) and the warped 3D edge map Ê_{0→t} (warped from the ED frame to the t-th frame by Φ_t) to be consistent with the ground truth 2D edge maps defined on the multi-view images. Shape regularization is only used during training.

Chen et al. [13] utilized both SAX and LAX views for 2D cardiac segmentation, where the features of multi-view images are combined in the bottleneck of a 2D U-Net. Puyol-Antón et al. [37] introduced a framework that separately uses multi-view images for myocardial strain analysis: the SAX view is used for radial and circumferential strain estimation, while the LAX view is used for longitudinal strain estimation. Abdelkhalek et al. [1] proposed a 3D myocardial strain estimation framework, where point clouds from the SAX and LAX views are aligned for surface reconstruction. Attar et al. [3] proposed a framework for 3D cardiac shape prediction, in which the features of multi-view images are concatenated in CNNs to predict the 3D shape parameters.
In this work, we focus on using multi-view images for 3D motion estimation. Compared to most of these existing works, which only combine the features of multi-view images in the latent space (e.g., [3,13]), our method additionally combines complementary shape information from multiple views to predict anatomically plausible 3D edge maps of the myocardial wall at different time frames, which provides guidance for 3D motion estimation.
III. METHOD
Our goal is to estimate 3D motion fields of the LV myocardium from multi-view 2D cine CMR images. We formulate our task as follows: let I SA = {I sa t ∈ R H×W×D | 0 ≤ t ≤ T − 1} be a SAX sequence which contains stacks of 2D images (D slices), and let I LA = {I 2ch t ∈ R H×W , I 4ch t ∈ R H×W | 0 ≤ t ≤ T − 1} be LAX sequences which contain 2D images in the 2CH and 4CH views. H and W are the height and width of each image and T is the number of frames. We want to train a network to estimate a 3D motion field Φ t ∈ R H×W×D×3 by using the multi-view images of the ED frame ({I sa 0 , I 2ch 0 , I 4ch 0 }) and of any t-th frame ({I sa t , I 2ch t , I 4ch t }). Φ t describes the motion of the LV myocardium from the ED frame to the t-th frame. For each voxel in Φ t , we estimate its displacement in the X, Y and Z directions.
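To make this formulation concrete, the following toy arrays illustrate the tensor layout (the array names and the NumPy channel ordering are our own; the sizes match those used in the experiments):

```python
import numpy as np

# Hypothetical sizes matching the experiments: H = W = 128, D = 64 SAX slices, T = 50 frames.
H, W, D, T = 128, 128, 64, 50

I_sa = np.zeros((T, H, W, D))   # SAX sequence: one stack of D slices per frame
I_2ch = np.zeros((T, H, W))     # 2CH LAX sequence: a single 2D image per frame
I_4ch = np.zeros((T, H, W))     # 4CH LAX sequence
phi_t = np.zeros((H, W, D, 3))  # 3D motion field: per-voxel (dx, dy, dz) displacement

# The estimated displacement of the voxel at (x, y, z) from the ED frame to frame t:
dx, dy, dz = phi_t[10, 20, 30]
```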
To solve this task, we propose MulViMotion, which estimates 3D motion fields from multi-view images with shape regularization. The schematic architecture of our method is shown in Fig. 2. A hybrid 2D/3D network that contains FeatureNet (2D CNNs) and MotionNet (3D CNNs) is used to predict Φ t from the input multi-view images. FeatureNet learns multi-view multi-scale features and is used to extract multi-view motion feature F M and multi-view shape feature F S from the input. MotionNet generates Φ t based on F M . A shape regularization module is used to leverage anatomical shape information for 3D motion estimation during training. In this module, 3D edge maps of the myocardial wall are predicted from F S using ShapeNet and warped from the ED frame to the t-th frame by Φ t . The sparse ground truth 2D edge maps derived from the multi-view images provide weak supervision to the predicted and warped 3D edge maps, and thus encourage an accurate estimation of Φ t , especially in the Z direction. Here, a slicing step is used to extract corresponding multi-view planes from the 3D edge maps in order to compare 3D edge maps with 2D ground truth. During inference, a 3D motion field is directly generated from the input multi-view images by the hybrid network, without using shape regularization.
Fig. 3: The architecture of FeatureNet: (a) multi-scale feature fusion with shared 2D encoders, (b) the Siamese network and feature-fusion layers within each encoder, and (c) multi-view concatenation.
A. 3D motion estimation 1) Multi-view multi-scale feature extraction (FeatureNet):
The first step of 3D motion estimation is to extract internal representations from the input 2D multi-view images {I sa j , I 2ch j , I 4ch j |j = {0, t}}. We build FeatureNet to simultaneously learn motion and shape feature from the input because the motion and shape of the myocardial wall are closely related and can provide complementary information to each other [15,39,48]. FeatureNet consists of (1) multi-scale feature fusion and (2) multi-view concatenation (see Fig. 3).
In the multi-scale feature fusion ( Fig. 3 (a)), the input multiview images are unified to D-channel 2D feature maps by applying 2D convolution on 2CH and 4CH view images. Then three 2D encoders {f ψi |i = {sa, 2ch, 4ch}} are built to extract motion and shape features from each anatomical view,
{F i M , F i S } = f ψ i (I i 0 , I i t ), i ∈ {sa, 2ch, 4ch}. (1)
Here, i represents anatomical views and ψ i refers to the network parameters of f ψi . F i M and F i S are the learned motion feature and shape feature, respectively. As these encoders aim to extract the same type of information (i.e., shape and motion information), the three encoders share weights to learn representations that are useful and related to different views.
In each encoder, representations at different scales are fully exploited for feature extraction. {f ψi |i = {sa, 2ch, 4ch}} consists of (1) a Siamese network that extracts features from both ED frame and t-th frame, and (2) feature-fusion layers that concatenate multi-scale features from pairs of frames ( Fig. 3 (b)). From the Siamese network, the last feature maps of the two streams are used as shape feature of the ED frame (F i S 0 ) and the t-th frame (F i S t ), respectively, and
F i S = {F i S 0 , F i S t }.
All features across different scales from both streams are combined by feature-fusion layers to generate motion feature F i M . In detail, these multi-scale features are upsampled to the original resolution by a convolution and upsampling operation and then combined using a concatenation layer.
With the obtained {F i M , F i S | i ∈ {sa, 2ch, 4ch}}, a multi-view concatenation generates the multi-view motion feature F M and the multi-view shape feature F S via channel-wise concatenation C(·, ·, ·) (see Fig. 3 (c)),
F M = C(F sa M , F 2ch M , F 4ch M ), F S j = C(F sa S j , F 2ch S j , F 4ch S j ), j ∈ {0, t}. (2)
Here F S = {F S 0 , F S t }.
The FeatureNet model is composed of 2D CNNs, which learn 2D features from the multi-view images and inter-slice correlations from SAX stacks. The obtained F M is first unified to D channels using 2D convolution and is then used to predict Φ t in the next step. The obtained F S is used for shape regularization in Sec. III-B.
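The channel-wise concatenation C(·, ·, ·) of Eq. (2) can be sketched as follows (the per-view channel count C is our assumption; the paper does not state it):

```python
import numpy as np

H, W = 128, 128
C = 32  # hypothetical number of channels per view

f_sa, f_2ch, f_4ch = (np.random.rand(C, H, W) for _ in range(3))

# Channel-wise concatenation of the per-view features, as in Eq. (2)
f_m = np.concatenate([f_sa, f_2ch, f_4ch], axis=0)
```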
2) Motion estimation (MotionNet): In this step, we introduce MotionNet to predict the 3D motion field Φ t by learning 3D representations from the multi-view motion feature F M . MotionNet is built with a 3D encoder-decoder architecture. Φ t is predicted by MotionNet with
Φ t = g θ (U (F M )), (3)
where g θ represents MotionNet and θ refers to the network parameters of g θ . The function U (·) denotes an un-squeeze operation which changes F M from a stack of 2D feature maps to a 3D feature map by adding an extra dimension.
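A minimal sketch of the un-squeeze operation U(·): the D-channel 2D feature map is reinterpreted as a 3D volume by adding an extra dimension, so that MotionNet's 3D convolutions can learn through-plane structure (the channels-first layout is our assumption):

```python
import numpy as np

H, W, D = 128, 128, 64
f_m = np.random.rand(D, H, W)      # multi-view motion feature, unified to D channels

# U(.): add an extra dimension; the former channel axis now acts as the depth axis
vol = np.expand_dims(f_m, axis=0)  # shape (1, D, H, W), ready for 3D convolutions
```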
3) Spatial transform (Warping): Inspired by the successful application of spatial transformer networks [10,27], the SAX stack of the ED frame (I sa 0 ) can be transformed to the t-th frame using the motion field Φ t . For a voxel at location p in the transformed SAX stack (I sa 0→t ), we compute the corresponding location p′ in I sa 0 by p′ = p + Φ t (p). As image values are only defined at discrete locations, the value at p in I sa 0→t is computed from p′ in I sa 0 using trilinear interpolation 2 . 4) Motion loss: As true dense motion fields of paired frames are usually unavailable in practice, we propose an unsupervised motion loss L mov to evaluate the 3D motion estimation model using only the input SAX stack (I sa t ) and the generated 3D motion field (Φ t ). L mov consists of two components: (1) an image similarity loss L sim that penalizes appearance differences between I sa t and I sa 0→t , and (2) a local smoothness loss L smooth that penalizes the gradients of Φ t ,
L mov = L sim + λL smooth . (4)
Here λ is a hyper-parameter, L sim is defined by voxel-wise mean squared error and L smooth is the Huber loss used in [10,39] which encourages a smooth Φ t ,
L sim = (1/N) Σ_{i=1}^{N} (I sa t (p i ) − I sa 0→t (p i ))², (5)

L smooth = √( ε + Σ_{i=1}^{N} ‖∇Φ t (p i )‖² ), ∇Φ t (p i ) = ( ∂Φ t (p i )/∂x, ∂Φ t (p i )/∂y, ∂Φ t (p i )/∂z ). (6)

Here ∂Φ t (p i )/∂x ≈ Φ t (p ix + 1, p iy , p iz ) − Φ t (p ix , p iy , p iz ), and the same approximation is used for ∂Φ t (p i )/∂y and ∂Φ t (p i )/∂z. Following [10,39], ε is set to 0.01. In Eq. 5 and Eq. 6, p i is the i-th voxel and N denotes the number of voxels.
Note that L sim is only applied to SAX stacks because 2D images in the 2CH and 4CH views typically consist of only one slice and cannot be directly warped by a 3D motion field.
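The warping described in Sec. III-A3 can be sketched in plain NumPy as follows — a simplified stand-in for the spatial transformer layer, with boundary handling by clamping (our choice; the paper does not specify it):

```python
import numpy as np

def warp_trilinear(img, phi):
    """Warp a 3D volume img (H, W, D) by a displacement field phi (H, W, D, 3).

    The value at p in the warped volume is sampled at p' = p + phi(p) in img,
    using trilinear interpolation with clamping at the volume boundary.
    """
    H, W, D = img.shape
    xs, ys, zs = np.meshgrid(np.arange(H), np.arange(W), np.arange(D), indexing="ij")
    px = xs + phi[..., 0]
    py = ys + phi[..., 1]
    pz = zs + phi[..., 2]
    # Corner indices of the cell containing p', clamped to stay inside the volume
    x0 = np.clip(np.floor(px).astype(int), 0, H - 2)
    y0 = np.clip(np.floor(py).astype(int), 0, W - 2)
    z0 = np.clip(np.floor(pz).astype(int), 0, D - 2)
    dx = np.clip(px - x0, 0.0, 1.0)
    dy = np.clip(py - y0, 0.0, 1.0)
    dz = np.clip(pz - z0, 0.0, 1.0)
    out = np.zeros(img.shape, dtype=float)
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                w = ((dx if i else 1 - dx)
                     * (dy if j else 1 - dy)
                     * (dz if k else 1 - dz))
                out += w * img[x0 + i, y0 + j, z0 + k]
    return out
```

With a zero motion field this reduces to the identity, and an integer shift reproduces the shifted volume up to the clamped boundary.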
B. Shape regularization
The motion loss (L mov ) on its own is not sufficient to guarantee motion estimation in the Z direction due to the low through-plane resolution in SAX stacks. To address this problem, we introduce a shape regularization module which ensures the 3D edge map of the myocardial wall is correct before and after Φ t warping, and thus enables an accurate estimation of Φ t . Here, the ground truth 2D edge maps derived from the multi-view images provide weak supervision to the predicted and warped 3D edge maps.
1) Shape estimation (ShapeNet): ShapeNet is built to generate the 3D edge map of the myocardial wall in the ED frame (Ê 0 ) and the t-th frame (Ê t ) from F S = {F S 0 , F S t },
Ê 0 = h 1 (F S 0 ), Ê t = h 2 (F S t ). (7)
Here h 1 and h 2 are the two branches in ShapeNet, which contain shared 2D decoders and 3D convolutional layers in order to learn 3D edge maps from 2D features for all frames (Fig. 4). The dimensions of Ê 0 and Ê t are H × W × D. With the spatial transform in Sec. III-A3, Ê 0 is warped to the t-th frame by Φ t , which generates the transformed 3D edge map Ê 0→t . Then Ê 0 , Ê t and Ê 0→t are weakly supervised by ground truth 2D edge maps.
2) Slicing: To compare the 3D edge maps with 2D ground truth, we use 3D masks {M sa , M 2ch , M 4ch } to extract SAX, 2CH and 4CH view planes from Ê 0 , Ê t and Ê 0→t with

Ê i 0 = M i ⊙ Ê 0 , Ê i t = M i ⊙ Ê t , Ê i 0→t = M i ⊙ Ê 0→t , (8)
where i ∈ {sa, 2ch, 4ch} represents anatomical views and ⊙ refers to element-wise multiplication. These 3D masks describe the locations of multi-view images in SAX stacks and are generated from the input during image preprocessing.
3) Shape loss: The sliced 2D edge maps {Ê i 0 , Ê i t , Ê i 0→t | i ∈ {sa, 2ch, 4ch}} are compared to the 2D ground truth {E i 0 , E i t | i ∈ {sa, 2ch, 4ch}} by a shape loss L shape ,

L shape = L S 0 + L S t + L S 0→t . (9)
For each component in L shape , we utilize cross-entropy loss (CE(·, ·)) to measure the similarity of edge maps, e.g.,
L S 0 = Σ i∈{sa,2ch,4ch} CE(Ê i 0 , E i 0 ). (10)
Similarly to Eq. 10, L S t is computed from {Ê i t , E i t } and L S 0→t is computed from {Ê i 0→t , E i t }.
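A sketch of the shape supervision of Eqs. 8 and 10: a binary 3D plane mask slices the predicted 3D edge map, and cross-entropy compares the sliced plane with the 2D ground truth (binary cross-entropy is our assumption for CE(·, ·)):

```python
import numpy as np

def slice_view(edge_3d, mask_3d):
    # Eq. (8): element-wise multiplication keeps only the voxels on one view plane
    return edge_3d * mask_3d

def ce(pred, gt, eps=1e-7):
    # Cross-entropy between predicted edge probabilities and binary ground truth
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(gt * np.log(pred) + (1 - gt) * np.log(1 - pred)))
```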
C. Optimization
Our model is an end-to-end trainable framework and the overall objective is a linear combination of all loss functions
min {L sim + λL smooth + βL shape }, (11)
where λ and β are hyper-parameters chosen experimentally depending on the dataset. We use the Adam optimizer (learning rate = 10 −4 ) to update the parameters of MulViMotion. Our model is implemented in PyTorch and is trained on an NVIDIA Tesla T4 GPU with 16 GB of memory.
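With the hyper-parameter values used in the experiments (λ = 0.005, β = 5), the objective of Eq. (11) is a simple weighted sum of the three loss terms:

```python
def total_loss(l_sim, l_smooth, l_shape, lam=0.005, beta=5.0):
    # Eq. (11): linear combination of similarity, smoothness and shape losses
    return l_sim + lam * l_smooth + beta * l_shape
```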
IV. EXPERIMENTS
We demonstrate our method on the task of 3D myocardial motion tracking. We evaluate the proposed method using quantitative metrics such as Dice, Hausdorff distance, volume difference and Jacobian determinant. Geometric meshes are used to provide qualitative results with 3D visualization. We compared the proposed method with other state-of-the-art motion estimation methods and performed an extensive ablation study. In addition, we show the effectiveness of the proposed method on subjects with severe slice misalignment. We further explore the applicability of the proposed method for wall thickening measurement. We show the key results in the main paper. More results (e.g., dynamic videos) are shown in the Appendix.
A. Experiment setups 1) Data: We performed experiments on 580 randomly selected subjects from the UK Biobank study 3 . All participants gave written informed consent [9]. The participant characteristics are shown in Table I. The images were preprocessed in the following steps: (3) to cover the whole LV as the ROI, based on the center of the LV in the middle slice, the resampled SAX stacks were cropped to a size of 128 × 128 × 64 (note that we computed the center of the LV based on the LV myocardium segmentation of the middle slice of the SAX stack), (4) 2CH and 4CH view images were cropped to 128 × 128 based on the center of the intersecting line between the middle slice of the cropped SAX stack and the 2CH/4CH view image, (5) each frame was independently normalized to zero mean and unit standard deviation, and (6) 3D masks (Eq. 8) were computed by a coordinate transformation using the DICOM image header information of the SAX, 2CH and 4CH view images. Note that the 2D SAX slices used in the shape regularization module were unified to 9 adjacent slices for all subjects, including the middle slice and the 4 upper and lower slices. With this image preprocessing, the input SAX, 2CH and 4CH view images cover the whole LV in the center. 3D high-resolution segmentations of these subjects were automatically generated using the 4Dsegment tool [22] based on the resampled SAX stacks, followed by manual quality control. The obtained segmentations have been shown to be useful in clinical applications (e.g., [7]), and thus we use them to generate the ground truth 2D edge maps (Fig. 1) in this work. In detail, we utilize the obtained 3D masks to extract SAX, 2CH and 4CH view planes from these 3D segmentations and then use contour extraction to obtain the {E i 0 , E i t | i ∈ {sa, 2ch, 4ch}} used in Sec. III-B2. Note that we use 3D segmentation(s) to refer to the 3D segmentations obtained by [22] in this section.
We split the dataset into 450/50/80 for train/validation/test and train MulViMotion for 300 epochs. The hyper-parameters in Eq. 11 are selected as λ = 0.005, β = 5.
2) Evaluation metrics: We use segmentations to provide quantitative evaluation of the estimated 3D motion fields. This is the same evaluation performed in other cardiac motion tracking literature [39,56,58]. Here, the 3D segmentations obtained by [22] are used in the evaluation metrics. The framework in [22] performs learning-based segmentation, followed by an atlas-based refinement step to ensure robustness towards potential imaging artifacts. The generated segmentations are anatomically meaningful and spatially consistent. As our work aims to estimate the real 3D motion of the heart from the acquired CMR images, such segmentations, which approximate the real shape of the heart, can provide a reasonable evaluation. Specifically, on the test data, we estimate the 3D motion field Φ ES from the ED frame to the ES frame, which shows large deformation. Then we warp the 3D segmentation of the ED frame (S ED ) to the ES frame by Φ ES . Finally, we compare the transformed 3D segmentation (S ED→ES ) with the ground truth 3D segmentation of the ES frame (S ES ) using the following metrics. Note that the ES frame is identified as the frame with the least image intensity similarity to the ED frame.
Dice score and Hausdorff distance (HD) are utilized to respectively quantify the volume overlap and contour distance between S ES and S ED→ES . A high value of Dice and a low value of HD represent an accurate 3D motion estimation.
Volume difference (VD) is computed to evaluate volume preservation, as incompressible motion is desired within the myocardium [30,32,40,45]: VD = |V(S ED ) − V(S ED→ES )| / V(S ED ), where V(·) computes the number of voxels in the segmentation volume. A low value of VD indicates good volume preservation by Φ ES .
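Dice and VD on binary segmentation volumes can be sketched as follows (a minimal NumPy implementation, assuming boolean arrays of equal shape):

```python
import numpy as np

def dice(a, b):
    # Volume overlap between two binary segmentations
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def volume_difference(s_ed, s_warped):
    # VD = |V(S_ED) - V(S_ED->ES)| / V(S_ED), with V(.) counting voxels
    return abs(float(s_ed.sum()) - float(s_warped.sum())) / float(s_ed.sum())
```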
The Jacobian determinant det(J Φ ES ) (J Φ ES = ∇Φ ES ) is employed to evaluate the local behavior of Φ ES : a negative Jacobian determinant det(J Φ ES (p)) < 0 indicates that the motion field at position p results in folding and leads to non-diffeomorphic transformations. Therefore, a low number of points with det(J Φ ES (p)) < 0 corresponds to an anatomically plausible deformation from the ED frame to the ES frame and thus indicates a good Φ ES . We count the percentage of voxels in the myocardial wall with det(J Φ ES (p)) < 0 in the evaluation.
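The Jacobian check can be sketched with finite differences; here we treat the deformation as p → p + Φ(p), so J = I + ∇Φ, with np.gradient as the discrete derivative (our implementation choice, not the paper's):

```python
import numpy as np

def negative_jacobian_fraction(phi):
    """Fraction of voxels whose deformation p -> p + phi(p) has det(J) < 0.

    phi: displacement field of shape (H, W, D, 3).
    """
    g = np.gradient(phi, axis=(0, 1, 2))  # list of d(phi)/d(axis), each (H, W, D, 3)
    J = np.stack(g, axis=-1) + np.eye(3)  # (H, W, D, 3, 3): identity + grad(phi)
    det = np.linalg.det(J)
    return float(np.mean(det < 0))
```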
3) Baseline methods: We compared the proposed method with three cardiac motion tracking methods, including two conventional methods and one deep learning method. The first baseline is a B-spline free-form deformation (FFD) algorithm [43], which has been used in many recent cardiac motion tracking works [5,7,37,38,50]. We use the FFD approach implemented in the MIRTK toolkit 4 . The second baseline is a diffeomorphic Demons (dDemons) algorithm [52], which has been used in [40] for cardiac motion tracking. We use the SimpleITK software package as the dDemons implementation 5 . In addition, the U-Net architecture has been used in many recent works for image registration [6,48,54], and thus our third baseline is a deep learning method with 3D-UNet [16]. The input of the 3D-UNet baseline is a pair of frames (I sa 0 , I sa t ) and the output is a 3D motion field. Eq. 4 is used as the loss function for this baseline. We implemented 3D-UNet based on its online code 6 . For the baseline methods with hyper-parameters, we evaluated several sets of parameter values; the hyper-parameters that achieve the best Dice score on the validation set are selected.
B. 3D myocardial motion tracking 1) Motion tracking performance: For each test subject, MulViMotion is utilized to estimate 3D motion fields in the full cardiac cycle. With the obtained {Φ t | t = 0, . . . , 49}, we warp the 3D segmentation of the ED frame (t = 0) to the t-th frame. Fig. 5 (a) shows that the estimated Φ t enables the warped 3D segmentation to match the myocardial area in images from different anatomical views. In addition, we warp the SAX stack of the ED frame (I sa 0 ) to the t-th frame. Fig. 5 (b) shows the effectiveness of Φ t by comparing the warped and the ground truth SAX view images. By utilizing the warped 3D segmentation, we further compute established clinical biomarkers. Fig. 6 shows the resulting volume over time. The shape of the curve is consistent with reported results in the literature [18,39].
We quantitatively compared MulViMotion with baseline methods in Table II. With the 3D motion fields generated by different methods, the 3D segmentations of ED frame are warped to ES frame and compared with the ground truth 3D segmentations of ES frame by using metrics introduced in Sec. IV-A2. From this table, we observe that MulViMotion outperforms all baseline methods for Dice and Hausdorff distance, demonstrating the effectiveness of the proposed method on estimating 3D motion fields. MulViMotion achieves the lowest volume difference, indicating that the proposed method is more capable of preserving the volume of the myocardial wall during cardiac motion tracking. Compared to a diffeomorphic motion tracking method (dDemons [52]), the proposed method has a similar number of voxels with a negative Jacobian determinant. This illustrates that the learned motion field is smooth and preserves topology.
We further qualitatively compared MulViMotion with baseline methods in Fig. 7. A geometric mesh is used to provide 3D visualization of the myocardial wall. Specifically, 3D segmentations of the ED frame are warped to any t-th frame in the cardiac cycle and geometric meshes are reconstructed from these warped 3D segmentations. Red meshes in Fig. 7 demonstrate that, in contrast to all baseline methods which only show motion within the SAX plane (i.e., along the X and Y directions), MulViMotion is able to estimate through-plane motion along the longitudinal direction (i.e., the Z direction) in the cardiac cycle, e.g., the reconstructed mesh of the t = 20 frame is deformed in the X, Y and Z directions compared to the t = 0 and t = 40 frames. In addition, white meshes in Fig. 7 illustrate that, compared to all baseline methods, the 3D motion field generated by MulViMotion performs best in warping the ED frame to the ES frame and obtains the reconstructed ES frame mesh which is most similar to the ground truth (GT) ES frame mesh (blue meshes). These results demonstrate the effectiveness of MulViMotion for 3D motion tracking, especially for estimating through-plane motion.
Fig. 7: 3D visualization of motion tracking results using the baseline methods (FFD [43], dDemons [52], 3D-UNet [16]) and MulViMotion. Column 1 (blue) shows the ground truth (GT) meshes of the ED frame. Columns 2-6 (red) show 3D motion tracking results across the cardiac cycle (t = 0, 10, 20, 30, 40). These meshes are reconstructed from the warped 3D segmentations (warped from the ED frame to different time frames). Column 7 (white) additionally shows the reconstructed meshes of the ES frame from the motion tracking results, and Column 8 (blue) shows the ground truth meshes of the ES frame.

2) Runtime: Table II shows runtime results of MulViMotion and baseline methods using an Intel Xeon E5-2643 CPU and an NVIDIA Tesla T4 GPU. The average inference time for a single subject is reported. FFD [43] and dDemons [52] are only available on CPUs, while 3D-UNet [16] and MulViMotion are available on both CPU and GPU. The results show that our method achieves similar runtime to 3D-UNet [16] on GPU and is at least 5 times faster than the baseline methods on CPU.
3) Ablation study: For the proposed method, we explore the effects of using different anatomical views and the importance of the shape regularization module. We use the evaluation metrics in Sec. IV-A2 to show quantitative results. Table III shows the motion tracking results using different anatomical views. In particular, M1 only uses images and 2D edge maps from the SAX view to train the proposed method, M2 uses those from both SAX and 2CH views, and M3 uses those from both SAX and 4CH views. M2 and M3 outperform M1, illustrating the importance of LAX view images. In addition, MulViMotion (M) outperforms the other variant models. This might be because more LAX views can introduce more high-resolution 3D anatomical information for 3D motion tracking. In Table IV, the proposed method is trained using all three anatomical views but optimized by different combinations of losses. A1 optimizes the proposed method without shape regularization (i.e., without L shape in Eq. 11). A2 introduces basic shape regularization on top of A1, which adds L S 0 and L S 0→t to L shape . MulViMotion (M) outperforms A1, illustrating the importance of shape regularization. MulViMotion also outperforms A2. This is likely because L S 0 and L S t are both needed to guarantee the generation of distinct and correct 3D edge maps for all frames in the cardiac cycle. These results show the effectiveness of all proposed components in L shape . Fig. 8 shows motion estimation performance using different strengths of shape regularization. In detail, the proposed method is trained with three anatomical views and all loss components, but the shape loss (L shape ) is computed on different percentages of training subjects (20%, 40%, 60%, 80%, 100%). From Fig. 8, we observe that motion estimation performance improves with an increased percentage of subjects.
4) The influence of hyper-parameters: Fig. 9 presents Dice and Hausdorff distance (HD) on the test data for various smoothness loss weights λ and shape regularization weights β (Eq. 11). The Dice scores and HDs are computed according to Sec. IV-A2. We observe that a strong constraint on motion field smoothness may sacrifice registration accuracy (see Fig. 9 (a)). Moreover, registration performance improves as β increases from 1 to 5 and then deteriorates with a further increase of β (from 5 to 9). This might be because a strong shape regularization can force motion estimation to focus mainly on the few 2D planes which contain sparse labels.
5) The performance on subjects with slice misalignment: Acquired SAX stacks may contain slice misalignment due to poor compliance with breath-holding instructions or a change of position during breath-holding acquisitions [8]. This leads to an incorrect representation of cardiac volume and results in difficulties for accurate 3D motion tracking. Fig. 10 compares the motion tracking results of 3D-UNet [16], MulViMotion and MulViMotion without L shape for a test subject with severe slice misalignment (e.g., Fig. 10 (a) middle column). Fig. 10 (b) shows that, in contrast to 3D-UNet, the motion fields generated by MulViMotion enable topology preservation of the myocardial wall (e.g., the mesh of t = 17). MulViMotion outperforms MulViMotion without L shape , which indicates the importance of the shape regularization module for reducing the negative effect of slice misalignment. These results demonstrate the advantage of integrating shape information from multiple views and show the effectiveness of the proposed method in special cases. 6) Wall thickening measurement: We have computed regional and global myocardial wall thickness at the ED frame and the ES frame based on the ED frame segmentation and the warped ES frame segmentation 7 , respectively. The global wall thickness at the ED frame is 6.6 ± 0.9mm, which is consistent with results obtained by [5] (5.5 ± 0.8mm). The wall thickness at the ES frame for the American Heart Association 16 segments is shown in Table V. In addition, we have computed the fractional wall thickening between the ED frame and the ES frame by (ES − ED)/ED * 100%. The results in Table V show that the regional and global fractional wall thickening are comparable with results reported in the literature [21,51].
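The fractional wall thickening above is a one-line computation per segment (the ES value in the test below is a hypothetical number chosen for illustration, not a result from the paper):

```python
def fractional_thickening(wt_ed, wt_es):
    # (ES - ED) / ED * 100%, applied per segment or to the global wall thickness
    return (wt_es - wt_ed) / wt_ed * 100.0
```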
V. DISCUSSION
In this paper, we propose a deep learning-based method for estimating 3D myocardial motion from 2D multi-view cine CMR images. A naïve alternative to our method would be to train a fully unsupervised motion estimation network using high-resolution 3D cine CMR images. However, such 3D images are rarely available because (1) 3D cine imaging requires long breath holds during acquisition and are not commonly used in clinical practice, and (2) recovering highresolution 3D volumes purely from 2D multi-view images is challenging due to the sparsity of multi-view planes. Our focus has been on LV myocardial motion tracking because it is important for clinical assessment of cardiac function. Our model can be easily adapted to 3D right ventricular myocardial motion tracking by using the corresponding 2D edge maps in the shape regularization module during training.
In shape regularization, we use edge maps to represent anatomical shape, i.e., we predict 3D edge maps of the myocardial wall and we use 2D edge maps defined in the multi-view images to provide shape information. This is because (1) the contour of the myocardial wall is more representative of anatomical shape than the content, (2) compared to 3D dense segmentation, 3D edge maps with sparse labels are more likely to be estimated from images on sparse multi-view planes, and (3) using edge maps offers the potential of using automatic contour detection algorithms to obtain shape information directly from images. An automated algorithm is utilized to obtain 2D edge maps for providing shape information in the shape regularization module. This is because manual data labeling is time-consuming, costly and usually unavailable. The proposed method can be robust to these automatically obtained 2D edge maps since the 2D edge maps only provide constraints on spatially sparse planes of the estimated 3D edge maps.
We use the aligned 2D edge maps of SAX stacks to train MulViMotion. This is reasonable because aligned SAX ground truth edge maps can introduce correct shape information of the heart, and thus can explicitly constrain the estimated 3D motion field to reflect the real motion of the heart. Nevertheless, we further test the effectiveness of the proposed method by utilizing unaligned SAX edge maps during training. Specifically, MulViMotion* uses the algorithm in [4] to predict the 2D segmentation of the myocardium for each SAX slice independently, without accounting for the inter-slice misalignment. The contour of this 2D segmentation is used as the SAX ground truth edge map during training. LAX ground truth edge maps are still generated based on [22]. Table VI and Fig. 11 (e.g., t = 20) show that the proposed method is capable of estimating 3D motion even if it is trained with unaligned SAX edge maps. This indicates that the LAX 2CH and 4CH view images, which provide correct longitudinal anatomical shape information, can compensate for the slice misalignment in the SAX stacks and thus make a major contribution to the improved estimation accuracy of through-plane motion.
In the proposed method, a hybrid 2D/3D network is built to estimate 3D motion fields, where the 2D CNNs combine multi-view features and the 3D CNNs learn 3D representations from the combined features. Such a hybrid network can occupy less GPU memory compared to a pure 3D network. In particular, the number of parameters in this hybrid network is 21.7. Moreover, this hybrid network is able to take full advantage of 2D multi-view images because it enables learning 2D features from each anatomical view before learning 3D representations.
In the experiment, we use 580 subjects for model training and evaluation. This is mainly because our work tackles 3D data and the number of training subjects is limited by the cost of model training. Specifically, we used 500 subjects to train our model for 300 epochs on an NVIDIA Tesla T4 GPU, which requires ∼60 hours of training per model. In addition, this work focuses on developing the methodology for multi-view motion tracking, and this sample size aligns with other previous cardiac analysis works for method development [13,39,55,56]. A population-based clinical study for the whole UK Biobank (currently ∼50,000 subjects) still requires future investigation.
With the view planning step in standard cardiac MRI acquisition, the acquired multi-view images are aligned and thus are able to describe a heart from different views [31]. In order to preserve such spatial connections between multiple separate anatomical views, data augmentations (e.g., rotation and scaling) that are used in some 2D motion estimation works are excluded from this multi-view 3D motion tracking task.
We use two LAX views (2CH and 4CH) in this work for 3D motion estimation but the number of anatomical views is not a limitation of the proposed method. More LAX views (e.g., 3-chamber view) can be integrated into MulViMotion by adding extra encoders in FeatureNet and extra views in L shape for shape regularization. However, each additional anatomical view can lead to an increased occupation of GPU memory and extra requirement of image annotation (i.e., 2D edge maps).
The data used in the experiment was acquired by a 1.5 Tesla (1.5T) scanner, but the proposed method can be applied to 3T CMR images. The possible dark band artifacts in 3T CMR images may affect the image similarity loss (L sim ). However, the high image quality of 3T CMR and utilizing high weights for the regularization terms (e.g., shape regularization and the local smoothness loss) may potentially reduce the negative effect of these artifacts. We utilize the ED frame and the t-th frame (t = 0, 1, . . . , T − 1, where T is the number of frames) as paired frames to estimate the 3D motion field. This is mainly because the motion estimated from such frame pairing is needed for downstream tasks such as strain estimation [23,37,46]. In the cardiac motion tracking task, the reference frame is commonly chosen as the ED or ES frame [56]. Such frame pairing can often be observed in other cardiac motion tracking literature, e.g., [39,56,58].
In this work, apart from two typical and widely used conventional algorithms, we also compared the proposed method with a learning-based method [42] which is representative of most recent image registration works. Specifically, the architecture of [42] has been used in many recent works, e.g., [6,48,54], and many other recent works, e.g., [6,20,29], are similar to [42] in that only single-view images are utilized for image registration. Nevertheless, we further thoroughly compared the proposed method with another recent and widely used learning-based image registration method [6] (VoxelMorph 8 ). We train VoxelMorph following the optimal architecture and hyper-parameters suggested by the authors (VM), and we also train VoxelMorph with a bigger architecture 9 (VM † ). For a fair comparison, 2D ground truth edge maps (E sa 0 , E sa t in Eq. 8) are used to generate the segmentation of SAX stacks for adding auxiliary information. Table VI shows that the proposed method outperforms VoxelMorph for 3D cardiac motion tracking. This is expected because the SAX segmentation used in VoxelMorph has low through-plane resolution and thus can hardly help improve 3D motion estimation. Moreover, VoxelMorph only uses single-view images, while the proposed method utilizes information from multiple views.
VI. CONCLUSION
In this paper, we propose MulViMotion, a multi-view motion estimation network for 3D myocardial motion tracking. The proposed method takes full advantage of routinely acquired multi-view 2D cine CMR images to accurately estimate 3D motion fields. Experiments on the UK Biobank dataset demonstrate the effectiveness and practical applicability of our method compared with other competing methods.
APPENDIX
A. Examples of 3D masks

Fig. 12 shows examples of the 3D masks used in the shape regularization module of MulViMotion. These 3D masks identify the locations of the multi-view images in the SAX stack. We generate them in the image preprocessing step by a coordinate transformation using DICOM image header information.
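The coordinate transformation described above can be sketched with numpy: each pixel of a 2D view is mapped into patient coordinates via the DICOM ImagePositionPatient, ImageOrientationPatient and PixelSpacing tags, and then into SAX voxel indices through the (assumed given) inverse of the volume's world affine. This is an illustrative sketch under those assumptions, not the authors' preprocessing code:

```python
import numpy as np

def plane_to_world(ipp, iop, pixel_spacing, shape):
    """Patient-space coordinates of every pixel (r, c) of a 2D slice, from
    DICOM ImagePositionPatient (ipp), ImageOrientationPatient (iop: row and
    column direction cosines) and PixelSpacing."""
    row_dir, col_dir = np.asarray(iop[:3], float), np.asarray(iop[3:], float)
    rr, cc = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    return (np.asarray(ipp, float)
            + rr[..., None] * pixel_spacing[0] * row_dir
            + cc[..., None] * pixel_spacing[1] * col_dir)

def mask_in_volume(world_pts, vol_affine_inv, vol_shape):
    """Binary 3D mask on the SAX grid marking where the slice's points fall;
    `vol_affine_inv` is the 4x4 world-to-voxel affine of the SAX volume."""
    ones = np.ones(world_pts.shape[:-1] + (1,))
    homog = np.concatenate([world_pts, ones], axis=-1)
    idx = np.rint(homog @ vol_affine_inv.T)[..., :3].astype(int)
    mask = np.zeros(vol_shape, dtype=np.uint8)
    inside = np.all((idx >= 0) & (idx < np.asarray(vol_shape)), axis=-1)
    hit = idx[inside]
    mask[hit[:, 0], hit[:, 1], hit[:, 2]] = 1
    return mask

# Toy usage: an axis-aligned slice at z = 2 inside a 4x4x4 volume.
pts = plane_to_world((0, 0, 2), (1, 0, 0, 0, 1, 0), (1.0, 1.0), (4, 4))
print(mask_in_volume(pts, np.eye(4), (4, 4, 4)).sum())  # 16
```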
B. The dynamic videos of motion tracking results
The dynamic videos of the motion tracking results of the different motion estimation methods are attached as "Dynamic videos.zip" in the supplementary material. This file contains four MPEG-4 movies, where "FFD.mp4", "dDemons.mp4" and "3D-UNet.mp4" are the results of the corresponding baseline methods and "MulViMotion.mp4" is the result of the proposed method. All methods are applied to the same test subject. The codec of these videos is H.264. We have opened the uploaded videos on computers with (1) the Win10 operating system (Movies&TV player), (2) Linux Ubuntu 20.04 (Videos player), and (3) Mac OS (QuickTime Player). However, if there is any difficulty opening the attached videos, the same dynamic videos can be found at https://github.com/qmeng99/dynamic videos/blob/main/README.md

C. Additional 3D motion tracking results

Fig. 13 shows additional 3D motion tracking results on a test subject with slice misalignment. This test subject is the same subject used in Fig. 10 in the main paper. These results further demonstrate that the proposed method is able to reduce the negative effect of slice misalignment on 3D motion tracking. In addition, we have computed more established clinical biomarkers: Fig. 14 shows the temporal ejection fraction across the cardiac cycle.

D. Applications

1) Strain estimation: Myocardial strain provides a quantitative evaluation of the total deformation of a region of tissue during the heartbeat. It is typically evaluated along three orthogonal directions, namely radial, circumferential and longitudinal. Here, we evaluate the performance of the proposed method by estimating the three strains based on the estimated 3D motion field Φ_t. The myocardial mesh at the ED frame is warped to the t-th frame using a numeric method, and vertex-wise strain is calculated using the Lagrangian strain tensor formula [36] (implemented by https://github.com/Marjola89/3Dstrain analysis). Subsequently, global strain is computed by averaging across all the vertices of the myocardial wall.

Fig. 15 shows the estimated global strain curves on the test subjects. Both the shapes of the curves and the value ranges of the peak strains are consistent with results reported in the literature [11,23,28], i.e., radial peak strain is ∼ 20% to ∼ 70%, circumferential peak strain is ∼ −15% to ∼ −22% and longitudinal peak strain is ∼ −8% to ∼ −20%.

To obtain more reference strains, we separately computed global longitudinal and circumferential strains on the 2D LAX and SAX slices according to the algorithm in [5]. On the test set, the global longitudinal peak strain is −18.55% ± 2.74% (ours is −9.72% ± 2.49%), while the global circumferential peak strain is −22.76% ± 3.31% (ours is −27.38% ± 9.63%). It is possible that our strains differ from these strains because the strains in [5] are computed only on sparse 2D slices from 2D motion fields, whereas we compute global strains over the whole myocardial wall with 3D motion fields.

Compared to echocardiography, another widely used imaging modality for strain estimation, the average circumferential peak strain reported in our work (−27.38%) is consistent with those typically reported in echocardiography (∼ −22% to ∼ −32% [2]). The average longitudinal peak strain in our study (−9.72%) is lower than that reported in echocardiography (∼ −20% to ∼ −25% [2]). This difference is likely due to the higher spatial and temporal resolution of echocardiography (e.g., 0.2-0.3 mm spatial resolution and 40-60 frames/s temporal resolution) compared to CMR (e.g., our data has ∼ 1.8 mm in-plane resolution, ∼ 10 mm through-plane resolution and a temporal resolution of 50 frames per heartbeat) [2,35].

For strain estimation, our results are in general consistent with the value ranges reported in [11,23,28]. However, it has to be noted that we calculate strain based on 3D motion fields, whereas most existing strain analysis methods or software packages are based on 2D motion fields, i.e., they only account for in-plane motion within SAX or LAX views. This may cause differences between our estimated strain values and the strain values reported in the literature. In addition, there is still a lack of agreement on strain value ranges (in particular for radial strain), even among mainstream commercial software packages [11], because strain value ranges can vary depending on the vendor, imaging modality, image quality and motion estimation technique [2,11]. It still requires further investigation to set up a reference standard for strain evaluation and to carry out clinical association studies using the reported strain values. Moreover, when manual segmentation is available, it could be used to provide a more accurate shape constraint, which may further improve 3D motion estimation and thus strain estimation.

Fig. 1: Examples of 2D cine CMR scans of a healthy subject. Cine CMR scans are acquired from the short-axis (SAX) view and two long-axis (LAX) views. The SAX view contains a stack of 2D images while each LAX view contains a single 2D image. (a) XY-plane of the SAX stack. (b) XZ-plane of the SAX stack. (c) LAX 2-chamber (2CH) view. (d) LAX 4-chamber (4CH) view. Red and green contours^1 show the epicardium and endocardium, respectively. The area between these contours is the myocardium of the left ventricle. We show the end-diastolic (ED) frame (top row) and the end-systolic (ES) frame (bottom row) of the cine CMR image sequence.

Fig. 3: An overview of FeatureNet. FeatureNet takes multi-view images as input and extracts the multi-view motion feature F^M and the multi-view shape feature F^S. Panel (a) describes multi-scale feature fusion. Panel (b) shows the 2D encoder f_ψi, where i = {sa, 2ch, 4ch} refers to the SAX, 2CH and 4CH views. Panel (c) describes the combination of multi-view features.

Fig. 4: An overview of ShapeNet. ShapeNet predicts the 3D edge maps of the LV myocardial wall in the ED frame and the t-th frame from the corresponding shape features F^S_0 and F^S_t.

Fig. 5: Examples of motion tracking results. 3D motion fields generated by MulViMotion are used to warp 3D segmentations and SAX stacks from the ED frame to the t-th frame. (a) The warped segmentations overlaid on SAX, 2CH and 4CH view images. (b) The ground truth (GT) and the warped SAX stacks as well as their difference maps (i.e., GT − Warped).

Fig. 9: Effects of varied hyper-parameters on Dice and Hausdorff distance. (a) shows the results of using various λ under β = 5. (b) shows the results of using various β under λ = 0.005.

Fig. 10: Motion tracking results on the test subject with slice misalignment. The first three columns in (a) are the three orthogonal planes of the SAX stack and the last two columns are 2CH and 4CH view images, respectively. (b) presents examples of motion tracking results using 3D-UNet [16], MulViMotion and MulViMotion without L_shape. The yellow arrow shows an example of slice misalignment while the green arrows show examples of motion tracking failures using 3D-UNet. Note that we show the results in frame t = 17 for a more distinct comparison.

Fig. 11: 3D visualization of motion tracking results using 3D-UNet and MulViMotion*. MulViMotion* uses unaligned SAX ground truth edge maps during training.

… millions, much less than 3D-UNet (41.5 millions)

Fig. 12: Examples of 3D masks used in the shape regularization module of MulViMotion. The top row shows the 2D images from different anatomical views in the space of the SAX stack. The bottom row shows the 3D masks which represent the locations of these 2D images in the SAX stack. (a) The 2D images from the SAX view (9 slices). (b) The single 2D image from the 2CH view. (c) The single 2D image from the 4CH view.

Fig. 13: Motion tracking results on the test subject with slice misalignment using 3D-UNet [16], MulViMotion, and MulViMotion without L_shape. (a) The warped 3D segmentation overlaid on the SAX view. (b) The 3D visualization of the motion tracking results. The green arrows show examples of motion tracking failures using 3D-UNet. Note that we show results in frame t = 17 for a more distinct comparison.

Fig. 14: The results of temporal ejection fraction across the cardiac cycle. (a) Results on a randomly selected test subject. (b) Results on all test subjects (mean values and confidence interval are presented).

Fig. 15: Global strains across the cardiac cycle, estimated based on MulViMotion. (a) Results on a randomly selected test subject. (b) Results on all test subjects (mean values and confidence interval are presented).
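The strain computation described above (deformation gradient F = I + du/dX, then the Green-Lagrange tensor E = 0.5 (F^T F - I), then averaging) can be sketched on a dense displacement field with numpy. The paper's implementation operates vertex-wise on a mesh, so this is only an illustration of the tensor formula, not the authors' code:

```python
import numpy as np

def green_lagrange_strain(disp, spacing=(1.0, 1.0, 1.0)):
    """Green-Lagrange strain E = 0.5 * (F^T F - I) per voxel, from a dense
    displacement field `disp` of shape (3, X, Y, Z); F = I + du/dX."""
    grads = np.stack([np.stack(np.gradient(disp[i], *spacing), axis=0)
                      for i in range(3)], axis=0)   # grads[i, j] = du_i/dX_j
    identity = np.eye(3)[:, :, None, None, None]
    F = identity + grads
    return 0.5 * (np.einsum('ki...,kj...->ij...', F, F) - identity)

# Toy usage: a uniform 10% stretch along x gives E_xx = 0.5*(1.1^2 - 1) = 0.105.
disp = np.zeros((3, 4, 4, 4))
disp[0] = 0.1 * np.arange(4.0)[:, None, None]
print(green_lagrange_strain(disp)[0, 0, 0, 0, 0])  # 0.105...
```

Projecting E onto local radial, circumferential and longitudinal directions (not shown) then yields the three strain components reported in the paper.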
TABLE I: Participant characteristics. Data are mean ± standard deviation for continuous variables and number of participants for the categorical variable.

Parameter                         | Value (subject number is 580)
Age (years)                       | 64±8
Sex (Female/Male)                 | 325 / 255
Ejection fraction (%)             | 60±6
Weight (kg)                       | 74±15
Height (cm)                       | 169±9
Body mass index (kg/m^2)          | 26±4
Diastolic blood pressure (mm Hg)  | 79±10
Systolic blood pressure (mm Hg)   | 138±19
The CMR images of all subjects are acquired by a 1.5 Tesla scanner (MAGNETOM Aera, Syngo Platform VD13A, Siemens Healthcare, Erlangen, Germany). Each subject has SAX, 2CH and 4CH view cine CMR sequences, and each sequence contains 50 frames. More CMR acquisition details for the UK Biobank study can be found in [35]. For image preprocessing, (1) SAX view images were resampled by linear interpolation from a spacing of ∼ 1.8×1.8×10 mm to a spacing of 1.25×1.25×2 mm, while 2CH and 4CH view images were resampled from ∼ 1.8×1.8 mm to 1.25×1.25 mm, (2) by keeping the middle slice of the resampled SAX stack in the center, zero-padding was used on top or bottom if necessary to reshape the resampled SAX stacks to 64 slices,
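The zero-padding step in item (2) above can be sketched as follows. For simplicity this version pads symmetrically around the middle slice (the paper pads on top or bottom as necessary), so it is an approximation:

```python
import numpy as np

def pad_stack_to_slices(stack, target_z=64):
    """Zero-pad (or center-crop) a resampled SAX stack (X, Y, Z) along the
    slice axis so that Z == target_z, keeping the middle slice centered."""
    z = stack.shape[2]
    if z >= target_z:
        start = (z - target_z) // 2
        return stack[:, :, start:start + target_z]
    before = (target_z - z) // 2
    after = target_z - z - before
    return np.pad(stack, ((0, 0), (0, 0), (before, after)))  # zeros by default

padded = pad_stack_to_slices(np.ones((8, 8, 10)), target_z=16)
print(padded.shape, int(padded.sum()))  # (8, 8, 16) 640 -- padding is zero
```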
Fig. 6: The results of LV volume across the cardiac cycle. (a) Results on a randomly selected test subject. (b) Results on all test subjects (mean values and confidence interval are presented). Note that, for each subject in (b), we normalized the LV volume (dividing the LV volume in all time frames by that in the ED frame) and show the average results of all test subjects.

^6 https://github.com/wolny/pytorch-3dunet
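The LV volume and ejection fraction curves plotted in Fig. 6 and Fig. 14 can be computed from the warped segmentations. Assuming volume is voxel count times voxel volume, and temporal EF is measured relative to the ED volume (the standard definition, not stated explicitly in the text), a sketch:

```python
import numpy as np

def lv_volume_ml(seg, voxel_spacing_mm=(1.25, 1.25, 2.0)):
    """LV volume in ml from a binary 3D segmentation (mm^3 -> ml)."""
    return float(seg.sum() * np.prod(voxel_spacing_mm) / 1000.0)

def ejection_fraction(volumes):
    """Temporal EF (%) relative to the ED volume at frame 0:
    EF(t) = (EDV - V(t)) / EDV * 100."""
    v = np.asarray(volumes, dtype=float)
    return (v[0] - v) / v[0] * 100.0

print(lv_volume_ml(np.ones((10, 10, 10))))           # 3.125
print(ejection_fraction([100.0, 80.0, 60.0, 80.0]))  # [ 0. 20. 40. 20.]
```

The EF curve peaks at the ES frame, which is how the ES frame is usually identified from the volume curve.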
TABLE II: Comparison with other cardiac motion tracking methods. ↑ indicates that higher is better, ↓ that lower is better. Results are reported as "mean (standard deviation)" for Dice, Hausdorff distance (HD), volume difference (VD) and negative Jacobian determinant (det(J_ΦES) < 0). CPU and GPU runtimes are reported as the average inference time for a single subject. Best results in bold.

Methods      | Anatomical views | Dice ↑          | HD (mm) ↓        | VD (%) ↓     | det(J_ΦES) < 0 (%) ↓ | CPU (s) ↓ | GPU (s) ↓
FFD [43]     | SAX              | 0.7250 (0.0511) | 20.1138 (5.1130) | 14.45 (6.87) | 11.94 (5.01)         | 15.91     | -
dDemons [52] | SAX              | 0.7219 (0.0422) | 18.3945 (3.5650) | 14.46 (6.38) | 0.13 (0.17)          | 28.32     | -
3D-UNet [16] | SAX              | 0.7382 (0.0293) | 17.4785 (3.1030) | 30.97 (9.89) | 0.95 (1.05)          | 16.88     | 1.09
MulViMotion  | SAX, 2CH, 4CH    | 0.8200 (0.0348) | 14.5937 (4.2449) | 8.62 (4.85)  | 0.93 (0.94)          | 3.55      | 1.15
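The det(J_ΦES) < 0 metric in Table II counts voxels where the deformation Φ(x) = x + u(x) folds onto itself. A minimal numpy sketch of that percentage (not the authors' implementation):

```python
import numpy as np

def neg_jacobian_fraction(disp, spacing=(1.0, 1.0, 1.0)):
    """Percentage of voxels with det(J_Phi) < 0 for Phi(x) = x + u(x),
    given a displacement field `disp` of shape (3, X, Y, Z); a negative
    determinant of J = I + du/dX indicates local folding."""
    grads = np.stack([np.stack(np.gradient(disp[i], *spacing), axis=0)
                      for i in range(3)], axis=0)          # (3, 3, X, Y, Z)
    J = np.eye(3)[:, :, None, None, None] + grads
    det = np.linalg.det(np.moveaxis(J, (0, 1), (-2, -1)))  # (X, Y, Z)
    return float((det < 0).mean() * 100.0)

print(neg_jacobian_fraction(np.zeros((3, 8, 8, 8))))  # 0.0 (identity motion)
```

A low percentage indicates a smoother, closer-to-diffeomorphic deformation, which is why dDemons and the learning-based methods score well on this metric while FFD does not.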
TABLE III: 3D motion tracking with different anatomical views. M1-M3 are variants of the proposed method and M refers to MulViMotion. Results are reported the same way as Table II. Best results in bold.

Method | SAX | 2CH | 4CH | Dice ↑          | HD (mm) ↓        | VD (%) ↓
M1     | √   |     |     | 0.7780 (0.0275) | 18.2564 (3.4031) | 30.66 (7.73)
M2     | √   | √   |     | 0.7964 (0.0273) | 18.1014 (3.7146) | 24.05 (5.24)
M3     | √   |     | √   | 0.7904 (0.0305) | 19.2265 (3.2441) | 17.50 (4.55)
M      | √   | √   | √   | 0.8200 (0.0348) | 14.5937 (4.2449) | 8.62 (4.85)
TABLE IV: 3D motion tracking with different combinations of loss functions. A1 optimizes the proposed method without shape regularization (without L_shape in Eq. 11). A2 adds basic shape regularization on top of A1. M refers to MulViMotion. All models are trained with three anatomical views. Results are reported the same way as Table II. Best results in bold.

Method | L^S_0 | L^S_t | L^S_0→t | Dice ↑          | HD (mm) ↓        | VD (%) ↓
A1     |       |       |         | 0.7134 (0.0316) | 18.9555 (3.1054) | 33.93 (10.27)
A2     | √     | √     |         | 0.7294 (0.0295) | 17.5047 (3.7485) | 12.51 (4.28)
M      | √     | √     | √       | 0.8200 (0.0348) | 14.5937 (4.2449) | 8.62 (4.85)

Fig. 8: 3D motion tracking with different strengths of shape regularization, where the shape loss (L_shape) is computed with different percentages of training subjects (20%, 40%, 60%, 80%, 100%). The left column is Dice score and the right column is Hausdorff distance.
TABLE V: Wall thickness at the ES frame and fractional wall thickening between the ED and ES frames. Results are reported as "mean (standard deviation)".

Segments              | Wall thickness (mm) | Fractional wall thickening (%)
Basal                 |                     |
  Anterior (1)        | 9.7 (2.7)           | 34.0 (39.5)
  Anteroseptal (2)    | 5.7 (2.9)           | -24.4 (38.7)
  Inferoseptal (3)    | 5.5 (2.0)           | -17.3 (30.2)
  Inferior (4)        | 9.0 (1.7)           | 47.8 (28.5)
  Inferolateral (5)   | 11.0 (2.0)          | 72.8 (25.9)
  Anterolateral (6)   | 10.9 (1.8)          | 62.0 (23.8)
Mid-ventricle         |                     |
  Anterior (7)        | 10.9 (1.5)          | 79.9 (21.0)
  Anteroseptal (8)    | 11.9 (1.6)          | 76.2 (21.4)
  Inferoseptal (9)    | 10.8 (1.4)          | 39.8 (12.3)
  Inferior (10)       | 10.9 (1.3)          | 62.5 (15.5)
  Inferolateral (11)  | 11.2 (1.5)          | 73.3 (17.1)
  Anterolateral (12)  | 10.5 (1.2)          | 63.9 (15.6)
Apical                |                     |
  Anterior (13)       | 10.8 (1.1)          | 86.3 (23.2)
  Septal (14)         | 10.9 (1.4)          | 76.7 (20.5)
  Inferior (15)       | 10.6 (1.4)          | 76.2 (15.1)
  Lateral (16)        | 11.1 (1.4)          | 84.3 (18.9)
Global                | 10.1 (2.5)          | 55.9 (40.6)
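Fractional wall thickening in Table V relates the per-segment wall thickness at ED and ES. Assuming the standard definition (WT_ES - WT_ED) / WT_ED, which is not spelled out in the table caption, a sketch:

```python
def fractional_wall_thickening(wt_ed_mm, wt_es_mm):
    """Fractional wall thickening (%) between the ED and ES frames:
    (WT_ES - WT_ED) / WT_ED * 100, per AHA segment."""
    return (wt_es_mm - wt_ed_mm) / wt_ed_mm * 100.0

print(fractional_wall_thickening(6.0, 9.0))  # 50.0 (the wall thickened by half)
```

Negative values, as seen in the basal septal segments, indicate apparent wall thinning between ED and ES.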
TABLE VI: Quantitative comparison between 3D-UNet and MulViMotion* on the test set. MulViMotion* uses unaligned SAX ground truth edge maps during training. Results are reported the same way as Table II. Best results in bold.

Methods      | Dice ↑          | HD (mm) ↓        | VD (%) ↓
3D-UNet [16] | 0.7382 (0.0293) | 17.4785 (3.1030) | 30.97 (9.89)
MulViMotion* | 0.7856 (0.0295) | 16.0028 (3.9749) | 21.35 (5.32)
TABLE VII: Quantitative comparison between VoxelMorph (VM) [6] and MulViMotion on the test set. VM follows the optimal architecture and hyper-parameters suggested by the authors. VM† uses a bigger architecture^9. Results are reported the same way as Table II. Best results in bold.

Methods     | Dice ↑          | HD (mm) ↓        | VD (%) ↓
VM [6]      | 0.7115 (0.0339) | 15.3277 (2.7690) | 34.71 (11.84)
VM† [6]     | 0.7147 (0.0307) | 17.6747 (4.3181) | 31.75 (10.80)
MulViMotion | 0.8200 (0.0348) | 14.5937 (4.2449) | 8.62 (4.85)
^1 The contours are generated based on [22] and a manual quality control. Detailed information is shown in Sec. IV-A.
^2 This is implemented by the PyTorch function grid_sample().
^3 Application number 40616, https://www.ukbiobank.ac.uk/
^4 http://mirtk.github.io/
^5 https://github.com/InsightSoftwareConsortium/SimpleITK-Notebooks/blob/master/Python/66 Registration Demons.ipynb
^7 Implemented based on https://github.com/baiwenjia/ukbb cardiac
^8 https://github.com/voxelmorph/voxelmorph
^9 Filters in the encoder are [64, 128, 256, 512] while filters in the decoder are [512, 512, 256, 256, 128, 64, 64]. The weight of the smoothness loss is chosen by grid search (λ ∈ {0.5, 0.6, 0.7, 0.8, 0.9, 1}) and we select the value with the best result on validation data, λ = 0.7. The weight for auxiliary segmentation is chosen from γ ∈ {0.1, 0.3, 0.5, 0.7, 1} and we select γ = 0.5.
REFERENCES

[1] M. Abdelkhalek, H. Aguib, M. Moustafa, and K. Elkhodary. Enhanced 3D myocardial strain estimation from multi-view 2D CMR imaging. arXiv preprint arXiv:2009.12466, 2020.
[2] M. S. Amzulescu, M. De Craene, H. Langet, A. Pasquet, D. Vancraeynest, A. C. Pouleur, J. L. Vanoverschelde, and B. L. Gerber. Myocardial strain imaging: review of general principles, validation, and sources of discrepancies. Eur Heart J Cardiovasc Imaging, 20(6):605-619, 2019.
[3] R. Attar, M. Pereañez, C. Bowles, S. Piechnik, S. Neubauer, S. Petersen, and A. F. Frangi. 3D cardiac shape prediction with deep neural networks: Simultaneous use of images and patient metadata. In MICCAI, 2019.
[4] W. Bai, M. Sinclair, G. Tarroni, O. Oktay, M. Rajchl, G. Vaillant, A. M. Lee, N. Aung, E. Lukaschuk, M. M. Sanghvi, F. Zemrak, K. Fung, J. M. Paiva, V. Carapella, Y. J. Kim, H. Suzuki, B. Kainz, P. M. Matthews, S. E. Petersen, S. K. Piechnik, S. Neubauer, B. Glocker, and D. Rueckert. Automated cardiovascular magnetic resonance image analysis with fully convolutional networks. J Cardiovasc Magn Reson, 2018.
[5] W. Bai, H. Suzuki, J. Huang, C. Francis, S. Wang, G. Tarroni, F. Guitton, N. Aung, K. Fung, S. Petersen, S. Piechnik, S. Neubauer, E. Evangelou, A. Dehghan, D. O'Regan, M. Wilkins, Y. Guo, P. Matthews, and D. Rueckert. A population-based phenome-wide association study of cardiac and aortic structure and function. Nat Med, 26:1654-1662, 2020.
[6] G. Balakrishnan, A. Zhao, M. R. Sabuncu, J. V. Guttag, and A. V. Dalca. VoxelMorph: A learning framework for deformable medical image registration. IEEE Trans Med Imaging, 38(8):1788-1800, 2019.
[7] G. Bello, T. Dawes, J. Duan, C. Biffi, M. d. M. A. Simoes, L. Howard, S. Gibbs, M. Wilkins, S. Cook, D. Rueckert, and D. O'Regan. Deep learning cardiac motion analysis for human survival prediction. Nat Mach Intell, 1:95-104, 2019.
[8] C. Biffi, J. J. Cerrolaza, G. Tarroni, A. de Marvao, S. A. Cook, D. P. O'Regan, and D. Rueckert. 3D high-resolution cardiac segmentation reconstruction from 2D views using conditional variational autoencoders. In ISBI, 2019.
[9] C. Bycroft, C. Freeman, D. Petkova, G. Band, L. T. Elliott, K. Sharp, A. Motyer, D. Vukcevic, O. Delaneau, J. O'Connell, et al. The UK Biobank resource with deep phenotyping and genomic data. Nature, 562(7726):203-209, 2018.
[10] J. Caballero, C. Ledig, A. Aitken, A. Acosta, J. Totz, Z. Wang, and W. Shi. Real-time video super-resolution with spatio-temporal networks and motion compensation. In CVPR, 2017.
[11] J. J. Cao, N. Ngai, L. J. Duncanson, J. Cheng, K. Gliganic, and Q. Chen. A comparison of both dense and feature tracking techniques with tagging for the cardiovascular magnetic resonance assessment of myocardial strain. J Cardiovasc Magn Reson, 20, 2018.
[12] R. Chandrashekara, R. Mohiaddin, and D. Rueckert. Analysis of 3-D myocardial motion in tagged MR images using nonrigid image registration. IEEE Trans Med Imaging, 23(10):1245-1250, 2004.
[13] C. Chen, C. Biffi, G. Tarroni, S. E. Petersen, W. Bai, and D. Rueckert. Learning shape priors for robust cardiac MR segmentation from multi-view images. In MICCAI, pages 523-531, 2019.
[14] T. Chen, X. Wang, S. Chung, D. Metaxas, and L. Axel. Automated 3D motion tracking using Gabor filter bank, robust point matching, and deformable models. IEEE Trans Med Imaging, 29:1-11, 2009.
[15] J. Cheng, Y.-H. Tsai, S. Wang, and M.-H. Yang. SegFlow: Joint learning for video object segmentation and optical flow. In ICCV, 2017.
[16] Ö. Çiçek, A. Abdulkadir, S. Lienkamp, T. Brox, and O. Ronneberger. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. In MICCAI, pages 424-432, 2016.
[17] P. Claus, A. M. S. Omar, G. Pedrizzetti, P. P. Sengupta, and E. Nagel. Tissue tracking technology for assessing cardiac mechanics: Principles, normal values, and clinical applications. JACC Cardiovasc Imaging, 8(12):1444-1450, 2015.
[18] J. Clough, I. Oksuz, E. Puyol-Antón, B. Ruijsink, A. King, and J. Schnabel. Global and local interpretability for cardiac MRI classification. In MICCAI, 2019.
[19] M. D. Craene, G. Piella, O. Camara, N. Duchateau, E. Silva, A. Doltra, J. D'hooge, J. Brugada, M. Sitges, and A. F. Frangi. Temporal diffeomorphic free-form deformation: Application to motion and strain estimation from 3D echocardiography. Med Imag Anal, 16(2):427-450, 2012.
[20] B. D. de Vos, F. F. Berendsen, M. A. Viergever, H. Sokooti, M. Staring, and I. Išgum. A deep learning framework for unsupervised affine and deformable image registration. Med Imag Anal, 52(2):128-143, 2019.
[21] S. J. Dong, J. H. MacGregor, A. P. Crawley, E. R. McVeigh, I. Belenkie, E. R. Smith, J. V. Tyberg, and R. Beyar. Left ventricular wall thickness and regional systolic function in patients with hypertrophic cardiomyopathy: A three-dimensional tagged magnetic resonance imaging study. Circulation, 90:1200-1209, 1994.
[22] J. Duan, G. Bello, J. Schlemper, W. Bai, T. J. W. Dawes, C. Biffi, A. de Marvao, G. Doumoud, D. P. O'Regan, and D. Rueckert. Automatic 3D bi-ventricular segmentation of cardiac images by a shape-refined multi-task deep learning approach. IEEE Trans Med Imaging, 38(9):2151-2164, 2019.
[23] E. Ferdian, A. Suinesiaputra, K. Fung, N. Aung, E. Lukaschuk, A. Barutcu, E. Maclean, J. M. Paiva, S. K. Piechnik, S. Neubauer, S. E. Petersen, and A. A. Young. Fully automated myocardial strain estimation from cardiovascular MRI-tagged images using a deep learning framework in the UK Biobank. Radiol Cardiothorac Imaging, 2, 2020.
[24] D. Ginat, M. Fong, D. Tuttle, S. Hobbs, and R. C. Vyas. Cardiac imaging: Part 1, MR pulse sequences, imaging planes, and basic anatomy. AJR Am J Roentgenol, 197(4):808-815, 2011.
[25] E.-S. H. Ibrahim. Myocardial tagging by cardiovascular magnetic resonance: evolution of techniques - pulse sequences, analysis algorithms, and applications. J Cardiovasc Magn Reson, 13(36), 2011.
[26] P. C. Ivanov, L. A. N. Amaral, A. L. Goldberger, and H. E. Stanley. Stochastic feedback and the regulation of biological rhythms. Europhysics Letters (EPL), 43(4):363-368, 1998.
[27] M. Jaderberg, K. Simonyan, A. Zisserman, and K. Kavukcuoglu. Spatial transformer networks. In NeurIPS, 2015.
[28] N. Kawel-Boehm, A. Maceira, E. Valsangiacomo-Buechel, J. Vogel-Claussen, E. Turkbey, R. Williams, S. Plein, M. Tee, J. Eng, and D. Bluemke. Normal values for cardiovascular magnetic resonance in adults and children. J Cardiovasc Magn Reson, 17, 2015.
[29] J. Krebs, H. Delingette, B. Mailhé, N. Ayache, and T. Mansi. Learning a probabilistic model for diffeomorphic registration. IEEE Trans Med Imaging, 38:2165-2176, 2019.
[30] X. Liu, K. Z. Abd-Elmoniem, M. Stone, E. Z. Murano, J. Zhuo, R. P. Gullapalli, and J. L. Prince. Incompressible deformation estimation algorithm (IDEA) from tagged MR images. IEEE Trans Med Imaging, 31(2):326-340, 2011.
[31] X. Lu, M. Jolly, B. Georgescu, C. Hayes, P. Speier, M. Schmidt, X. Bi, R. Kroeker, D. Comaniciu, P. Kellman, E. Mueller, and J. Guehring. Automatic view planning for cardiac MRI acquisition. In MICCAI, pages 479-486, 2011.
[32] K. McLeod, A. Prakosa, T. Mansi, M. Sermesant, and X. Pennec. An incompressible log-domain demons algorithm for tracking heart tissue. In MICCAI workshop STACOM, 2011.
[33] N. Osman, E. McVeigh, and J. Prince. Imaging heart motion using harmonic phase MRI. IEEE Trans Med Imaging, 19(3):186-202, 2000.
[34] X. Papademetris, A. J. Sinusas, D. P. Dione, and J. S. Duncan. Estimation of 3D left ventricular deformation from echocardiography. Med Imag Anal, 5(1):17-28, 2001.
[35] S. Petersen, P. Matthews, J. Francis, M. Robson, F. Zemrak, R. Boubertakh, A. Young, S. Hudson, P. Weale, S. Garratt, R. Collins, S. Piechnik, and S. Neubauer. UK Biobank's cardiovascular magnetic resonance protocol. J Cardiovasc Magn Reson, 18, 2015.
[36] C. Petitjean, N. Rougon, and P. Cluzel. Assessment of myocardial function: a review of quantification methods and results using tagged MRI. J Cardiovasc Magn Reson, 7:501-516, 2005.
[37] E. Puyol-Antón, B. Ruijsink, W. Bai, H. Langet, M. De Craene, J. A. Schnabel, P. Piro, A. P. King, and M. Sinclair. Fully automated myocardial strain estimation from cine MRI using convolutional neural networks. In ISBI, 2018.
[38] E. Puyol-Antón, B. Ruijsink, B. Gerber, M. S. Amzulescu, H. Langet, M. De Craene, J. A. Schnabel, P. Piro, and A. P. King. Regional multi-view learning for cardiac motion analysis: Application to identification of dilated cardiomyopathy patients. IEEE Trans Biomed Eng, 66(4):956-966, 2019.
[39] C. Qin, W. Bai, J. Schlemper, S. E. Petersen, S. K. Piechnik, S. Neubauer, and D. Rueckert. Joint learning of motion estimation and segmentation for cardiac MR image sequences. In MICCAI, 2018.
[40] C. Qin, S. Wang, C. Chen, H. Qiu, W. Bai, and D. Rueckert. Biomechanics-informed neural networks for myocardial motion tracking in MRI. In MICCAI, 2020.
[41] M. Reindl, C. Tiller, M. Holzknecht, I. Lechner, A. Beck, D. Plappert, M. Gorzala, M. Pamminger, A. Mayr, G. Klug, A. Bauer, B. Metzler, and S. J. Reinstadler. Prognostic implications of global longitudinal strain by feature-tracking cardiac magnetic resonance in ST-elevation myocardial infarction. Circ Cardiovasc Imaging, 12(11):e009404, 2019.
[42] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation. In MICCAI, pages 234-241, 2015.
[43] D. Rueckert, L. Sonoda, C. Hayes, D. Hill, M. Leach, and D. Hawkes. Nonrigid registration using free-form deformations: application to breast MR images. IEEE Trans Med Imaging, 18(8):712-721, 1999.
[44] D. Shen, H. Sundar, Z. Xue, Y. Fan, and H. Litt. Consistent estimation of cardiac motions by 4D image registration. In MICCAI, 2005.
[45] W. Shi, X. Zhuang, H. Wang, S. Duckett, D. V. N. Luong, C. Tobon-Gomez, K.-P. Tung, P. Edwards, K. Rhode, R. Razavi, S. Ourselin, and D. Rueckert. A comprehensive cardiac motion estimation framework using both untagged and 3-D tagged MR images based on nonrigid registration. IEEE Trans Med Imaging, 31:1263-1275, 2012.
Myocardial strain computed at multiple spatial scales from tagged magnetic resonance imaging: Estimating cardiac biomarkers for crt patients. M Sinclair, D Peressutti, E Puyol-Antón, W Bai, S Rivolo, J Webb, S Claridge, T Jackson, D Nordsletten, M Hadjicharalambous, E Kerfoot, C A Rinaldi, D Rueckert, A P King, Medical Image Analysis. 43M. Sinclair, D. Peressutti, E. Puyol-Antón, W. Bai, S. Rivolo, J. Webb, S. Claridge, T. Jackson, D. Nord- sletten, M. Hadjicharalambous, E. Kerfoot, C. A. Rinaldi, D. Rueckert, and A. P. King. Myocardial strain computed at multiple spatial scales from tagged magnetic resonance imaging: Estimating cardiac biomarkers for crt patients. Medical Image Analysis, 43:169-185, 2018.
Physics of the human cardiovascular system. A Stefanovska, Contemporary Physics. 40A. Stefanovska. Physics of the human cardiovascular system. Contemporary Physics, 40(1):31-55, 1999.
A semi-supervised joint network for simultaneous left ventricular motion tracking and segmentation in 4D echocardiography. K Ta, S S Ahn, J C Stendahl, A J Sinusas, J S Duncan, MICCAI. K. Ta, S. S. Ahn, J. C. Stendahl, A. J. Sinusas, and J. S. Duncan. A semi-supervised joint network for simultane- ous left ventricular motion tracking and segmentation in 4D echocardiography. In MICCAI, 2020.
Image matching as a diffusion process: an analogy with Maxwell's demons. J.-P Thirion, Med Imag Anal. 23J.-P. Thirion. Image matching as a diffusion process: an analogy with Maxwell's demons. Med Imag Anal, 2(3):243-260, 1998.
Benchmarking framework for myocardial tracking and deformation algorithms: An open access database. C Tobon-Gomez, M De Craene, K Mcleod, L Tautz, W Shi, A Hennemuth, A Prakosa, H Wang, G Carr-White, S Kapetanakis, A Lutz, V Rasche, T Schaeffter, C Butakoff, O Friman, T Mansi, M Sermesant, X Zhuang, S Ourselin, H.-O Peitgen, X Pennec, R Razavi, D Rueckert, A Frangi, K Rhode, Med Imag Anal. 176C. Tobon-Gomez, M. De Craene, K. McLeod, L. Tautz, W. Shi, A. Hennemuth, A. Prakosa, H. Wang, G. Carr- White, S. Kapetanakis, A. Lutz, V. Rasche, T. Scha- effter, C. Butakoff, O. Friman, T. Mansi, M. Serme- sant, X. Zhuang, S. Ourselin, H.-O. Peitgen, X. Pen- nec, R. Razavi, D. Rueckert, A. Frangi, and K. Rhode. Benchmarking framework for myocardial tracking and deformation algorithms: An open access database. Med Imag Anal, 17(6):632-648, 2013.
Normal values for wall thickening by magnetic resonance imaging. J Ubachs, E Heiberg, K Steding, H Arheden, J Cardiovasc Magn Reson. 11J. Ubachs, E. Heiberg, K. Steding, and H. Arheden. Normal values for wall thickening by magnetic resonance imaging. J Cardiovasc Magn Reson, 11, 2009.
Non-parametric diffeomorphic image registration with the demons algorithm. T Vercauteren, X Pennec, A Perchant, N Ayache, MICCAI. T. Vercauteren, X. Pennec, A. Perchant, and N. Ayache. Non-parametric diffeomorphic image registration with the demons algorithm. In MICCAI, 2007.
Fast LV motion estimation using subspace approximation techniques. Y.-P Wang, Y Chen, A Amini, IEEE Transactions on Medical Imaging. 206Y.-P. Wang, Y. Chen, and A. Amini. Fast LV motion esti- mation using subspace approximation techniques. IEEE Transactions on Medical Imaging, 20(6):499-513, 2001.
Adversarial uni-and multi-modal stream networks for multimodal image registration. Z Xu, J Luo, J Yan, R Pulya, X Li, W Wells, J Jagadeesan, MICCAI. Z. Xu, J. Luo, J. Yan, R. Pulya, X. Li, W. Wells, and J. Jagadeesan. Adversarial uni-and multi-modal stream networks for multimodal image registration. In MICCAI, pages 222-232, 2020.
Deeptag: An unsupervised deep learning method for motion tracking oncardiac tagging magnetic resonance images. M Ye, M Kanski, D Yang, Q Chang, Z Yan, Q Huang, L Axel, D Metaxas, CVPR. 2021M. Ye, M. Kanski, D. Yang, Q. Chang, Z. Yan, Q. Huang, L. Axel, and D. Metaxas. Deeptag: An unsupervised deep learning method for motion tracking oncardiac tagging magnetic resonance images. In CVPR, 2021.
FOAL: Fast online adaptive learning for cardiac motion estimation. H Yu, S Sun, H Yu, X Chen, H Shi, T S Huang, T Chen, CVPR. H. Yu, S. Sun, H. Yu, X. Chen, H. Shi, T. S. Huang, and T. Chen. FOAL: Fast online adaptive learning for cardiac motion estimation. In CVPR, 2020.
Human heart: tagging with MR imaging-a method for noninvasive assessment of myocardial motion. E Zerhouni, D Parish, W Rogers, A Yang, E Shapiro, Radiology. 169E. Zerhouni, D. Parish, W. Rogers, A. Yang, and E. Shapiro. Human heart: tagging with MR imaging-a method for noninvasive assessment of myocardial mo- tion. Radiology, 169:59-63, 1988.
Explainable cardiac pathology classification on cine MRI with motion characterization by semi-supervised learning of apparent flow. Q Zheng, H Delingette, N Ayache, Med Imag Anal. 56Q. Zheng, H. Delingette, and N. Ayache. Explainable cardiac pathology classification on cine MRI with motion characterization by semi-supervised learning of apparent flow. Med Imag Anal, 56:80-95, 2019.
|
[
"https://github.com/qmeng99/dynamic",
"https://github.com/Marjola89/3Dstrain",
"https://github.com/wolny/pytorch-3dunet(a)",
"https://github.com/InsightSoftwareConsortium/SimpleITK-Notebooks/blob/master/Python/66",
"https://github.com/baiwenjia/ukbb",
"https://github.com/voxelmorph/voxelmorph"
] |
arXiv: 0910.3318 | DOI: 10.1142/s0218301311017338
Pion correlations in Nuclear Matter
17 Oct 2009
P K Panda
Centro de Física Computacional
Departamento de Física
Universidade de Coimbra
P-3004 -516CoimbraPortugal
S Sarangi
ICFAI Institute of Science & Technology
Bhubaneswar-751 010India
J Da Providência
Centro de Física Computacional
Departamento de Física
Universidade de Coimbra
P-3004 -516CoimbraPortugal
PACS numbers: 21.65.-f, 21.65.Mn, 21.65.Jk, 21.60.Jz
The saturation properties of the nuclear matter taking pion correlations into account is studied.We construct a Bogoliubov transformations for the pion pair operators and calculate the energy associated with the pion pairs. The pion dispersion relation is investigated. We next study the correlation energy due to one pion exchange in nuclear matter and neutron matter at random phase approximation using the generator coordinate method. The techniques of the charged pion correlations are discussed in the neutron matter calculations. We observe that there is no sign of the pion condensation in this model.
I. INTRODUCTION
Understanding the nuclear force at a microscopic level is an important problem, since it forms the basis for nuclear matter and finite nuclei calculations. The nucleon-nucleon interaction, which may arise as a residual interaction due to the quark and gluon substructure of the nucleons, is technically not solvable from first principles. The alternative approach is to tackle the problem through meson exchange.
In recent years relativistic mean field theory (RMF) [1,2] has been quite successful in describing nuclear matter and finite nuclei properties. The NN dynamics arises in this model from the exchange of a Lorentz scalar isoscalar meson, σ, which provides the mid range attraction and an isoscalar vector meson, ω, which provides the repulsion. Also, in this model the inclusion of the ρ-meson takes care of the neutron-proton asymmetry. With a small number of parameters, the RMF model reproduces the nuclear matter saturation and describes the bulk and the single particle properties for the finite nuclei reasonably well.
Despite the success of the RMF model, several open questions still remain unanswered.
Firstly, the meson fields are classical; secondly, the attractive part of the nuclear force is mediated through the hypothetical σ meson, which could be an effect of multi-pion exchanges [3,4,5,6,7]. The σ cannot be interpreted as the representation of a physical particle, since such a particle or resonant state remains to be confirmed experimentally. Thus, on aesthetic as well as phenomenological grounds, alternative approaches will add to our understanding. The original Walecka model at the Hartree approximation does not contain a dynamical description of the pion fields; however, the importance of the pions in NN dynamics cannot be ignored. Recognizing the essential role of the pion in the description of the nuclear medium, an alternative approach has been developed for nuclear matter [4], the deuteron [5] and 4He [6]. In this method, the description of nuclear matter in terms of pion pairs is studied through a squeezed coherent state type of construction [4,5,6,7]. This simulates the effects of the σ meson and provides a very natural quantum-mechanical formalism for the classical fields.
The generator coordinate method (GCM) is a technique of great physical appeal, developed [8] to describe collective oscillations in nuclei. Besides being extensively used in nuclear structure physics, it often finds application in various other branches of physics [9]. In this work we investigate the pion condensation problem in nuclear matter and neutron matter using the generator coordinate method. In a simplified model, the problem was studied in Ref. [10], where a coherent state description for the pions was used. Using the Gaussian overlap and harmonic approximations, the Hamiltonian may be diagonalized by a random phase approximation (RPA)-like canonical transformation [11]. In this methodology one can go beyond the coherent state description, and the calculation can be carried out exactly, without further approximation. The present analysis is an extension of the mean-field approach of Walecka in which the classical fields are replaced by quantum coherent states for the pion pairs. A further advantage of the present approach is that the one-pion-exchange correlation contributions are considered at the RPA level using similar Bogoliubov transformations. In the present model, we observe no sign of pion condensation in the RPA modes.
The outline of the paper is as follows. In section 2, we derive a pion-nucleon Hamiltonian in the non-relativistic limit. We then construct a Bogoliubov transformation for the pion-pair operators and calculate the energy associated with the pion pairs. In section 3, we calculate the correlation energies due to one-pion exchange in nuclear matter and neutron matter at the random phase approximation (RPA). Section 4 contains a discussion of the saturation properties of nuclear matter and of the pion dispersion relation in the medium, and concluding remarks.
II. FORMALISM
A. Non-relativistic Hamiltonian
The Lagrangian for the pion nucleon system is taken as
$$\mathcal{L} = \bar{\psi}\left(i\gamma^\mu\partial_\mu - M + G\gamma_5\phi\right)\psi + \frac{1}{2}\left(\partial_\mu\varphi_i\,\partial^\mu\varphi_i - m^2\varphi_i\varphi_i\right)\,, \qquad (1)$$

where ψ is the isospin doublet of the nucleon field with mass M, the ϕ_i are the pion fields, and φ = τ_iϕ_i represents the off-mass-shell isospin-triplet pion field with mass m; G is the pion-nucleon coupling constant, and repeated indices indicate summation. The γ matrices are represented as

$$\vec{\gamma} = \begin{pmatrix} 0 & \vec{\sigma} \\ -\vec{\sigma} & 0 \end{pmatrix}\,, \qquad \gamma^0 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\,, \qquad \gamma_5 = \begin{pmatrix} 0 & -i \\ -i & 0 \end{pmatrix}\,.$$
From the above Lagrangian, the equations of motion are
(E − M)ψ I − ( σ · p + iGφ)ψ II = 0 ,(2)(E + M)ψ II − ( σ · p − iGφ)ψ I = 0 ,(3)
where $E = i\,\partial/\partial t$ and $\vec{p} = -i\,\partial/\partial\vec{x}$. Eliminating the small component ψ_II from equations (2) and (3), we have
(E 2 − M 2 ) − (E + M)( σ · p + iGφ)(E + M) −1 ( σ · p − iGφ) ψ I = 0 .(4)
Equation (4) can be rewritten as
E 2 − M 2 − p 2 + iG[( σ · p), φ] − G 2 φ · φ ψ I = 0 .(5)
From equation (5), we can immediately identify the effective Hamiltonian for the nucleons as
H N = ψ † I ( x) p 2 + M 2 − iG(( σ · p)φ) + G 2 φ 2 1/2 ψ I ( x) ≃ ψ † I ( x) ǫ x − iG 2ǫ x (( σ · p)φ) + G 2 2ǫ x φ 2 ψ I ( x) = H 0 N (x) + H int (x) ,(6)
where the single-particle nucleon energy operator ε_x is given by $\epsilon_x = (M^2 - \nabla_x^2)^{1/2}$. In the non-relativistic approximation, we replace ε_x by M when it appears in a denominator, and by $M + \frac{p^2}{2M}$ otherwise. Now the effective Hamiltonian becomes
H(x) = H 0 N (x) + H int (x) + H M (x),(7)
where the free nucleon part H 0 N (x) is given by
H 0 N (x) = ψ † (x) M + ∇ 2 x 2M ψ(x) ,(8)
the πN interaction Hamiltonian is provided by
H int (x) = ψ † (x) − iG 2M ((σ · p) φ) + G 2 2M φ 2 ψ(x) ,(9)
and the free meson part H M (x) is defined as
$$H_M(x) = \frac{1}{2}\left[\dot\varphi_i^2 + (\vec\nabla\varphi_i)\cdot(\vec\nabla\varphi_i) + m^2\varphi_i^2\right]\,. \qquad (10)$$
We expand the pion field operator ϕ_i(x) in terms of creation and annihilation operators of off-mass-shell pions satisfying the equal-time algebra,

$$\varphi_i(x) = \frac{1}{\sqrt{2\omega_x}}\left(a_i(x)^\dagger + a_i(x)\right)\,, \qquad \dot\varphi_i(x) = i\sqrt{\frac{\omega_x}{2}}\left(a_i(x)^\dagger - a_i(x)\right)\,, \qquad (11)$$

with energy $\omega_x = (m^2 - \nabla_x^2)^{1/2}$.
B. Correlation energy associated with two pions and Bogoliubov Transformation
The quadratic terms in the pion field in eq. (9) provide a isoscalar scalar interaction of nucleons and thus would simulate the effects of σ-mesons of the Walecka model.
A pion-pair creation operator given as
B † = 1 2 k f k a † ki a † −ki ,(12)
is then constructed with the creation and annihilation operators in momentum space and the ansatz function f (k). We then define the unitary transformation U as
U = e (B † −B)(13)
and note that U, operating on vacuum, creates an arbitrarily large number of scalar isospin singlet pairs of pions corresponding to squeezed coherent states. We will show that this is the appropriate transformation to diagonalize the pion part of the Hamiltonian. The "pion dressing" of nuclear matter is then introduced through the state
|Ψ = U|0 = e (B † −B) |0 .(14)
We obtain

$$\tilde{a}_{ki} = U^\dagger a_{ki} U = (\cosh f_k)\, a_{ki} + (\sinh f_k)\, a^\dagger_{-ki}\,, \qquad (15)$$

which is a Bogoliubov transformation. Here U is a unitary operator, and the pseudo-pions ã_{ki} are the result of the unitary transformation. It can easily be checked that the operators ã_{ki} satisfy the standard bosonic commutation relations

$$[\tilde{a}_{ki},\, \tilde{a}^\dagger_{k'j}] = \delta_{ij}\,\delta_{k,k'}\,, \qquad [\tilde{a}^\dagger_{ki},\, \tilde{a}^\dagger_{k'j}] = [\tilde{a}_{ki},\, \tilde{a}_{k'j}] = 0\,, \qquad (16)$$

and also

$$\tilde{a}_{ki}\,|\Psi\rangle = 0\,. \qquad (17)$$

The reverse transformation is

$$a_{ki} = (\cosh f_k)\,\tilde{a}_{ki} - (\sinh f_k)\,\tilde{a}^\dagger_{-ki} \equiv x_k\,\tilde{a}_{ki} - y_k\,\tilde{a}^\dagger_{-ki}\,. \qquad (18)$$
In momentum space the effective Hamiltonian (7) may be re-written as
H ≈ p,αη ε p c † p,αη c p,αη + q,j ω q a † q,j a q,j − pq,jαα ′ ηη ′ G 2M ω q V c † p+q,αη c p,α ′ η ′ (iσ.q) αα ′ τ j (a q,j + a † −q,j ) + pq,jαη G 2 2Mω q V c † p,αη c p,αη a † q,j a † −q,j + a q,j a −q,j + 2a † q,j a q,j .(19)
Here p, α and η are, respectively, the momentum, spin and isospin quantum numbers of the nucleon, and q, j are the momentum and isospin labels of the pion; p = |p|, q = |q|. $c^\dagger_{p,\alpha\eta}$ is the creation operator for a nucleon with momentum p, spin α and isospin η. The contribution of the terms quadratic in the pion field in the above Hamiltonian is
H 2π = q,j ω q a † q,j a q,j + pq,jαη G 2 2Mω q c † p,αη c p,αη a † q,j a † −q,j + a q,j a −q,j + 2a † q,j a q,j = q,j ω q a † q,j a q,j + q,j G 2 ρ 2Mω q a † q,j a † −q,j + a q,j a −q,j + 2a † q,j a q,j = q,j ω q + G 2 ρ Mω q a † q,j a q,j + q,j G 2 ρ 2Mω q a † q,j a † −q,j + a q,j a −q,j = q,j ω ′ q a † q,j a q,j + q,j g ′ 2 a † q,j a † −q,j + a q,j a −q,j(20)
where
$\omega'_q = \omega_q + \frac{G^2\rho}{M\omega_q} = \omega_q + g'$, with $g' = \frac{G^2\rho}{M\omega_q}$.
Now the equation of motion for the pions becomes

$$[H_{2\pi},\, \tilde{a}^\dagger_{q,j}] = \tilde\omega_q\, \tilde{a}^\dagger_{q,j}\,. \qquad (21)$$

This gives

$$\omega'_q x_q\, a^\dagger_{q,j} + g' x_q\, a_{-q,j} - \omega'_q y_q\, a_{-q,j} - g' y_q\, a^\dagger_{q,j} = \tilde\omega_q\left(x_q\, a^\dagger_{q,j} + y_q\, a_{-q,j}\right)\,. \qquad (22)$$
The characteristic equation is

$$\begin{vmatrix} \omega'_q - \tilde\omega_q & -g' \\ g' & -(\omega'_q + \tilde\omega_q) \end{vmatrix} = \tilde\omega_q^2 - \omega_q'^2 + g'^2 = 0\,, \qquad (23)$$

which gives

$$\tilde\omega_q = \sqrt{\omega_q'^2 - g'^2}\,, \qquad x_q = \sqrt{\frac{\omega'_q + \tilde\omega_q}{2\tilde\omega_q}}\,, \qquad y_q = \sqrt{\frac{\omega'_q - \tilde\omega_q}{2\tilde\omega_q}}\,. \qquad (24)$$

Now

$$H_{2\pi} = \sum_{q,j}\tilde\omega_q\, \tilde{a}^\dagger_{q,j}\tilde{a}_{q,j} + \frac{3}{2}\sum_q \left(\tilde\omega_q - \omega'_q\right)\,. \qquad (25)$$
We now include in H_2π a term corresponding to a phenomenological repulsion energy between the pions of a "pair", given by

$$H_R = A \sum_{q,j} e^{R_\pi^2 q^2}\, a^\dagger_{q,j} a_{q,j}\,, \qquad (26)$$

where the two parameters A and R_π correspond, respectively, to the strength and length scale of the repulsion and will be determined phenomenologically. This term amounts to imposing a cutoff on the momentum q, reflecting the fact that momenta much larger than k_f are not dynamically meaningful. With this repulsion term, we now have

$$\omega'_q = \omega_q + A\, e^{R_\pi^2 q^2} + \frac{G^2\rho}{M\omega_q} = \omega_q + A\, e^{R_\pi^2 q^2} + g'\,, \qquad (27)$$

with

$$g' = \frac{G^2\rho}{M\omega_q}\,, \qquad \omega_q = \sqrt{q^2 + m^2}\,, \qquad \tilde\omega_q = \sqrt{\omega_q'^2 - g'^2}\,. \qquad (28)$$
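The dressed pion dispersion of eqs. (27)-(28) and the Bogoliubov amplitudes of eq. (24) are straightforward to evaluate numerically. The following minimal Python sketch is our illustration, not part of the paper; it assumes the parameter values quoted in Table I (A = 14.58 MeV, R_π = 1.45 fm) and G²/4π = 14.6, with all energies in MeV and ħc = 197.327 MeV·fm used to convert the density and R_π.

```python
import numpy as np

HBARC = 197.327  # MeV.fm (our unit convention)

def pair_dispersion(q, rho_fm3, G2=4*np.pi*14.6, M=940.0, m=140.0,
                    A=14.58, R_pi=1.45):
    """Dressed pion energy (eq. 28) and Bogoliubov amplitudes (eq. 24).
    q in MeV, density rho_fm3 in fm^-3; A (MeV), R_pi (fm) from Table I."""
    rho = rho_fm3 * HBARC**3                   # density in MeV^3
    omega = np.sqrt(q**2 + m**2)               # free pion energy
    g = G2 * rho / (M * omega)                 # g' of eq. (28)
    omega_p = omega + A * np.exp((R_pi * q / HBARC)**2) + g   # eq. (27)
    omega_t = np.sqrt(omega_p**2 - g**2)                      # eq. (28)
    x = np.sqrt((omega_p + omega_t) / (2 * omega_t))          # eq. (24)
    y = np.sqrt((omega_p - omega_t) / (2 * omega_t))
    return omega_t, x, y
```

At vanishing density the dressing disappears (g' → 0, so ω̃_q → ω_q + A), while x_q² − y_q² = 1 holds at any density, as required for a canonical Bogoliubov transformation.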
After transformation the Hamiltonian in equation (19) becomes
H ≃ p,αη ε p c † p,αη c p,αη + q,jω qã † q,jãq,j + 3 2 q (ω q − ω ′ q ) − pq,jαα ′ ηη ′ g q ω q V c † p+q,αη c p,α ′ η ′ (iσ · q) αα ′ τ j (ã q,j +ã † −q,j )(29)
where
g q = G (x q − y q ) 2M .(30)
In the next section, we will consider RPA and calculate the correlation energy associated with one pion exchange for nuclear matter and neutron matter.
III. CORRELATION ENERGY ASSOCIATED WITH ONE PION EXCHANGE
A. nuclear matter
We consider a Slater determinant of plane waves
|Φ = αη,|p|≤p F c † p,αη |0 ,(31)
with |0 is the absolute vacuum, p F is the Fermi momentum and
c p,α,η |0 = 0 .(32)
Excitations with momentum transfer q are coupled to excitations with momentum transfer −q. Thus, the wave function |Ψ which describes such excitations should read
|Ψ = exp S|Φ ,(33)
where
S qj = U q N q α,α ′ ,η,η ′ p ∈ Ωq α, η|(σ · q)τ j |α ′ , η ′ c † p+q,α,η c p,α ′ ,η ′ + U −q N q α,α ′ ,η,η ′ p ∈ Ω −q α, η| − (σ · q)τ j |α ′ , η ′ c † p−q,α,η c p,α ′ ,η ′ = U q B † q,j + U −q B † −q,j .(34)
In the above, N_q is the normalization factor ensuring

$$N_q \sum_{\alpha\alpha'\eta\eta',\; p\,\in\,\Omega_q} \left|\langle\alpha,\eta|\,\vec\sigma\cdot\vec{q}\,\tau_j\,|\alpha',\eta'\rangle\right|^2 = 4 N_q \sum_{p\,\in\,\Omega_q} q^2 = 1\,, \qquad (35)$$

and the domain Ω_q is defined by |p + q| > p_F, |p| ≤ p_F. Only positive-energy states are occupied. Here |α, η⟩ denotes the spin-isospin eigenstate, with σ_3|α, η⟩ = α|α, η⟩ and τ_3|α, η⟩ = η|α, η⟩. The determination of N_q is now very simple: all we need is the volume of the intersection of two spheres of radius p_F whose centers are a distance q apart. With this, we have

$$N_q^{-1} = \frac{V}{(2\pi)^3}\; 4\pi q^3 \left(p_F^2 - \frac{q^2}{12}\right)\,. \qquad (36)$$
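The geometric factor p_F² − q²/12 in eq. (36) comes from the volume πq(p_F² − q²/12) of the domain Ω_q, obtained by subtracting the lens-shaped intersection of the two Fermi spheres. As a quick consistency check (ours, not part of the paper), this volume can be estimated by Monte Carlo sampling:

```python
import numpy as np

def omega_q_volume_mc(pF, q, n=200_000, seed=0):
    """Monte-Carlo estimate of the volume of Omega_q = {|p| <= p_F, |p+q| > p_F},
    whose closed form pi*q*(p_F^2 - q^2/12) underlies eq. (36)."""
    rng = np.random.default_rng(seed)
    # sample uniformly in a cube enclosing the Fermi sphere
    p = rng.uniform(-pF, pF, size=(n, 3))
    inside = np.linalg.norm(p, axis=1) <= pF
    shifted = p + np.array([0.0, 0.0, q])       # boost by q along z
    in_domain = inside & (np.linalg.norm(shifted, axis=1) > pF)
    return (2 * pF) ** 3 * in_domain.mean()
```

For p_F = 1 and q = 0.5 (arbitrary units) the closed form gives π·0.5·(1 − 0.25/12) ≈ 1.538, which the sampling reproduces within statistical error.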
The transformed unperturbed Hamiltonian becomes
H 0 = p,αη ε p c † p,αη c p,αη + q,jω qã † q,jãq,j + ∆ , where ∆ = 3 2 q (ω q − ω ′ q ) .(37)
The pion nucleon coupling reads,
H int = − pq,jαα ′ ηη ′ αη|i( σ · q)τ j |α ′ η ′ g q ω q V c † p+q,αη c p,α ′ η ′ (ã q,j +ã † −q,j ) .(38)
In order to proceed, it is convenient to bosonize the Hamiltonian H, restricted to the subspace S. This is done by the replacement S q,j → B q,j satisfy boson commutation relations.
The bosonized Hamiltonian reads as
H B = E 0 + q,j ε q B † q,j B q,j − Q q (B † −q,j + B q,j )(ã q,j +ã † −q,j ) +ω qã † q,jãq,j .(39)
The parameters of H B , namely ε q and Q q are fixed by expectation values such as
Φ|B qj HB † qj |Φ = E HF + ε q , Φ|ã qj HB † qj |Φ = Q q ,
and Φ|ã qj B −qj H|Φ = Q q .
The above Hamiltonian is diagonalized by a Bogoliubov transformation of the type
Θ †(n) qj = x (n) 1 B † qj + x (n) 2ã † qj + y (n) 1 B −qj + y (n) 2ã−qj , n = 1, 2(41)
which leads to excitation energies and the correlation energy.
In the above,

$$\varepsilon_q = 4N_q q^2 \sum_{p\,\in\,\Omega_q} \frac{(\vec{p} + \vec{q}\,)^2 - p^2}{2M} = \frac{4N_q q^2}{2M}\,\frac{V}{(2\pi)^3}\,\frac{4\pi q^2 p_F^3}{3} = \frac{1}{2M}\,\frac{4q\, p_F^3}{3\,(p_F^2 - q^2/12)}\,, \qquad (42)$$

and

$$Q_q = \sqrt{\frac{N_q^{-1}}{2\tilde\omega_q V}}\; g_q\,. \qquad (43)$$
The eigenfrequencies are

$$\Omega_q^{(\pm)} = \frac{1}{\sqrt{2}}\left[\varepsilon_q^2 + \tilde\omega_q^2 \pm \sqrt{\left(\varepsilon_q^2 - \tilde\omega_q^2\right)^2 + 16\,\varepsilon_q\,\tilde\omega_q\, Q_q^2}\,\right]^{1/2}\,. \qquad (44)$$

The correlation energy becomes

$$E_{\rm corr} = \frac{3}{2}\sum_q\left(\Omega_q^{(+)} + \Omega_q^{(-)} - \varepsilon_q - \tilde\omega_q\right)\,. \qquad (45)$$
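Equation (44) is the standard result for two linearly coupled harmonic modes, and eq. (45) sums the resulting shift of zero-point energies. A small numerical sketch (our illustration; the input values below are arbitrary, in MeV):

```python
import numpy as np

def rpa_frequencies(eps, omega_t, Q):
    """RPA frequencies of eq. (44) for a particle-hole mode of energy eps
    coupled to a dressed-pion mode of energy omega_t with strength Q."""
    s = eps**2 + omega_t**2
    d = np.sqrt((eps**2 - omega_t**2)**2 + 16.0 * eps * omega_t * Q**2)
    return np.sqrt((s + d) / 2.0), np.sqrt((s - d) / 2.0)

def correlation_shift(eps, omega_t, Q):
    """Single-q integrand of eq. (45): (3/2)(Omega+ + Omega- - eps - omega_t)."""
    op, om = rpa_frequencies(eps, omega_t, Q)
    return 1.5 * (op + om - eps - omega_t)
```

One can verify that for Q → 0 the frequencies reduce to the uncoupled ε_q and ω̃_q, and that for stable modes (ε_q ω̃_q > 4Q_q²) the shift is always negative, i.e. the RPA correlation lowers the ground-state energy.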
B. neutron matter
The correlated fermion wave function may be written (with τ = −1 for the neutron and τ = +1 for the proton) as

$$|\Psi\rangle = \exp S\,|\Phi\rangle\,, \qquad |\Phi\rangle = \prod_{\alpha,\,|p|\le p_F} c^\dagger_{p,\alpha,-1}\,|0\rangle\,, \qquad (46)$$
where the correlation operator reads
S = U q B † q,0 + U −q B † −q,0 + V q B † q,+ + V −q B † −q,+ ,(47)
with the quasi boson operators
B † q,0 = N q α,α ′ ,p ∈ Ωq α, −1|(σ · q)τ 0 |α ′ , −1 c † p+q,α,−1 c p,α ′ ,−1 (48) B † q,+ = N ′ q α,α ′ ,|p| ≤ p F α, 1|(σ · q)τ + |α ′ , −1 c † p+q,α,1 c p,α ′ ,−1 ,(49)
where τ_0 = τ_3 and τ_+ = (τ_1 + iτ_2)/2. Here, N_q and N'_q are normalization factors ensuring

$$N_q \sum_{\alpha,\alpha',\; p\,\in\,\Omega_q} \left|\langle\alpha,-1|\,\vec\sigma\cdot\vec{q}\,\tau_0\,|\alpha',-1\rangle\right|^2 = 2N_q \sum_{p\,\in\,\Omega_q} q^2 = 1\,, \qquad (50)$$

$$N'_q \sum_{\alpha,\alpha',\; |p|\,\le\, p_F} \left|\langle\alpha,1|\,\vec\sigma\cdot\vec{q}\,\tau_+\,|\alpha',-1\rangle\right|^2 = 2N'_q \sum_{|p|\,\le\, p_F} q^2 = 1\,. \qquad (51)$$

As earlier, the domain Ω_q is defined by |p + q| > p_F, |p| ≤ p_F. The determination of the normalizations N_q and N'_q is again simple: to compute N_q, all we need is the volume of the intersection of two spheres of radius p_F whose centers are a distance q apart. We find

$$N_q^{-1} = \frac{V}{(2\pi)^3}\, 2\pi q^3\left(p_F^2 - \frac{q^2}{12}\right)\,, \qquad N'^{-1}_q = \frac{V}{(2\pi)^3}\,\frac{8\pi}{3}\, q^2 p_F^3\,. \qquad (52)$$
The kinetic energies for the particle-hole pairs with momentum q become

$$\varepsilon_q = 2N_q q^2 \sum_{p\,\in\,\Omega_q} \frac{(\vec{p} + \vec{q}\,)^2 - p^2}{2M} = \frac{2N_q q^2}{2M}\,\frac{V}{(2\pi)^3}\,\frac{4\pi q^2 p_F^3}{3} = \frac{1}{2M}\,\frac{2q\, p_F^3}{3\,(p_F^2 - q^2/12)}\,, \qquad (53)$$

$$\varepsilon'_q = 2N'_q q^2 \sum_{|p|\,\le\, p_F} \frac{(\vec{p} + \vec{q}\,)^2 - p^2}{2M} = \frac{q^2}{2M}\,. \qquad (54)$$
The pion nucleon interaction becomes
H int = − pq,αα ′ ηη ′ αη|i( σ · q)τ 0 |α ′ η ′ g q ω q V c † p+q,αη c p,α ′ η ′ (ã q,0 +ã † −q,0 ) − pq,αα ′ ηη ′ αη|i( σ · q)τ + |α ′ η ′ g q ω q V c † p+q,αη c p,α ′ η ′ (ã q,+ +ã † −q,− ) − pq,αα ′ ηη ′ αη|i( σ · q)τ − |α ′ η ′ g q ω q V c † p+q,αη c p,α ′ η ′ (ã q,+ +ã † −q,− ) .(55)
The effective bosonized Hamiltonian containing two pion exchange becomes
H B = E 0 + q ε q B † q,0 B q,0 + ε ′ q B † q,+ B q,+ +ω q jã † q,jãq,j − q Q q (B † q,0 + B q,0 )(ã q,0 +ã † −q,0 ) − q Q ′ q B † q,+ (ã q,+ +ã † −q,− ) + B q,+ (ã −q,− +ã † q,+ ) ,(56)
where

$$Q_q = \sqrt{\frac{N_q^{-1}}{2\tilde\omega_q V}}\; g_q\,, \qquad Q'_q = \sqrt{\frac{N_q'^{-1}}{2\tilde\omega_q V}}\; g_q\,. \qquad (57)$$
The RPA equations are easily obtained. For the modes with charge,

$$[H_B,\, (X_q B^\dagger_{q,+} + \zeta_q \tilde{a}^\dagger_{q,+} + \eta_q \tilde{a}_{-q,-})] = B^\dagger_{q,+}\left(X_q \varepsilon'_q + \zeta_q Q'_q - \eta_q Q'_q\right) + \tilde{a}^\dagger_{q,+}\left(X_q Q'_q + \zeta_q \tilde\omega_q\right) + \tilde{a}_{-q,-}\left(X_q Q'_q - \eta_q \tilde\omega_q\right) = \Omega_q\, (X_q B^\dagger_{q,+} + \zeta_q \tilde{a}^\dagger_{q,+} + \eta_q \tilde{a}_{-q,-})\,. \qquad (58)$$
The characteristic equation is a cubic,

$$\begin{vmatrix} \varepsilon'_q - \Omega_q & Q'_q & -Q'_q \\ Q'_q & \tilde\omega_q - \Omega_q & 0 \\ Q'_q & 0 & -\tilde\omega_q - \Omega_q \end{vmatrix} = -\Omega_q^3 + \varepsilon'_q\,\Omega_q^2 + \tilde\omega_q^2\,\Omega_q + 2 Q'^2_q\,\tilde\omega_q - \varepsilon'_q\,\tilde\omega_q^2 = 0\,. \qquad (59)$$

Writing, for brevity, $\Delta = -\varepsilon_q'^2 - 3\tilde\omega_q^2$, $D = -2\varepsilon_q'^3 - 54\,Q'^2_q\tilde\omega_q + 18\,\varepsilon'_q\tilde\omega_q^2$ and $C = \big(D + \sqrt{4\Delta^3 + D^2}\,\big)^{1/3}$, the solutions are

$$\Omega^{(1)}_q = \frac{\varepsilon'_q}{3} + \frac{2^{1/3}\,\Delta}{3\,C} - \frac{C}{3\cdot 2^{1/3}}\,, \qquad (60)$$

$$\Omega^{(2)}_q = \frac{\varepsilon'_q}{3} - \frac{(1 + i\sqrt{3})\,\Delta}{3\cdot 2^{2/3}\,C} + \frac{(1 - i\sqrt{3})\,C}{6\cdot 2^{1/3}}\,, \qquad (61)$$

$$\Omega^{(3)}_q = \frac{\varepsilon'_q}{3} - \frac{(1 - i\sqrt{3})\,\Delta}{3\cdot 2^{2/3}\,C} + \frac{(1 + i\sqrt{3})\,C}{6\cdot 2^{1/3}}\,. \qquad (62)$$

Similarly,

$$[H_B,\, (Y_q B_{-q,+} + \bar\zeta_q\, \tilde{a}^\dagger_{-q,+} + \bar\eta_q\, \tilde{a}_{q,-})] = \Omega_q\, (Y_q B_{-q,+} + \bar\zeta_q\, \tilde{a}^\dagger_{-q,+} + \bar\eta_q\, \tilde{a}_{q,-})\,. \qquad (63)$$

This leads to a similar equation with Ω_q replaced by −Ω_q, so that the eigenfrequencies occur in pairs ±Ω_q. The correlation energy for the charged modes becomes

$$E'_{\rm corr} = \frac{1}{2}\sum_q\left(|\Omega^{(1)}_q| + |\Omega^{(2)}_q| + |\Omega^{(3)}_q| - \varepsilon'_q - 2\,\tilde\omega_q\right)\,. \qquad (64)$$
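In practice the three charged-mode frequencies are more conveniently obtained as numerical roots of the cubic (59) than from the closed Cardano forms of eqs. (60)-(62). A sketch (ours; the input values are arbitrary, in MeV):

```python
import numpy as np

def charged_mode_frequencies(eps_p, omega_t, Qp):
    """Roots of the characteristic cubic (59),
    -Omega^3 + eps'*Omega^2 + wt^2*Omega + 2*Q'^2*wt - eps'*wt^2 = 0,
    multiplied by -1 to obtain a monic polynomial for numpy.roots."""
    coeffs = [1.0, -eps_p, -omega_t**2,
              eps_p * omega_t**2 - 2.0 * Qp**2 * omega_t]
    return np.roots(coeffs)
```

For Q'_q → 0 the roots reduce to the uncoupled values ε'_q and ±ω̃_q, and eq. (64) then gives a vanishing correlation energy; for nonzero coupling the sum of |Ω| falls below ε'_q + 2ω̃_q, so the charged-mode correlation energy is negative.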
The eigenfrequencies of the uncharged modes are

$$\Omega_q^{(\pm)} = \frac{1}{\sqrt{2}}\left[\varepsilon_q^2 + \tilde\omega_q^2 \pm \sqrt{\left(\varepsilon_q^2 - \tilde\omega_q^2\right)^2 + 16\,\varepsilon_q\,\tilde\omega_q\, Q_q^2}\,\right]^{1/2}\,, \qquad (65)$$

and the corresponding correlation energy becomes

$$E_{\rm corr} = \frac{1}{2}\sum_q\left(\Omega_q^{(+)} + \Omega_q^{(-)} - \varepsilon_q - \tilde\omega_q\right)\,, \qquad (66)$$

with ε_q, ω̃_q and Q_q as given in the earlier section.
IV. RESULTS AND DISCUSSION
We first proceed to describe the binding energy of nuclear matter. We obtain the free-nucleon kinetic energy density

$$h_f = \langle\Phi|\,{\rm Tr}\,[\rho_N H_N(x)]\,|\Phi\rangle = \frac{\gamma k_f^3}{6\pi^2}\left(M + \frac{3}{10}\frac{k_f^2}{M}\right)\,. \qquad (67)$$
In the above equation, the spin-isospin degeneracy factor is γ = 4 (2) for nuclear matter (neutron matter), and k_f represents the Fermi momentum of the nucleons, related to the nucleon density by $k_f = (6\pi^2\rho/\gamma)^{1/3}$. It is well known that the short-range repulsion, mediated by the isoscalar vector ω meson, plays a crucial role in determining the saturation density. Here we introduce the repulsion energy density in the simple form [3,4]

$$h_\omega = \lambda_\omega\, \rho^2\,, \qquad (68)$$
where the parameter λ_ω is to be fixed using the saturation properties of nuclear matter, as described in Ref. [7]. Thus we finally write down the binding energy per nucleon E_B of symmetric nuclear matter (SNM):

$$E_B = \frac{E_0}{\rho} - M\,, \qquad (69)$$

where

$$E_0 = h_f + \frac{3}{2}\sum_q\left(\tilde\omega_q - \omega'_q\right) + h_\omega\,. \qquad (70)$$
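The free-nucleon part of E_0 is easy to evaluate. Below is a minimal sketch (ours, not from the paper) of eq. (67) together with the relation k_f = (6π²ρ/γ)^{1/3}; the unit conventions (energies in MeV, ħc = 197.327 MeV·fm to convert the density) are our assumptions.

```python
import numpy as np

HBARC = 197.327  # MeV.fm (our unit convention)

def fermi_momentum(rho_fm3, gamma=4):
    """k_f = (6 pi^2 rho / gamma)^(1/3), returned in MeV."""
    rho = rho_fm3 * HBARC**3          # density in MeV^3
    return (6 * np.pi**2 * rho / gamma) ** (1.0 / 3.0)

def h_free(rho_fm3, gamma=4, M=940.0):
    """Free-nucleon energy density of eq. (67), in MeV^4."""
    kf = fermi_momentum(rho_fm3, gamma)
    return gamma * kf**3 / (6 * np.pi**2) * (M + 0.3 * kf**2 / M)
```

At the saturation density ρ_0 = 0.15 fm⁻³ with γ = 4 this gives k_f ≈ 1.30 fm⁻¹ and an average kinetic energy (3/10)k_f²/M ≈ 21 MeV per nucleon.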
In the above, E_0 is the energy density of nuclear matter or neutron matter without the one-pion-exchange correlation. The expression for E_0 contains the three model parameters A, R_π and λ_ω introduced in the earlier sections. These parameters are determined self-consistently through the saturation properties of nuclear matter at the saturation density ρ_0 = 0.15 fm⁻³, with and without the correlations. While the pressure P vanishes at the saturation density of symmetric nuclear matter, the binding energy per nucleon there is chosen to be −16 MeV. In the numerical calculations we have used the nucleon mass M = 940 MeV, the pion mass m = 140 MeV, the omega-meson mass m_ω = 783 MeV, and the πN coupling constant G²/4π = 14.6.

We now discuss the results obtained in our calculation. We first construct a Bogoliubov transformation for the pion-pair operators and calculate the energy associated with it. We next calculate the correlation energy due to one-pion exchange in nuclear matter and neutron matter at the RPA level using the generator coordinate method. The binding energy per nucleon E_B as a function of the density of the system is often referred to as the nuclear equation of state (EOS). In figure 1, we present the EOS with and without the correlation, related to one-pion exchange, for nuclear matter and for neutron matter. As expected, the binding energy for nuclear matter, with and without correlation, initially decreases with density, reaches a minimum at ρ/ρ_0 = 1, and then increases.

FIG. 1: Binding energy of symmetric nuclear matter (SNM) and pure neutron matter (PNM). The correlation is related to one-pion exchange.

In figure 2, we show the variation of the correlation energy E_corr as a function of density for symmetric nuclear matter (SNM) and for pure neutron matter (PNM). The correlation energy initially decreases with density and then increases after the saturation density. The correlation energy for neutron matter is larger in magnitude than for nuclear matter at all densities shown.

FIG. 2: The correlation energy from one-pion exchange in symmetric nuclear matter (SNM) and in pure neutron matter (PNM).

The dispersion relation of modes with the quantum numbers of the pions in the nuclear medium is an interesting aspect. In figure 3, we plot the dispersion relation ω̃ arising from the two-pion coherent states versus momentum at different densities. The increase of the pion dispersion relation for k/k_f > 0.6 is probably an artifact of the repulsion term of equation (26).

FIG. 3: The dispersion relation with the quantum numbers of the pions, ω̃, for nuclear matter.

For nuclear matter, we observe two RPA modes with the quantum numbers of the pions, Ω_±. In figure 4, we show the dispersion relation of the RPA mode Ω_− versus momentum at different densities. At ρ = ρ_0, Ω_− increases very slowly with momentum; at ρ = 4ρ_0, however, it increases fast. In the lower panel of figure 4, we plot Ω_− for smaller densities. At small densities, Ω_− increases monotonically with momentum k and corresponds to zero-sound modes.

FIG. 4: The dispersion relation of the RPA mode with the quantum numbers of the pions, Ω_−, at high densities (upper panel) and at low densities (lower panel) for nuclear matter. This corresponds to zero-sound modes.

In figure 5, we show the dispersion relation of the RPA mode Ω_+ versus momentum at different densities for nuclear matter. It is found that the magnitude of Ω_+ is larger than that of Ω_−. In the lower panel of figure 5, we plot Ω_+ for smaller densities, showing an increase with density of the effective mass of the pions.

FIG. 5: The dispersion relation of the RPA mode with the quantum numbers of the pions, Ω_+, at high densities (upper panel) and at low densities (lower panel) for nuclear matter, showing an increase with density of the effective mass of the pions.

We next study the RPA modes for neutron matter. There are three modes for the charged pions. In figure 6, we plot the RPA frequencies versus momentum k for neutron matter. The magnitudes |Ω_1| and |Ω_2| are equal; in the upper panel, we show |Ω_1| versus momentum k. All three RPA frequencies increase with density. In the lower panel, we plot |Ω_3| versus momentum k, which corresponds to zero-sound modes. It is found that there is no sign of pion condensation at higher densities in the RPA modes.

FIG. 6: The dispersion relation of the RPA modes with the quantum numbers of the pions, |Ω_1| and |Ω_3|, for neutron matter.

In conclusion, we have derived a pion-nucleon Hamiltonian in the non-relativistic limit. We have then constructed a Bogoliubov transformation for the pion-pair operators and calculated the energy associated with the pion pairs for nuclear matter and neutron matter. This is an extension of the mean-field approach of Walecka, in which the classical fields are replaced by quantum coherent states for the pion pairs. We have then calculated the correlation energies due to one-pion exchange in nuclear matter and neutron matter at the RPA level using the generator coordinate method. It is found that there is no sign of pion condensation at higher densities in the RPA modes.
TABLE I: Parameters of the model obtained self-consistently at saturation density.

a (MeV)    R_π (fm)    λ_ω (fm²)
 14.58       1.45        3.07
Acknowledgements

One of the authors (PKP) thanks the hospitality and the friendly atmosphere provided to him during his stay at Departamento de Física, Universidade de Coimbra. This work was par-
arXiv:1803.01588 (https://arxiv.org/pdf/1803.01588v1.pdf)
N-BODY NETWORKS: A COVARIANT HIERARCHICAL NEURAL NETWORK ARCHITECTURE FOR LEARNING ATOMIC POTENTIALS
Risi Kondor ([email protected])
Departments of Computer Science & Statistics
The University of Chicago
We describe N -body networks, a neural network architecture for learning the behavior and properties of complex many body physical systems. Our specific application is to learn atomic potential energy surfaces for use in molecular dynamics simulations. Our architecture is novel in that (a) it is based on a hierarchical decomposition of the many body system into subsytems (b) the activations of the network correspond to the internal state of each subsystem (c) the "neurons" in the network are constructed explicitly so as to guarantee that each of the activations is covariant to rotations (d) the neurons operate entirely in Fourier space, and the nonlinearities are realized by tensor products followed by Clebsch-Gordan decompositions. As part of the description of our network, we give a characterization of what way the weights of the network may interact with the activations so as to ensure that the covariance property is maintained.
INTRODUCTION
In principle, quantum mechanics provides a perfect description of the forces governing the behavior of atomic systems such as crystals and biological molecules. However, for systems larger than a few dozen atoms, solving the Schrödinger equation explicitly, on present day computers, is not a feasible proposition. Even Density Functional Theory (DFT) (Hohenberg & Kohn, 1964), a widely used approximation in quantum chemistry, has trouble scaling to more than about a hundred atoms.
Consequently, the majority of practical work in molecular dynamics foregoes modeling electrons explicitly, and falls back on the fundamentally classical (i.e., non-quantum) Born-Oppenheimer approximation, which treats atoms as solid balls that exert forces on nearby balls prescribed by so-called (effective) atomic potentials. Assume that the potential attached to atom i is φ_i(r_1, ..., r_k), with r_j = r_{p_j} − r_i, where r_i is the position vector of atom i and r_{p_j} is the position vector of its j'th neighbor. The total force experienced by atom i is then simply the negative gradient F_i = −∇_{r_i} φ_i(r_1, ..., r_k). Classically, in molecular dynamics φ_i is usually given in terms of a closed form formula with a few tunable parameters. Popular examples of such so-called empirical potentials (empirical force fields) include the CHARMM models (Brooks et al., 1983; 2009) and others.
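The force-from-potential relation F_i = −∇_{r_i} φ_i can be sketched in a few lines. Here a central-difference gradient is applied to a toy Lennard-Jones stand-in for φ_i; the potential, names, and parameters are illustrative assumptions, not the paper's learned potential.

```python
import numpy as np

def lj_potential(r_vecs, eps=1.0, sigma=1.0):
    """Toy stand-in for the effective potential phi_i: a sum of
    Lennard-Jones terms over the neighbor vectors r_j = r_{p_j} - r_i."""
    r = np.linalg.norm(r_vecs, axis=1)
    return np.sum(4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6))

def force_on_central_atom(r_vecs, h=1e-6):
    """F_i = -grad_{r_i} phi_i via central differences. Displacing the
    central atom by +d shifts every relative vector r_j by -d."""
    F = np.zeros(3)
    for a in range(3):
        d = np.zeros(3)
        d[a] = h
        F[a] = -(lj_potential(r_vecs - d) - lj_potential(r_vecs + d)) / (2 * h)
    return F

neighbors = np.array([[1.2, 0.0, 0.0], [0.0, 1.5, 0.0]])
F = force_on_central_atom(neighbors)
```

In a machine-learned setting the same gradient would be taken through the learned φ_i, typically by automatic differentiation rather than finite differences.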
Empirical potentials are fast to evaluate but are crude models of the quantum interactions between atoms, limiting the accuracy of molecular simulation. A little over ten years ago, machine learning entered this field, promising to bridge the gap between the quantum and classical worlds by learning the aggregate force on each atom as a function of the positions of its neighbors from a relatively small number of DFT calculations (Behler & Parrinello, 2007). In the last few years there has been a veritable explosion in the amount of activity in machine learned atomic potentials (MLAP), and molecular dynamics simulations based on this approach are starting to yield results that outperform other methods (Bartók et al., 2010; Behler, 2015; Shapeev, 2015; Chmiela et al., 2016; Zhang et al., 2017; Schütt et al., 2017).
Much of the arsenal of present day machine learning algorithms has been applied to the MLAP problem, from genetic algorithms, through kernel methods, to neural networks. However, rather than the statistical details of the specific learning algorithm, often what is critically important for problems of this type is the representation of the atomic environment, i.e., the choice of learning features that the algorithm is based on. This situation is by no means unique in the world of applied machine learning: in computer vision and speech recognition, in particular, there is a rich literature of such representational issues. What makes the situation in Physics applications somewhat special is the presence of constraints and invariances that the representation must satisfy not just in an approximate, but in the exact sense. As an example, one might consider rotation invariance. If rotation invariance is not fully respected by an image recognition system, some objects might be less likely to be accurately detected in certain orientations than in others. In a molecular dynamics setting, however, using a potential that is not fully rotationally invariant would not just degrade accuracy, but would likely lead to entirely unphysical molecular trajectories.
1.1. FIXED VS. LEARNED REPRESENTATIONS.
Similarly to other branches of machine learning, in recent years the MLAP community has been shifting from fixed input features towards representations learned from the data itself, in particular, using "deep" neural networks to represent atomic environments. Several authors have found that certain concepts from the mainstream neural networks literature, such as convolution and equivariance, can be successfully repurposed to this domain. In fact, the analogy with computer vision is more than just skin deep. In both domains two competing objectives are critical to success: 1. The ability to capture structure in the input data at multiple different length scales, i.e., to construct a multiscale representation of the input image or the atomic environment. 2. The above mentioned invariance property with respect to spatial transformations, including translations, rotations, and possibly scaling. There is a rich body of work on addressing these objectives in the neural networks literature. One particularly attractive approach is the scattering networks framework of Mallat and coworkers, which, at least in the limit of an infinite number of neural network layers, provides a representation of functions that is both globally invariant with respect to symmetries and Lipschitz with respect to warpings (Mallat, 2012; Hirn et al., 2017).
Inspired by recent work on neural networks for representing graphs and other structured objects by covariant compositional neural architectures, in this paper we take the idea of learnable multiscale representations one step further, and propose N-body networks, a neural network architecture where the individual "neurons" correspond to physical subsystems endowed with their own internal state. The structure and behavior of the resulting model follows the tradition of coarse graining and representation theoretic ideas in Physics, and provides a learnable and multiscale representation of the atomic environment that is fully covariant to the action of the appropriate symmetries. However, the scope of the underlying ideas is significantly broader, and we believe that N-body networks will also find application in modeling other types of many-body physical systems as well.
An even more general contribution of the present work is that it shows how the machinery of group representation theory, specifically the concept of Clebsch-Gordan decompositions, can be used to design neural networks that are covariant to the action of a compact group yet are computationally efficient. This aspect is related to the recent explosion of interest in generalizing the notion of convolutions to graphs (Niepert et al., 2016;Defferrard et al., 2016;Duvenaud et al., 2015;Li et al., 2016;Gilmer et al., 2017;, manifolds (Monti et al., 2016;Masci et al., 2015), and other domains (Bruna & Mallat, 2013;Cohen et al., 2018), as well as the question of generalizing the concept of equivariance (covariance) in general (Cohen & Welling, 2016;. Several of the above works employed generalized Fourier representations of one type or another, but to ensure equivariance the nonlinearity was always applied in the "time domain". Projecting back and forth between the time domain and the frequency domain is a major bottleneck, which we can eliminate because the Clebsch-Gordan transform allows us to compute one type of nonlinearity, tensor products, entirely in the Fourier domain.
REPRESENTING STRUCTURED OBJECTS WITH NEURAL NETS
To put our work in perspective, we begin by reviewing classical feed-forward neural networks, and then describe a relatively new, general purpose neural architecture for representing structured objects, called compositional networks.
A prototypical feed-forward neural network consists of some number of neurons {n_i} arranged in L+1 distinct layers. Layer ℓ = 0 is the input layer, where training and testing data enter the network, while the inputs of the neurons in layers ℓ = 1, 2, ..., L are the outputs {f_j^{ℓ−1}} of the neurons in the previous layer. Each neuron computes its output (also called its activation) using a simple rule such as

(1)    f_i^ℓ = σ( Σ_j w_j^ℓ f_j^{ℓ−1} + b^ℓ ),

where the {w_j^ℓ} weights and {b^ℓ} biases are learnable parameters, while σ is a fixed nonlinearity, such as a sigmoid function or a ReLU operator. The output of the network appears in layer L, is compared with the desired output by means of a loss function, and the gradient of the loss is backpropagated through the network to update the parameters, usually by some variant of stochastic gradient descent.
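Equation (1) for a whole layer, in a minimal numpy sketch; ReLU is chosen as the nonlinearity σ, and the shapes are illustrative.

```python
import numpy as np

def layer_forward(f_prev, W, b):
    """One feed-forward layer: f_i = sigma(sum_j W[i, j] * f_prev[j] + b[i]),
    with sigma(x) = max(x, 0) (ReLU)."""
    return np.maximum(W @ f_prev + b, 0.0)

rng = np.random.default_rng(0)
f0 = rng.standard_normal(4)                       # input layer activations
W1, b1 = rng.standard_normal((3, 4)), np.zeros(3) # learnable parameters
f1 = layer_forward(f0, W1, b1)                    # activations of layer 1
```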
One of the reasons commonly cited for the spectacular success of feed-forward neural networks (especially "deep", i.e., many layer ones) is their ability to implicitly decompose complex objects into their constituent parts. This is especially true of convolutional neural networks (CNNs), commonly used in computer vision (LeCun et al., 1998). In CNNs, the weights in each layer are tied together, which tends to force the neurons to learn increasingly complex visual features, from simple edge detectors all the way to complex shapes such as human eyes, mouths, faces, and so on.
COMPOSITIONAL NETWORKS
There has been a lot of interest in extending neural networks to learning from structured objects, such as graphs. A range of architectures have been proposed for this purpose, many of them based on various generalizations of the notion of convolution to these domains (Duvenaud et al., 2015;Kearns et al., 2016;Niepert et al., 2016;Gilmer et al., 2017).
One particular architecture, which makes the part-based aspect of neural modeling very explicit, is that of compositional networks (comp-nets), introduced in prior work. To represent a structured object X, comp-nets start with decomposing X into a hierarchy of parts, subparts, subsubparts, and so on, down to some number of elementary parts {e_i}, forming a so-called composition scheme. Since each part P_i can be a sub-part of more than one higher level part, the composition scheme is not necessarily a tree, but is rather a DAG (directed acyclic graph), as in Figure 1. The exact definition is as follows.
Definition 1. Let X be a compound object with n elementary parts E = {e 1 , . . . , e n }. A composition scheme D for X is a directed acyclic graph (DAG) in which each node n i is associated with some subset P i of E (these subsets are called the parts of X ) in such a way that 1. If n i is a leaf node, then P i contains a single elementary part e ξ(i) .
2. D has a unique root node n r , which corresponds to the entire set {e 1 , . . . , e n }.
3. For any two nodes n i and n j , if n i is a descendant of n j , then P i ⊂ P j .
A comp-net is essentially just a composition scheme reinterpreted as a feed-forward neural network.
In particular, in a comp-net each "neuron" n i also has an activation f i . For leaf nodes, f i is some simple pre-defined vector representation of the corresponding elementary part e ξ(i) . For internal nodes, f i is computed from the activations f ch1 , . . . , f ch k of the children of n i by the use of some aggregation function Φ(f ch1 , . . . , f ch k ) similar to (1). Finally, the output of the comp-net is the output of the root node n r .
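The conditions of Definition 1 can be checked mechanically. A minimal sketch with a hypothetical three-part object (node names and the example object are made up):

```python
# Each node maps to (set of elementary parts, list of children).
scheme = {
    "n1": ({"e1"}, []),
    "n2": ({"e2"}, []),
    "n3": ({"e3"}, []),
    "n5": ({"e2", "e3"}, ["n2", "n3"]),
    "nr": ({"e1", "e2", "e3"}, ["n1", "n5"]),
}

def is_composition_scheme(scheme, root):
    for name, (parts, children) in scheme.items():
        if not children and len(parts) != 1:  # Def. 1(1): a leaf holds one elementary part
            return False
        for c in children:                    # Def. 1(3): descendants carry strict subsets
            if not scheme[c][0] < parts:
                return False
    union = set().union(*(parts for parts, _ in scheme.values()))
    return scheme[root][0] == union           # Def. 1(2): the root covers the whole object

print(is_composition_scheme(scheme, "nr"))    # -> True
```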
FIGURE 1. (a) A composition scheme for an object X is a DAG in which the leaves correspond to the elementary parts of X, the internal nodes correspond to sets of elementary parts, and the root corresponds to the entire object. (b) A compositional network is a composition scheme in which each node n_i also carries a feature vector (activation) f_i, which is computed from the feature vectors of the children of n_i.

The work that introduced comp-nets discusses in detail their behavior under transformations of X, in particular, how to ensure that the output of the network is invariant with respect to spurious permutations of the elementary parts, whilst retaining as much information about the combinatorial structure of X as possible. This is especially important in graph learning, the original problem that motivated the introduction of comp-nets, where X is a graph, e_1, ..., e_n are its vertices, and {P_i} are subgraphs of different radii. The proposed solution, covariant compositional networks (CCNs), involves turning the {f_i} activations into tensors that transform in prescribed ways with respect to permutations of the elementary parts making up each P_i.
COMPOSITIONAL MODELS FOR ATOMIC ENVIRONMENTS
Decomposing complex systems into a hierarchy of interacting subsytems at different scales is a recurring theme in physics, from coarse graining approaches to renormalization group theory. The same approach applied to the atomic neighborhood lends itself naturally to learning force fields. For example, to calculate the aggregate force on the central atom, in a first approximation one might just sum up independent contributions from each of its neighbors. In a second approximation, one would also consider the modifying effect of the local neighborhoods of the neighbors. A third order approximation would involve considering the neighborhoods of the atoms in these neighborhoods, and so on.
The compositional networks formalism is thus a natural framework for force field learning. In particular, we consider comp-nets in which the elementary parts correspond to actual physical atoms, the internal nodes correspond to subsystems P_i made up of multiple atoms, and the corresponding activation, which we now denote ψ_i, and call the state of P_i, is effectively a learned coarse grained representation of P_i. What makes physical problems different from, e.g., learning graphs, however, is their spatial character. In particular: 1. Each subsystem P_i is now also associated with a vector r_i ∈ R³ specifying its spatial position. 2. The interaction between two subsystems P_i and P_j depends not only on their relative positions, but also on their relative orientation. Therefore, ψ_i and ψ_j must also have spatial character, somewhat similarly to the terms of the familiar monopole, dipole, quadrupole, etc. expansion. If we rotate the entire atomic environment around the central atom by some rotation R ∈ SO(3), the position vectors transform as r_i → R r_i. Mathematically, the second point above says that the ψ_i activations (states) must also transform under rotations in a predictable way, which is expressed by saying that they must be rotationally covariant.
GROUP REPRESENTATIONS AND N-BODY NETWORKS

Just as covariance to permutations is the critical constraint on the graph CCNs, covariance to rotations is the guiding principle behind CCNs for learning atomic force fields. To describe this concept in its general form, we start out by assuming only that any given activation ψ is representable as a d-dimensional (complex valued) vector, and that the transformation that ψ undergoes under a rotation R is linear, i.e., ψ → ρ(R)ψ for some matrix ρ(R).
The linearity assumption is sufficient to guarantee that for any R, R′ ∈ SO(3), ρ(R)ρ(R′) = ρ(RR′). Complex matrix valued functions satisfying this criterion are called representations of the group SO(3). Standard theorems in representation theory tell us that any compact group G (such as SO(3)) has a sequence of so-called inequivalent irreducible representations ρ_0, ρ_1, ... (irreps, for short), and that any other representation µ of G can be reduced into a direct sum of irreps, in the sense that there is some invertible matrix C and a sequence of integers τ_0, τ_1, ... such that

(2)    µ(R) = C⁻¹ [ ⊕_ℓ ⊕_{m=1}^{τ_ℓ} ρ_ℓ(R) ] C.

Here τ_ℓ is called the multiplicity of ρ_ℓ in µ, and τ = (τ_0, τ_1, ...) is called the type of µ. Another nice feature of the representation theory of compact groups is that the irreps can always be chosen to be unitary, i.e., ρ(R⁻¹) = ρ(R)⁻¹ = ρ(R)†, where M† denotes the Hermitian conjugate (conjugate transpose) of the matrix M. In the following we will always assume that the irreps satisfy this condition. If µ is also unitary, then the transformation matrix C will be unitary too, so we can replace C⁻¹ with C†. For more background in representation theory, the reader is referred to (Serre, 1977).
In the specific case of the rotation group SO(3), the irreps are sometimes called Wigner D-matrices. The ℓ = 0 irrep consists of the one dimensional constant matrices ρ_0(R) = (1), the ℓ = 1 irrep (up to conjugation) is equivalent to the rotation matrices themselves, while for general ℓ, assuming that (θ, φ, ψ) are the Euler angles of R,

[ρ_ℓ(R)]_{m,m′} = e^{iψm′} Y_ℓ^m(θ, φ),
where {Y_ℓ^m} are the well known spherical harmonic functions. In general, the dimensionality of ρ_ℓ is 2ℓ+1, i.e., ρ_ℓ(R) ∈ C^{(2ℓ+1)×(2ℓ+1)}.

Definition 2. We say that ψ ∈ C^d is an SO(3)-covariant vector of type τ = (τ_0, τ_1, τ_2, ...) if under the action of rotations it transforms as

(3)    ψ → [ ⊕_ℓ ⊕_{m=1}^{τ_ℓ} ρ_ℓ(R) ] ψ.

Setting

(4)    ψ = ⊕_ℓ ⊕_{m=1}^{τ_ℓ} ψ_m^ℓ,

we call ψ_m^ℓ ∈ C^{2ℓ+1} the (ℓ, m)-fragment of ψ, and ψ^ℓ = ⊕_{m=1}^{τ_ℓ} ψ_m^ℓ the ℓ'th part of ψ. A covariant vector of type τ = (0, 0, ..., 0, 1), where the single 1 corresponds to τ_k, we call an irreducible vector of order k or an irreducible ρ_k-vector. Note that an irreducible vector of order zero is just a scalar.

The motivation behind the above definition is that each fragment ψ_m^ℓ transforms in the very simple way ψ_m^ℓ → ρ_ℓ(R) ψ_m^ℓ. Note that the words "fragment" and "part" are not standard in the literature, but we find them useful for describing covariant neural architectures. Also note that unlike (2), there is no matrix C in equations (3) and (4). This is because if a given vector ψ transforms according to a general representation µ whose decomposition does include a nontrivial C, this matrix can easily be factored out by redefining ψ as Cψ. Here ψ^ℓ is sometimes also called the projection of ψ to the ℓ'th isotypic subspace of the representation space that ψ lives in, and ψ = ψ^0 ⊕ ψ^1 ⊕ ... is called the isotypic decomposition of ψ. With these representation theoretic tools in hand, we define the concept of SO(3)-covariant N-body neural networks as follows.
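The fragment bookkeeping of Definition 2 is plain index arithmetic; a sketch that splits a type-τ vector into its (ℓ, m)-fragments (the helper name is ours):

```python
import numpy as np

def fragments(psi, tau):
    """Split a type-tau covariant vector into its (l, m) fragments:
    tau[l] consecutive blocks of length 2l+1 for each l."""
    out, pos = {}, 0
    for l, mult in enumerate(tau):
        for m in range(1, mult + 1):
            out[(l, m)] = psi[pos:pos + 2 * l + 1]
            pos += 2 * l + 1
    assert pos == len(psi), "vector length must match the type"
    return out

tau = (1, 2, 1)                  # one l=0, two l=1, one l=2 fragment
d = sum(m * (2 * l + 1) for l, m in enumerate(tau))   # 1 + 6 + 5 = 12
psi = np.arange(d, dtype=float)
frags = fragments(psi, tau)
print(sorted(frags))             # -> [(0, 1), (1, 1), (1, 2), (2, 1)]
```

Under a rotation, each returned block would simply be multiplied by the corresponding Wigner matrix ρ_ℓ(R).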
FIGURE 2. In a comp-net for learning atomic force fields, the output of each "part" P_i is (r_i, ψ_i), where r_i is the position vector of the corresponding physical subsystem, and ψ_i is a vector describing its internal state.
Definition 3. Let S be a physical system made up of n particles ξ_1, ..., ξ_n. An SO(3)-covariant N-body neural network N for S is a composition scheme D in which 1. Each node n_j, which we will sometimes also call a gate, is associated with (a) a physical subsystem P_j of S; (b) a vector r_j ∈ R³ describing the spatial position of P_j; (c) a vector ψ_j that describes the internal state of P_j and is type-τ_j covariant to rotations. 2. If n_j is a leaf node, then ψ_j is determined by the corresponding particle ξ_j. 3. If n_j is a non-leaf node and its children are n_ch1, ..., n_chk, then ψ_j is computed as

(5)    ψ_j = Φ_j(r_ch1, ..., r_chk, r_ch1, ..., r_chk, ψ_ch1, ..., ψ_chk),

where the first list contains the relative position vectors r_chi = r_chi − r_j and the second their norms r_chi = |r_chi|. We call Φ_j the local aggregation rule. 4. D has a unique root n_r, and the output of the network, i.e., the learned state of the entire system, is ψ_r. In the case of learning scalar valued functions, such as the atomic potential, ψ_r is just a scalar.
Note that what is described in Definition 3 is a general architecture for learning the state of N-body physical systems, with much wider applicability than just learning atomic potentials. The main technical challenge of the present paper is to define the Φ_j aggregation rules in such a way as to guarantee that each ψ_j is SO(3)-covariant. This is what is addressed in the following section.
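The bottom-up evaluation implied by Definition 3 can be sketched as a recursion over the composition DAG. The toy aggregation rule Φ below is a made-up rotation-invariant placeholder, not one of the covariant gates developed next; node names are hypothetical.

```python
import numpy as np

def evaluate(net, Phi, node, cache=None):
    """Compute psi_j bottom-up over the composition scheme (Def. 3).
    net maps name -> (position r_j, list of children, leaf state or None)."""
    cache = {} if cache is None else cache
    if node in cache:
        return cache[node]            # a part may feed several parents (DAG)
    r_j, children, leaf_state = net[node]
    if not children:
        cache[node] = leaf_state      # Def. 3(2): leaves are fixed by the particle
    else:
        rel = [net[c][0] - r_j for c in children]           # relative positions
        psis = [evaluate(net, Phi, c, cache) for c in children]
        cache[node] = Phi(rel, psis)                        # Def. 3(3)
    return cache[node]

# Toy Phi: sum of |r| * psi over the children (illustration only).
Phi = lambda rel, psis: sum(np.linalg.norm(r) * p for r, p in zip(rel, psis))

net = {
    "a": (np.array([1.0, 0.0, 0.0]), [], np.array([1.0])),
    "b": (np.array([0.0, 1.0, 0.0]), [], np.array([2.0])),
    "root": (np.zeros(3), ["a", "b"], None),
}
print(evaluate(net, Phi, "root"))   # prints [3.]
```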
COVARIANT AGGREGATION RULES
To define the aggregation function Φ to be used in SO(3)-covariant comp-nets, all that we assume is that it is a polynomial in the relative positions r_ch1, ..., r_chk, the constituent state vectors ψ_ch1, ..., ψ_chk, and the inverse distances 1/r_ch1, ..., 1/r_chk. Specifically, we say that Φ is a (P, Q, S)-order aggregation function if each component of ψ = Φ(r_ch1, ..., r_chk, r_ch1, ..., r_chk, ψ_ch1, ..., ψ_chk) is a polynomial of order at most P in the components of each r_chi, of order at most Q in the components of each ψ_chi, and of order at most S in each 1/r_chi. Any such Φ can be expressed as

(6)    Φ(...) = L( ⊕_{p,q,s} r_ch1^{⊗p_1} ⊗ ... ⊗ r_chk^{⊗p_k} ⊗ ψ_ch1^{⊗q_1} ⊗ ... ⊗ ψ_chk^{⊗q_k} · r_ch1^{−s_1} ··· r_chk^{−s_k} ),
where p, q and s are multi-indices of positive integers with p_i ≤ P, q_i ≤ Q and s_i ≤ S, and L is a linear function. The tensor products appearing in (6) are formidably large objects that in most cases would be impractical to compute explicitly. Rather, this equation is just meant to emphasize that any learnable parameters of the network must be implicit in the linear operator L.
The more stringent requirements on L arise from the covariance criterion. The key to understanding these is the observation that for any sequence ρ 1 , . . . , ρ p of (not necessarily irreducible) representations of a compact group G, their tensor product
ρ(R) = ρ 1 (R) ⊗ ρ 2 (R) ⊗ . . . ⊗ ρ p (R)
is also a representation of G. Consequently, ρ has a decomposition into irreps, similar to (2). As an immediate corollary, any product of SO(3) covariant vectors can be similarly decomposed. In particular, by applying the appropriate unitary matrix C, the sum of tensor products appearing in (6) can be decomposed into a sum of irreducible fragments in the form
⊕_{ℓ=0}^{L} ⊕_{m=1}^{τ_ℓ} φ_m^ℓ = C [ ⊕_{p,q,s} r_ch1^{⊗p_1} ⊗ ... ⊗ r_chk^{⊗p_k} ⊗ ψ_ch1^{⊗q_1} ⊗ ... ⊗ ψ_chk^{⊗q_k} · r_ch1^{−s_1} ··· r_chk^{−s_k} ].
To be explicit, we define

(7)    φ_m^ℓ = T_m^ℓ ( ⊕_{p,q,s} r_ch1^{⊗p_1} ⊗ ... ⊗ r_chk^{⊗p_k} ⊗ ψ_ch1^{⊗q_1} ⊗ ... ⊗ ψ_chk^{⊗q_k} · r_ch1^{−s_1} ··· r_chk^{−s_k} ),

where T_1^0, ..., T_{τ_0}^0, T_1^1, ..., T_{τ_1}^1, ..., T_{τ_L}^L is an appropriate sequence of projection operators. The following proposition is a key result of our paper.
Proposition 1. The output of the aggregation function (6) is a τ-covariant vector if and only if L is of the form

(8)    L(...) = ⊕_{ℓ=0}^{L} ⊕_{m=1}^{τ_ℓ} Σ_{m′=1}^{τ′_ℓ} w_{m,m′}^ℓ φ_{m′}^ℓ.

Equivalently, collecting all φ_{m′}^ℓ fragments with the same ℓ into a matrix F̃_ℓ ∈ C^{(2ℓ+1)×τ′_ℓ}, all (w_{m,m′}^ℓ)_{m,m′} weights into a matrix W_ℓ ∈ C^{τ′_ℓ×τ_ℓ}, and reinterpreting the output of L as a collection of matrices rather than a single long vector,

(9)    L(...) = ( F̃_0 W_0, F̃_1 W_1, ..., F̃_L W_L ).
Proposition 1 tells us that L is only allowed to mix φ_m^ℓ fragments with the same ℓ, and that fragments can only be mixed in their entirety, rather than picking out their individual components. These are crucial consequences of equivariance. However, there are no further restrictions on the (W_ℓ) mixing matrices.
In an N-body neural network the W_ℓ matrices are shared across (some subsets of) nodes, and it is these mixing (weight) matrices that the network learns from training data. The F̃_ℓ matrices can be regarded as generalized, matrix valued activations. Since each W_ℓ interacts with the F̃_ℓ matrices linearly, the network can be trained the usual way, by backpropagating gradients of whatever loss function is applied to the output node n_r, whose activation is usually scalar valued.
It is important to note that N-body neural networks have no additional nonlinearity outside of Φ, since that would break covariance. In contrast, in most existing neural network architectures, as explained in Section 2, each neuron first takes a linear combination of its inputs weighted by learned weights and then applies a fixed pointwise nonlinearity, σ. In our architecture the nonlinearity is hidden in the way that the φ_m^ℓ fragments are computed, since a tensor product is a nonlinear function of its factors. On the other hand, mixing the resulting fragments with the W_ℓ weight matrices is a linear operation. Thus, in our case, the nonlinear part of the operation precedes the linear part.
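The linear step of eq. (9) is then just a per-ℓ matrix product. A sketch with real-valued stand-ins for the fragment matrices (a real implementation would operate on the complex fragments produced by the Clebsch-Gordan step; the τ values here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Fragments sharing the same l, stacked as columns: F_l in C^{(2l+1) x tau'_l}.
taus_in = {0: 3, 1: 9, 2: 6}    # tau'_l: number of input fragments for each l
taus_out = {0: 2, 1: 2, 2: 2}   # tau_l:  number of output fragments for each l
F = {l: rng.standard_normal((2 * l + 1, t)) for l, t in taus_in.items()}
W = {l: rng.standard_normal((taus_in[l], taus_out[l])) for l in taus_in}

# Covariant linear map of eq. (9): whole fragments are mixed, and only
# fragments with the same l may mix.
out = {l: F[l] @ W[l] for l in F}
```

Because a rotation multiplies each F_ℓ from the left by ρ_ℓ(R), and W_ℓ acts from the right, ρ_ℓ(R)(F_ℓ W_ℓ) = (ρ_ℓ(R) F_ℓ) W_ℓ, which is exactly why this mixing preserves covariance.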
The generic polynomial aggregation function (6) is too general to be used in a practical N -body network, and would be far too costly computationally. Instead, we propose using a few specific types of low order gates, such as those described below.
ZEROTH ORDER INTERACTION GATES
Zeroth order interaction gates aggregate the states of their children and combine them with their relative position vectors, but do not capture interactions between the children. A simple example of such a gate would be one where
(10)    Φ(...) = L( Σ_{i=1}^{k} (ψ_chi ⊗ r_chi),  Σ_{i=1}^{k} r_chi^{−1} (ψ_chi ⊗ r_chi),  Σ_{i=1}^{k} r_chi^{−2} (ψ_chi ⊗ r_chi) ).
Note that the summations in these formulae ensure that the output is invariant with respect to permuting the children, and they also reduce the generality of (6) because the direct sum is replaced by an explicit summation (this can also be interpreted as tying some of the mixing weights together in a particular way). Let L be the largest ℓ for which τ_ℓ ≠ 0 in the inputs. In the L = 0 case each ψ_chi state is a scalar quantity, such as electric charge. In the L = 1 case it is a vector, such as the dipole moment. In the L = 2 case it can encode the quadrupole moment, and so on. A gate of the above form can learn how to combine such moments into a single (higher order) moment corresponding to the parent system.
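The sums inside eq. (10), before the learned linear map L and before the Clebsch-Gordan reduction, can be sketched with Kronecker products; the helper name and toy inputs are ours.

```python
import numpy as np

def zeroth_order_gate(r_vecs, psis):
    """Sums inside eq. (10): for s = 0, 1, 2 accumulate
    sum_i r_chi^{-s} (psi_chi kron r_chi), one channel per power s.
    The Clebsch-Gordan reduction and the learned map L are omitted."""
    channels = []
    for s in range(3):
        acc = np.zeros(len(psis[0]) * 3)
        for r, psi in zip(r_vecs, psis):
            acc += np.linalg.norm(r) ** (-s) * np.kron(psi, r)
        channels.append(acc)
    return np.stack(channels)

r_vecs = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
psis = [np.array([1.0, 0.5]), np.array([0.2, -1.0])]
out = zeroth_order_gate(r_vecs, psis)   # shape (3, dim(psi) * 3)
```

The explicit sum over children makes the output manifestly invariant under permuting them, as noted above.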
It is instructive to see how many parameters a gate of this type has. Let us assume the simple case that each ψ_chi is of type τ = (1, 1, ..., 1) (up to ℓ = L). The type of r_chi is (0, 1). According to the Clebsch-Gordan rules (see Section 4.1), the product of two such vectors is a vector of type (1, 3, 2, ..., 2, 1) (of length L+1). Further assume that the desired output type is again τ = (1, 1, ..., 1) of length L. This means that the ℓ = L+1 fragment does not even have to be computed, and the sizes of the weight matrices appearing in (9) are
W_0 ∈ C^{1×3},  W_1 ∈ C^{1×9},  W_2 ∈ C^{1×6},  ...,  W_L ∈ C^{1×6}.
The size of these matrices changes dramatically as we allow more "channels". For example, if each of the input states is of type τ = (c, c, ..., c), the type of ψ_chi ⊗ r_chi becomes (c, 3c, 2c, ..., 2c, 1c). Assuming again an output of type τ = (c, c, ..., c), the weight matrices become

W_0 ∈ C^{c×3c},  W_1 ∈ C^{c×9c},  W_2 ∈ C^{c×6c},  ...,  W_L ∈ C^{c×6c}.
In many networks, however, the number of channels increases as we go higher in the network. Allowing the output type to be as rich as possible, without inducing linear redundancies, the output type becomes (3c, 9c, 6c, . . . , 6c, 3c), and
W_0 ∈ C^{3c×3c},  W_1 ∈ C^{9c×9c},  W_2 ∈ C^{6c×6c},  ...,  W_L ∈ C^{6c×6c}.
FIRST ORDER INTERACTION GATES
In first order interaction gates each child interacts with each other child, and the parent aggregates these pairwise interactions. A simple example would be computing the total energy of a collection of charged bodies, which might be done with a gate of the form
(11)    Φ(...) = L( Σ_{i,j=1}^{k} (ψ_chi ⊗ ψ_chj ⊗ r_chi ⊗ r_chj),  Σ_{i,j=1}^{k} r_chi^{−1} r_chj^{−1} (ψ_chi ⊗ ψ_chj ⊗ r_chi ⊗ r_chj),  Σ_{i,j=1}^{k} r_chi^{−2} r_chj^{−2} (ψ_chi ⊗ ψ_chj ⊗ r_chi ⊗ r_chj),  Σ_{i,j=1}^{k} r_chi^{−3} r_chj^{−3} (ψ_chi ⊗ ψ_chj ⊗ r_chi ⊗ r_chj) ).
Generalizing (6) slightly, if we know that the interaction only depends on the relative positions of the child systems, we can also use

$$\Phi(\dots) = \Bigg( \sum_{i,j=1}^{k} \psi_{ch_i} \otimes \psi_{ch_j} \otimes r_{ch_i,ch_j},\;\; \sum_{i,j=1}^{k} r_{ch_i,ch_j}^{-1} \left( \psi_{ch_i} \otimes \psi_{ch_j} \otimes r_{ch_i,ch_j} \right),$$
$$\sum_{i,j=1}^{k} r_{ch_i,ch_j}^{-2} \left( \psi_{ch_i} \otimes \psi_{ch_j} \otimes r_{ch_i,ch_j} \right),\;\; \sum_{i,j=1}^{k} r_{ch_i,ch_j}^{-3} \left( \psi_{ch_i} \otimes \psi_{ch_j} \otimes r_{ch_i,ch_j} \right) \Bigg), \tag{12}$$

where $\vec r_{ch_i,ch_j} = \vec r_{ch_i} - \vec r_{ch_j}$ and $r_{ch_i,ch_j} = |\vec r_{ch_i,ch_j}|$.
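As a much simplified scalar analogue (our own illustration, not the paper's covariant construction), the electrostatics example amounts to aggregating pairwise terms that scale with inverse powers of the inter-child distance; `pairwise_features` and the numerical values are ours:

```python
# Simplified scalar analogue of a first-order interaction gate: aggregate
# pairwise terms scaling with inverse powers of the distance between child
# systems, as in the total-energy-of-charged-bodies example. The covariant
# tensor-product machinery of the paper is deliberately omitted here.

import math

def pairwise_features(charges, positions, powers=(0, 1, 2, 3)):
    k = len(charges)
    feats = [0.0] * len(powers)
    for i in range(k):
        for j in range(i + 1, k):
            d = math.dist(positions[i], positions[j])
            for a, p in enumerate(powers):
                feats[a] += charges[i] * charges[j] / d**p
    return feats

q = [1.0, -1.0, 1.0]
x = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
print(pairwise_features(q, x))  # feats[1] is the total Coulomb energy
```

In the full gate, each scalar sum is replaced by a sum of tensor products of the children's covariant states.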
It is important to note that in the above, electrostatics was used only as an example. There is no need to learn electrostatic interactions, because they are perfectly described by classical physics. Rather, we envisage using zeroth and first order interaction gates as constituents of a larger network for learning more complicated interactions that have no simple closed form but nonetheless broadly follow scaling laws similar to those of classical interactions.
CLEBSCH-GORDAN TRANSFORMS
It remains to explain how the T_ℓ^m projection maps appearing in (7) are computed. This is critical because the nonlinearities in our network are the tensor products, and our architecture hinges on being able to reduce vectors to a direct sum of irreducibles again straight after the tensor product operation.
Fortunately, representation theory provides a clear prescription for how this operation is to be performed. For any compact group G, given two irreducible representations ρ_{ℓ1} and ρ_{ℓ2}, the decomposition of ρ_{ℓ1} ⊗ ρ_{ℓ2} into a direct sum of irreducibles

$$\rho_{\ell_1}(R) \otimes \rho_{\ell_2}(R) = C_{\ell_1,\ell_2}^{\dagger} \Bigg[ \bigoplus_{\ell} \bigoplus_{m=1}^{\kappa_{\ell_1,\ell_2}(\ell)} \rho_{\ell}(R) \Bigg] C_{\ell_1,\ell_2} \tag{13}$$

is called the Clebsch–Gordan transform. In the specific case of SO(3), the κ multiplicities take on the very simple form (which we already used in Section 4.0.1)

$$\kappa_{\ell_1,\ell_2}(\ell) = \begin{cases} 1 & \text{if } |\ell_1 - \ell_2| \le \ell \le \ell_1 + \ell_2 \\ 0 & \text{otherwise,} \end{cases}$$

and the elements of the C_{ℓ1,ℓ2} matrices can also be computed relatively easily via closed form formulae.
We immediately see that (13) tells us how to reduce the product of covariant vectors to irreducible fragments. Assuming for example that ψ_1 is an irreducible ρ_{ℓ1}-covariant vector and ψ_2 is an irreducible ρ_{ℓ2}-covariant vector, ψ_1 ⊗ ψ_2 decomposes into irreducible fragments in the form

$$\psi_1 \otimes \psi_2 = \bigoplus_{\ell=|\ell_1-\ell_2|}^{\ell_1+\ell_2} \psi_{\ell}, \qquad \text{where } \psi_{\ell} = C_{\ell_1,\ell_2,\ell}\,(\psi_1 \otimes \psi_2),$$

and C_{ℓ1,ℓ2,ℓ} is the part of the C_{ℓ1,ℓ2} matrix corresponding to the ℓ'th "block". Thus, in this case the operator T_ℓ simply corresponds to multiplying the tensor product by C_{ℓ1,ℓ2,ℓ}. By linearity, the above relationship also extends to non-irreducible vectors. If ψ_1 is of type τ_1 and ψ_2 is of type τ_2, then

$$\psi_1 \otimes \psi_2 = \bigoplus_{\ell} \bigoplus_{m=1}^{\kappa_{\tau_1,\tau_2}(\ell)} \psi_{\ell}^{m}, \qquad \text{where } \kappa_{\tau_1,\tau_2}(\ell) = \sum_{\ell_1,\ell_2} [\tau_1]_{\ell_1}\,[\tau_2]_{\ell_2}\; \mathbb{I}\big[\, |\ell_1 - \ell_2| \le \ell \le \ell_1 + \ell_2 \,\big],$$

and 𝕀[·] is the indicator function. Once again, the actual ψ_ℓ^m fragments are computed by applying the appropriate C_{ℓ1,ℓ2,ℓ} matrix to the appropriate combination of irreducible fragments of ψ_1 and ψ_2. It is also clear that by applying the Clebsch–Gordan decomposition recursively, we can decompose a tensor product of any order, e.g.,
ψ 1 ⊗ ψ 2 ⊗ ψ 3 ⊗ . . . ⊗ ψ k = ((ψ 1 ⊗ ψ 2 ) ⊗ ψ 3 ) ⊗ . . . ⊗ ψ k .
In an actual computation of such higher order products, however, a considerable amount of thought might have to go into optimizing the order of operations and reusing potential intermediate results to minimize computational cost.
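As a small check (our own code; `kappa` is a hypothetical helper name), the multiplicity formula above can be evaluated directly for any pair of types:

```python
# Evaluate kappa_{tau1,tau2}(l) = sum_{l1,l2} tau1[l1] * tau2[l2] *
# I[|l1 - l2| <= l <= l1 + l2], following the formula in the text.

def kappa(tau1, tau2, l):
    return sum(m1 * m2
               for l1, m1 in enumerate(tau1)
               for l2, m2 in enumerate(tau2)
               if abs(l1 - l2) <= l <= l1 + l2)

# Two irreducible vectors of orders 1 and 2 couple to one fragment each of
# orders 1, 2, 3 -- the familiar |l1 - l2| <= l <= l1 + l2 selection rule.
tau1 = [0, 1]        # a single l = 1 fragment
tau2 = [0, 0, 1]     # a single l = 2 fragment
print([kappa(tau1, tau2, l) for l in range(5)])  # [0, 1, 1, 1, 0]
```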
CONCLUSIONS
There is considerable excitement in both the Machine Learning and the Physics/Chemistry communities about the potential of using neural networks to learn the behavior and properties of complex physical systems. However, physical systems have nontrivial invariance properties (in particular, invariance to translations, rotations and the exchange of identical elementary parts) that must be strictly respected.
In this paper we proposed a new type of generalized convolutional neural network architecture, N-body networks, which provides a flexible framework for modeling interacting systems of various types while taking into account these invariances (symmetries). The specific motivation for developing N-body networks is to learn atomic potentials (force fields) for molecular dynamics simulations. However, we envisage that they will be used more broadly, for modeling a variety of systems. The closest to our work in certain ways are Moment Tensor Potentials (Shapeev, 2015), although that framework does not have learnable parameters.
N-body networks are distinguished from earlier neural network models for physical systems in that:

1. The model is based on a hierarchical (but not necessarily strictly tree-like) decomposition of the system into subsystems at different levels, which is directly reflected in the structure of the neural network.
2. Each subsystem is identified with a "neuron" (or "gate") n_i in the network, and the output (activation) ψ_i of the neuron becomes a representation of the subsystem's internal state.
3. The ψ_i states are tensorial objects with spatial character; in particular, they are covariant with rotations in the sense that they transform under rotations according to specific irreducible representations of the rotation group. The gates are specially constructed to ensure that this covariance property is preserved throughout the network.
4. Unlike in most other neural network architectures, the nonlinearities in N-body networks are not pointwise operations, but are applied in "Fourier space", i.e., directly to the irreducible parts of the state vectors. This is only possible because (a) the nonlinearities arise from taking tensor products of covariant objects, and (b) the tensor products are decomposed into irreducible parts by the Clebsch–Gordan transform.

We believe that the last of these ideas is particularly promising, because it suggests the possibility of constructing neural networks that operate entirely in Fourier space, using tensor products combined with Clebsch–Gordan transforms to induce nonlinearities. This might have significance for a range of other applications as well. Experiments are ongoing to validate our framework on real physical systems.
This note describes the neural network architecture first presented by the author at the "Machine Learning for Molecules and Materials" workshop at the Neural Information Processing Systems Conference (Long Beach, CA) on December 8, 2017.
Note that the elementary parts are sometimes called atoms, but we will avoid this terminology to avoid possible confusion with the physical meaning of the word.
SO(3) denotes the group of rotations in R 3 , i.e., the group of three dimensional orthogonal, unit determinant matrices.
ACKNOWLEDGEMENTS

The author would like to thank Shubhendu Trivedi, Brandon Anderson, Hy Truong Son, Horace Pan, Gábor Csányi and Michele Ceriotti for their input to this work. Financial support for this work was provided in part by DARPA award number D16AP00112.
REFERENCES

Albert P. Bartók, Michael C. Payne, Risi Kondor, and Gábor Csányi. Gaussian Approximation Potentials: the accuracy of quantum mechanics, without the electrons. Phys Rev Lett, 104(13):136403, 2010.

Jörg Behler. Constructing high-dimensional neural network potentials: A tutorial review. Int J Quantum Chem, 115(16):1032-1050, 2015.

Jörg Behler and Michele Parrinello. Generalized neural-network representation of high-dimensional potential-energy surfaces. Phys Rev Lett, 98(14):146401, 2007.

B. R. Brooks, C. L. Brooks, A. D. Mackerell, L. Nilsson, R. J. Petrella, B. Roux, Y. Won, G. Archontis, C. Bartels, S. Boresch, et al. CHARMM: the biomolecular simulation program. Journal of Computational Chemistry, 30(10):1545-1614, 2009.

Bernard R. Brooks, Robert E. Bruccoleri, Barry D. Olafson, David J. States, S. Swaminathan, and Martin Karplus. CHARMM: A program for macromolecular energy, minimization, and dynamics calculations. Journal of Computational Chemistry, 4(2):187-217, 1983.

Joan Bruna and Stephane Mallat. Invariant scattering convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35:1872-1886, 2013.

Stefan Chmiela, Alexandre Tkatchenko, Huziel E. Sauceda, Igor Poltavsky, Kristof T. Schütt, and Klaus-Robert Müller. Machine learning of accurate energy-conserving molecular force fields. 2016.

Taco S. Cohen and Max Welling. Group equivariant convolutional networks. Proceedings of the 33rd International Conference on Machine Learning, 48:2990-2999, 2016.

Taco S. Cohen and Max Welling. Steerable CNNs. In ICLR, 2017.

Taco S. Cohen, Mario Geiger, Jonas Köhler, and Max Welling. Spherical CNNs. International Conference on Learning Representations, 2018.

Michal Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In NIPS, 2016.

K. Duvenaud, D. Maclaurin, J. Iparraguirre, R. Bombarell, T. Hirzel, A. Aspuru-Guzik, and R. P. Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems, pp. 2224-2232, 2015.

Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. 2017.

M. Hirn, S. Mallat, and N. Poilvert. Wavelet scattering regression of quantum chemical energies. Multiscale Modeling & Simulation, 15(2):827-863, 2017.

P. Hohenberg and W. Kohn. Inhomogeneous electron gas. Phys. Rev., 136:864-871, 1964.

Kearns, K. McCloskey, M. Brendl, V. Pande, and P. Riley. Molecular graph convolutions: moving beyond fingerprints. Journal of Computer-Aided Molecular Design, 30:595-608, 2016.

R. Kondor and S. Trivedi. On the generalization of equivariance and convolution in neural networks to the action of compact groups. 2018.

R. Kondor, Truong Son Hy, H. Pan, S. Trivedi, and B. M. Anderson. Covariant compositional networks for learning graphs. 2018.

Y. LeCun, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, pp. 2278-2324, 1998.

Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. In ICLR, 2016.

S. Mallat. Group invariant scattering. Technical report, 2012.

Jonathan Masci, Davide Boscaini, Michael M. Bronstein, and Pierre Vandergheynst. Geodesic convolutional neural networks on Riemannian manifolds. 2015.

Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodola, Jan Svoboda, and Michael M. Bronstein. Geometric deep learning on graphs and manifolds using mixture model CNNs. 2016.

M. Niepert, M. Ahmed, and K. Kutzkov. Learning convolutional neural networks for graphs. In Proceedings of the International Conference on Machine Learning, 2016.

Kristof T. Schütt, Farhad Arbabzadah, Stefan Chmiela, Klaus R. Müller, and Alexandre Tkatchenko. Quantum-chemical insights from deep tensor neural networks. Nature Communications, 8:13890, 2017.

Jean-Pierre Serre. Linear Representations of Finite Groups, volume 42 of Graduate Texts in Mathematics. Springer-Verlag, 1977.

Alexander V. Shapeev. Moment Tensor Potentials: a class of systematically improvable interatomic potentials. arXiv, December 2015.

Linfeng Zhang, Jiequn Han, Han Wang, Roberto Car, and Weinan E. Deep Potential Molecular Dynamics: a scalable model with the accuracy of quantum mechanics. arXiv:1707.09571, July 2017.
Optimal Distributed Control for Networked Control Systems with Delays

Zhuwei Wang and Xiaodong Wang, Fellow, IEEE
arXiv:1312.3543, 12 Dec 2013

Index Terms—Networked control systems, distributed control, non-cooperative game, delay

Abstract—
In networked control systems (NCS), sensing and control signals between the plant and controllers are typically transmitted wirelessly. Thus, the time delay plays an important role for the stability of NCS, especially with distributed controllers. In this paper, the optimal control strategy is derived for distributed control networks with time delays. In particular, we form the optimal control problem as a non-cooperative linear quadratic game (LQG). Then, the optimal control strategy of each controller is obtained that is based on the current state and the last control strategies. The proposed optimal distributed controller reduces to some known controllers under certain conditions. Moreover, we illustrate the application of the proposed distributed controller to load frequency control in power grid systems.
I. INTRODUCTION
In recent years, networked control systems (NCS), which consist of computing and physical systems, have received considerable attention [1] due to their wide applications in various areas such as power grids [2], robotic networks [3] and embedded systems [4]. A typical NCS is equipped with sensing, control and communication capabilities. In many cases, the plant and controllers are at different locations. Hence, a communication network, typically a wireless network, is needed to facilitate the data exchange between the plant and controllers. Then the time delay becomes the key factor that affects the system performance and stability.
Z. Wang and X. Wang are with the Electrical Engineering Department, Columbia University, New York, 10027 (e-mail:
[email protected], [email protected]).
Existing works on NCS with time delay focus on the single-controller case, and two important design considerations are the system stability and optimality with respect to a certain criterion.
With full plant state information, the optimal control problem has been investigated. In particular, a suboptimal controller is derived in [5] with time-driven sensor and controller nodes, where the time delay is a multiple of the sampling interval. The optimal controller for an NCS whose network-induced delay is shorter than a sampling period is developed in [6] [7]. And the results are generalized in [8] to the case that network-induced delay is longer than a sampling period.
In [9], the solution to the optimal control problem for a linear system with multiple control input delays is given. On the other hand, when considering the packet loss, partial state information, link/node failure, etc., only system stability can be investigated [10]- [14], since the optimality problem is extremely difficult.
With the advances of NCS, the concept of distributed controllers in large scale systems has become an important research topic [15]-[17]. A cross-layer framework for the joint design of wireless networks and distributed controllers is proposed in [15], where centralized control and clock-driven controllers are considered and the total time delay is assumed to be one sample period. The stability of a distributed control strategy is studied in [16], where the network itself acts as a controller, and each node (including the actuator nodes) performs linear combinations of internal state variables of neighboring nodes. The stability of a multicast routing algorithm for a decentralized control system is investigated in [17], assuming no or extremely small time delay. Note that the above works all address stability issues of distributed control; the optimality problem remains unexplored. This paper addresses the optimal control problem for a linear distributed control system with time delays. The form of the performance criterion plays an important role in obtaining the optimal solution: previous studies have mostly focused on the quadratic cost function [5]-[9], which is also used in this work. In this paper, the optimal solution is obtained as a feedback non-cooperative control law that is linear in the current state and the previous control strategies of the distributed controllers. An application of the proposed optimal distributed controller to load frequency control in power grids is also described.
The remainder of this paper is organized as follows. The system model and problem formulation are given in Section II. We then derive the optimal control strategy with two distributed controllers in Section III. Section IV presents the extension to the case of multiple distributed controllers. Numerical results and conclusions are given in Section V and Section VI, respectively.
II. SYSTEM MODEL AND PROBLEM FORMULATION
In this section, we first describe the distributed control system under consideration, and then formulate the optimal control problem as a non-cooperative linear quadratic game (LQG).

A. Distributed Control System

We consider a networked control system with distributed sensors and controllers, as shown in Fig. 1. We assume that the plant is a continuous-time linear time-invariant (LTI) system while all sensors and controllers operate in discrete time. Sensor measurements and feedback control signals are sent separately through a shared wireless network. The system under consideration has a time-driven sensor system sampled at a constant sampling rate and event-driven controller and actuator nodes. We assume that there are M sensor nodes and p distributed controller nodes.
Then, free of perturbations, the continuous-time state and measurement equations are given respectively by

$$\dot{x}(t) = A^c x(t) + \sum_{i=1}^{p} B^c_i u_i(t-\tau_i), \qquad y(t) = C x(t), \tag{1}$$

where x is an M-dimensional plant state vector, u_i is the N-dimensional i-th control input vector, τ_i is the time delay, and A^c and B^c_i are M × M and M × N matrices, respectively. For simplicity, we assume that each sensor observes one dimension of x directly, so that y is an M-dimensional vector and C is an identity matrix. We assume there is a wireless communication network from the sensors to the controllers, and that sampling at the sensor nodes is synchronous with period h. Upon sampling, the measurements are immediately sent to the controller nodes. Thus, the M measurement signals have individual delays τ^{sc}_{i,j}, j ∈ {1, 2, ..., M}, to the i-th controller node. When all measurements have arrived at the controller, a new control signal is calculated and sent to the actuator nodes. The delay from the sensors to the i-th controller node is τ^{sc}_i = max{τ^{sc}_{i,1}, τ^{sc}_{i,2}, ..., τ^{sc}_{i,M}}, and the control signals have delays τ^{ca}_i, i ∈ {1, 2, ..., p}, to the actuator nodes. We assume that all delays in the system are deterministic and known, and, as in [6][7], the total time delay τ^{sc}_i + τ^{ca}_i is assumed to be smaller than one sampling period. The received control signals are then converted to continuous-time signals that act directly on the plant. The signal timings in the control system are illustrated in Fig. 2.
B. Problem Formulation
We assume that all nodes have synchronized clocks. This is needed both for synchronized sampling and for time-stamping of signals. Through time-stamping, all delays are known to the controller nodes. Then, discretizing the process in (1) gives

$$x(k+1) = \Phi x(k) + \sum_{i=1}^{p} \left[ \Gamma_{i,0}\, u_i(k) + \Gamma_{i,1}\, u_i(k-1) \right], \tag{2}$$

where x(k) and u_i(k) represent the state and the control signals at the k-th sampling instant, respectively, and

$$\Phi = e^{A^c h}, \qquad \Gamma_{i,0} = \int_0^{h-\tau^{sc}_i-\tau^{ca}_i} e^{A^c s}\, ds\; B^c_i, \qquad \Gamma_{i,1} = \int_{h-\tau^{sc}_i-\tau^{ca}_i}^{h} e^{A^c s}\, ds\; B^c_i. \tag{3}$$
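To make (3) concrete, here is a small self-contained sketch (our own illustration, not from the paper): for a scalar plant (M = N = 1, made-up numbers a, b, h, τ), the matrix-exponential integrals have closed form, and the delay splits the control effect into Γ_{i,0} and Γ_{i,1}:

```python
# Discretize the scalar version of (1), xdot = a*x + b*u(t - tau), over one
# sampling period h, giving x(k+1) = Phi*x(k) + Gamma0*u(k) + Gamma1*u(k-1)
# as in (2)-(3). Requires a != 0 and 0 <= tau < h.

import math

def discretize(a, b, h, tau):
    Phi = math.exp(a * h)
    # Gamma0 = int_0^{h-tau} e^{a s} ds * b
    Gamma0 = (math.exp(a * (h - tau)) - 1.0) / a * b
    # Gamma1 = int_{h-tau}^{h} e^{a s} ds * b
    Gamma1 = (math.exp(a * h) - math.exp(a * (h - tau))) / a * b
    return Phi, Gamma0, Gamma1

Phi, G0, G1 = discretize(a=-1.0, b=2.0, h=0.1, tau=0.03)
print(Phi, G0, G1)
# Sanity check: with tau = 0, all of the control action lands in Gamma0.
```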
Using a quadratic cost function, the design problem is to find a control strategy that drives the plant from its initial state x_0 while minimizing the total cost, i.e.,

$$\min_{\substack{u_i(k),\; k=0,1,\dots,N-1 \\ i=1,2,\dots,p}} J_N = x^T(N) Q_N x(N) + \sum_{k=0}^{N-1} \Bigg[ x^T(k) Q x(k) + \sum_{i=1}^{p} u_i^T(k) R_i u_i(k) \Bigg],$$
$$\text{s.t. } x(k+1) = \Phi x(k) + \sum_{i=1}^{p} \left[ \Gamma_{i,0} u_i(k) + \Gamma_{i,1} u_i(k-1) \right], \qquad x(0) = x_0, \tag{4}$$

where N is the total number of sampling instants, and Q_N ⪰ 0, Q ≻ 0, and R_i ≻ 0 are symmetric positive semi-definite/definite weight matrices.
Since the controllers are distributed and cannot obtain the current control strategies of each other, we reformulate the optimization problem in (4) as a non-cooperative control game [18]:

$$\min_{u_i(k),\; k=0,1,\dots,N-1} J_{i,N} = x^T(N) Q_{i,N} x(N) + \sum_{k=0}^{N-1} \left[ x^T(k) Q_i x(k) + u_i^T(k) R_i u_i(k) \right], \quad \forall i,$$
$$\text{s.t. } x(k+1) = \Phi x(k) + \sum_{j=1}^{p} \left[ \Gamma_{j,0} u_j(k) + \Gamma_{j,1} u_j(k-1) \right], \qquad x(0) = x_0, \tag{5}$$

where Q_{i,N} ⪰ 0 and Q_i ≻ 0 are symmetric weight matrices.
In what follows, we first focus on a two-controller distributed system, i.e., p = 2, and then extend the results to the case with multiple distributed controllers.
III. OPTIMAL SOLUTION FOR TWO-CONTROLLER CASE
In this section, we first derive the optimal linear control strategy for the non-cooperative game in (5) with two distributed controllers, and then consider two special cases in which the obtained optimal controller reduces to known controllers from the literature.
A. Derivation of Optimal Controllers
Since the two controllers are distributed, at any time, the current control signal of one controller
is not known to the other. We assume that each controller can obtain the other controller's past control signals. Then, the linear control law can be written as
$$u_i(k) = A_i(k)\, x(k) + \sum_{j=1}^{k} \left[ B^i_{1,j}(k)\, u_1(k-j) + B^i_{2,j}(k)\, u_2(k-j) \right], \quad i = 1, 2, \tag{6}$$

where A_i(k), B^i_{1,j}(k), and B^i_{2,j}(k) are N × M, N × N, and N × N coefficient matrices, respectively.
Taking controller 1 as the desired controller and substituting u_2(k) from (6) into (5), we have

$$x(k+1) = \left[ \Phi + \Gamma_{2,0} A_2(k) \right] x(k) + \left[ \Gamma_{1,1} + \Gamma_{2,0} B^2_{1,1}(k) \right] u_1(k-1) + \left[ \Gamma_{2,1} + \Gamma_{2,0} B^2_{2,1}(k) \right] u_2(k-1)$$
$$+\; \Gamma_{1,0}\, u_1(k) + \sum_{i=2}^{k} \left[ \Gamma_{2,0} B^2_{1,i}(k)\, u_1(k-i) + \Gamma_{2,0} B^2_{2,i}(k)\, u_2(k-i) \right]. \tag{7}$$
Define

$$z(k) = \left[ x(k)^T \;\; u_1(k-1)^T \;\; u_2(k-1)^T \;\; \cdots \;\; u_1(0)^T \;\; u_2(0)^T \right]^T. \tag{8}$$
Then, we can rewrite (7) as

$$z(k+1) = C_1(k)\, z(k) + D_1\, u_1(k), \tag{9}$$

where

$$D_1 = \begin{bmatrix} \Gamma_{1,0} \\ I \\ 0 \\ \vdots \\ 0 \end{bmatrix},$$

$$C_1(k) = \begin{bmatrix}
\Phi + \Gamma_{2,0} A_2(k) & \Gamma_{1,1} + \bar B^2_{1,1}(k) & \Gamma_{2,1} + \bar B^2_{2,1}(k) & \bar B^2_{1,2}(k) & \bar B^2_{2,2}(k) & \cdots & \bar B^2_{1,k}(k) & \bar B^2_{2,k}(k) \\
0 & 0 & 0 & 0 & 0 & \cdots & 0 & 0 \\
A_2(k) & B^2_{1,1}(k) & B^2_{2,1}(k) & B^2_{1,2}(k) & B^2_{2,2}(k) & \cdots & B^2_{1,k}(k) & B^2_{2,k}(k) \\
0 & I & 0 & 0 & 0 & \cdots & 0 & 0 \\
0 & 0 & I & 0 & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & & \vdots & \vdots
\end{bmatrix},$$

$$\bar B^2_{i,j}(k) = \Gamma_{2,0} B^2_{i,j}(k), \quad i = 1, 2; \; j = 1, 2, \dots, k, \tag{10}$$

with 0 and I denoting the zero matrix and the identity matrix, respectively, and u_i(k) = 0, i = 1, 2, for k < 0.
Then, the optimization problem for controller 1 in (5) can be rewritten as

$$\min_{u_1(k),\; k=0,1,\dots,N-1} J_{1,N} = z^T(N)\, Q^1_N\, z(N) + \sum_{k=0}^{N-1} \begin{bmatrix} z(k) \\ u_1(k) \end{bmatrix}^T \begin{bmatrix} Q^1_{1,1} & 0 \\ 0 & R_1 \end{bmatrix} \begin{bmatrix} z(k) \\ u_1(k) \end{bmatrix},$$
$$\text{s.t. } z(k+1) = C_1(k)\, z(k) + D_1\, u_1(k), \tag{11}$$

where

$$Q^1_N = \begin{bmatrix} Q_{1,N} & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}, \qquad Q^1_{1,1} = \begin{bmatrix} Q_1 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}. \tag{12}$$

Define

$$V^1_L = \min_{u_1(k),\; k=L,L+1,\dots,N-1} \; z^T(N)\, Q^1_N\, z(N) + \sum_{k=L}^{N-1} \begin{bmatrix} z(k) \\ u_1(k) \end{bmatrix}^T \begin{bmatrix} Q^1_{1,1} & 0 \\ 0 & R_1 \end{bmatrix} \begin{bmatrix} z(k) \\ u_1(k) \end{bmatrix}. \tag{13}$$
We next derive the expressions for V^1_L for different L.

1) L = N: When L = N, we have

$$V^1_N = z^T(N)\, S_1(N)\, z(N), \tag{14}$$

with S_1(N) = Q^1_N.

2) L = N − 1: When L = N − 1, from (9), (13) and (14), we get

$$V^1_{N-1} = \min_{u_1(N-1)} \begin{bmatrix} z(N-1) \\ u_1(N-1) \end{bmatrix}^T \begin{bmatrix} Q^1_{1,1} & 0 \\ 0 & R_1 \end{bmatrix} \begin{bmatrix} z(N-1) \\ u_1(N-1) \end{bmatrix} + z^T(N)\, S_1(N)\, z(N)$$
$$= \min_{u_1(N-1)} \begin{bmatrix} z(N-1) \\ u_1(N-1) \end{bmatrix}^T \begin{bmatrix} P^1_{1,1}(N-1) & P^1_{1,2}(N-1)^T \\ P^1_{1,2}(N-1) & P^1_{2,2}(N-1) \end{bmatrix} \begin{bmatrix} z(N-1) \\ u_1(N-1) \end{bmatrix}, \tag{15}$$

where

$$P^1_{1,1}(N-1) = C_1^T(N-1)\, S_1(N)\, C_1(N-1) + Q^1_{1,1}, \qquad P^1_{1,2}(N-1) = D_1^T S_1(N)\, C_1(N-1), \qquad P^1_{2,2}(N-1) = D_1^T S_1(N)\, D_1 + R_1. \tag{16}$$
The optimal solution to (15) is given by [20]

$$u_1(N-1) = -L_1(N-1)\, z(N-1), \tag{17}$$

where

$$L_1(N-1) = \left[ P^1_{2,2}(N-1) \right]^{-1} P^1_{1,2}(N-1). \tag{18}$$

Similarly, we get the optimal control strategy of the other controller as

$$u_2(N-1) = -L_2(N-1)\, z(N-1), \tag{19}$$

where

$$L_2(N-1) = \left[ P^2_{2,2}(N-1) \right]^{-1} P^2_{1,2}(N-1), \tag{20}$$

and

$$P^2_{1,2}(N-1) = D_2^T S_2(N)\, C_2(N-1), \qquad P^2_{2,2}(N-1) = D_2^T S_2(N)\, D_2 + R_2,$$

$$S_2(N) = \begin{bmatrix} Q_{2,N} & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}, \qquad D_2 = \begin{bmatrix} \Gamma_{2,0} \\ 0 \\ I \\ 0 \\ \vdots \\ 0 \end{bmatrix},$$

$$C_2(k) = \begin{bmatrix}
\Phi + \Gamma_{1,0} A_1(k) & \Gamma_{1,1} + \bar B^1_{1,1}(k) & \Gamma_{2,1} + \bar B^1_{2,1}(k) & \bar B^1_{1,2}(k) & \bar B^1_{2,2}(k) & \cdots & \bar B^1_{1,k}(k) & \bar B^1_{2,k}(k) \\
A_1(k) & B^1_{1,1}(k) & B^1_{2,1}(k) & B^1_{1,2}(k) & B^1_{2,2}(k) & \cdots & B^1_{1,k}(k) & B^1_{2,k}(k) \\
0 & 0 & 0 & 0 & 0 & \cdots & 0 & 0 \\
0 & I & 0 & 0 & 0 & \cdots & 0 & 0 \\
0 & 0 & I & 0 & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & & \vdots & \vdots
\end{bmatrix},$$

$$\bar B^1_{i,j}(k) = \Gamma_{1,0} B^1_{i,j}(k), \quad i = 1, 2; \; j = 1, 2, \dots, k. \tag{21}$$
From (6), (8), (17) and (19), we have

$$L_i(N-1) = -\left[ A_i(N-1) \;\; B^i_{1,1}(N-1) \;\; B^i_{2,1}(N-1) \;\; \cdots \;\; B^i_{1,N-1}(N-1) \;\; B^i_{2,N-1}(N-1) \right], \quad i = 1, 2. \tag{22}$$

Based on (18) and (20), from (22) we obtain

$$B^1_{j,i}(N-1) = -\alpha\, B^2_{j,i}(N-1), \qquad B^2_{j,i}(N-1) = -\beta\, B^1_{j,i}(N-1), \qquad i = 2, 3, \dots, N-1; \; j = 1, 2, \tag{23}$$

where

$$\alpha = \left[ \Gamma_{1,0}^T Q_{1,N} \Gamma_{1,0} + R_1 \right]^{-1} \Gamma_{1,0}^T Q_{1,N} \Gamma_{2,0}, \qquad \beta = \left[ \Gamma_{2,0}^T Q_{2,N} \Gamma_{2,0} + R_2 \right]^{-1} \Gamma_{2,0}^T Q_{2,N} \Gamma_{1,0}. \tag{24}$$

Then, from (23), we have

$$B^1_{j,i}(N-1) \equiv \alpha\beta\, B^1_{j,i}(N-1), \qquad i = 2, 3, \dots, N-1; \; j = 1, 2, \tag{25}$$

which means that

$$B^1_{j,i}(N-1) = B^2_{j,i}(N-1) = 0, \qquad i = 2, 3, \dots, N-1; \; j = 1, 2. \tag{26}$$

Based on (22) and (26), when L = N − 1 the optimal solutions can be simplified as

$$L_i(N-1) = -\left[ A_i(N-1) \;\; B^i_{1,1}(N-1) \;\; B^i_{2,1}(N-1) \right], \quad i = 1, 2. \tag{27}$$
Then, from (17) and (19), the optimal control strategies of the two controllers can be rewritten as

$$u_i(N-1) = -L_i(N-1) \begin{bmatrix} x(N-1) \\ u_1(N-2) \\ u_2(N-2) \end{bmatrix}, \quad i = 1, 2, \tag{28}$$

and the related parameters can be simplified as

$$Q^i_{1,1} = \begin{bmatrix} Q_i & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \qquad S_i(N) = \begin{bmatrix} Q_{i,N} & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad i = 1, 2, \tag{29}$$

and

$$D_1 = \begin{bmatrix} \Gamma_{1,0} \\ I \\ 0 \end{bmatrix}, \qquad D_2 = \begin{bmatrix} \Gamma_{2,0} \\ 0 \\ I \end{bmatrix},$$

$$C_1(k) = \begin{bmatrix} \Phi + \Gamma_{2,0} A_2(k) & \Gamma_{1,1} + \Gamma_{2,0} B^2_{1,1}(k) & \Gamma_{2,1} + \Gamma_{2,0} B^2_{2,1}(k) \\ 0 & 0 & 0 \\ A_2(k) & B^2_{1,1}(k) & B^2_{2,1}(k) \end{bmatrix},$$

$$C_2(k) = \begin{bmatrix} \Phi + \Gamma_{1,0} A_1(k) & \Gamma_{1,1} + \Gamma_{1,0} B^1_{1,1}(k) & \Gamma_{2,1} + \Gamma_{1,0} B^1_{2,1}(k) \\ A_1(k) & B^1_{1,1}(k) & B^1_{2,1}(k) \\ 0 & 0 & 0 \end{bmatrix}. \tag{30}$$
Substituting u_1(N − 1) from (28) into (15), V^1_{N−1} can be expressed as

$$V^1_{N-1} = \begin{bmatrix} x(N-1) \\ u_1(N-2) \\ u_2(N-2) \end{bmatrix}^T \bar S_1(N-1) \begin{bmatrix} x(N-1) \\ u_1(N-2) \\ u_2(N-2) \end{bmatrix} = z^T(N-1)\, S_1(N-1)\, z(N-1), \tag{31}$$

where

$$\bar S_1(N-1) = P^1_{1,1}(N-1) - L_1^T(N-1)\, P^1_{2,2}(N-1)\, L_1(N-1), \qquad S_1(N-1) = \begin{bmatrix} \bar S_1(N-1) & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}. \tag{32}$$
3) L = N − 2, ..., 1, 0: When L = N − 2, from (13) and (31), we have

$$V^1_{N-2} = \min_{u_1(N-2)} \begin{bmatrix} z(N-2) \\ u_1(N-2) \end{bmatrix}^T \begin{bmatrix} Q^1_{1,1} & 0 \\ 0 & R_1 \end{bmatrix} \begin{bmatrix} z(N-2) \\ u_1(N-2) \end{bmatrix} + z^T(N-1)\, S_1(N-1)\, z(N-1). \tag{33}$$
We can see that (15) and (33) have the same form. Thus, repeating the same process as for L = N − 1, we can derive the optimal controllers u_i(k), k = N − 2, ..., 1, 0, which can be expressed as

$$u_i(k) = -L_i(k) \begin{bmatrix} x(k) \\ u_1(k-1) \\ u_2(k-1) \end{bmatrix}, \quad i = 1, 2; \; k = 0, 1, \dots, N-1, \tag{34}$$

where

$$L_i(k) = \left[ P^i_{2,2}(k) \right]^{-1} P^i_{1,2}(k), \qquad S_i(k) = P^i_{1,1}(k) - L_i^T(k)\, P^i_{2,2}(k)\, L_i(k), \tag{35}$$

and

$$P^i_{1,1}(k) = C_i^T(k)\, S_i(k+1)\, C_i(k) + Q^i_{1,1}, \qquad P^i_{1,2}(k) = D_i^T S_i(k+1)\, C_i(k), \qquad P^i_{2,2}(k) = D_i^T S_i(k+1)\, D_i + R_i. \tag{36}$$

From (6) and (34), we have

$$L_i(k) = -\left[ A_i(k) \;\; B^i_{1,1}(k) \;\; B^i_{2,1}(k) \right], \quad i = 1, 2; \; k = 0, 1, \dots, N-1, \tag{37}$$

which means that the optimal control strategies are linear in the current plant state and the previous control strategies.

Based on (35) and (37), we can deduce the values of A_i(k), B^i_{1,1}(k) and B^i_{2,1}(k), i = 1, 2, as follows (see Appendix A for details):

$$A_1(k) = \left[ I - a^1_2(k)\, a^2_2(k) \right]^{-1} \left[ a^1_2(k)\, a^2_1(k) - a^1_1(k) \right], \qquad B^1_{1,1}(k) = \left[ I - b^1_2(k)\, b^2_2(k) \right]^{-1} \left[ b^1_2(k)\, b^2_1(k) - b^1_1(k) \right],$$
$$B^1_{2,1}(k) = \left[ I - c^1_2(k)\, c^2_2(k) \right]^{-1} \left[ c^1_2(k)\, c^2_1(k) - c^1_1(k) \right], \qquad A_2(k) = \left[ I - a^2_2(k)\, a^1_2(k) \right]^{-1} \left[ a^2_2(k)\, a^1_1(k) - a^2_1(k) \right],$$
$$B^2_{1,1}(k) = \left[ I - b^2_2(k)\, b^1_2(k) \right]^{-1} \left[ b^2_2(k)\, b^1_1(k) - b^2_1(k) \right], \qquad B^2_{2,1}(k) = \left[ I - c^2_2(k)\, c^1_2(k) \right]^{-1} \left[ c^2_2(k)\, c^1_1(k) - c^2_1(k) \right]. \tag{38}$$
Then, using (37) and (38) in (34), we obtain the optimal control strategies. The algorithm can be summarized as follows.

The optimal distributed controllers

Off-line:
1: Initialize S_1(N) and S_2(N) using (29).
2: for k = N-1 : -1 : 0 do
3:   Calculate A_i(k), B^i_{1,1}(k) and B^i_{2,1}(k), i = 1, 2, using (38). Calculate L_1(k) and L_2(k) using (37). Calculate S_1(k) and S_2(k) using (35).
4: end for

On-line:
1: Initialize x(0) = x_0, and u_i(k) = 0, i = 1, 2, for k < 0.
2: for k = 0 : 1 : N-1 do
3:   Use x(k), u_1(k-1), u_2(k-1) and L_1(k) to compute u_1(k) in (34). Use x(k), u_1(k-1), u_2(k-1) and L_2(k) to compute u_2(k) in (34).
4:   Exchange the control signals u_1(k) and u_2(k) between the two controllers.
5: end for
B. Special Cases
In this subsection, the optimal control solutions derived in the last subsection are applied to two special cases. One is the optimal control strategy for a single controller with time delay, and the other is the optimal control strategy for two controllers without time delays.
1) Single controller with time delay:
In the case of a single controller, we have
$$A_2(k) = B^2_{1,1}(k) = B^2_{2,1}(k) = 0, \qquad \Gamma_{2,0} = \Gamma_{2,1} = 0. \tag{39}$$
Then, the optimal control strategies in (34) can be simplified as
$$u_1(k) = -L_1(k) \begin{bmatrix} x(k) \\ u_1(k-1) \end{bmatrix}, \quad k = 0, 1, \dots, N-1, \tag{40}$$
where
$$L_1(k) = \left(P^1_{2,2}(k)\right)^{-1} P^1_{1,2}(k), \qquad S_1(k) = P^1_{1,1}(k) - L_1^T(k)\,P^1_{2,2}(k)\,L_1(k), \qquad S_1(N) = \begin{bmatrix} Q_{1,N} & 0 \\ 0 & 0 \end{bmatrix},$$
$$P^1_{1,1}(k) = C_1^T(k)\,S_1(k+1)\,C_1(k) + Q^1_{1,1}, \quad P^1_{1,2}(k) = D_1^T\,S_1(k+1)\,C_1(k), \quad P^1_{2,2}(k) = D_1^T\,S_1(k+1)\,D_1 + R_1, \tag{41}$$
and
$$Q^1_{1,1} = \begin{bmatrix} Q_1 & 0 \\ 0 & 0 \end{bmatrix}, \qquad D_1 = \begin{bmatrix} \Gamma_{1,0} \\ I \end{bmatrix}, \qquad C_1(k) = \begin{bmatrix} \Phi & \Gamma_{1,1} \\ 0 & 0 \end{bmatrix}. \tag{42}$$
The above optimal solution is the same as that in [19] when the time delay is deterministic.
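The single-controller recursion (40)-(42) is compact enough to implement directly. The sketch below is our own illustration, not code from the paper; the crude discretization of the Section V test system (the split of the input matrix into Γ_{1,0} and Γ_{1,1}) is an assumption used only to exercise the recursion:

```python
import numpy as np

def single_controller_gains(Phi, G10, G11, Q1, R1, Q1N, N):
    """Backward recursion (40)-(42) for one controller with one-step input delay.

    Augmented state z(k) = [x(k); u1(k-1)];
    D1 = [G10; I], C1 = [[Phi, G11], [0, 0]], S1(N) = blkdiag(Q1N, 0).
    Returns the gains L1(k), so that u1(k) = -L1(k) z(k)."""
    n, m = Phi.shape[0], G10.shape[1]
    D1 = np.vstack([G10, np.eye(m)])
    C1 = np.block([[Phi, G11], [np.zeros((m, n)), np.zeros((m, m))]])
    Q11 = np.block([[Q1, np.zeros((n, m))], [np.zeros((m, n)), np.zeros((m, m))]])
    S = np.block([[Q1N, np.zeros((n, m))], [np.zeros((m, n)), np.zeros((m, m))]])
    gains = [None] * N
    for k in range(N - 1, -1, -1):
        P11 = C1.T @ S @ C1 + Q11          # P^1_{1,1}(k), eq. (41)
        P12 = D1.T @ S @ C1                # P^1_{1,2}(k)
        P22 = D1.T @ S @ D1 + R1           # P^1_{2,2}(k)
        L = np.linalg.solve(P22, P12)      # L1(k) = P22^{-1} P12
        S = P11 - L.T @ P22 @ L            # S1(k)
        gains[k] = L
    return gains

# toy data: the Section V-A system, naively discretized (our assumption)
Phi = np.eye(2) + 0.05 * np.array([[0.0, 1.0], [-3.0, -4.0]])
G10 = 0.03 * np.array([[0.0], [1.0]])   # effect of u(k)
G11 = 0.02 * np.array([[0.0], [1.0]])   # delayed effect of u(k-1)
gains = single_controller_gains(Phi, G10, G11,
                                100 * np.eye(2), np.eye(1), 100 * np.eye(2), 50)
print(gains[0].shape)  # (1, 3): u1(k) = -L1(k) [x(k); u1(k-1)]
```

The augmented-state trick is exactly what (40)-(42) encode: the delayed input is carried as an extra state block, so the delayed problem reduces to a standard LQR backward pass.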
2) Two controllers without time delays:
If we ignore the time delays, we have
$$B^1_{1,1}(k) = B^1_{2,1}(k) = B^2_{1,1}(k) = B^2_{2,1}(k) = 0, \qquad \Gamma_{i,1} = 0, \quad i = 1, 2. \tag{43}$$
Then, the optimal control strategies become
$$u_i(k) = A_i(k)\,x(k), \quad i = 1, 2;\ k = 0, 1, \dots, N-1, \tag{44}$$
where $A_i(k)$ is given by

$$A_1(k) = \left(I - a^1_2(k)\,a^2_2(k)\right)^{-1}\left(a^1_2(k)\,a^2_1(k) - a^1_1(k)\right), \qquad A_2(k) = \left(I - a^2_2(k)\,a^1_2(k)\right)^{-1}\left(a^2_2(k)\,a^1_1(k) - a^2_1(k)\right), \tag{45}$$
where
$$a^i_1(k) = \left(R_i + \Gamma_{i,0}^T S_i(k+1)\,\Gamma_{i,0}\right)^{-1} \Gamma_{i,0}^T S_i(k+1)\,\Phi, \qquad a^i_2(k) = \left(R_i + \Gamma_{i,0}^T S_i(k+1)\,\Gamma_{i,0}\right)^{-1} \Gamma_{i,0}^T S_i(k+1)\,\Gamma_{3-i,0}, \tag{46}$$
and
$$S_i(N) = Q_{i,N}, \qquad S_i(k) = Q_i + \left(\Phi + \Gamma_{3-i,0}A_{3-i}(k)\right)^T S_i(k+1) \left(\Phi + \Gamma_{3-i,0}A_{3-i}(k)\right) - A_i^T(k)\left(\Gamma_{i,0}^T S_i(k+1)\,\Gamma_{i,0} + R_i\right) A_i(k). \tag{47}$$
These results correspond to the discrete-time control strategies for the non-cooperative feedback games in [18].
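As a sketch (ours, not from the paper), the delay-free recursion (45)-(47) can be implemented directly; the scalar test system at the end is an arbitrary assumption used only to exercise the code:

```python
import numpy as np

def nash_gains_no_delay(Phi, G1, G2, Q1, Q2, R1, R2, Q1N, Q2N, N):
    """Feedback-Nash backward recursion (45)-(47), two controllers, no delay.
    Returns [A_1(k), A_2(k)] for k = 0..N-1, with u_i(k) = A_i(k) x(k)."""
    S = [Q1N.copy(), Q2N.copy()]
    G, Q, R = [G1, G2], [Q1, Q2], [R1, R2]
    A_hist = []
    for _ in range(N):
        a1, a2 = [], []
        for i in range(2):                               # eq. (46)
            Ei = np.linalg.inv(R[i] + G[i].T @ S[i] @ G[i])
            a1.append(Ei @ G[i].T @ S[i] @ Phi)          # a^i_1(k)
            a2.append(Ei @ G[i].T @ S[i] @ G[1 - i])     # a^i_2(k)
        A = [None, None]
        for i in range(2):                               # eq. (45)
            A[i] = np.linalg.solve(np.eye(a1[i].shape[0]) - a2[i] @ a2[1 - i],
                                   a2[i] @ a1[1 - i] - a1[i])
        for i in range(2):                               # eq. (47)
            F = Phi + G[1 - i] @ A[1 - i]
            S[i] = Q[i] + F.T @ S[i] @ F \
                 - A[i].T @ (G[i].T @ S[i] @ G[i] + R[i]) @ A[i]
        A_hist.append(A)
    return A_hist[::-1]  # ordered k = 0 .. N-1

# toy scalar system (our assumption): two symmetric players
Phi = np.array([[1.05]]); G = np.array([[0.1]]); I1 = np.eye(1)
gains = nash_gains_no_delay(Phi, G, G, I1, I1, I1, I1, I1, I1, 30)
A0 = gains[0]
print(A0[0])  # symmetric players get identical (negative) gains
```

Note that inside one backward step, every occurrence of `S[i]` on the right-hand side refers to $S_i(k+1)$; the updates of the two value matrices do not interfere with each other.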
IV. EXTENSION TO MULTIPLE DISTRIBUTED CONTROLLERS
In this section, we extend the results for the case of two distributed controllers in Section III to multiple distributed controllers. We will omit the detailed derivations since they are similar to those in Section III.
Similar to (37), the optimal linear control strategies are linear in the current plant states and the last control strategies, i.e.,
$$u_i(k) = A_i(k)\,x(k) + \sum_{j=1}^{p} B^i_j(k)\,u_j(k-1), \quad i = 1, 2, \dots, p, \tag{48}$$
where p is the number of controllers, and A_i and B^i_j, j = 1, 2, ..., p, are coefficient matrices. Taking controller i as the desired one, we can rewrite the control process as
$$x(k+1) = \Phi x(k) + \sum_{j=1}^{p}\left[\Gamma_{j,0}\,u_j(k) + \Gamma_{j,1}\,u_j(k-1)\right] = \left(\Phi + \sum_{\substack{m=1 \\ m \neq i}}^{p} \Gamma_{m,0} A_m(k)\right) x(k) + \Gamma_{i,0}\,u_i(k) + \sum_{j=1}^{p}\left(\Gamma_{j,1} + \sum_{\substack{n=1 \\ n \neq i}}^{p} \Gamma_{n,0} B^n_j(k)\right) u_j(k-1). \tag{49}$$
Define
$$z(k) = \begin{bmatrix} x(k) \\ u_1(k-1) \\ u_2(k-1) \\ \vdots \\ u_p(k-1) \end{bmatrix}. \tag{50}$$
Proceeding as in Section III, we can derive the optimal solution for controller i as
$$u_i(k) = -L_i(k)\,z(k), \quad k = 0, 1, \dots, N-1, \qquad L_i(k) = \left(P^i_{2,2}(k)\right)^{-1} P^i_{1,2}(k), \tag{51}$$
where
$$S_i(k) = P^i_{1,1}(k) - L_i^T(k)\,P^i_{2,2}(k)\,L_i(k), \qquad S_i(N) = \begin{bmatrix} Q_{i,N} & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix},$$
$$P^i_{1,1}(k) = C_i^T(k)\,S_i(k+1)\,C_i(k) + Q^i_{1,1}, \quad P^i_{1,2}(k) = D_i^T\,S_i(k+1)\,C_i(k), \quad P^i_{2,2}(k) = D_i^T\,S_i(k+1)\,D_i + R_i, \tag{52}$$
and
$$Q^i_{1,1} = \begin{bmatrix} Q_i & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}, \qquad D_i = \begin{bmatrix} \Gamma_{i,0} \\ 0 \\ \vdots \\ I_{i+1} \\ \vdots \\ 0 \end{bmatrix},$$

$$C_i(k) = \begin{bmatrix} \Phi + \sum_{\substack{n=1 \\ n \neq i}}^{p} \Gamma_{n,0} A_n(k) & \Gamma_{1,1} + \bar{B}_1(k) & \Gamma_{2,1} + \bar{B}_2(k) & \cdots & \Gamma_{p,1} + \bar{B}_p(k) \\ A_1(k) & B^1_1(k) & B^1_2(k) & \cdots & B^1_p(k) \\ \vdots & \vdots & \vdots & & \vdots \\ A_{i-1}(k) & B^{i-1}_1(k) & B^{i-1}_2(k) & \cdots & B^{i-1}_p(k) \\ 0 & 0 & 0 & \cdots & 0 \\ A_{i+1}(k) & B^{i+1}_1(k) & B^{i+1}_2(k) & \cdots & B^{i+1}_p(k) \\ \vdots & \vdots & \vdots & & \vdots \\ A_p(k) & B^p_1(k) & B^p_2(k) & \cdots & B^p_p(k) \end{bmatrix}, \tag{53}$$

where $\bar{B}_j(k) = \sum_{\substack{n=1 \\ n \neq i}}^{p} \Gamma_{n,0} B^n_j(k)$, and $I_{i+1}$ denotes the $(i+1)$-th block of $D_i$, which is an identity matrix of size $N \times N$.
From (48) and (51), for controller i, i = 1, 2, ..., p, we can obtain

$$A_i(k) = -E_i^{-1}\left[\left(\Gamma_{i,0}^T S^i_{1,1}(k+1) + S^i_{i+1,1}(k+1)\right)\Phi + \sum_{\substack{j=1 \\ j \neq i}}^{p} F^j_i\,A_j(k)\right],$$
$$B^i_1(k) = -E_i^{-1}\left[\left(\Gamma_{i,0}^T S^i_{1,1}(k+1) + S^i_{i+1,1}(k+1)\right)\Gamma_{1,1} + \sum_{\substack{j=1 \\ j \neq i}}^{p} F^j_i\,B^j_1(k)\right],$$
$$\vdots$$
$$B^i_p(k) = -E_i^{-1}\left[\left(\Gamma_{i,0}^T S^i_{1,1}(k+1) + S^i_{i+1,1}(k+1)\right)\Gamma_{p,1} + \sum_{\substack{j=1 \\ j \neq i}}^{p} F^j_i\,B^j_p(k)\right], \tag{54}$$
where
$$E_i = D_i^T\,S_i(k+1)\,D_i + R_i, \qquad F^j_i = \Gamma_{i,0}^T S^i_{1,1}(k+1)\,\Gamma_{j,0} + S^i_{i+1,1}(k+1)\,\Gamma_{j,0} + \Gamma_{i,0}^T S^i_{1,j+1}(k+1) + S^i_{i+1,j+1}(k+1), \tag{55}$$
and $S^i_{m,n}(k+1)$ is the $(m,n)$-th block of the matrix $S_i(k+1)$; its size is $M \times M$ when $m = n = 1$, $M \times N$ when $m = 1$ and $n \geq 2$, $N \times M$ when $m \geq 2$ and $n = 1$, and $N \times N$ when $m, n \geq 2$.
It can be seen that all the equations in (54) are linear in the unknown gains, so we can easily calculate all values of $A_i(k)$, $B^i_j(k)$, $i = 1, 2, \dots, p$, $j = 1, 2, \dots, p$. The optimal control strategies then follow from (48).
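To illustrate the remark above, the helper below (an illustrative sketch, not code from the paper) shows one way to solve equations of the shape of (54): since every unknown gain is multiplied from the left only, the unknowns X_1, ..., X_p can be stacked into a single block linear system. The matrices E_i, F[i][j] and right-hand sides G_i are assumed to be precomputed from (55):

```python
import numpy as np

def solve_coupled_gains(E, F, G):
    """Solve the coupled linear equations
        X_i = -E_i^{-1} ( G_i + sum_{j != i} F[i][j] X_j ),   i = 1..p,
    i.e. E_i X_i + sum_{j != i} F[i][j] X_j = -G_i, for the unknown gain
    matrices X_1..X_p. E and G are lists of matrices; F is a p x p list of
    lists (F[i][i] is never used)."""
    p = len(E)
    m = [Ei.shape[0] for Ei in E]
    # one block matrix: diagonal blocks E_i, off-diagonal blocks F[i][j]
    M = np.block([[E[i] if i == j else F[i][j] for j in range(p)]
                  for i in range(p)])
    X = np.linalg.solve(M, -np.vstack(G))
    return np.vsplit(X, np.cumsum(m)[:-1])

# toy data (our assumption) with p = 2 controllers
E = [2.0 * np.eye(2), 3.0 * np.eye(2)]
F = [[None, 0.1 * np.ones((2, 2))], [0.2 * np.ones((2, 2)), None]]
G = [np.arange(6.0).reshape(2, 3), np.ones((2, 3))]
X = solve_coupled_gains(E, F, G)
print(np.allclose(E[0] @ X[0] + F[0][1] @ X[1], -G[0]))  # True
```

In (54) one such system has to be solved per time step k and per column block of the right-hand sides (Φ and each Γ_{j,1}); the same stacked matrix M can be reused for all of them.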
V. SIMULATION RESULTS
In this section, we provide simulation studies on the performance of the proposed optimal distributed control strategies. First we consider a generic control system, and then introduce a power-grid application.
A. A Generic System
We consider a system with two distributed controllers. The sampling period is chosen as h = 0.05, the sampling duration N = 50, and the other parameters of the control system are set as follows [7]:
$$A = \begin{bmatrix} 0 & 1 \\ -3 & -4 \end{bmatrix}, \qquad B_1 = B_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix},$$
and, for simplicity, we choose

$$Q_{i,N} = Q_i = 100 \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad R_i = 1, \quad i = 1, 2.$$

Fig. 3 shows the total costs of the system under various time delays, where TD1 and TD2 denote the time delays of controller 1 and controller 2, respectively. The total cost becomes larger when either time delay increases, especially when both time delays are large. This is because the effect of the previous control signals increases when the time delay becomes larger, which leads to less system stability and a larger cost. Fig. 4 depicts the ratio of costs between controller 1 and controller 2. It can be seen that the ratio decreases with TD1 and increases with TD2, which means that the controller with the smaller time delay contributes more to the total cost: when its time delay is smaller, the system is more stable under that controller, and the effect of that controller increases.

Fig. 5 shows the performance comparison for three schemes: (1) the proposed distributed control algorithm; (2) the LQG-controller algorithm designed for a single controller with time delay in [19], where controller 1 is assumed to be the desired controller; (3) the LQG-controller algorithm designed for two distributed controllers neglecting the time delays in [18]. In the simulations, TD2 is set to be 0 and 0.02, and TD1 varies within [0, 0.02]. It can be seen that the proposed algorithm outperforms the other two schemes in the sense that it has a lower total cost. It can also be seen that, when TD2 = 0.02, the total cost of the two-distributed-controllers-without-delays scheme increases more rapidly with the time delay than the other two schemes; this scheme cannot effectively deal with the time delay, so a large time delay introduces severe performance degradation. Note that, in Fig. 5, the total cost of the single-controller scheme is the same for different TD2, since only controller 1 is considered in this scheme.
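The discrete-time matrices Φ, Γ_{i,0}, Γ_{i,1} used in the simulations can be obtained from (A, B_i), the sampling period h and the delay τ_i by the standard zero-order-hold formulas with input delay (cf. Astrom and Wittenmark [20]). The sketch below is our own; it computes them via the exponential of an augmented matrix and checks the delay-independent identity Γ_0 + Γ_1 = ∫_0^h e^{As} ds · B:

```python
import numpy as np

def expm(M, terms=60):
    """Truncated Taylor-series matrix exponential (fine for small-norm M)."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for j in range(1, terms):
        term = term @ M / j
        out = out + term
    return out

def discretize_with_delay(A, B, h, tau):
    """Zero-order-hold discretization with input delay 0 <= tau < h:
        x(k+1) = Phi x(k) + Gamma0 u(k) + Gamma1 u(k-1),
        Phi    = e^{A h},
        Gamma0 = int_0^{h-tau} e^{A s} ds  B,
        Gamma1 = e^{A (h-tau)} int_0^{tau} e^{A s} ds  B.
    Both integrals are read off the exponential of [[A, B], [0, 0]] t."""
    n, m = B.shape
    aug = np.zeros((n + m, n + m))
    aug[:n, :n], aug[:n, n:] = A, B

    def step(t):  # returns (e^{A t}, int_0^t e^{A s} ds B)
        E = expm(aug * t)
        return E[:n, :n], E[:n, n:]

    Phi, _ = step(h)
    E_hmt, Gamma0 = step(h - tau)
    _, I_tau = step(tau)
    return Phi, Gamma0, E_hmt @ I_tau

A = np.array([[0.0, 1.0], [-3.0, -4.0]])
B = np.array([[0.0], [1.0]])
Phi, G0, G1 = discretize_with_delay(A, B, 0.05, 0.02)
Phi2, Gfree, Gz = discretize_with_delay(A, B, 0.05, 0.0)
print(np.allclose(G0 + G1, Gfree))  # True: the delay only splits the input gain
```

The check holds because Γ_1 equals ∫_{h-τ}^{h} e^{As} ds · B after a change of variables, so the two pieces always sum to the delay-free input matrix.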
B. Load Frequency Control in Power Grid
We next consider the application of the proposed optimal distributed control scheme to two-area load frequency control (LFC) in power grid systems [21], [22]. The LFC block diagram is shown in Fig. 6, where the system states and feedback control signals are separately transmitted through a shared wireless network, which incurs the time delays. The adjusting speed u_i will be optimally designed according to the requested deviation of generator outputs ΔP_ci.
The linear dynamic control model can be described as

$$\dot{x}(t) = A_c\,x(t) + B^c_1\,u_1(t - \tau_1) + B^c_2\,u_2(t - \tau_2), \tag{56}$$
where
$$x(t) = \begin{bmatrix} \Delta f_1 & \Delta P_{g1} & \Delta X_{g1} & \Delta f_2 & \Delta P_{g2} & \Delta X_{g2} & \Delta P_{tie} & \Delta P_{c1} & \Delta P_{c2} \end{bmatrix}^T,$$

$$A_c = \begin{bmatrix}
\frac{-1}{T_{p1}} & \frac{K_{p1}}{T_{p1}} & 0 & 0 & 0 & 0 & \frac{-K_{p1}}{T_{p1}} & 0 & 0 \\
0 & \frac{-1}{T_{t1}} & \frac{1}{T_{t1}} & 0 & 0 & 0 & 0 & 0 & 0 \\
\frac{-1}{r_1 T_{g1}} & 0 & \frac{-1}{T_{g1}} & 0 & 0 & 0 & 0 & \frac{1}{T_{g1}} & 0 \\
0 & 0 & 0 & \frac{-1}{T_{p2}} & \frac{K_{p2}}{T_{p2}} & 0 & \frac{K_{p2}}{T_{p2}} & 0 & 0 \\
0 & 0 & 0 & 0 & \frac{-1}{T_{t2}} & \frac{1}{T_{t2}} & 0 & 0 & 0 \\
0 & 0 & 0 & \frac{-1}{r_2 T_{g2}} & 0 & \frac{-1}{T_{g2}} & 0 & 0 & \frac{1}{T_{g2}} \\
T_{12} & 0 & 0 & -T_{12} & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}, \qquad B^c_1 = B^c_2 = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \\ 1 \end{bmatrix}, \tag{57}$$
where the subscript i = 1, 2 denotes the i-th control area; Δf_i is the deviation of frequency; ΔP_gi is the deviation of generator mechanical output; ΔX_gi is the deviation of valve position; ΔP_tie is the deviation of tie-line power; ΔP_ci is the requested deviation of generator output; T_gi is the time constant of the governor; T_ti is the time constant of the turbine; K_pi is the electric system gain; T_pi is the electric system time constant; T_12 is the tie-line synchronizing coefficient; and r_i is the speed drop.
In Fig. 6, the system state x(t) has nine elements. We need nine sensors, each observing one dimension of x(t) directly; after sampling, the measurement signals are immediately sent to the LFC controllers. Discretizing the process at the sampling instants then gives the same formulas as in (2) and (3) with p = 2.
In the simulation, based on [21], [22], the sampling period is set to h = 0.01, T_12 = 2.4, K_pi = 1, T_pi = 0.2, T_ti = 0.3, T_gi = 0.08, r_i = 0.2545, R_i = 1, i = 1, 2, and
Q i,N = Q i = , i = 1, 2.(58)
In Fig. 7, we also compare the system performances of the three schemes. Again we see that the proposed optimal distributed control strategy significantly outperforms the other two methods.
VI. CONCLUSIONS
We have considered the problem of optimal control for networked control systems with distributed controllers and time delays under the linear quadratic control framework. In particular, the optimal control problem is formulated as a non-cooperative linear quadratic game, and we have obtained the optimal distributed controllers assuming that the time delays between sensors and actuators are deterministic and within one sampling period. We have also applied the proposed optimal distributed control scheme to load frequency control in power grid systems. Future work includes investigating the optimal distributed controllers when the time delays are longer than one sampling period or stochastic.
Fig. 1. The structure of a networked distributed control system.

Fig. 2. Timing of signals in the control system.

Fig. 3. Total cost with two distributed controllers under various time delays.

Fig. 4. Ratio of costs between two distributed controllers under various time delays.

Fig. 5. Performance comparison for three schemes with various TD1.

Fig. 6. Block diagram of a two-area LFC system for power grid.

Fig. 7. Performance comparison for three schemes under various TD1 for a two-area LFC system.
APPENDIX A

From (35) and (36), we can rewrite L_1(k) as

$$L_1(k) = \left(P^1_{2,2}(k)\right)^{-1} P^1_{1,2}(k). \tag{A-1}$$

From (A-1) and (37), we have (A-2). Similarly, we have (A-3) and (A-4), where $S^2_{m,n}(k+1)$ is the $(m,n)$-th block of the matrix $S_2(k+1)$. Then, based on (A-3) and (A-4), we can derive (38).
REFERENCES

[1] E. Lee, "Cyber physical systems: Design challenges," University of California, Berkeley Technical Report, 2008.
[2] ISO New England Inc., Overview of the Smart Grid: Policies, Initiatives and Needs, Feb. 17, 2009.
[3] F. Bullo, J. Cortes, and S. Martinez, Distributed Control of Robotic Networks: A Mathematical Approach to Motion Coordination Algorithms, Princeton University Press, Princeton, NJ, 2009.
[4] P. Marwedel, "Embedded and cyber-physical systems in a nutshell," in Proc. Design Autom. Conf., Anaheim, CA, 2010.
[5] R. Luck and A. Ray, "An observer-based compensator for distributed delays," Automatica, vol. 26, no. 5, pp. 903-908, 1990.
[6] J. Nilsson, B. Bernhardsson, and B. Wittenmark, "Stochastic analysis and control of real-time systems with random time delays," in Proc. 13th Int. Fed. of Autom. Control World Congress, pp. 267-272, 1996.
[7] J. Nilsson, B. Bernhardsson, and B. Wittenmark, "Stochastic analysis and control of real-time systems with random time delays," Automatica, vol. 34, no. 1, pp. 57-64, Jan. 1998.
[8] S. Hu and W. Zhu, "Stochastic optimal control and analysis of stability of networked control systems with long delay," Automatica, vol. 39, pp. 1877-1884, 2003.
[9] M. V. Basin and J. R. Gonzalez, "Optimal control for linear systems with multiple time delays in control input," IEEE Trans. Autom. Control, vol. 51, no. 1, pp. 91-97, Jan. 2006.
[10] W. Zhang, M. S. Branicky, and S. M. Phillips, "Stability of networked control systems," IEEE Contr. Syst. Mag., vol. 21, no. 1, pp. 84-99, Feb. 2001.
[11] G. C. Walsh, H. Ye, and L. G. Bushnell, "Stability analysis of networked control systems," IEEE Trans. Contr. Syst. Tech., vol. 10, no. 3, pp. 438-446, May 2002.
[12] H. Lin, G. Zhai, and P. J. Antsaklis, "Robust stability and disturbance attenuation analysis of a class of networked control systems," in Proc. 42nd Conf. Decision and Contr., vol. 2, pp. 1182-1187, Dec. 2003.
[13] J. P. Hespanha, P. Naghshtabrizi, and Y. Xu, "A survey of recent results in networked control systems," Proc. IEEE, vol. 95, no. 1, pp. 138-162, 2007.
[14] H. Li, M. Chow, and Z. Sun, "Optimal stabilizing gain selection for networked control systems with time delays and packet losses," IEEE Trans. Contr. Syst. Tech., vol. 17, no. 5, pp. 1154-1162, Sep. 2009.
[15] X. Liu and A. Goldsmith, "Wireless medium access control in networked control systems," in Proc. IEEE Amer. Contr. Conf., pp. 688-694, 2004.
[16] M. Pajic, S. Sundaram, G. J. Pappas, and R. Mangharam, "The wireless control network: A new approach for control over networks," IEEE Trans. Autom. Control, vol. 56, no. 10, pp. 2305-2318, Oct. 2011.
[17] H. Li, L. Lai, and H. V. Poor, "Multicast routing for decentralized control of cyber physical systems with an application in smart grid," IEEE J. Sel. Areas Commun., vol. 30, no. 6, pp. 1097-1107, July 2012.
[18] J. C. Engwerda, Linear Quadratic Dynamic Optimization and Differential Game Theory, Chichester: Wiley, 2005.
[19] J. Nilsson, "Real-time control systems with delays," Ph.D. dissertation, Dept. Automatic Control, Lund Inst. Technology, Lund, Sweden, 1998.
[20] K. J. Astrom and B. Wittenmark, Computer-Controlled Systems: Theory and Design, Prentice Hall, 3rd edition, 1997.
[21] R. Ye, H. Chen, and X. Wang, "Load frequency control in power systems based on differential games," IEEE Trans. Power System, submitted.
[22] C. E. Fosha and O. I. Elgerd, "The megawatt frequency control problem: A new approach via optimal control theory," IEEE Trans. Power App. Syst., vol. PAS-89, no. 4, pp. 563-577, Apr. 1970.
FIXED POINTS IN COMPACTIFICATIONS AND COMBINATORIAL COUNTERPARTS
(Points fixes dans les compactifications et contreparties combinatoires)

Lionel Nguyen Van Thé
Annales Henri Lebesgue, volume 2 (2019). DOI: 10.5802/ahl.16. arXiv: 1701.04257.
2010 Mathematics Subject Classification: 37B05, 03C15, 03E02, 05D10, 22F50, 43A07, 54H20.
Keywords: Ramsey theory, fixed point properties in topological dynamics.

Abstract. - The Kechris-Pestov-Todorcevic correspondence connects extreme amenability of topological groups with Ramsey properties of classes of finite structures. The purpose of the present paper is to recast it as one of the instances of a more general construction, allowing to show that Ramsey-type statements actually appear as natural combinatorial expressions of the existence of fixed points in certain compactifications of groups, and that similar correspondences in fact exist in various dynamical contexts.
Introduction and results
Introduction
In [KPT05], Kechris, Pestov and Todorcevic established a striking correspondence between topological dynamics and structural Ramsey theory (for a precise statement, see Theorem 1.1 below). Building on the seminal works of Graham-Rothschild [GR71], Graham-Leeb-Rothschild [GLR72,GLR73], Abramson-Harrington [AH78] and Nešetřil-Rödl [NR77,NR83], this turned out to be an invaluable tool to produce extremely amenable groups when concentration of measure is not available (as in [GM83,Gla98,GP07]), and to reach a better understanding of the dynamics of infinite-dimensional topological groups (see for example [AKL12,Zuc14] in the non-Archimedean Polish case, or [MT11,MNVTT16,BYMT17] in the general Polish case). It also considerably impacted the recent activity around Fraïssé theory and structural Ramsey theory, providing new incentives to construct and/or identify highly homogeneous structures (see [KS13,Kub14, EFH + 16]), and to prove and/or use new partition results (see, for example, the paper [Sol14] and references therein, the surveys [Bod15] and [NVT15], as well as [BK17,BK18,BLALM16,HN16,PS16] for more recent results).
The purpose of this paper is to recast the Kechris-Pestov-Todorcevic correspondence as an instance of a more general construction, allowing to show that Ramseytype statements actually appear naturally when expressing combinatorially the existence of fixed points in certain compactifications of groups. As a consequence, it is proved in a unified way that similar correspondences in fact exist in various dynamical contexts. Some of them are presented here as illustrations, and exhibit combinatorial properties that are equivalent or implied by fixed-point properties like minimal almost periodicity, strong amenability, and amenability. Among those, some isolate new phenomena, while others allow the recovery of some previously known results that were originally obtained in different contexts.
The original motivation to undertake such a project was also to gain a better understanding of those non-Archimedean Polish groups that contain a coprecompact extremely amenable subgroup. According to [MNVTT16] and [Zuc16], this class coincides with those groups for which all minimal flows are metrizable (and have a generic orbit). It also captures the so-called Ramsey property, which expresses a particularly good behavior from the point of view of partition calculus, and whose distribution remains mysterious among classes of finite structures. The new connections that are established in the present work do not solve that problem, but somehow make more precise the contours of the "dark" side that remains to be understood when attacking it "from below".
Results
Throughout this paper, a G-flow of a topological group G will be a continuous action of G on some compact Hausdorff space, while a G-ambit will be a G-flow together with a distinguished point whose orbit is dense. These objects will be referred to via the following notation: G ↷ X for G-flows, and G ↷ (X, x) for G-ambits.
Extreme amenability ⟹ Minimal almost periodicity
Extreme amenability ⟹ Strong amenability ⟹ Amenability
In practice, the aforementioned strategy suggests in fact two slightly different kinds of applications. Starting from a natural class X of flows, one may express combinatorially the fixed point property relative to those flows; this requires some particular conditions on X, which are satisfied for equicontinuous/distal flows and for proximal flows. Conversely, starting from natural algebras, one may isolate a class of flows on which the fixed point property is combinatorially meaningful. This will be done for the Roelcke algebra, and to some extent for the weakly almost periodic algebra. The relationship between all the corresponding ambits can be represented as follows, where S(G) stands for the Samuel compactification of G, R(G) for the Roelcke compactification, W(G) for the weakly almost periodic compactification, B(G) for the Bohr compactification, P(G) for the proximal compactification, and P_S(G) for the strongly proximal compactification:

(S(G), e_G) ↠ (R(G), e_G) ↠ (W(G), e_G) ↠ (B(G), e_G),
(S(G), e_G) ↠ (P(G), e_G) ↠ (P_S(G), e_G).

On the combinatorial side, the general setting is that of first-order structures in the usual sense of model theory (see for example [Hod93] for a standard reference), but for simplicity, we will restrict our attention to the relational setting. Given a first-order relational language (i.e. a family (R_i)_{i∈I} of symbols together with associated arities m_i ≥ 1), a structure A is a non-empty set A, together with a family of subsets R_i^A ⊂ A^{m_i} for every i ∈ I. Naturally attached to such objects is a notion of isomorphism and of embedding, where an embedding is just an isomorphism onto its image; given two structures A and B, the set of all embeddings of A in B will be denoted by $\binom{B}{A}$ (note that this differs from the common notation, according to which $\binom{B}{A}$ refers to the set of all substructures of B isomorphic to A). A structure is ultrahomogeneous when any isomorphism between any two of its finite substructures extends to an automorphism. There is now a rich theory around those objects, starting with the seminal work of Fraïssé himself [Fra54]. For that reason, countable ultrahomogeneous structures are now called Fraïssé structures (denoted here by F).
In the recent developments of Fraïssé theory, a main concern is the study of the interaction between the combinatorics of the set Age(F) of all finite substructures of F, and the dynamics of the automorphism group Aut(F). The main theorem of [KPT05] is a striking illustration of this.
Theorem 1.1 (Kechris-Pestov-Todorcevic [KPT05]). -Let F be a Fraïssé structure. The following are equivalent (TFAE):
(1) Aut(F) is extremely amenable.
(2) Age(F) has the Ramsey property.
The Ramsey property (for embeddings) referred to in the previous result means that for every A ∈ Age(F), every function χ taking finitely many values on $\binom{F}{A}$ (such a χ is usually referred to as a finite coloring) is necessarily constant on an arbitrarily large finite set. Precisely: given any B ∈ Age(F), in which A typically embeds in many ways, χ is constant on some set of the form $\binom{b(B)}{A}$, for some $b \in \binom{F}{B}$. Under that form, the Ramsey property is a property of F rather than Age(F), but it finitizes under the following form: for every A, B ∈ Age(F) and every k ∈ ℕ, there exists C ∈ Age(F) such that every coloring of $\binom{C}{A}$ taking at most k values is constant on $\binom{b(B)}{A}$, for some $b \in \binom{C}{B}$. The typical result of the paper will be of similar flavor. Its general form, condensed in Theorem 1.2, states that Aut(F) has a fixed point property of a particular kind iff F has some Ramsey-type property, restricted to some particular kind of colorings (see Sections 3.1 and 3.3 for definitions):
Theorem 1.2. - Let F be a Fraïssé structure, and let X be a class of Aut(F)-flows such that the class of X-Aut(F)-ambits is closed under suprema and factors, and such that every Aut(F)-flow X ∈ X admits some x ∈ X whose orbit closure gives a flow Aut(F) ↷ cl(Aut(F)·x) in X. Then A := {f ∈ RUC_b(G) : G ↷ cl(G·f) ∈ X} is a unital, left-invariant, closed C*-subalgebra of RUC_b(Aut(F)), and TFAE:
(1) Every Aut(F)-flow in X has a fixed point.
(2) For every ε > 0, F has the Ramsey property up to 2ε for the finite colorings in (A)_ε.

These imply the following equivalent statements:
(3) Every zero-dimensional Aut(F)-flow in X has a fixed point.
(4) F has the Ramsey property for the finite colorings in A.
When the finite colorings are dense in A, all those statements are equivalent.
Notice that by considering the class X of all G-flows, which obviously satisfies the hypotheses of Theorem 1.2, we directly obtain Theorem 1.1. By varying the class of flows under consideration, this will lead to several other concrete incarnations.
The left side of the above diagrams appears to be particularly well adapted for such an analysis. A joint embedding ⟨a, z⟩ of two structures A and Z is a pair (a, z) of embeddings of A and Z into some common structure C. Two such pairs ⟨a, z⟩ with common range C and ⟨a', z'⟩ with common range C' are isomorphic (written ⟨a, z⟩ ≅ ⟨a', z'⟩) when there is an isomorphism c : C → C' so that a' = c ∘ a and z' = c ∘ z. Occasionally, the isomorphism type of a joint embedding ⟨a, z⟩ will be referred to as its joint embedding pattern and will be written [a, z]. In what follows, because the language is assumed to be relational, the joint embeddings which satisfy C = a(A) ∪ z(Z) will be the only ones to be considered, without any explicit mention of C. Note also that the notions of joint embedding ⟨A, Z_1, ..., Z_k⟩ and joint embedding pattern [A, Z_1, ..., Z_k] can be defined in the same way in the case of finitely many structures A, Z_1, ..., Z_k.

Definition 1.3. - Let K be a class of finite structures in some first-order language. It has the definable Ramsey property when for every A, B ∈ K and every Z ∈ K, there exists C ∈ K such that for every joint embedding ⟨c, z⟩ of C and Z, there is $b \in \binom{C}{B}$ so that the coloring a ↦ [a, z] is constant on $\binom{b(B)}{A}$.

Note the similarity with the usual Ramsey property. For the combinatorialist, what has just been defined should be thought of as C → (B)^A_Z. The dynamical meaning of the definable Ramsey property will be made explicit soon, but in view of the fixed point properties described previously, it makes more sense to consider the following weakening first, which will look familiar to the model theorist.
Definition 1.4. - Let K be a class of finite structures in some first-order language, and A, Z ∈ K. An unstable (A, Z)-sequence is a family of joint embeddings (⟨a_m, z_n⟩)_{m,n∈ℕ} of A and Z such that there exist two different joint embedding patterns τ_< and τ_> satisfying:

$$\forall\, m, n \in \mathbb{N} \quad (m < n \Rightarrow [a_m, z_n] = \tau_<) \wedge (m > n \Rightarrow [a_m, z_n] = \tau_>).$$

When there is no unstable (A, Z)-sequence, the pair (A, Z) is stable.
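To make Definition 1.4 concrete, here is a standard example (ours, not taken from the paper): in the class of finite linear orders, let A = Z be the one-point structure, and realize all joint embeddings inside the rationals by a_m = m and z_n = n + 1/2. There are exactly two joint embedding patterns of two points, and

$$m < n \;\Longrightarrow\; [a_m, z_n] = \tau_< \ (\text{the copy of } A \text{ below the copy of } Z), \qquad m > n \;\Longrightarrow\; [a_m, z_n] = \tau_>,$$

so the pair (A, Z) is unstable over linear orders. This mirrors the model-theoretic fact that the order property is the prototypical witness of instability.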
Definition 1.5. - Let K be a class of finite structures in some first-order language. It has the stable Ramsey property when it has the definable Ramsey property restricted to stable pairs. More formally: for every A, B ∈ K and all Z_1, ..., Z_k ∈ K such that every pair (A, Z_i) is stable, there exists C ∈ K such that for every joint embedding ⟨c, z_1, ..., z_k⟩, there is $b \in \binom{C}{B}$ so that for every i ≤ k, the joint embedding pattern [a, z_i] does not depend on $a \in \binom{b(B)}{A}$.

With these notions in mind, here is the characterization of minimal almost periodicity in the spirit of the Kechris-Pestov-Todorcevic correspondence.
Theorem 1.6. -Let F be a Fraïssé structure with Roelcke-precompact automorphism group. TFAE:
(1) Aut(F) is minimally almost periodic.
(2) For every A ∈ Age(F), every Aut(F)-invariant equivalence relation on $\binom{F}{A}$ with finitely many classes is trivial.

This approach provides a new proof of the equivalence between the first two items, which already appears in the work of Tsankov [Tsa12] where unitary representations of oligomorphic groups were classified, and of Ben Yaacov [BY18] where the relationship between the Bohr compactification and the algebraic closure of the empty set was already identified. Note also that since minimal almost periodicity is implied by the existence of a fixed point in the Roelcke compactification, it can also be proved thanks to the following.
Theorem 1.7. -Let F be a Fraïssé structure. TFAE:
(1) The flow Aut(F) ↷ R(Aut(F)) has a fixed point.
(2) For every A, B, Z ∈ Age(F), and every finite coloring γ of the joint embedding patterns of A and Z, there exists a joint embedding ⟨b, z⟩ such that the coloring a → γ([a, z]) is constant on b(B) A .

When Aut(F) is Roelcke-precompact, these conditions are equivalent to:
(3) For every A, B, Z ∈ Age(F), there exists a joint embedding ⟨b, z⟩ such that the coloring a → [a, z] is constant on b(B) A .

This result is very useful in practice; for example, it automatically holds when Age(F) has the free amalgamation property. Therefore, the automorphism group of the random graph, of the random hypergraph of any fixed finite type, or of any Henson graph (= countable ultrahomogeneous K n -free graph for some n ∈ N), is minimally almost periodic (this can also be proved using a different method, see [NVT17]). As a slightly more involved application, Theorem 1.7 can also be used to prove that the orthogonal group of ℓ 2 is minimally almost periodic when equipped with its strong operator topology (see Section 4.3.2). Much more is known about that object but the present proof is, in comparison, rather simple.
We will now discuss the dynamical content of the definable Ramsey property.
Theorem 1.8. -Let F be a Fraïssé structure with Roelcke-precompact automorphism group. TFAE:
(1) Every minimal subflow of Aut(F) ↷ R(Aut(F)) is trivial.
(2) Age(F) has the definable Ramsey property.
Besides those discussed above, Theorems 1.7 and 1.8 exhibit several interesting features, among which: the interaction between amalgamation properties and Ramsey properties, first isolated in the pioneering work of Nešetřil and Rödl [NR83] (see Section 4.3 for more details); a distinction between the finite language case and the ω-categorical case (this is connected to the problems mentioned in [BPT13, Section 7]); the possibility of a proof by induction, which sometimes reduces the task to a proof of elementary pigeonhole principles, in the spirit of [Tod10] and [Sol13]; and the model-theoretic flavor, which certainly calls for a deeper study in that direction.
For the right side of the diagram from p. 151, the general strategy applies as well, but the corresponding results turn out to be of a rather different flavor.

Definition 1.9. -Let F be a Fraïssé structure and χ be a coloring of F A . Say that χ is proximal when for every D ∈ Age(F), there exists E ∈ Age(F) such that for every e 1 , e 2 ∈ F E , there exists d ∈ E D such that the colorings a → χ(e 1 • a) and a → χ(e 2 • a) agree on d(D) A .
Definition 1.10. -Let F be a Fraïssé structure. Say that F has the proximal Ramsey property when for every A, B ∈ Age(F) and every finite proximal coloring χ of F A , there is b ∈ F B such that χ is constant on b(B) A .
Theorem 1.11. -Let F be a Fraïssé structure. TFAE:
(1) Every zero-dimensional proximal Aut(F)-flow has a fixed point.
(2) F has the proximal Ramsey property.

When the finite proximal colorings are uniformly dense in the set of all proximal functions, these statements are equivalent to Aut(F) being strongly amenable. (For the precise meaning of this last sentence, see Section 3.1.)

Theorem 1.11 is, however, less satisfactory than the previous ones on the practical side, for at least two reasons. The first one is the intrusion of a non-trivial condition, of topological nature, which potentially truly limits the use of our combinatorial methods (see Section 6.3 for a more detailed discussion). The second is that at the present stage, because of the difficulty of handling proximal colorings in concrete structures, there is no example where Theorem 1.11 can be used to prove strong amenability by combinatorial means. It can, however, be used to deduce non-trivial combinatorial consequences from strong amenability.
The same obstacles appear when considering amenability and strongly proximal flows. In fact, this case is, in some sense, even more resistant, as it remains unclear whether a combinatorial description of the relevant class of colorings in the spirit of Definition 1.9 exists at all. Nevertheless, a slight modification of the general strategy leads to the Ramsey-theoretic counterpart previously obtained by Moore and by Tsankov, as follows.
Definition 1.12 (Moore [Moo13]). -Let K be a class of finite structures in some first-order language. It has the convex Ramsey property when for every A, B ∈ K and every ε > 0, there exists C ∈ K such that for every finite coloring χ of C A , there are a convex combination λ 1 , . . . , λ n and b 1 , . . . , b n ∈ C B such that the coloring a → ∑ n i=1 λ i χ(b i • a) is constant up to ε on B A .
Theorem 1.13 (Moore [Moo13]; Tsankov [Tsa14]). -Let F be a Fraïssé structure. TFAE:
(1) Aut(F) is amenable.
(2) Age(F) has the convex Ramsey property.
The practical use of this result in order to study amenability is so far rather limited, but there are promising exceptions (see Section 7 for a more detailed discussion).
The paper is organized as follows: the first part is devoted to the proof of two master results, Theorems 1.2 and 3.1, of which all the previous results are specific incarnations. This proof is based on a general analysis of the existence of fixed points in compactifications of topological groups via the notion of finite oscillation stability (Section 2) and on its discretization in Ramsey-theoretic terms (Section 3). The second part of the paper focuses on applications. Section 4 deals with the Roelcke algebra and the Roelcke compactification, leading to Theorems 1.7 and 1.8.
Equicontinuous and distal flows are treated in Section 5, leading to Theorem 1.6. Proximal flows are discussed in Section 6, leading to Theorem 1.11. Strongly proximal flows and amenability are discussed in Section 7, leading to Theorem 1.13.
As a final remark before starting: most of the present work can certainly be completed in the context of continuous Fraïssé theory, in the spirit of [MT11]. I leave it to the interested reader to make the appropriate translation.
Fixed points in compactifications and finite oscillation stability
In this section, given a topological group G, the goal is to isolate conditions that characterize the existence of fixed points in certain compactifications of G.
To do so, for the sake of completeness, we need to remind ourselves of certain general facts about uniformities on G (for a more detailed treatment, see, for example, [Bou98] or [Eng89]). Recall that such a structure is a family U of subsets of G × G, often called entourages (of the diagonal), which satisfies the following properties:
(1) Every U ∈ U contains the diagonal {(g, g) ∈ G 2 : g ∈ G}.
(2) The family U is closed under supersets and finite intersections.
(3) If U is in U , so is U −1 := {(h, g) ∈ G 2 : (g, h) ∈ U }.
(4) If U ∈ U , there is V ∈ U so that V • V ⊂ U , where V • V := {(g, h) ∈ G 2 : ∃ k ∈ G (g, k) ∈ V ∧ (k, h) ∈ V }.
Informally, when (g, h) ∈ U , g and h must be thought of as U -close. Such a structure naturally appears when G is equipped with a metric (in which case a typical entourage is of the form {(g, h) ∈ G 2 : d(g, h) < ε} for some ε > 0), but there is no need for a metric to have a uniformity. Uniform structures constitute the natural framework to express the concepts of uniform continuity and of completion. Here, uniformities will be useful because they will make it possible to manipulate various compactifications of G while staying within G. More precisely, every compact topological space admits a unique compatible uniformity. Therefore, when G is compactified (i.e. continuously mapped onto a dense subspace of a compact space), it inherits a natural uniformity, which retains all the information about the whole compactification. In particular, if G acts on the compactification, detecting the existence of fixed points is possible when the interaction between the group operation on G and the uniformity is well understood.
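As a toy illustration (my own, not the text's), one can check on a finite metric space that the metric entourages U ε = {(x, y) : d(x, y) < ε} satisfy axioms (1), (3) and (4); axiom (4) holds with V = U ε/2 , by the triangle inequality, while axiom (2) merely closes the family under supersets and finite intersections.

```python
# Toy check that metric entourages behave like uniformity entourages
# on a small finite metric space (points on the real line).
points = [0.0, 1.0, 2.5, 7.0]
d = lambda x, y: abs(x - y)

def U(eps):
    """The metric entourage {(x, y) : d(x, y) < eps}."""
    return {(x, y) for x in points for y in points if d(x, y) < eps}

def compose(V, W):
    """V o W = {(x, z) : there is k with (x, k) in V and (k, z) in W}."""
    return {(x, z) for (x, k) in V for (k2, z) in W if k == k2}

eps = 2.0
# Axiom (1): contains the diagonal.
assert all((x, x) in U(eps) for x in points)
# Axiom (3): symmetric (since d is symmetric).
assert U(eps) == {(y, x) for (x, y) in U(eps)}
# Axiom (4): the half-sized entourage composes into the original.
assert compose(U(eps / 2), U(eps / 2)) <= U(eps)
```

The subset check in the last line is exactly the triangle inequality: d(x, k) < ε/2 and d(k, z) < ε/2 force d(x, z) < ε.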
In the present case, G is not just a set but a topological group, and it carries several natural uniformities. A few of them are described below, starting with the left uniformity, the right uniformity, and the Roelcke uniformity.
The left uniformity U L is generated by those entourages of the form
V U = {(g, h) ∈ G 2 : g −1 h ∈ U },
where U is an open neighborhood of the identity element e G . It is induced by any left-invariant metric d L compatible with the topology of G (of course, when G is Polish, there is always such a metric). When G is of the form Aut(F), where F is a Fraïssé structure whose underlying set is N, a basis of open neighborhoods of e G consists of clopen subgroups of the form Stab(A), where Stab(A) denotes the pointwise stabilizer of A, and A ⊂ F denotes a finite substructure. In this uniformity, two elements g, h ∈ G are A-close when g and h agree on A. Thus, the corresponding entourage can be seen as a partition of G into sets of the form g Stab(A), and the corresponding quotient space coincides with the usual (algebraic) quotient G/ Stab(A). The right uniformity U R is defined in a similar way. It is generated by those sets of the form
V U = {(g, h) ∈ G 2 : gh −1 ∈ U },
where U is an open neighborhood of the identity element e G . It is induced by any right-invariant metric d R compatible with the topology of G. When G is of the form Aut(F) and A is a finite substructure of F, two elements g, h ∈ G are A-close when g −1 and h −1 agree on A. The corresponding entourage can be seen as a partition of G into sets of the form Stab(A)g, and the corresponding quotient space coincides with the right quotient Stab(A)\G. For reasons that will become clear later on, we will see in detail in Section 3.1 how to think about these objects. The Roelcke uniformity U L ∧ U R is the finest uniformity that is coarser than the two previous uniformities. When G is of the form Aut(F), a typical uniform neighborhood of this uniformity is indexed by two finite substructures A, Z ⊂ F, and two elements g, h ∈ G are (A, Z)-close when they are equal in the double quotient Stab(A)\G/ Stab(Z). We will see in Section 4 how to translate this combinatorially.
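To take a concrete instance of the double quotient (my illustration, anticipating the combinatorial translation of Section 4): let G = Aut(Q, <), A = {0} and Z = {1}. Two automorphisms g and h are (A, Z)-close exactly when g −1 (0) and h −1 (0) compare the same way with 1, so the double quotient has exactly three classes:

```latex
\operatorname{Stab}(A)\backslash \operatorname{Aut}(\mathbb{Q},<)/\operatorname{Stab}(Z)
\;\longleftrightarrow\;
\bigl\{\; g^{-1}(0) < 1,\;\; g^{-1}(0) = 1,\;\; g^{-1}(0) > 1 \;\bigr\}.
```

These three classes correspond to the three joint embedding patterns of two singleton linear orders.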
So far, uniformities were given via a description of their entourages. For those that are induced by compactifications of G, another convenient way to produce them is to use algebras of bounded functions. For example, consider the set RUC b (G) of all bounded uniformly continuous maps from (G, d R ) to C (these maps are called right-uniformly continuous). This is a unital C * -algebra when equipped with the supremum norm, on which the group G acts continuously by left shift: g · f (h) = f (g −1 h). Next, we will follow the terminology from [dV93, Chapter IV, Section 5] and will call left-invariant the closed C * -subalgebras of RUC b (G) that are invariant under this action.
Given a G-flow G ↷ X and x ∈ X, there is a very simple way to produce such an object. Let C(X) denote the space of (bounded) continuous functions from X to C. This is a unital C * -algebra when equipped with the supremum norm.
For f ∈ C(X), define the map f x : G → C by f x (g) = f (g · x). Because the map g → g · x is always right-uniformly continuous (see [Pes06, Lemma 2.15]), f x is always in RUC b (G), and one can check that {f x : f ∈ C(X)} is a unital left-invariant closed C * -subalgebra of RUC b (G).
Conversely, to every unital left-invariant closed C * -subalgebra A of RUC b (G), one can associate a compact space G A , the Gelfand space of A. More details on this classical object will be given in Section 3.3. For the moment, we will just need that this is a compactification of G, on which the left-regular action by G on itself naturally extends in a continuous way, and turns G A into a G-flow. Furthermore, considering the point e G ∈ G A , the map C(G A ) → A defined by f → f e G (as in the previous paragraph) realizes an isomorphism of C * -algebras. This justifies the identification of C(G A ) with A. Here, we will use that fact under the following form: the entourages of the uniformity induced on G by the compactification
G → G A are of the form {(g, h) ∈ G 2 : ∀ f ∈ F |f (g) − f (h)| < ε}, where F ⊂ A is finite, and ε > 0.
Definition 2.1. -Let G be a topological group and F ⊂ RUC b (G). Say that F is finitely oscillation stable when for every finite H ⊂ G, ε > 0, there exists g ∈ G so that every f ∈ F is constant on Hg up to ε:
∀ ε > 0 ∃ g ∈ G ∀ f ∈ F ∀ h, h′ ∈ H |f (hg) − f (h′g)| < ε.
This crucial notion is due to Pestov (for more on this, see [Pes06]), even if it was originally stated for left-uniformly continuous functions. The reason to deal with right-uniformly continuous functions here is that these are the ones that are naturally used to compactify G in a way that is compatible with the left-regular action.
Proposition 2.2. -Let G be a topological group, G ↷ X a G-flow, and x ∈ X. TFAE:
(1) The orbit closure G · x contains a fixed point.
(2) For every finite F ⊂ C(X), the family {f x : f ∈ F} is finitely oscillation stable.
Proof. -(1) ⇒ (2): Fix F ⊂ C(X) finite, H ⊂ G finite, ε > 0. Let y ∈ G · x be a fixed point. Thanks to the continuity of the elements of F, we may find g · x close enough to y so that for every f ∈ F and every h ∈ H,
|f (h · g · x) − f (h · y)| < ε/2, i.e. |f (h · g · x) − f (y)| < ε/2, since y is fixed. Then, for every f ∈ F and h, h′ ∈ H, we have:

|f x (hg) − f x (h′g)| = |f (h · g · x) − f (h′ · g · x)| ≤ |f (h · g · x) − f (y)| + |f (y) − f (h′ · g · x)| < ε.

(2) ⇒ (1): For F ⊂ C(X) finite, H ⊂ G finite, ε > 0, define

A F,H,ε = {y ∈ G · x : ∀ f ∈ F ∀ h ∈ H |f (h · y) − f (y)| ≤ ε}.
This defines a family of closed subsets of G · x. Thanks to (2), it has the finite intersection property (every finite intersection of its members contains an element of G · x). Its intersection is therefore non-empty. Notice now that this intersection consists of fixed points.
As direct consequences, we obtain the following.
Proposition 2.3. -Let G be a topological group. Let A be a unital left-invariant, closed C * -subalgebra of RUC b (G). TFAE:
(1) The flow G ↷ G A has a fixed point.
(2) Every finite F ⊂ A is finitely oscillation stable.
Proposition 2.4. -Let G be a topological group. Let A be a unital left-invariant, closed C * -subalgebra of RUC b (G). TFAE:
(1) Every minimal subflow of G ↷ G A is trivial.
(2) For every x ∈ G A , F ⊂ A finite, the family {f x : f ∈ F} is finitely oscillation stable.
Here, we will write F x for the family {f x : f ∈ F}. Note that the inclusion A ⊂ ⋃{A x : x ∈ G A } may be strict. This is, for example, the case for the Roelcke algebra Ro b (G) defined in Section 4 (see [GM08, Corollary 4.11]). However, there are interesting cases where equality holds, e.g. RUC b (G) itself, the algebra WAP(G) of weakly almost periodic functions on G (for a definition, see Section 5), or any of its closed left-invariant subalgebras [BJM78, Chapter III, Lemma 8.8]. A more detailed discussion about this topic and its dynamical interpretation in terms of point-universality can be found in [GM06].
Ramsey properties as natural combinatorial counterparts to the existence of fixed points
The purpose of this section is to show that when G is of the form Aut(F) for some Fraïssé structure F, the existence of fixed points expressed in Propositions 2.3 and 2.4 naturally translates combinatorially as Ramsey-theoretical statements. Precisely, our aim here is first to prove Theorem 3.1 below, and then Theorem 1.2 (for definitions, see Sections 3.1 and 3.3).
Theorem 3.1. -Let F be a Fraïssé structure, and let A be a unital, left-invariant, closed C * -subalgebra of RUC b (Aut(F)). TFAE:
(1) The flow Aut(F) ↷ Aut(F) A has a fixed point.
(2) For every ε > 0, F has the Ramsey property up to 2ε for the finite colorings in (A) ε .

Those imply the following equivalent statements:

(3) Every zero-dimensional factor of Aut(F) ↷ Aut(F) A has a fixed point.
(4) F has the Ramsey property for the finite colorings in A.

When the finite colorings are dense in A, all those statements are equivalent.
Even though Theorems 1.2 and 3.1 look quite similar, we will see in the following sections that they will both be handy when dealing with practical situations. This will lead to Theorems 1.6, 1.7 and 1.8. However, other natural algebras do not seem to admit approximations by finite colorings. We will see two such examples later on, with the proximal and the strongly proximal algebras.
Finite oscillation stability and Ramsey properties
Let F be a Fraïssé structure, whose underlying set is N. As before, for a finite substructure A ⊂ F, let Stab(A) ⊂ Aut(F) denote the pointwise stabilizer of A. Given any g ∈ G, its equivalence class ḡ in the right quotient Stab(A)\ Aut(F) is the set of all those elements of G that are A-close to g (i.e. some sort of ball of radius A) relative to the right uniformity, and can be thought of as the restriction g −1 ↾ A, an embedding of A into F. Furthermore, because F is ultrahomogeneous, every element of F A is of that form. In other words, we can identify Stab(A)\ Aut(F) and F A . In addition, since every element of Stab(A)\ Aut(F) can be thought of as a ball for the right uniformity, every coloring of F A , that is, every map χ̄ : F A → C, can be seen as an element χ of RUC b (Aut(F)) that is constant on small enough balls and satisfies χ(g) = χ̄(ḡ). Here, we will not usually make any notational distinction between χ and χ̄, and by a coloring (resp. finite coloring) in RUC b (Aut(F)), we will mean exactly a function χ of that kind (resp. with finite range). From this point of view, note that even if we allow A to range over the set of all finite substructures of F, every finite set C ⊂ RUC b (Aut(F)) of finite colorings can be seen as a finite set of finite colorings defined on the same set F A , with values in a common set.
Definition 3.2. -Let F ⊂ RUC b (Aut(F)) and ε > 0. Say that F has the Ramsey property (resp. the Ramsey property up to ε) for colorings in F when for every A, B in Age(F) and every finite set C ⊂ F of finite colorings of F A , there exists b ∈ F B such that every χ ∈ C is constant (resp. constant up to ε) on b(B) A .
Note that as it is defined, the Ramsey property for colorings in F is a property of F, as opposed to a property of Age(F). We will meet several instances where it completely finitizes (e.g. Theorems 1.6, 1.7 and 1.8), but for the moment, this is only feasible via a case-by-case analysis.

Proposition 3.3. -Let A ∈ Age(F), C be a finite set of finite colorings of F A , and ε > 0. TFAE:

(1) For every finite H ⊂ G, there exists g ∈ G so that every χ ∈ C is constant up to ε on Hg.
(2) For every B ∈ Age(F), there exists b ∈ F B such that every χ ∈ C is constant up to ε on b(B) A .

Proof. -The proof hinges on the following observation. Let A be a finite substructure of F and H be a finite subset of Aut(F). For h ∈ Aut(F), recall that h̄ denotes the equivalence class of h in the quotient Stab(A)\ Aut(F). As we have seen, h̄ can be thought of as the restriction h −1 ↾ A, so H̄ := {h̄ : h ∈ H} can be seen as a finite set of embeddings of A into F. As such, it is contained in some set of the form B A for some finite substructure B of F. Next, if g is fixed in G, we have:
H̅g̅ = {h̅g̅ : h ∈ H} = {g −1 • (h −1 ↾ A) : h ∈ H} ⊂ g −1 (B) A .

Conversely, if B ⊂ F is a finite substructure, then there is H ⊂ Aut(F) finite so that B A ⊂ H̄, and if g ∈ Aut(F), then g −1 (B) A ⊂ H̅g̅.
We now go on with the proof. Assume that for every finite H ⊂ G, there exists g ∈ G so that every χ ∈ C is constant up to ε on Hg. Let H ⊂ Aut(F) be a finite set so that B A ⊂ H̄. Find g ∈ Aut(F) such that every χ ∈ C is constant up to ε on Hg. Then every χ ∈ C is constant up to ε on H̅g̅, and hence on g −1 (B) A ⊂ H̅g̅. Therefore, it suffices to set b = g −1 ↾ B.

Conversely, fix H ⊂ Aut(F) finite. Let B be a finite substructure of F so that H̄ ⊂ B A . By hypothesis, find b ∈ F B such that every χ ∈ C is constant up to ε on b(B) A . Take g ∈ Aut(F) such that g −1 extends b. Then H̅g̅ ⊂ g −1 (B) A = b(B) A , and every χ ∈ C is constant up to ε on Hg.
Proposition 3.4. -Let F be a Fraïssé structure, and let A be a unital, left-invariant, closed C * -subalgebra of RUC b (Aut(F)). TFAE:
(1) Every finite F ⊂ A is finitely oscillation stable.
(2) For every ε > 0, F has the Ramsey property up to 2ε for colorings in (A) ε .
Proof. -Assume that every finite F ⊂ A is finitely oscillation stable. Fix A in Age(F), C ⊂ (A) ε a finite set of finite colorings of F A , H ⊂ Aut(F) finite. Fix {f χ : χ ∈ C} ⊂ A and η > 0 so that ‖χ − f χ ‖ ∞ + η < ε for every χ ∈ C.
By finite oscillation stability of {f χ : χ ∈ C}, find g ∈ Aut(F) so that every f χ is constant up to η on Hg. Then every χ ∈ C is constant up to 2ε on Hg. Thanks to Proposition 3.3, we deduce that for every B ∈ Age(F), there exists b ∈ F B such that every χ ∈ C is constant up to 2ε on b(B) A . This is exactly what we needed to prove.

Conversely, assume that (2) holds, and fix F ⊂ A finite, ε > 0, H ⊂ Aut(F) finite. Let {χ f : f ∈ F} be a finite family of finite colorings in (A) ε/4 so that ‖f − χ f ‖ ∞ < ε/4 for every f ∈ F. Thanks to Proposition 3.3, (2) implies that there is g ∈ Aut(F) so that every χ f is constant up to ε/2 on Hg. Then, every f ∈ F is constant up to ε on Hg and F is finitely oscillation stable.
Ramsey properties and fixed point in compactifications
In this section, we prove Theorem 3.1. Tying up Proposition 3.4 with Proposition 2.3, we obtain:
Proposition 3.5. -Let F be a Fraïssé structure, and let A be a unital, left-invariant, closed C * -subalgebra of RUC b (Aut(F)). TFAE:
(1) The flow Aut(F) ↷ Aut(F) A has a fixed point.
(2) For every ε > 0, F has the Ramsey property up to 2ε for colorings in (A) ε .
Note the presence of the error term 2ε in item (2) of the previous equivalence. Its appearance seems necessary in full generality, but it can be removed under the additional assumption that finite colorings are dense in A. In order to see this, observe first that considering all ε > 0 simultaneously in Proposition 3.3, one easily obtains:
Proposition 3.6. -Let A ∈ Age(F), C be a finite set of finite colorings of F A . TFAE:

(1) C is finitely oscillation stable.
(2) For every B ∈ Age(F), there exists b ∈ F B such that every χ ∈ C is constant on b(B) A .

This yields:
Proposition 3.7. -Let F be a Fraïssé structure, and let A be a unital, left-invariant, closed C * -subalgebra of RUC b (Aut(F)). Assume that finite colorings are dense in A. TFAE:
(1) The flow Aut(F) ↷ Aut(F) A has a fixed point.
(2) The structure F has the Ramsey property for colorings in A.
Proof. -Thanks to Proposition 2.3, the flow Aut(F) ↷ Aut(F) A has a fixed point iff every finite F ⊂ A is finitely oscillation stable. Because finite colorings are dense in A, this holds iff every finite set C ⊂ A of finite colorings is finitely oscillation stable. This is equivalent to F having the Ramsey property for colorings in A thanks to Proposition 3.6.
Proof of Theorem 3.1. -The equivalence (1) ⇔ (2) follows from Proposition 3.5. For (3) ⇔ (4), consider B, the unital, left-invariant, closed C * -subalgebra of A generated by the set of all finite colorings in A. By Proposition 3.7, the flow Aut(F) ↷ Aut(F) B has a fixed point iff F has the Ramsey property for colorings in B, which is equivalent to the Ramsey property for colorings in A. Therefore, it suffices to show that Aut(F) ↷ Aut(F) B has a fixed point iff every zero-dimensional factor of Aut(F) ↷ Aut(F) A does. To do this, recall that a compact topological space X is zero-dimensional exactly when the continuous maps taking finitely many values are uniformly dense in C(X). It follows that Aut(F) B is zero-dimensional, which proves one implication. For the other one, let Aut(F) ↷ X be a zero-dimensional factor of Aut(F) ↷ Aut(F) A , as witnessed by the map π : Aut(F) A → X. Let x = π(e Aut(F) ). Then C(X) x ⊂ A. Since X is zero-dimensional, the continuous maps taking finitely many values are dense in C(X), so finite colorings are dense in C(X) x . Therefore, we have in fact C(X) x ⊂ B and, by duality, (Aut(F) C(X) x , x) ≅ (G · x, x) is a factor of Aut(F) ↷ Aut(F) B . Since this latter flow has a fixed point, so does the former one.
The following result, which can be thought of as a combinatorial counterpart to Proposition 2.4, is an easy corollary.
Corollary 3.8. -Let F be a Fraïssé structure, and let A be a unital, left-invariant, closed C * -subalgebra of RUC b (Aut(F)). TFAE:
(1) Every minimal subflow of the flow Aut(F) ↷ Aut(F) A is trivial.
(2) For every x ∈ Aut(F) A and ε > 0, the structure F has the Ramsey property up to 2ε for colorings in (A x ) ε .

Those imply the following equivalent statements:

(3) Every minimal zero-dimensional subflow of Aut(F) ↷ Aut(F) A is trivial.
(4) For every x ∈ Aut(F) A , the structure F has the Ramsey property for colorings in A x .

When finite colorings are dense in A, all those statements are equivalent.
Ramsey properties and fixed points in classes of flows
In this section, we prove Theorem 1.2. We have just seen how Ramsey-theoretical statements reflect the existence of fixed points in certain compactifications. In practice, however, one is often interested in the existence of fixed points in a given class X of flows defined by a dynamical property (like being distal, equicontinuous, proximal, . . .), as opposed to the existence of a fixed point in a particular compactification. The purpose of what follows is to show that in that setting, the Ramsey-theoretical approach remains relevant at the cost of rather mild hypotheses on X . The reader familiar with topological dynamics and Gelfand compactifications may go directly to the proof of Theorem 1.2 (see p. 166). For the others, a synthetic treatment based on [dV93, Chapter IV, Sections 4 and 5] is presented below. This material is classical and is only included here for the sake of completeness.
In what follows, it will be convenient to work with X -G-ambits, i.e. G-ambits G ↷ (X, x) so that G ↷ X ∈ X . Recall first that for a family (X α , x α ) α of G-ambits, its supremum ⋁ α (X α , x α ) is the G-ambit induced on the orbit closure of (x α ) α in the product ∏ α X α , together with the distinguished point (x α ) α . Next, consider the algebra RUC b (G). We have already seen that G acts continuously on it by left shift via g · f (h) = f (g −1 h). It also acts by right shift via g • f (h) := f (hg). It turns out that when RUC b (G) is equipped with the pointwise convergence topology, this action is continuous (1) on the orbit (pointwise) closure G • f of every f ∈ RUC b (G). This set is then a compact invariant subset of RUC b (G), to which one can attach the G-ambit (G • f , f ). The reason this ambit is relevant here comes from the following fact.

Proposition 3.9. -Let G be a topological group, f ∈ RUC b (G), and let ⟨f ⟩ denote the unital left-invariant closed C * -subalgebra of RUC b (G) generated by f . Then the G-ambits (G ⟨f ⟩ , e G ) and (G • f , f ) are isomorphic.

To prove this proposition, we start by making more explicit the construction of Gelfand compactifications. Let A be a unital left-invariant, closed C * -subalgebra of RUC b (G). The Gelfand space G A is, by definition, the space of C * -algebra homomorphisms φ : A → C. It is compact when equipped with its weak * -topology. Every g ∈ G defines an evaluation functional ĝ : α → α(g), and this defines a compactification of G, on which the left-regular action of G on itself extends naturally to an action on G A by left shift g · φ(α) = φ(g −1 · α). Here are the crucial features of G A that we will use:
(1) C(G A ) can be identified with A. This is realized by the isomorphism of C * -algebras C(G A ) → A defined by f → f e G , and whose inverse sends α ∈ A to the continuous function α̂ defined on G A by α̂ : φ → φ(α).
(2) Duality: if A, B are two unital left-invariant, closed C * -subalgebras of RUC b (G), then A ⊂ B holds iff (G A , e G ) is a factor of (G B , e G ).
(3) Let G ↷ (X, x) be a G-ambit. Then the unital left-invariant, closed C * -subalgebra C(X) x of RUC b (G) defined by C(X) x = {f x : f ∈ C(X)} (recall that f x (g) = f (g · x)) is such that (G C(X) x , e G ) is isomorphic to (X, x).
(1) Caution: continuity may not hold on RUC b (G) itself. I am grateful to the referee for having pointed it out.
With all this in mind, let us now turn to the proof of Proposition 3.9.

Proof of Proposition 3.9. -As we have seen in Section 2, f can be thought of as the continuous function f̂ on G ⟨f ⟩ defined by f̂ : φ → φ(f ). It follows that for every φ ∈ G ⟨f ⟩ , the map π(φ) : h → f̂ (h · φ) (= (h · φ)(f ) = φ(h −1 · f )) is in RUC b (G). This defines π : G ⟨f ⟩ −→ RUC b (G). Note that for g, h ∈ G, π(ĝ)(h) = ĝ(h −1 · f ) = (h −1 · f )(g) = f (hg) = g • f (h).
Therefore, π(ĝ) = g • f , and in particular π(e G ) = f . Let us now verify that π is an injective homomorphism of G-flows. This will suffice to prove the desired result, since π will then be a G-flow isomorphism between G ⟨f ⟩ and its image in RUC b (G), which is G • π(e G ) = G • f .
For injectivity, assume that π(φ 1 ) = π(φ 2 ). From the expression of π(φ)(h) above, this implies that φ 1 and φ 2 agree on the orbit G · f , and therefore on all of ⟨f ⟩. To prove that π is G-equivariant, consider g, h ∈ G and φ ∈ G ⟨f ⟩ . Then:
π(g · φ)(h) = (g · φ)(h −1 · f ) = φ(g −1 · (h −1 · f )) = φ((hg) −1 · f ) = π(φ)(hg).
The last term of the equality is (g • π(φ))(h), so π(g · φ) = g • π(φ).

To prove that π is continuous, fix H ⊂ G finite, ε > 0. If φ 1 , φ 2 ∈ G ⟨f ⟩ agree up to ε on the finite set H −1 · f , then |φ 1 (h −1 · f ) − φ 2 (h −1 · f )| < ε for every h ∈ H. This means that for every h ∈ H, |π(φ 1 )(h) − π(φ 2 )(h)| < ε, as required.
Before going on, note the following. We now have two actions of G on RUC b (G). When G = Aut(F) for a Fraïssé structure F, we have seen the set of finite colorings as a subset of RUC b (Aut(F)), consisting of those functions χ such that χ(h) = χ̄(h −1 ↾ A) for some finite A. Thus,

g • χ(h) = χ(hg) = χ̄(g −1 h −1 ↾ A) = g · χ̄(h −1 ↾ A).
In other words, the action by right shift on RUC b (Aut(F)) induces the action by left shift on the space of finite colorings. In counterpart, the action by left shift on RUC b (Aut(F)) does not seem to transfer naturally to the space of colorings.

Proposition 3.10. -Let G be a topological group, and let X be a class of G-flows closed under suprema and factors. Let A = {f ∈ RUC b (G) : (G • f , f ) is an X -G-ambit}. Then A is a unital left-invariant, closed C * -subalgebra of RUC b (G), and (G A , e G ) is the greatest X -G-ambit: it is an X -G-ambit, and every X -G-ambit is a factor of it.

Proof. -Let (X, x) = ⋁ f ∈A (G • f , f ). As a supremum of X -G-ambits, it is an X -G-ambit as well. Let C(X) x = {f x : f ∈ C(X)}. As we have seen, this is a unital left-invariant, closed C * -subalgebra of RUC b (G). To prove the result, it suffices to show that it is equal to A. Let f ∈ RUC b (G). Then f ∈ C(X) x iff ⟨f ⟩ ⊂ C(X) x . Passing to Gelfand compactifications, this means that (G ⟨f ⟩ , e G ) is a factor of (G C(X) x , e G ), or, equivalently, that (G • f , f ) is a factor of (X, x) (Proposition 3.9). Now, this happens iff f ∈ A: the direct implication holds because the class of X -G-ambits is closed under factors, and the converse holds thanks to the definition of (X, x), as (G • f , f ) appears as one of its factors.
Proof of Theorem 1.2. -In view of the previous proposition, it follows at once that G ↷ G A (resp. every zero-dimensional factor of G ↷ G A ) has a fixed point iff every X -G-ambit (resp. zero-dimensional X -G-ambit) has a fixed point. When X satisfies the additional property that every G ↷ X ∈ X admits some x ∈ X such that G ↷ G · x ∈ X , those statements are equivalent to the fact that every G-flow (resp. zero-dimensional G-flow) in X has a fixed point. Theorem 1.2 now follows from Theorem 3.1.
Roelcke flows and definable colorings
The purpose of this section is to prove Theorems 1.7 and 1.8 thanks to the machinery that we just developed. This is done in Sections 4.1 and 4.2, respectively. We finish in Section 4.3 with several remarks.
Fixed points in the Roelcke compactification, Roelcke colorings, and joint embedding patterns
Definition 4.1. -Let f : G → C. It is Roelcke when it is uniformly continuous relative to the Roelcke uniformity on G.
Equivalently, f is Roelcke when it is both right and left uniformly continuous on G.
In what follows, we will be particularly interested in Roelcke-precompact groups, i.e. groups with compact completion relative to the Roelcke uniformity. In that case, every Roelcke function on G is bounded, and the set Ro b (G) of all Roelcke, bounded, functions is a unital, left-invariant, closed C * -subalgebra of RUC b (G). The corresponding compactification G Ro b (G) will be denoted by R(G). After their introduction in Roelcke and Dierolf [RD81], Roelcke-precompact groups have shown their utility through the work of Uspenskij [Usp01,Usp02,Usp08]. More recently, several essential contributions by Tsankov [Tsa12], Ben-Yaacov-Tsankov [BYT16] and Ibarlucía [Iba16b, Iba16a] have shown that their role is central when studying automorphism groups of Fraïssé structures from the model-theoretic point of view. As a matter of fact, Roelcke-precompact groups of the form Aut(F) for F Fraïssé can be easily characterized combinatorially. Indeed, we have seen in Section 2 that a typical entourage of the Roelcke uniformity on Aut(F) is indexed by two finite substructures A and Z of F, and that two elements g, h ∈ Aut(F) are (A, Z)-close when Stab(A)g Stab(Z) = Stab(A)h Stab(Z). If we denote by z the identity embedding Z → F, this means:
⟨g⁻¹↾A, z⟩ ≅ ⟨h⁻¹↾A, z⟩.
Here, it will be useful to remember that for a joint embedding ⟨a, z⟩, [a, z] refers to its pattern, i.e. its isomorphism type.

Proposition 4.2. - Let F be a Fraïssé structure. Then Aut(F) is Roelcke-precompact iff for any two finite substructures A and Z of F, there are only finitely many joint embedding patterns of A and Z.

Proof. - Aut(F) is Roelcke-precompact iff for every entourage U there are g_1, . . . , g_n ∈ Aut(F) so that every g ∈ Aut(F) is U-close to some g_i. From the discussion above, this means that for any two finite substructures A, Z of F, Aut(F) can be covered by finitely many Stab(A)\ Aut(F)/ Stab(Z)-classes, which holds exactly when there are only finitely many joint embedding patterns of A and Z.

Proposition 4.3. - Let F be a Fraïssé structure. Then finite colorings are dense in Ro_b(Aut(F)).

Proof. - Let f ∈ Ro_b(Aut(F)) and fix ε > 0. Since f is bounded, there is a finite set Y so that the range of f is contained in (Y)_ε. By uniform continuity of f, there are two finite substructures A, Z of F so that f is constant up to ε on every Stab(A)\ Aut(F)/ Stab(Z)-class. For any such class P, choose h_P ∈ P and y_P ∈ Y such that |y_P − f(h_P)| < ε. For g ∈ G, set ḡ := Stab(A)g Stab(Z), the equivalence class of g in Stab(A)\ Aut(F)/ Stab(Z). Then, the map χ : g ↦ y_ḡ is a finite coloring of F^A. It is in Ro_b(Aut(F)) because it is constant on the Stab(A)\ Aut(F)/ Stab(Z)-classes, and in addition, for any g ∈ G:
|χ(g) − f(g)| = |y_ḡ − f(g)| ≤ |y_ḡ − f(h_ḡ)| + |f(h_ḡ) − f(g)| < 2ε.
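To make Proposition 4.2 concrete in the simplest case, take F = (Q, <), the Fraïssé limit of finite linear orders: a joint embedding pattern of two chains is just the relative order, with possible coincidences, of their image points. The following brute-force sketch (our own illustration; the function name and the reduction to at most m + n points are assumptions, not from the text) enumerates these patterns; for chains the count is the Delannoy number D(m, n).

```python
from itertools import combinations

def chain_patterns(m, n):
    """Joint embedding patterns of an m-chain A and an n-chain Z inside
    the dense linear order F = (Q, <), enumerated by brute force: any
    pattern is realized using at most m + n distinct points."""
    N = m + n
    seen = set()
    for a in combinations(range(N), m):      # image of A, listed increasingly
        for z in combinations(range(N), n):  # image of Z, listed increasingly
            # canonical form: replace every point by its rank among used points
            rank = {v: r for r, v in enumerate(sorted(set(a) | set(z)))}
            seen.add((tuple(rank[v] for v in a), tuple(rank[v] for v in z)))
    return seen

# Finitely many patterns for every (m, n): D(1,1) = 3, D(2,1) = 5, D(2,2) = 13.
print(len(chain_patterns(1, 1)), len(chain_patterns(2, 1)), len(chain_patterns(2, 2)))
# → 3 5 13
```

The bijection behind the Delannoy count: reading a merged pattern from bottom to top, each level holds one A-point, one Z-point, or one of each (a coincidence), i.e. a lattice path with steps E, N, NE.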
Thanks to Theorem 3.1, it follows that under the precompactness assumption on Aut(F), every Roelcke flow has a fixed point iff F has the Ramsey property for finite colorings in Ro_b(Aut(F)). To see how this leads to Theorem 1.7, we now turn to a description of those colorings that are in Ro_b(Aut(F)). In fact, the previous proof already provides such a description. Indeed, if f is assumed to be a finite coloring, then it has to be constant on every Stab(A)\ Aut(F)/ Stab(Z)-class for A, Z large enough. This means exactly that f can be seen as a finite coloring of the joint embedding patterns of A and Z. Therefore, we have just proved the following.

Proposition 4.4. - Let F be a Fraïssé structure, A ∈ Age(F), and χ a finite coloring of F^A. Then χ ∈ Ro_b(Aut(F)) iff there are Z ∈ Age(F) and an embedding z ∈ F^Z such that χ(a) depends only on the joint embedding pattern [a, z].

Proposition 4.5. - Let F be a Fraïssé structure. Then F has the Ramsey property for colorings in Ro_b(Aut(F)) iff for every A, B, Z ∈ Age(F), every z ∈ F^Z, and every finite coloring γ of the joint embedding patterns of A and Z, there exists b ∈ F^B such that the coloring a ↦ γ([a, z]) is constant on b(B)^A.

Proof. - Assume that F has the Ramsey property for colorings in Ro_b(Aut(F)), and fix A, B, Z ∈ Age(F), z ∈ F^Z, and γ a finite coloring of the joint embedding patterns of A and Z. Then the coloring defined on F^A by a ↦ γ([a, z]) is in Ro_b(Aut(F)) by Proposition 4.4. The conclusion follows. The converse is an immediate consequence of the following easy fact: if C = {χ_i : i ≤ n} ⊂ Ro_b(Aut(F)) is a finite set of finite colorings, which we may assume to be colorings of F^A, Proposition 4.4 guarantees that each χ_i is associated to some Z_i ∈ Age(F), an embedding z_i ∈ F^{Z_i}, and a finite coloring γ_i of the joint embedding patterns of A and Z_i. Then, we see that the hypothesis applied to
Z = ⋃_{i ≤ n} z_i(Z_i)
and γ defined by γ([a, z]) = (γ_i([a, z_i]))_{i ≤ n} provides b ∈ F^B so that for every i ≤ n, the coloring a ↦ γ_i([a, z_i]) is constant on b(B)^A.

Proposition 4.6. - Let F be a Fraïssé structure. Then F has the Ramsey property for colorings in Ro_b(Aut(F)) iff for every A, B, Z ∈ Age(F), and for every finite coloring γ of the joint embedding patterns of A and Z, there exists a joint embedding ⟨b, z⟩ such that the coloring a ↦ γ([a, z]) is constant on b(B)^A.

Proof. - Assume that the Ramsey property for colorings in Ro_b(Aut(F)) holds, and fix A, B, Z ∈ Age(F) and γ a finite coloring of the joint embedding patterns of A and Z. Fix z ∈ F^Z. Then b ∈ F^B obtained by Proposition 4.5 is as required. Conversely, fix A, B, Z ∈ Age(F), z ∈ F^Z, and γ a finite coloring of the joint embedding patterns of A and Z. Consider a joint embedding ⟨b′, z′⟩ such that γ([·, z′]) is constant on b′(B)^A. Let i be the unique isomorphism such that i ∘ z′ = z. Then by ultrahomogeneity of F, we can extend i to b′(B) ∪ z′(Z), and b := i ∘ b′ is as required.
Proof of Theorem 1.7. - Thanks to Theorem 3.1, Aut(F) ↷ R(Aut(F)) has a fixed point iff F has the Ramsey property for colorings in Ro_b(Aut(F)). Apply then Proposition 4.6. When Aut(F) is Roelcke-precompact, the additional statement is a reformulation of (2) with the coloring γ([a, z]) := [a, z], which is finite by Proposition 4.2.
Trivial minimal subflows in the Roelcke compactification and the definable Ramsey property
We now turn to the proof of Theorem 1.8, where we do assume from the beginning that Aut(F) is Roelcke-precompact. Since we wish to do so via an application of Corollary 3.8, we need to understand first how functions of the form f_x look when f ∈ Ro_b(Aut(F)) and x ∈ R(Aut(F)). This is possible thanks to a convenient representation of the elements of R(Aut(F)). Thanks to the discussion at the beginning of Subsection 4.1, a typical open neighborhood around a point g ∈ Aut(F) in R(Aut(F)) is determined by all those h ∈ Aut(F) so that ⟨h⁻¹↾A, z⟩ ≅ ⟨g⁻¹↾A, z⟩, where A and Z are finite substructures of F, and z is the natural inclusion map of Z in F. In particular, letting A and Z be equal to the substructure F_n of F supported by {k : k ≤ n} for each n ∈ N (recall that F is based on N), we obtain the nested sequence of clopen sets [g⁻¹↾F_n, e_{Aut(F)}↾F_n] whose intersection can be thought of as [g⁻¹, e_{Aut(F)}]. In other words, in R(Aut(F)), g ∈ Aut(F) is identified with [g⁻¹, e_{Aut(F)}]. In general, it is not too difficult to see that in R(Aut(F)), a Cauchy sequence of elements of Aut(F) essentially corresponds to a coherent sequence of joint embedding patterns of two copies of F_0, F_1, F_2, . . ., which naturally converges to the pattern [φ_1, φ_2] of a joint embedding ⟨φ_1, φ_2⟩ of two copies of F. A basic open neighborhood around this point is of the form [φ_1↾A, φ_2↾Z], with A, Z finite substructures of F. To describe the action Aut(F) ↷ R(Aut(F)), it suffices to observe that for g, h ∈ Aut(F), gh is identified with [h⁻¹ ∘ g⁻¹, e_{Aut(F)}]. So, in general, since the action of Aut(F) on R(Aut(F)) extends the left-regular action of Aut(F) on itself, we have, for every [φ_1, φ_2] ∈ R(Aut(F)),
g · [φ_1, φ_2] = [φ_1 ∘ g⁻¹, φ_2].
Proposition 4.7. - Let F be a Fraïssé structure with Roelcke-precompact automorphism group, and x ∈ R(Aut(F)). Then every function in Ro_b(Aut(F))_x can be approximated by finite colorings.
Proof. - Let x ∈ R(Aut(F)), f ∈ Ro_b(Aut(F)), and ε > 0. From the previous discussion, x is of the form x = [φ_1, φ_2] and

f_x(g) = f([φ_1 ∘ g⁻¹, φ_2])

(where f is now seen as a continuous function on R(Aut(F))). By uniform continuity of f, there are two finite substructures A, Z of F so that for every g, h ∈ Aut(F),

⟨(φ_1 ∘ g⁻¹)↾A, φ_2↾Z⟩ ≅ ⟨(φ_1 ∘ h⁻¹)↾A, φ_2↾Z⟩ ⇒ |f_x(g) − f_x(h)| < ε.
Since Aut(F) is Roelcke-precompact, by Proposition 4.2, there are only finitely many joint embedding patterns of the form [φ_1 ∘ a, φ_2 ∘ z]. By choosing appropriate constants for each of these, we obtain χ ∈ Ro_b(Aut(F))_x so that ‖f_x − χ‖_∞ < ε, and which can be thought of as a finite coloring of F^A.

As in Proposition 4.4, the previous proof also provides a description of those finite colorings that are in Ro_b(Aut(F))_x: if f_x is assumed to be a finite coloring, then for A, Z large enough finite substructures of F, it has to give the same value to any two g, h ∈ Aut(F) which satisfy

⟨(φ_1 ∘ g⁻¹)↾A, φ_2 ∘ z⟩ ≅ ⟨(φ_1 ∘ h⁻¹)↾A, φ_2 ∘ z⟩.

This means exactly that f_x can be seen as a finite coloring of F^A whose value at a depends only on the joint embedding pattern [φ_1 ∘ a, φ_2 ∘ z]. We have just proved the following.
Proposition 4.8. - Let F be a Fraïssé structure with Roelcke-precompact automorphism group, x = [φ_1, φ_2] ∈ R(Aut(F)), A ∈ Age(F) and χ be a finite coloring of F^A. Then χ ∈ Ro_b(Aut(F))_x iff there is Z ∈ Age(F) and a joint embedding of the form ⟨φ_1, φ_2 ∘ z⟩ of F and Z such that χ(a) depends only on the joint embedding pattern [φ_1 ∘ a, φ_2 ∘ z].

Proposition 4.9. - Let F be a Fraïssé structure with Roelcke-precompact automorphism group and let x = [φ_1, φ_2] ∈ R(Aut(F)). Then F has the Ramsey property for colorings in Ro_b(Aut(F))_x iff for every A, B ∈ Age(F), every Z ∈ Age(F), and every joint embedding of the form ⟨φ_1, φ_2 ∘ z⟩ of F and Z, there is b ∈ F^B so that the coloring a ↦ [φ_1 ∘ a, φ_2 ∘ z] does not depend on a on b(B)^A.
Proof. - Assume that F has the Ramsey property for colorings in Ro_b(Aut(F))_x, and fix A, B, Z ∈ Age(F) together with a joint embedding of F and Z of the form ⟨φ_1, φ_2 ∘ z⟩. Then the coloring defined on F^A by a ↦ [φ_1 ∘ a, φ_2 ∘ z] is finite by Proposition 4.2, and is in Ro_b(Aut(F))_x by Proposition 4.8. The conclusion follows. The converse is an immediate consequence of Proposition 4.8, and of the fact that any finite set C of finite colorings in Ro_b(Aut(F))_x can be captured by one single such coloring, as in the proof of Proposition 4.5.
Proposition 4.10. - Let F be a Fraïssé structure with Roelcke-precompact automorphism group. Then F has the Ramsey property for colorings in Ro_b(Aut(F))_x for every x ∈ R(Aut(F)) iff Age(F) has the definable Ramsey property.
Proof. -From Proposition 4.9, it appears that the definable Ramsey property is nothing other than a finitization of the fact that for every x ∈ R(Aut(F)), the Ramsey property holds for colorings in Ro b (Aut(F)) x . This proves the converse implication.
The direct implication is obtained by a standard compactness argument as follows. Assume that we can find A, B, Z finite substructures of F such that for every C ∈ Age(F), there exists a joint embedding ⟨c, z⟩ such that no b ∈ C^B satisfies that the map a ↦ [a, z] is constant on b(B)^A. Consider now the sequence (F_n)_{n∈N} of initial segments of F (recall that F is based on N and that F_n is the substructure of F supported by {k : k ≤ n}). Each comes with some joint embedding pattern [φ_n, z_n] witnessing the failure of the definable Ramsey property. Note that we may assume that each φ_n is just the natural inclusion map from F_n in F. Closing off this set of joint embedding patterns under initial segments of the first coordinate, we obtain a countable set whose elements are [φ_m, z_n], with m ≤ n. Setting [φ_m, z_n] ⊑ [φ_p, z_q] when m ≤ p and [φ_m, z_q] = [φ_m, z_n], this becomes a countable tree, which is finitely branching since Aut(F) is Roelcke-precompact (Proposition 4.2). By König's lemma, this tree contains an infinite branch, which can be seen as a joint embedding pattern [φ, z] of F and Z. By construction, there is no b ∈ φ(F)^B such that a ↦ [a, z] is constant on b(B)^A. Therefore, the Ramsey property for colorings in Ro_b(Aut(F))_x fails for any x = [φ_1, φ_2] ∈ R(Aut(F)) satisfying φ_1 = φ and φ_2 extending z to F.
Proof of Theorem 1.8. - By Corollary 3.8, the minimal subflows of Aut(F) ↷ R(Aut(F)) are trivial iff for every x ∈ R(Aut(F)), F has the Ramsey property for colorings in Ro_b(Aut(F))_x. Apply then Proposition 4.10.
Remarks
Roelcke flows
It is easy to see that the factors of G ↷ R(G) are exactly the G-flows G ↷ X such that for some x ∈ X, the map G → X, g ↦ g · x is both left- and right-uniformly continuous. Equivalently, there exists a right action of G on cl(G · x) commuting with the action G ↷ X and such that for every g ∈ G, g · x = x · g. Note that g ↦ g · x is right-uniformly continuous for any G-flow, so the definition of Roelcke flow really rests on the left-uniform continuity of this map. Note also that a subflow of a Roelcke flow may not be Roelcke itself. For that reason, while it is easy to translate Theorem 1.7 in terms of Roelcke flows (it characterizes when every Roelcke flow has a fixed point), the meaning of Theorem 1.8 is much less clear.
As the class of Roelcke flows does not seem to be of particular interest, let us simply mention that it is quite closely related to the class of strongly continuous flows as defined by Glasner-Megrelishvili in [GM08], which is much better behaved. However, in the case of Roelcke precompact groups, Ibarlucía has shown in [Iba16a] that the corresponding subalgebra of RUC b (G) corresponds to the weakly almost periodic algebra (see Section 5.2). The study of the fixed point property on strongly continuous flows therefore reduces to that of equicontinuous and distal flows, which are treated in Section 5.
Minimal almost periodicity of the orthogonal group of ℓ_2

It was mentioned in the introduction that the orthogonal group O(ℓ_2) of ℓ_2 equipped with the strong operator topology can be shown to be minimally almost periodic thanks to Theorem 1.7. Here is the proof: consider the class of all finite metric spaces with distances in Q that embed isometrically in an affinely independent way in ℓ_2. This is a Fraïssé class, for which it is easy to show via some elementary geometry that item (2) of Theorem 1.7 holds. The corresponding Fraïssé limit H^{ind}_Q is a countable dense metric subspace of ℓ_2 (see [NVT10, Chapter 1, Section 4.3], from which the proof can easily be adapted), whose isometry group is therefore minimally almost periodic. This group embeds continuously and densely into O(ℓ_2), which suffices to reach the desired conclusion. Again, much more is known about that object - its unitary representations have been completely classified by Kirillov in [Kir73]; furthermore, it is in fact extremely amenable by a result of Gromov and Milman [GM83] - but the present proof is, in comparison, rather simple.
Ramsey-like and amalgamation properties
The connection between Ramsey-like and amalgamation properties originates from the fundamental work of Nešetřil and Rödl: on the one hand, any Ramsey class of finite ordered structures must have the amalgamation property [Neš89]; on the other hand, the partite construction from [NR83] and its descendants (arguably among the most powerful methods in structural Ramsey theory so far) are entirely based on amalgamation. Theorems 1.7 and 1.8 strengthen this link, by showing that amalgamation suffices to express combinatorial partition properties whose dynamical content (fixed point or trivial minimal components in the Roelcke compactification) is actually quite close to that of the usual Ramsey property (extreme amenability, i.e. fixed point or trivial minimal components in the Samuel compactification).
Induction and the definable Ramsey property
Unlike the usual Ramsey property, the definable Ramsey property is particularly well adapted to a treatment by induction. This is particularly true when the underlying language is finite, as finitely many base cases suffice to show that it holds in general. More precisely, given A, B, C, Z, write C → (B)^A_Z when for every joint embedding ⟨c, z⟩, there is b ∈ C^B so that on b(B)^A, the joint embedding pattern [a, z] does not depend on a. Then, when the language is finite with maximum arity k, the definable Ramsey property holds for Age(F) as soon as for every A, B, Z ∈ Age(F) with |A| + |Z| ≤ k, there exists C in Age(F) such that C → (B)^A_Z. For example, for binary structures, it suffices to consider |A| = |Z| = 1, which is notoriously simpler than the general case where no restriction is placed on |A|.
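As a toy instance of the arrow notation (our own illustration for the class of finite chains, with |A| = |Z| = 1 and B a 2-chain, encoding the pattern of a point a relative to z as −1, 0, or 1): a 3-chain C fails, while a 4-chain works, since for each z one of the three pattern classes must then contain two points.

```python
from itertools import combinations

def arrow(c, b):
    """Check C → (B)^A_Z for finite chains with |A| = |Z| = 1:
    for every point z of the c-chain C, some b-subchain of C must be
    monochromatic for the pattern of a relative to z."""
    C = range(c)
    for z in C:
        pat = lambda a: (a > z) - (a < z)  # -1: below z, 0: equal, 1: above
        if not any(len({pat(a) for a in B}) == 1 for B in combinations(C, b)):
            return False
    return True

print(arrow(3, 2), arrow(4, 2))  # → False True
```

For c = 3 the middle point z splits C into three singleton classes, so no monochromatic pair exists; for c = 4 the pigeonhole principle always yields one.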
ω-categoricity versus finite language
The definable Ramsey property is one of the first Ramsey-type phenomena where the distinction between ω-categorical structures and structures in a finite language appears so explicitly. This certainly deserves to be noticed in view of the still open problem which consists in finding a well-behaved class of Fraïssé structures that admit a precompact expansion where the Ramsey property holds, see [BPT13]. Recall that by a result of Zucker [Zuc16], this problem is equivalent to that of finding a well-behaved class of non-Archimedean Polish groups whose universal minimal flow is metrizable, see also [MNVTT16] and [BYMT17]. I conjectured in [NVT15] that Roelcke precompact groups do fall into that category. This was disproved by Evans in 2015 thanks to the use of an intricate model-theoretic construction originally due to Hrushovski, but the problem remains open for the automorphism groups coming from a Fraïssé structure in a finite language. (Evans' example is also at the center of the recent work [EHN16].) With this in mind, it will be interesting to see to which extent techniques from model theory allow a better grasp on the combinatorial property exhibited in Theorem 1.7 or on the definable Ramsey property.
Equicontinuous and distal flows, definable equivalence relations, and stable colorings
In this section, we concentrate on minimal almost periodicity and on the proof of Theorem 1.6. The first part, consisting of the equivalence between (1) and (2), is carried out in Section 5.1, where several known facts about equicontinuity and minimal periodicity are recalled. The second part is completed in Section 5.2, which deals with weakly almost periodic functions.
Minimal almost periodicity, almost periodic colorings, and definable equivalence relations
Given a topological group G, the class of equicontinuous ambits is closed under suprema and factors [dV93, Chapter IV, Section 2.27]. Since equicontinuity passes to subflows, Theorem 1.2 applies to the class of equicontinuous flows. The corresponding C*-subalgebra of RUC_b(G) can be determined by using that the restriction of a G-flow G ↷ X is equicontinuous on the orbit closure cl(G · x) iff

∀ U_ε ∈ 𝒰_X ∃ U_η ∈ 𝒰_X ∀ x_1, x_2 ∈ cl(G · x) ((x_1, x_2) ∈ U_η ⇒ ∀ g ∈ G (g · x_1, g · x_2) ∈ U_ε),

and it is not difficult to verify that we recover the classical result according to which the corresponding C*-subalgebra of RUC_b(G) is the almost periodic algebra AP(G), the subalgebra of Ro_b(G) consisting of all those f ∈ RUC_b(G) such that the orbit G • f is norm-precompact in RUC_b(G) (equivalently, the orbit G · f is norm-precompact, see [dV93, Chapter IV, Sections 5.30 and 6.15]). The corresponding compactification G^{AP(G)}, usually denoted B(G), is the Bohr compactification of G, and is always a compact group [dV93, Appendix (D.12)].
In view of Theorem 1.2, we could try to provide a Ramsey-type characterization of minimal almost periodicity. However, the problem is of slightly different flavor here.
Indeed, unlike what happens with many other classes of flows, having a fixed point in G^{AP(G)} simply means that G^{AP(G)} is trivial. Equivalently: every almost periodic function on G is constant. Formulating Theorem 1.2 would become rather awkward in that case, as it would just express that Aut(F) is minimally almost periodic iff F has the Ramsey property for some class of colorings. . . which all turn out to be constant! Instead, the right approach to adopt here is to analyze which class of colorings we would be talking about.
Proposition 5.1. -Let F be a Fraïssé structure with Roelcke precompact automorphism group. Then, finite colorings are dense in AP(Aut(F)).
Proof. - This proof is largely inspired from the proof of [BYT16, Proposition 4.7]. Let f ∈ AP(Aut(F)) and ε > 0. Since G · f is norm-precompact in RUC_b(Aut(F)), we can consider the G-flow induced on cl(G · f). By continuity of the action, find a finite substructure A of F such that for every g ∈ Stab(A), ‖g · f − f‖_∞ < ε. Consider now the induced Stab(A)-flow on the closed convex hull co(Stab(A) · f). This is an affine flow by isometries. By Hahn's fixed point theorem [Gla76, Chapter III, Section 5], it admits a fixed point χ_1. This is a coloring of F^A by Stab(A)-invariance, and since χ_1 ∈ co(Stab(A) · f), we have ‖χ_1 − f‖_∞ ≤ ε. At that stage, however, χ_1 may not be finite. This can be fixed by repeating the previous argument using the right shift action. Consider the orbit G • χ_1. As mentioned above, G • χ_1 is also norm-precompact and since AP(Aut(F)) ⊂ Ro_b(Aut(F)), this action is continuous. Hence, there exists a finite substructure Z of F such that for every g ∈ Stab(Z), ‖g • χ_1 − χ_1‖_∞ < ε. Consider now the induced Stab(Z)-flow on co(Stab(Z) • χ_1). This is an affine flow by isometries and by Hahn's fixed point theorem, it admits a fixed point χ_2. This is still a coloring of F^A, as every point of the orbit G • χ_1 is Stab(A)-fixed by left shift: for g ∈ G, h ∈ Stab(A), and k ∈ G,

h · (g • χ_1)(k) = g • χ_1(h⁻¹k) = χ_1(h⁻¹kg) = h · χ_1(kg) = χ_1(kg) = g • χ_1(k).

By Stab(Z)-invariance, χ_2 is in fact constant on all the Stab(A)\ Aut(F)/ Stab(Z)-classes, but by Roelcke-precompactness of Aut(F), there are only finitely many such classes, so that χ_2 is finite. Finally, since χ_2 ∈ co(Stab(Z) • χ_1), we have ‖χ_2 − χ_1‖_∞ ≤ ε, and therefore ‖χ_2 − f‖_∞ ≤ 2ε.

Proof of Theorem 1.6, (1) ⇔ (2). - Let F be a Fraïssé structure with Roelcke-precompact automorphism group. From Proposition 5.1, Aut(F) is minimally almost periodic iff every finite coloring in AP(Aut(F)) is constant. Quite clearly, the orbit Aut(F) · χ is norm-discrete in RUC_b(Aut(F)) whenever χ is a finite coloring. It follows that the only finite colorings in AP(Aut(F)) are those with finite orbit, and all of them are constant iff every Aut(F)-invariant equivalence relation on F^A with finitely many classes is trivial.
Note that the Roelcke-precompactness assumption was used to make sure that finite colorings are dense in AP(Aut(F)). This is certainly not true in general: consider an action of Z on the circle via an irrational rotation n · θ = θ + nα. This action is isometric, hence equicontinuous, so the map n ↦ n · θ is almost periodic on Z. It is easy to see that this cannot be ε-approximated by a finite almost periodic coloring on Z for ε small enough.
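The obstruction can be checked numerically: the orbit {nα mod 1} is already ε-dense in the circle for short initial segments, so the values of the map n ↦ n · θ cannot all lie within a small ε of a fixed finite set. A quick sanity check (the choice α = √2 − 1 and the window of 200 points are ours, not the text's):

```python
import math

alpha = math.sqrt(2) - 1  # an irrational rotation number
pts = sorted((n * alpha) % 1.0 for n in range(200))
# gaps between consecutive orbit points on the circle, including wrap-around
gaps = [b - a for a, b in zip(pts, pts[1:])] + [1.0 - pts[-1] + pts[0]]
print(max(gaps) < 0.05)  # → True: the orbit is already 0.05-dense
```

By the three-distance theorem, the gaps take at most three values, governed by the continued fraction expansion of α; density only improves as the window grows.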
Minimal almost periodicity, weakly almost periodic colorings and the stable Ramsey property
As already mentioned in the introduction, minimal almost periodicity is equivalent to the formally stronger notion of having a fixed point in any distal flow. The corresponding class of ambits is closed under suprema and factors [dV93, Chapter IV, Section 2.27]. The corresponding compactification is the so-called maximal group-like compactification of G [dV93, Chapter IV, Section 6.18], to which is attached the distal algebra. Since this algebra contains the almost periodic one, it could have been interesting to use Theorem 1.2 to derive a different combinatorial characterization of almost periodicity than the one obtained using the algebra AP(G). However, we will not do that for two reasons. The first one is that the description of the distal algebra provided by Theorem 1.2 does not provide any particularly illuminating way to characterize distal colorings. The second is that an even more general result can be obtained by considering a still larger algebra of functions, namely, the weakly almost periodic algebra WAP(G), consisting of all those f ∈ RUC_b(G) such that the closure of G • f is weakly compact in the Banach space RUC_b(G). Note that by the following result of Grothendieck (which we only state here for topological groups), this is equivalent to the fact that G · f is weakly compact.

Theorem 5.2 (Grothendieck). - Let G be a topological group and f ∈ RUC_b(G). Then f ∈ WAP(G) iff for all sequences (g_m)_{m∈N} and (h_n)_{n∈N} of elements of G, lim_m lim_n f(g_m h_n) = lim_n lim_m f(g_m h_n) whenever all the limits involved exist.

The corresponding compactification G^{WAP(G)} will be denoted by W(G). The algebra WAP(G) behaves well with respect to the maps f ↦ f_x:

Proposition 5.3. - Let G be a topological group. Then

WAP(G) = {f_x : f ∈ WAP(G) ∧ x ∈ W(G)}.
It follows that all minimal subflows of G ↷ W(G) are trivial iff G ↷ W(G) has a fixed point. This last condition is, in turn, known to be equivalent to minimal almost periodicity for G (for example, this is a consequence of the fact that B(G) is isomorphic to the unique minimal two-sided ideal in W(G) [Rup84, Chapter III, Section 1.9]). Applying Theorem 3.1, it follows that when it is Roelcke-precompact, Aut(F) is minimally almost periodic iff F has the Ramsey property for finite colorings in WAP(Aut(F)). We now proceed as in Section 4 to show that this leads to the equivalence (1) ⇔ (3) in Theorem 1.6. To do so, we follow the same scheme as for the proof of Theorem 1.8. The first step is to characterize weakly almost periodic colorings combinatorially. Following [BYT16], this can easily be done thanks to Theorem 5.2. Recall first that according to Proposition 4.4, a finite coloring χ of F^A is in Ro_b(Aut(F)) when there is Z ∈ Age(F) and an embedding z of Z in F so that χ(a) depends only on [a, z]. We will say then that χ is fully determined by z when the converse also holds:

χ(a) = χ(a′) whenever [a, z] = [a′, z].

Say that the pair (A, Z) is stable when there do not exist sequences (a_m)_{m∈N} in F^A and (z_n)_{n∈N} in F^Z, together with distinct joint embedding patterns P ≠ Q, such that [a_m, z_n] = P whenever m ≤ n and [a_m, z_n] = Q whenever m > n.

Proposition 5.4. - Let F be a Fraïssé structure with Roelcke-precompact automorphism group, and let χ be a finite coloring of F^A fully determined by z ∈ F^Z. Then χ ∈ WAP(Aut(F)) iff the pair (A, Z) is stable.

Proof. - This is a direct application of Theorem 5.2. In particular, when (A, Z) is not stable, witnessed by sequences (a_m), (z_n) and patterns P ≠ Q on which χ takes distinct values, the two iterated limits of χ along the corresponding elements of Aut(F) take the distinct values χ(P) and χ(Q). Therefore, χ ∉ WAP(Aut(F)).
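The role of Theorem 5.2 can be seen on the simplest unstable configuration, the half-graph χ(m, n) = 1 iff m ≤ n: its two iterated limits disagree, which is exactly the failure of the double limit criterion. A toy computation (large indices standing in for the limits; our illustration, not an argument from the text):

```python
# Half-graph coloring on pairs of indices: chi(m, n) = 1 iff m <= n.
chi = lambda m, n: 1 if m <= n else 0

N = 10**6  # a large index standing in for "-> infinity"
lim_m_lim_n = chi(50, N)  # fix m, let n grow: value 1, independent of m
lim_n_lim_m = chi(N, 50)  # fix n, let m grow: value 0, independent of n
print(lim_m_lim_n, lim_n_lim_m)  # → 1 0
```

Since the iterated limits exist but differ, no function realizing such a half-graph along two sequences can be weakly almost periodic.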
Proposition 5.5. - Let F be a Fraïssé structure with Roelcke-precompact automorphism group. Then F has the Ramsey property for the finite colorings in WAP(Aut(F)) iff for every A, B ∈ Age(F), every Z_1, . . . , Z_k ∈ Age(F) so that every pair (A, Z_i) is stable, and every joint embedding ⟨φ, z_1, . . . , z_k⟩ of F and Z_1, . . . , Z_k, there is b ∈ φ(F)^B so that for every i ≤ k, the coloring a ↦ [a, z_i] is constant on b(B)^A.

Proof. - Assume that F has the Ramsey property for colorings in WAP(Aut(F)), fix A, B, Z_1, . . . , Z_k ∈ Age(F) so that every pair (A, Z_i) is stable, and consider a joint embedding of F and Z_1, . . . , Z_k of the form ⟨φ, z_1, . . . , z_k⟩. Each coloring defined on F^A by a ↦ [φ ∘ a, z_i] is finite by Proposition 4.2, and is in WAP(Aut(F)) by Proposition 5.4. The conclusion follows.
The converse is an immediate consequence of Proposition 5.4, and of the fact that to check the Ramsey property for colorings in WAP(Aut(F)), it suffices to consider fully determined finite colorings.
Proposition 5.6. -Let F be a Fraïssé structure with Roelcke-precompact automorphism group. Then F has the Ramsey property for the finite colorings in WAP(Aut(F)) iff Age(F) has the stable Ramsey property.
Proof. -The proof is similar to the proof of Proposition 4.10. The converse implication holds because the stable Ramsey property is a finitization of the Ramsey property for colorings in WAP(Aut(F)), while the direct implication is obtained by a compactness argument.
Proof of Theorem 1.6, (1) ⇔ (3). - We have seen in the introduction of the current section that Aut(F) is minimally almost periodic iff Aut(F) ↷ W(Aut(F)) has a fixed point. By Theorem 3.1 and Proposition 5.3, this happens exactly when F has the Ramsey property for colorings in WAP(Aut(F)). Then apply Proposition 5.6.
Remarks
One of the strengths of the original Kechris-Pestov-Todorcevic correspondence, and of Theorem 1.1 in particular, is its applicability: during the last ten years, it has produced numerous examples of extremely amenable groups and of concrete descriptions of universal minimal flows. It turns out that a similar strategy can be used in order to compute the Bohr compactification of the Roelcke-precompact groups of the form Aut(F). This is suggested by the equivalence (1) ⇔ (2) of Theorem 1.6, but was already noticed by Ben Yaacov in [BY18] and by Tsankov (personal communication): first, examine whether (1) holds by detecting all the invariant equivalence relations with finitely many classes on the sets of the form F A . If all of those are trivial, the group is minimally almost periodic. If not, determine the non-trivial ones (a task which may not be easy), and the closed subgroup of Aut(F) which fixes all the corresponding classes setwise. At the level of F, this corresponds to passing to the group Aut(F * ), where F * is the expansion of F obtained by naming those classes. This has a natural interpretation from the model-theoretic point of view: it fixes pointwise the algebraic closure of the empty set (in all finite cardinalities). This group is now minimally almost periodic. By Roelcke precompactness, F * is a precompact expansion of F, which means that the quotient Aut(F)/ Aut(F * ) is precompact. By construction, the flow Aut(F) Aut(F)/ Aut(F * ) is minimal and universal for all minimal equicontinuous Aut(F)-flows. To show that it is equicontinuous, it suffices to show that Aut(F * ) is normal in Aut(F), which is easy to check.
For example, this method can be used to compute the Bohr compactifications for all the groups coming from Fraïssé graphs and tournaments. Note that in those cases, this may be done using a slightly different method, because the original Kechris-Pestov-Todorcevic correspondence already provides a description of the universal minimal flow as G G/G * , where G * is an extremely amenable coprecompact subgroup of G. It is then easy to show that the Bohr compactification of G is the (compact) group G/(G * ) G , where (G * ) G stands for the normal closure of G * in G (for details, see [NVT17]). Item (3), on the other hand, should not be thought of as a possible way to prove minimal almost periodicity, but rather as a non-trivial combinatorial consequence of it. Of course, to make use of it presupposes an ability to detect stable pairs (A, Z), a task which can be attacked with model-theoretic tools.
Proximal flows and proximal colorings
The purpose of this section is to concentrate on strong amenability. Ideally, the discussion would have led to analogs of Theorems 1.6, 1.7, and 1.8 in the context of proximal flows after the following steps: (1) description of the corresponding algebra A; (2) description of the finite colorings in A; (3) proof of the fact that finite colorings are dense in A; (4) finitization of the corresponding Ramsey-type statement. While the first two steps can be completed pretty smoothly, this is not the case for the third and fourth, which show some unexpected resistance. This explains the somewhat unsatisfactory form of Theorem 1.11.
The proximal algebra
Given a topological group G, the class of proximal ambits is closed under suprema and factors [dV93, Chapter IV, Section 5.30]. Since proximality passes to subflows, Theorem 1.2 applies to the class of proximal flows. Quite surprisingly however, no description of the corresponding C*-subalgebra Prox(G) of RUC_b(G) seems to be available in the literature, so our first task here is to fill this gap thanks to the characterization provided in Theorem 1.2: Prox(G) consists exactly of those f ∈ RUC_b(G) for which the G-flow G ↷ cl(G • f) is proximal (we will call those functions proximal). To achieve this, it will be convenient to call a subset D ⊂ G² diagonally syndetic when there is K ⊂ G finite so that
G² = K · D = ⋃_{k∈K} k · D,

where g · (g_1, g_2) refers to the diagonal action: g · (g_1, g_2) = (gg_1, gg_2). This definition is of course modeled on the standard concept of syndetic subset of G, where S ⊂ G is syndetic when there is a finite K ⊂ G so that G = K · S.
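To fix ideas, in G = Z the set D = {(g_1, g_2) : g_1 even} is diagonally syndetic: K = {0, 1} works, since subtracting 0 or 1 makes the first coordinate even. A finite-window sanity check (window size arbitrary; our illustration):

```python
K = [0, 1]
in_D = lambda g1, g2: g1 % 2 == 0  # D = {(g1, g2) : g1 even} in G = Z

# G^2 = K · D, checked on a finite window: every (g1, g2) is k · (d1, d2)
# for some k in K and (d1, d2) in D, where k · (d1, d2) = (k + d1, k + d2).
window = range(-20, 20)
print(all(any(in_D(g1 - k, g2 - k) for k in K)
          for g1 in window for g2 in window))  # → True
```

Note that the diagonal action preserves the difference g_2 − g_1, so a diagonally syndetic set must already contain pairs of every difference.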
For a G-flow G ↷ X, x_1, x_2 ∈ X, and U ⊂ X², define the set P(x_1, x_2, U) as:

P(x_1, x_2, U) := {(g_1, g_2) ∈ G² : (g_1 · x_1, g_2 · x_2) ∈ U}.
Proposition 6.1. - Let G ↷ X be a G-flow and x_1, x_2 ∈ X. TFAE:

(1) For every entourage U, the set P(x_1, x_2, U) is diagonally syndetic.

(2) For every (y_1, y_2) ∈ cl(G · x_1) × cl(G · x_2) and every entourage U, there exists g ∈ G so that g · (y_1, y_2) ∈ U.
Proof. - (1) ⇒ (2): Fix (y_1, y_2) ∈ cl(G · x_1) × cl(G · x_2) and U an entourage of the diagonal in X, which we may assume to be compact. We will show that there is a finite set K ⊂ G so that G · x_1 × G · x_2 ⊂ K · U. This is sufficient: passing to closures, cl(G · x_1) × cl(G · x_2) ⊂ cl(K · U) = K · U, so (y_1, y_2) ∈ k · U for some k ∈ K, so g = k⁻¹ satisfies g · (y_1, y_2) ∈ U, as required. To prove the existence of K: P(x_1, x_2, U) is diagonally syndetic, so we can write G² = K · P(x_1, x_2, U) for some finite K ⊂ G. Now, for g_1, g_2 ∈ G, we have (g_1, g_2) = k · (h_1, h_2) for some k ∈ K and (h_1, h_2) ∈ P(x_1, x_2, U), so (g_1 · x_1, g_2 · x_2) = k · (h_1 · x_1, h_2 · x_2) ∈ K · U.
(2) ⇒ (1): Fix U an open entourage of the diagonal in X. By assumption,

cl(G · x_1) × cl(G · x_2) ⊂ ⋃_{g∈G} g · U,

so by compactness, there exists K ⊂ G finite such that cl(G · x_1) × cl(G · x_2) ⊂ ⋃_{g∈K} g · U. Now,

G² = P(x_1, x_2, cl(G · x_1) × cl(G · x_2)) = P(x_1, x_2, ⋃_{g∈K} g · U) = ⋃_{g∈K} g · P(x_1, x_2, U).
As a direct corollary:
Proposition 6.2. - Let G ↷ X be a G-flow and x ∈ X. Then the subflow G ↷ cl(G · x) is proximal iff for every entourage U of the diagonal in X, the set P(x, x, U) is diagonally syndetic.

Specializing this to the G-flow G ↷ cl(G • f), we directly obtain:
Proposition 6.3. - Let f ∈ RUC_b(G). Then f ∈ Prox(G) iff for every finite F ⊂ G and ε > 0, there exists a finite K ⊂ G such that for every (g_1, g_2) ∈ G², there exists k ∈ K such that g_1 • f and g_2 • f are equal up to ε on Fk.
Proximal colorings, fixed points in zero-dimensional proximal flows and proximal Ramsey property
We now turn to a description of the colorings in Prox(Aut(F)) and to a proof of Theorem 1.11. Specializing Proposition 6.3 to the case where G = Aut(F) with F Fraïssé and f a finite coloring, we obtain:

Proposition 6.4. - Let F be a Fraïssé structure and χ be a finite coloring of F^A. Then χ ∈ Prox(Aut(F)) iff for every D ∈ Age(F), there are copies D_1, . . . , D_k of D in F such that for every (g_1, g_2) ∈ G², there is i ≤ k such that g_1 · χ = g_2 · χ on D_i^A.

Observing now that D_1, . . . , D_k are contained in some finite E, we obtain:

Proposition 6.5. - Let F be a Fraïssé structure and χ be a finite coloring of F^A. Then χ ∈ Prox(Aut(F)) iff χ is proximal.
Proposition 6.6. -Let F be a Fraïssé structure. Then F has the Ramsey property for colorings in Prox(Aut(F)) iff F has the proximal Ramsey property.
Proof. - The Ramsey property for colorings in Prox(Aut(F)) refers to finite collections of finite proximal colorings, while the proximal Ramsey property only refers to one such coloring. The direct implication is therefore obvious. For the converse, notice that given a finite set χ_1, ..., χ_l of finite proximal colorings, the Aut(F)-ambit ∏_{i=1}^{l} (cl(Aut(F) • χ_i), χ_i) is proximal. It follows that the coloring a ↦ (χ_i(a))_{1≤i≤l} is also proximal, so by the proximal Ramsey property, it is constant on b(B)^A for some b. Clearly, each χ_i is then constant on b(B)^A, witnessing that F has the Ramsey property for colorings in Prox(Aut(F)).
Proof of Theorem 1.11. -According to Theorem 1.2, every zero-dimensional proximal Aut(F)-flow has a fixed point iff F has the Ramsey property for colorings in Prox(Aut(F)). By the previous proposition, this is equivalent to the proximal Ramsey property.
Remarks
The difficulty of proving that finite colorings are dense in the proximal algebra is the main obstacle to a more satisfactory form of Theorem 1.11, and it is reasonable to wonder where it comes from. Can it be overcome by adding an extra natural topological hypothesis on Aut(F), which would play the role that Roelcke precompactness played for distal flows? Note that even if this were possible, the relevance of the present approach as an effective method to prove strong amenability by combinatorial means looks rather questionable, as the proximality condition on colorings does not seem particularly easy to work with in practice. Note also that, in the same vein, it would be interesting to find a topological property that ensures that the proximal universal minimal flow of Aut(F) is metrizable. (This should probably be equivalent to the fact that Aut(F) contains a coprecompact strongly amenable closed subgroup.)
In a slightly different spirit, assume that a Polish group G is minimally almost periodic and strongly amenable. Is G necessarily extremely amenable? The answer is positive when the universal minimal flow of G is metrizable (see [NVT17]) but the general case remains open. In fact, even when G is assumed to be monothetic (i.e. contains a dense cyclic subgroup), this is the content of a famous open problem of Glasner (see [Gla98], as well as Pestov's contribution in [Pea07] for a detailed account about it).
Strongly proximal flows and amenability
Following Furstenberg, recall that a flow is strongly proximal when the affine flow it induces on the space of Borel probability measures is proximal. These flows are well-behaved in the sense that they satisfy the hypotheses of Theorem 1.2. In addition, the fixed point property on this class is equivalent to amenability, which, in principle, makes amenability approachable by the general method of the present paper. However, in practice, the obstructions that appeared with proximal flows in the previous section also appear when dealing with strongly proximal flows. In addition, because of the lack of a characterization of strong amenability in terms of syndetic sets in the spirit of Proposition 6.2, no characterization of the strongly proximal algebra parallel to Proposition 6.3 is available at the moment. For these reasons, the specialization of Theorem 1.2 to amenability and strongly proximal flows will not be detailed further here.
Nevertheless, there does exist a Ramsey-theoretic characterization of amenability, provided by Theorem 1.13. This result is originally due to Moore [Moo13] and to Tsankov [Tsa14]. Both proofs are rather similar, and pretty close to the following one, which is in the spirit of the rest of the paper.
Proof of Theorem 1.13. - The starting point is the following characterization of amenability: a topological group G is amenable iff every G-flow admits an invariant (Borel probability) measure. Because the Samuel compactification S(G) maps onto any minimal G-flow, this is equivalent to the existence of a fixed point in Prob(S(G)), the set of all Borel probability measures on S(G). This set is compact and convex, and it admits a fixed point iff the following statement (*) holds: for every finite family F of continuous affine maps on Prob(S(G)), every ε > 0 and every finite H ⊂ G, there exists µ ∈ Prob(S(G)) which is fixed up to (F, ε, H), i.e. every f ∈ F is constant on H·µ up to ε. Now, since G is dense in S(G) and the finitely supported measures on S(G) are dense in Prob(S(G)), the above µ can be replaced by a finite convex linear combination ∑_{i=1}^{n} λ_i δ_{g_i}. Next, because S(G) is the set of extreme points in Prob(S(G)), every element of F is nothing other than the natural affine extension of its restriction to S(G). This, in turn, is just an element of C(S(G)) = RUC_b(G). In other words, (*) is equivalent to: for every finite F ⊂ RUC_b(G), every ε > 0 and every finite H ⊂ G, there exist a convex linear combination λ_1, ..., λ_n and g_1, ..., g_n ∈ G such that for every f ∈ F, the map

h ↦ f(h · ∑_{i=1}^{n} λ_i δ_{g_i}) = ∑_{i=1}^{n} λ_i f(hg_i)

is constant on H up to ε. Note that without loss of generality, we may assume that F consists of a single f ∈ RUC_b(G).
When G is of the form Aut(F) for some Fraïssé structure F, this discretizes (in the spirit of Section 3.1) as: for every A, B ∈ Age(F), every ε > 0 and every finite coloring χ of F^A, there are a finite convex linear combination λ_1, ..., λ_n and b_1, ..., b_n ∈ F^B such that the coloring a ↦ ∑_{i=1}^{n} λ_i χ(b_i • a) is constant on B^A up to ε. This, in turn, is equivalent to the convex Ramsey property via a standard compactness argument.
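In display form, the discretized statement reads as follows, writing X^A for the set of embeddings of A into X (superscript notation reconstructed) and λ_1, ..., λ_n for a convex combination:

```latex
\[
\forall A, B \in \mathrm{Age}(\mathbf{F}), \ \forall \varepsilon > 0, \ \forall \chi, \
\exists \lambda_1, \dots, \lambda_n \geq 0 \text{ with } \textstyle\sum_{i} \lambda_i = 1, \
\exists b_1, \dots, b_n \in \mathbf{F}^B :
\]
\[
\Big| \sum_{i=1}^{n} \lambda_i \big( \chi(b_i \bullet a) - \chi(b_i \bullet a') \big) \Big| \leq \varepsilon
\qquad \text{for all } a, a' \in B^A .
\]
```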
As indicated in the introduction, the practical use of Theorem 1.13 is so far limited. There are promising exceptions: the papers by Ghadernezhad, Khalilian and Pourmahdian [GKP18] and by Etesami and Ghadernezhad [EG17] do make use of it to prove that certain automorphism groups of the form Aut(F), where F is a so-called Hrushovski structure, are not amenable. Nevertheless, there is presently no significant instance where Theorem 1.13 can be used to prove that some group is amenable. There are substantial results regarding amenability of groups of the form Aut(F) (see for example [AKL12, PS16]), but all of them rest on an explicit description of the universal minimal flow, as well as on an analysis of the invariant measures on this flow. This method, in turn, imposes severe restrictions on the groups under consideration.
Quite interestingly though, the use of the convex Ramsey property to characterize amenability naturally leads to the following question: is there a characterization of strong amenability in similar terms? Once again, the answer is positive when the universal flow is metrizable (see [MNVTT16]), but the general answer remains unknown, due to the lack of a general characterization of strong amenability in terms of existence of invariant measures.
[Diagram of the ambits (P(G), e_G), (B(G), e_G) and (P_S(G), e_G).]
(3) Age(F) has the stable Ramsey property.
See Definition 2.5 and related material, [GM08, Sections 3 and 4] and [GM13, Remarks 4.15 and 4.16] by Glasner-Megrelishvili.
Proposition 3.3. - Let A ∈ Age(F), let C be a finite set of finite colorings of F^A, and let ε > 0. TFAE:
Proposition 3.9. - Let G be a topological group and f ∈ RUC_b(G). Let ⟨f⟩ denote the unital left-invariant, closed C*-subalgebra of RUC_b(G) generated by f. Then the ambits (G^⟨f⟩, e_G) and (cl(G • f), f) are isomorphic.
Proposition 3.10. - Let G be a topological group and X a class of G-flows such that the class of X-G-ambits is closed under suprema and factors. Then the set A = {f ∈ RUC_b(G) : G ↷ cl(G • f) ∈ X} forms a unital left-invariant, closed C*-subalgebra of RUC_b(G), and the factors of G ↷ (G^A, e_G) are exactly the X-G-ambits.
Proposition 4.2. - Let F be a Fraïssé structure. Then Aut(F) is Roelcke-precompact iff for every A, Z ∈ Age(F), there are only finitely many joint embedding patterns of A and Z.
Proposition 4.3. - Let F be a Fraïssé structure. Then finite colorings are dense in Ro_b(Aut(F)).
Proposition 4.4. - Let F be a Fraïssé structure, A ∈ Age(F) and χ a finite coloring of F^A. Then χ ∈ Ro_b(Aut(F)) iff there is a finite substructure Z of F such that χ(a) depends only on [a, z], where z is the identity embedding.
Proposition 4.5. - Let F be a Fraïssé structure. Then F has the Ramsey property for colorings in Ro_b(Aut(F)) iff for every A, B, Z ∈ Age(F), every z ∈ F^Z, and every finite coloring γ of the joint embedding patterns of A and Z, there is b ∈ F^B so that the coloring a ↦ γ([a, z]) is constant on b(B)^A.
Theorem 5.2 (Grothendieck [Gro52, Proposition 7]). - Let G be a topological group and f ∈ RUC_b(G). Then f ∈ WAP(G) iff there are no sequences (g_m)_{m∈N}, (h_n)_{n∈N} of elements of G such that lim_m lim_n f(g_m h_n) and lim_n lim_m f(g_m h_n) both exist and are distinct.

In addition, by a result of Berglund-Junghenn-Milnes [BJM78, Chapter III, Lemma 8.8], we have:
Proposition 5.3. - Let F be a Fraïssé structure with Roelcke-precompact automorphism group. Then WAP(Aut(F)) can be approximated by finite colorings.

Proof. - See [BYT16, Proposition 4.7].
χ(a) = χ(a′) implies [a, z] = [a′, z]. (In other words, χ(a) is essentially [a, z].)

Proposition 5.4. - Let F be a Fraïssé structure with Roelcke-precompact automorphism group, A a finite substructure of F and χ a finite coloring of F^A. Assume that χ ∈ Ro_b(Aut(F)) is fully determined by z ∈ F^Z. Then χ ∈ WAP(Aut(F)) iff the pair (A, Z) is stable.

Proof. - We prove that χ ∉ WAP(Aut(F)) iff the pair (A, Z) is unstable. If χ ∉ WAP(Aut(F)), consider witness sequences (g_m)_m, (h_n)_n provided by Theorem 5.2. For m ∈ N, define a_m = g_m^{-1}·A and z_m = h_m • z. Then for m, n ∈ N, we have χ(g_m h_n) = [a_m, z_n]. By Roelcke-precompactness of Aut(F), this provides a finite coloring of the pairs of naturals, so by the standard Ramsey theorem, passing to subsequences, we may assume that there are joint embedding patterns τ_< and τ_> so that

∀ m, n ∈ N: (m < n ⇒ [a_m, z_n] = τ_<) ∧ (m > n ⇒ [a_m, z_n] = τ_>).

In particular, lim_m lim_n χ(g_m h_n) = τ_< and lim_n lim_m χ(g_m h_n) = τ_>, and by the choice of (g_m)_m and (h_n)_n, τ_< and τ_> are distinct, witnessing that (A, Z) is unstable.

Conversely, assume that (A, Z) is unstable, as witnessed by sequences (a_m)_m and (z_n)_n and distinct joint embedding patterns τ_< and τ_>. By ultrahomogeneity of F, we can find, for every m ∈ N, g_m and h_m so that a_m = g_m^{-1}·A and z_m = h_m • z. Then for m, n ∈ N, we have, as above, χ(g_m h_n) = [a_m, z_n], so lim_m lim_n χ(g_m h_n) = τ_< ≠ τ_> = lim_n lim_m χ(g_m h_n).
Acknowledgements

This paper has benefited from numerous discussions with several people. I would like to particularly thank Itaï Ben Yaacov regarding the Roelcke and the Bohr compactifications, as well as the notion of stability; Eli Glasner regarding the role of ambits, as opposed to flows; Michael Megrelishvili and Vladimir Pestov regarding the notion of point-universality; Julien Melleray for his sharpness in detecting sloppy arguments; Todor Tsankov for the helpful references concerning unitary representations; Benjy Weiss regarding proximal flows; Sylvie Benzoni-Gavage, Isabelle Chalendar and Xavier Roblot for hosting at the Institut Camille Jordan; and finally the anonymous referee for their very careful revision of the paper. Their suggestions and comments led to the correction of several mistakes and substantially improved the quality of the paper.

Bibliography
Fred G. Abramson and Leo A. Harrington, Models without indiscernibles, J. Symb. Log. 43 (1978), no. 3, 572-600.

Omer Angel, Alexander S. Kechris, and Russell Lyons, Random orderings and unique ergodicity of automorphism groups, J. Eur. Math. Soc. 16 (2012), no. 10, 2059-2095.

John F. Berglund, Hugo D. Junghenn, and Paul Milnes, Compact right topological semigroups and generalizations of almost periodicity, Lecture Notes in Mathematics, vol. 663, Springer, 1978.

Dana Bartošová and Aleksandra Kwiatkowska, Gowers' Ramsey theorem with multiple operations and dynamics of the homeomorphism group of the Lelek fan, J. Comb. Theory, Ser. A 150 (2017), 108-136.

Dana Bartošová and Aleksandra Kwiatkowska, The universal minimal flow of the homeomorphism group of the Lelek fan, to appear in Trans. Am. Math. Soc., 2018.

Dana Bartošová, Jordi Lopez-Abad, Martino Lupini, and Brice R. Mbombo, The Ramsey property for Banach spaces, Choquet simplices, and their noncommutative analogs, https://arxiv.org/abs/1708.01317, 2016.

Manuel Bodirsky, Ramsey classes: examples and constructions, Surveys in combinatorics 2015, London Mathematical Society Lecture Note Series, vol. 424, Cambridge University Press, 2015, pp. 1-48.

Nicolas Bourbaki, Elements of Mathematics. General topology. Chapters 1-4, Springer, 1998. Translated from the French; reprint of the 1989 English translation.

Manuel Bodirsky, Michael Pinsker, and Todor Tsankov, Decidability of definability, J. Symb. Log. 78 (2013), no. 4, 1036-1054.

Itaï Ben Yaacov, On Roelcke-precompact Polish groups that cannot act transitively on a complete metric space, Isr. J. Math. 224 (2018), no. 1, 105-132.

Itaï Ben Yaacov, Julien Melleray, and Todor Tsankov, Metrizable universal minimal flows of Polish groups have a comeagre orbit, Geom. Funct. Anal. 27 (2017), no. 1, 67-77.

Itaï Ben Yaacov and Todor Tsankov, Weakly almost periodic functions, model-theoretic stability, and minimality of topological groups, Trans. Am. Math. Soc. 368 (2016), no. 11, 8267-8294.

Jan de Vries, Elements of topological dynamics, Mathematics and its Applications, vol. 257, Kluwer Academic Publishers, 1993.

Christophe J. Eagle, Ilijas Farah, Bradd Hart, Boris Kadets, Vladyslav Kalashnyk, and Martino Lupini, Fraïssé limits of C*-algebras, J. Symb. Log. 81 (2016), no. 2, 755-773.

Omid Etesami and Zaniar Ghadernezhad, Convex Ramsey matrices and non-amenability of automorphism groups of generic structures, https://arxiv.org/abs/1711.02049, 2017.

David M. Evans, Jan Hubička, and Jaroslav Nešetřil, Automorphism groups and Ramsey properties of sparse graphs, https://arxiv.org/abs/1801.01165, 2016.

Ryszard Engelking, General topology, Sigma Series in Pure Mathematics, vol. 6, Heldermann Verlag, 1989.

Roland Fraïssé, Sur l'extension aux relations de quelques propriétés des ordres, Ann. Sci. Éc. Norm. Supér. 71 (1954), 363-388.

Zaniar Ghadernezhad, Hamed Khalilian, and Massoud Pourmahdian, Automorphism groups of generic structures: extreme amenability and amenability, Fundam. Math. 242 (2018), no. 1, 1-23.

Shmuel Glasner, Proximal flows, Lecture Notes in Mathematics, vol. 517, Springer, 1976.

Eli Glasner, On minimal actions of Polish groups, Topology Appl. 85 (1998), no. 1-3, 119-125.

Roland L. Graham, Klaus Leeb, and Bruce L. Rothschild, Ramsey's theorem for a class of categories, Adv. Math. 8 (1972), 417-433.

Roland L. Graham, Klaus Leeb, and Bruce L. Rothschild, Errata: "Ramsey's theorem for a class of categories", Adv. Math. 10 (1973), 326-327.

Mikhael Gromov and Vitali D. Milman, A topological application of the isoperimetric inequality, Am. J. Math. 105 (1983), no. 4, 843-854.

Eli Glasner and Michael Megrelishvili, Hereditarily non-sensitive dynamical systems and linear representations, Colloq. Math. 104 (2006), no. 2, 223-283.

Eli Glasner and Michael Megrelishvili, New algebras of functions on topological groups arising from G-spaces, Fundam. Math. 201 (2008), no. 1, 1-51.

Eli Glasner and Michael Megrelishvili, Banach representations and affine compactifications of dynamical systems, Asymptotic geometric analysis, Fields Institute Communications, vol. 68, Springer, 2013, pp. 75-144.

Thierry Giordano and Vladimir G. Pestov, Some extremely amenable groups related to operator algebras and ergodic theory, J. Inst. Math. Jussieu 6 (2007), 279-315.

Roland L. Graham and Bruce L. Rothschild, Ramsey's theorem for n-parameter sets, Trans. Am. Math. Soc. 159 (1971), 257-292.

Alexander Grothendieck, Critères de compacité dans les espaces fonctionnels généraux, Am. J. Math. 74 (1952), 168-186.

Jan Hubička and Jaroslav Nešetřil, All those Ramsey classes, https://arxiv.org/abs/1606.07979, 2016.

Wilfrid Hodges, Model theory, Encyclopedia of Mathematics and Its Applications, vol. 42, Cambridge University Press, 1993.

Tomás Ibarlucía, The dynamical hierarchy for Roelcke precompact Polish groups, Isr. J. Math. 215 (2016), no. 2, 965-1009.

Tomás Ibarlucía, Méthodes de théorie des modèles pour l'étude de groupes topologiques, Ph.D. thesis, Université Claude Bernard - Lyon 1 (France), 2016.

Alexandre A. Kirillov, Representations of the infinite-dimensional unitary group, Dokl. Akad. Nauk SSSR 212 (1973), 288-290.

Alexander S. Kechris, Vladimir G. Pestov, and Stevo Todorcevic, Fraïssé limits, Ramsey theory, and topological dynamics of automorphism groups, Geom. Funct. Anal. 15 (2005), no. 1, 106-189.

Wiesław Kubiś and Sławomir Solecki, A proof of uniqueness of the Gurariĭ space, Isr. J. Math. 195 (2013), no. 1, 449-456.

Wiesław Kubiś, Fraïssé sequences: category-theoretic approach to universal homogeneous structures, Ann. Pure Appl. Logic 165 (2014), no. 11, 1755-1811.

Julien Melleray, Lionel Nguyen Van Thé, and Todor Tsankov, Polish groups with metrizable universal minimal flows, Int. Math. Res. Not. (2016), no. 5, 1285-1307.

Justin Moore, Amenability and Ramsey theory, Fundam. Math. 220 (2013), no. 3, 263-280.

Julien Melleray and Todor Tsankov, Extremely amenable groups via continuous logic, https://arxiv.org/abs/1404.4590, 2011.

Jaroslav Nešetřil, For graphs there are only four types of hereditary Ramsey classes, J. Comb. Theory, Ser. B 46 (1989), no. 2, 127-132.

Jaroslav Nešetřil and Vojtěch Rödl, Partitions of finite relational and set systems, J. Comb. Theory, Ser. A 22 (1977), no. 3, 289-312.

Jaroslav Nešetřil and Vojtěch Rödl, Ramsey classes of set systems, J. Comb. Theory, Ser. A 34 (1983), no. 2, 183-201.

Lionel Nguyen Van Thé, Structural Ramsey theory of metric spaces and topological dynamics of isometry groups, Mem. Am. Math. Soc. 206 (2010), no. 968, x+140 pages.

Lionel Nguyen Van Thé, A survey on structural Ramsey theory and topological dynamics with the Kechris-Pestov-Todorcevic correspondence in mind, Zb. Rad. (Beogr.) 17 (2015), 189-207; volume on Selected topics in combinatorial analysis, updated version available on arXiv.

Lionel Nguyen Van Thé, Glasner's problem for Polish groups with metrizable universal minimal flow, to appear in Ann. Inst. Fourier, 2017.

Elliott Pearl (ed.), Open problems in topology. II, Elsevier, 2007.

Vladimir G. Pestov, On free actions, minimal flows, and a problem by Ellis, Trans. Am. Math. Soc. 350 (1998), no. 10, 4149-4165.

Vladimir G. Pestov, Ramsey-Milman phenomenon, Urysohn metric spaces, and extremely amenable groups, Isr. J. Math. 127 (2002), 317-357.

Vladimir G. Pestov, Dynamics of infinite-dimensional groups. The Ramsey-Dvoretzky-Milman phenomenon, University Lecture Series, vol. 40, American Mathematical Society, 2006.

Michael Pawliuk and Miodrag Sokić, Amenability and unique ergodicity of automorphism groups of countable homogeneous directed graphs, https://arxiv.org/abs/1712.09461, 2016.

Walter Roelcke and Susanne Dierolf, Uniform structures on topological groups and their quotients, McGraw-Hill International Book Co., 1981.

Wolfgang Ruppert, Compact semitopological semigroups: an intrinsic theory, Lecture Notes in Mathematics, vol. 1079, Springer, 1984.

Sławomir Solecki, Abstract approach to finite Ramsey theory and a self-dual Ramsey theorem, Adv. Math. 248 (2013), 1156-1198.

Sławomir Solecki, Recent developments in finite Ramsey theory: foundational aspects and connections with dynamics, Proceedings of the International Congress of Mathematicians (Sun Young Jang, Young Rock Kim, Dae-Woong Lee, and Ikkwon Yie, eds.), vol. 2, 2014, pp. 103-115.

Stevo Todorcevic, Introduction to Ramsey spaces, Annals of Mathematics Studies, vol. 174, Princeton University Press, 2010.

Todor Tsankov, Unitary representations of oligomorphic groups, Geom. Funct. Anal. 22 (2012), no. 2, 528-555.

Todor Tsankov, Automorphism groups and their actions, habilitation memoir, 2014.

Vladimir V. Uspenskij, The Roelcke compactification of groups of homeomorphisms, Topology Appl. 111 (2001), no. 1-2, 195-205.

Vladimir V. Uspenskij, Compactifications of topological groups, Proceedings of the Ninth Prague Topological Symposium (2001), Topology Atlas, 2002, pp. 331-346.

Vladimir V. Uspenskij, On subgroups of minimal topological groups, Topology Appl. 155 (2008), no. 14, 1580-1606.

Andy Zucker, Amenability and unique ergodicity of automorphism groups of Fraïssé structures, Fundam. Math. 226 (2014), no. 1, 41-62.

Andy Zucker, Topological dynamics of automorphism groups, ultrafilter combinatorics, and the generic point problem, Trans. Am. Math. Soc. 368 (2016), no. 9, 6715-6740.
Lionel NGUYEN VAN THÉ
Aix Marseille Univ, CNRS, Centrale Marseille, I2M UMR 7373
Marseille (France)
[email protected]
Surface segregation of conformationally asymmetric polymer blends

Semjon Stepanow and Andrei A. Fedorenko
Fachbereich Physik, Martin-Luther-Universität Halle, D-06099 Halle, Germany

arXiv: cond-mat/0509639 (3 Mar 2006), https://export.arxiv.org/pdf/cond-mat/0509639v2.pdf
DOI: 10.1103/physreve.73.031801
PACS numbers: 61.25.Hq, 68.47.Pe, 83.80.Tc

We have generalized the Edwards method of collective description of dense polymer systems in terms of effective potentials to polymer blends in the presence of a surface. With this method we have studied conformationally asymmetric athermic polymer blends in the presence of a hard wall to first order in the effective potentials. For polymers with the same gyration radius R_g but different statistical segment lengths l_A and l_B, the excess concentration of the stiffer polymer at the surface is derived as δρ_A(z = 0) ∼ (l_B^{-2} − l_A^{-2}) ln(R_g^2/l_c^2), where l_c is a local length below which the incompressibility of the polymer blend is violated. For polymer blends differing only in degrees of polymerization, the shorter polymer enriches the wall.
I. INTRODUCTION
The effects of surfaces on the behavior of polymer melts and blends are of basic importance in their numerous applications, such as adhesion, lubrication, wetting, and catalysis [1]. The structure and properties of blends and other polymeric materials within a few nanometers of a surface can differ significantly from the corresponding properties in the bulk. For example, in a polymer blend segregation of one of the components to the surface is possible even if the blend is miscible in the bulk. Hence the questions of how a surface can change the properties of polymeric materials and how this may be controlled are of practical interest. Despite the large theoretical and experimental interest in the behavior of polymer blends in the presence of surfaces, and the basic understanding achieved, there is no satisfactory analytical treatment of the segregation of polymers at surfaces. Most studies of polymer blends near surfaces are based on phenomenological expressions for a free energy, which include surface terms that account for adsorption or repulsion of a particular type of monomer [2,3]. Minimization of the free energy gives equilibrium concentration profiles for each component. There exist more rigorous approaches, which allow one to derive the concentration profiles starting from the microscopic polymer statistics in the presence of a surface. One such microscopic approach is the integral equation method, which can be applied to various site-site or hard-particle models of a dense polymeric system [4]. This method, while having many advantages such as the ability to predict microscopic correlations between different types of monomers and between monomers and surfaces, requires a considerable amount of numerical computation.
Most analytically treatable methods, which rely on the continuum Gaussian chain model, take the monomer-monomer interactions into account using either the random phase approximation [5], which is best suited to treat systems in the weak segregation limit, or self-consistent-field theories [6,7], which are best suited to treat systems in the strong segregation limit.
Recently a great deal of attention has been paid to surface segregation due to the conformational asymmetry of the molecules of the polymer blend and due to differences in topology [4,8,9,10,11,12,13,14,15,16,17]. It was established that the composition of polymer blends in the vicinity of surfaces can differ from the bulk composition even for neutral surfaces. It was found that for polymer blends composed of polymers with different degrees of polymerization but chemically identical monomers, the shorter polymers are in excess at the wall. It was also demonstrated in simulations [4,11], and supported by calculations using the integral equation theory [18], that stiffer polymers are present in excess in the vicinity of the wall. However, the self-consistent field theory developed in Ref. [13] predicts the opposite effect, i.e., an excess of the more flexible polymers. Unfortunately, no predictions on the behavior of polymer blends of chemically identical polymers with different degrees of polymerization were made in Ref. [13].
In this paper we present an analytical study of the behavior of an athermic polymer blend in the presence of a hard wall using a generalization of Edwards' collective description of dense polymer systems in terms of effective potentials [19,20] to polymer blends in the presence of a neutral surface. The bare one-polymer Green's function G obeys the Dirichlet boundary condition. We show that a partial summation of graphs results in replacing the bare G with the effective one-polymer Green's function G_r, which, as we argue, obeys the reflecting boundary condition. The bare and effective Green's functions are related by the Dyson equation, where the self-energy Σ is defined by a series of graphs. This part of our work is similar to Ref. [13], however with the following significant difference. In the present work the Dyson equation results in an integral equation for G_r, which determines the relevant reference state for describing polymer melts in the presence of a neutral wall. The concentration profiles are due to fluctuations, which are not taken into account in self-consistent field theories. The method we use can be applied in a straightforward way to study the behavior of polymer blends and copolymer melts in the presence of selective surfaces, the dimensions of polymer molecules in the melt, the distribution of polymer ends, etc.
The paper is organized as follows. Section II A outlines the statistics of a single polymer chain in the presence of a hard wall. Section II B introduces the collective description of dense polymer systems. Section II C contains a discussion of the behavior of the effective potentials and of screening effects in the presence of a hard wall. Section II D extends the collective description to the presence of a neutral surface. Section III contains the calculation of the excess monomer concentration of the constituents of an incompressible athermic polymer blend in the vicinity of a hard wall. Section IV contains our conclusions.
II. COLLECTIVE DESCRIPTION OF DENSE POLYMER SYSTEMS
A. Polymer chains in the presence of a hard wall
The Green's function of a free polymer, which is proportional to the relative number of configurations of the ideal chain with the ends fixed at r and r ′ , and gives under appropriate normalization the distribution function of the end-to-end distance, obeys the Schrödinger type differential equation [19,21]
[∂/∂N − a² ∇²_r] G(r, N; r′) = δ(r − r′) δ(N),  (1)
where N is the number of statistical segments, and a 2 = l 2 /6 with l being the statistical segment length of the chain. The distribution function of the end-to-end distance obtained from Eq. (1) reads
G_0(r, N; 0) = (4πa²N)^{−3/2} exp(−r²/(4a²N)).  (2)
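As a quick numerical sanity check of Eq. (2) (a sketch with arbitrary illustrative parameters, not values from the paper), the distribution integrates to unity over all space:

```python
import math

def G0_3d(r, N, a=1.0):
    """Eq. (2): end-to-end distribution of a free Gaussian chain."""
    return (4 * math.pi * a**2 * N) ** -1.5 * math.exp(-r**2 / (4 * a**2 * N))

# normalization: integrate 4*pi*r^2 * G0 over r with a midpoint rule
N_seg, R, n = 50.0, 120.0, 200000          # R >> 2a*sqrt(N_seg), fine grid
h = R / n
norm = sum(4 * math.pi * ((i + 0.5) * h) ** 2 * G0_3d((i + 0.5) * h, N_seg) * h
           for i in range(n))
print(norm)  # ≈ 1.0
```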
In the presence of a hard wall we have to impose an appropriate boundary condition on Eq. (1). For polymers in a dilute solution with a hard wall situated at z = 0 one should use the Dirichlet boundary condition
G(r, N; r′)|_{z=0} = 0,  (3)
where r ≡ {r_∥, z}. The solution of Eq. (1) with the boundary condition (3) is given by

G(r, N; r′) = G_0(r_∥ − r′_∥, N; 0) [G_0(z − z′, N; 0) − G_0(z + z′, N; 0)].  (4)
It was argued in Ref. [22] that for an incompressible polymer melt in the presence of a neutral surface one should impose the reflecting (Neumann) boundary condition
∂_z G(z, N; z′)|_{z=0} = 0  (5)
on the Green's function of single polymer chains. The solution of Eq. (1) with the boundary condition (5) is given by
G(r, N; r′) = G_0(r_∥ − r′_∥, N; 0) [G_0(z − z′, N; 0) + G_0(z + z′, N; 0)].  (6)
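The image constructions (4) and (6) can be checked directly at the wall; the sketch below (z-dependent factors only, arbitrary illustrative values) verifies that the Dirichlet combination vanishes at z = 0 while the Neumann combination has zero slope there:

```python
import math

def G0(z, N, a=1.0):
    """z-part of the free propagator, variance 2 a^2 N."""
    return (4 * math.pi * a**2 * N) ** -0.5 * math.exp(-z**2 / (4 * a**2 * N))

def G_dirichlet(z, zp, N):
    # z-part of Eq. (4): image with a minus sign, absorbing wall at z = 0
    return G0(z - zp, N) - G0(z + zp, N)

def G_neumann(z, zp, N):
    # z-part of Eq. (6): image with a plus sign, reflecting wall at z = 0
    return G0(z - zp, N) + G0(z + zp, N)

N_seg, zp, eps = 3.0, 1.7, 1e-6
assert abs(G_dirichlet(0.0, zp, N_seg)) < 1e-12          # boundary condition (3)
slope = (G_neumann(eps, zp, N_seg) - G_neumann(-eps, zp, N_seg)) / (2 * eps)
assert abs(slope) < 1e-9                                 # boundary condition (5)
print("boundary conditions (3) and (5) verified")
```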
The Laplace transform of the z part of the Green's function with respect to N , which we will need in the following, is given by
G_0(z − z′, p) = (1/(2a√p)) exp(−|z − z′| √p / a).  (7)
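Eq. (7) can be cross-checked by a brute-force numerical Laplace transform of the z-part of Eq. (2); a sketch with arbitrary values z = 1.3, p = 0.8, a = 1:

```python
import math

def G0z(z, N, a=1.0):
    """z-part of the free-chain propagator (variance 2 a^2 N)."""
    return (4 * math.pi * a**2 * N) ** -0.5 * math.exp(-z**2 / (4 * a**2 * N))

def laplace_numeric(z, p, a=1.0, T=80.0, n=400000):
    # midpoint rule for  integral_0^inf  exp(-p N) G0z(z, N) dN
    h = T / n
    return sum(math.exp(-p * (i + 0.5) * h) * G0z(z, (i + 0.5) * h, a) * h
               for i in range(n))

def laplace_exact(z, p, a=1.0):
    """Eq. (7)."""
    return math.exp(-abs(z) * math.sqrt(p) / a) / (2 * a * math.sqrt(p))

err = abs(laplace_numeric(1.3, 0.8) - laplace_exact(1.3, 0.8))
print(err < 1e-4)  # → True
```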
The monomer density of a single polymer chain
n(z, N) = ∫_0^N ds δ(z − z(s))  (8)
can be expressed through the Green's function of the polymer chain as follows
n(z, N) = ∫_0^N ds ∫_0^∞ dz′ ∫_0^∞ dz″ G(z′, z, N − s) G(z, z″, s).  (9)

The straightforward computation using the Green's functions obeying the absorbing and the reflecting boundary condition yields

n(z, N) = N [2 erf(z/2) + z²(1 + erf(z/2)) + (2z/√π) exp(−z²/4) − erf(z) − 2z² erf(z) − (2z/√π) exp(−z²)],  (10)

n(z, N) = N,  (11)

respectively. The distance z in Eq. (10) is measured in units of R_g. In the case of the reflecting boundary condition the monomer density of one chain in the presence of a hard wall does not depend on the distance to the wall. Multiplying n(z, N) in Eqs. (10) and (11) by the number of chains per volume n/V gives the monomer density of a mixture of independent chains. The necessity of changing the boundary condition in the polymer melt will be discussed at the end of Section II D.
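Since ∫_0^∞ dz″ G(z, z″, s) = erf(z/(2a√s)) for the Dirichlet propagator (4), Eq. (9) reduces to a single integral over s. The sketch below (z in units of R_g = a√N) checks the closed form (10) against this integral:

```python
import math

def n_profile_closed(z):
    """Eq. (10): absorbing-wall monomer density per chain, n/N, z in units of R_g."""
    return (2 * math.erf(z / 2) + z**2 * (1 + math.erf(z / 2))
            + (2 * z / math.sqrt(math.pi)) * math.exp(-z**2 / 4)
            - math.erf(z) - 2 * z**2 * math.erf(z)
            - (2 * z / math.sqrt(math.pi)) * math.exp(-z**2))

def n_profile_direct(z, n=20000):
    # Eq. (9) with the Dirichlet Green's function:
    #   n/N = integral_0^1  erf(z/(2 sqrt(t))) erf(z/(2 sqrt(1-t))) dt
    h = 1.0 / n
    return sum(math.erf(z / (2 * math.sqrt((i + 0.5) * h)))
               * math.erf(z / (2 * math.sqrt(1 - (i + 0.5) * h))) * h
               for i in range(n))

for z in (0.3, 1.0, 2.5):
    assert abs(n_profile_closed(z) - n_profile_direct(z)) < 1e-4
print("Eq. (10) reproduces the direct integral")
```

Far from the wall (z ≫ 1) both expressions approach the bulk value n/N = 1, as for the reflecting boundary condition (11).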
B. Collective description of the polymer mixture in bulk
In the analytical approach to the description of dense polymer systems due to Leibler [5] the random phase approximation is used to derive the Ginzburg-Landau type functional of the diblock copolymer melt as a functional of the order parameter. The collective description of concentrated polymer systems due to Edwards [20] gives the physical quantities of interest as series in powers of the effective potentials. These series are closely related to those in the theory of polymer solutions [23], with the main difference that the bare interaction potentials between the monomers are replaced by the effective ones (see below). The diagrammatic way of introducing the collective description in Ref. [24] enables one to go beyond the random phase approximation and establishes the connection between Leibler's and Edwards' approaches.
We will now consider the collective description of the polymer mixture consisting of A and B polymers in terms of effective potentials, following Ref. [24], where this approach was developed for a copolymer melt. The elastic part of the Edwards free energy of n_A chains of type A and n_B chains of type B confined to a volume V is given by

F_el = (3/(2l²)) Σ_{m=1}^{n_A+n_B} ∫_0^N ds (dr_m(s)/ds)²,  (12)
where r_m(s) parametrizes the configuration of the mth polymer as a function of the position s along the chain. The interaction part of the free energy (in units of k_BT) of the blend can be written using the microscopic monomer densities of both polymers
ρ_A(r) = Σ_{m=1}^{n_A} ∫_0^N ds δ(r − r_m(s)),   ρ_B(r) = Σ_{m=n_A+1}^{n_A+n_B} ∫_0^N ds δ(r − r_m(s))  (13)
in the form
F_int = (1/2) ∫ d³r_1 d³r_2 ρ_α(r_1) V_αβ(r_1 − r_2) ρ_β(r_2),  (14)
where
V_αβ(r_1 − r_2) = ( V      V + χ
                    V + χ  V     ) δ^{(3)}(r_1 − r_2),   (α, β = A, B),  (15)

is the matrix of monomer-monomer interactions, and χ is connected to the Flory-Huggins parameter. The summation convention over repeated indices is implied in Eq. (14) and henceforth. Let us now start with the computation of the average concentration of one of the polymers
⟨ρ_α(r)⟩ = ∫ Dr_i(s) ρ_α(r) exp(−F_el − F_int) / ∫ Dr_i(s) exp(−F_el − F_int)  (16)
using the collective description of the polymer blend in terms of effective potentials. The average monomer density can be written after introducing a two-component field Φ α (r) in the equivalent form as follows
⟨ρ_α(r)⟩ = [∫ Dr_i(s) ∫ DΦ(r) Π_{r′,β} δ(Φ_β − ρ_β) Φ_α(r) e^{−F_el − F_int}] / [∫ Dr_i(s) ∫ DΦ(r) Π_{r′,β} δ(Φ_β − ρ_β) e^{−F_el − F_int}].  (17)
The insertion of the Fourier transformation of the infinite product of δ-functions
Π_{r′,β} δ(Φ_β(r′) − ρ_β(r′)) = ∫ DQ(r) e^{iQ·(Φ−ρ)}
into Eq. (17), and interchanging the order of the integrations over the fields Φ(r) and Q(r) with the average over the polymer configurations r_i(s), yields the average over polymer configurations in the form
⟨ρ_α(r)⟩ = [∫ DΦ(r) Φ_α(r) e^{−F_int} ∫ DQ(r) e^{iQ·Φ} ⟨e^{−iQ·ρ}⟩_0] / [∫ DΦ(r) e^{−F_int} ∫ DQ(r) e^{iQ·Φ} ⟨e^{−iQ·ρ}⟩_0],  (18)
where Q·ρ stands for ∫ d³r Q_α(r)ρ_α(r), and the brackets ⟨...⟩_0 denote the average over conformations of ideal polymer chains according to

⟨e^{−iQ·ρ}⟩_0 = ∫ Dr_i(s) e^{−iQ·ρ} e^{−F_el}.  (19)
To perform the average over the polymer configurations we expand the first exponential in expression (19) in a Taylor series. The mean value (19) then decomposes into products of averages over single polymer chains, which have the structure

∫ d³r_1 ... ∫ d³r_k Q_α(r_1)...Q_α(r_k) ⟨ρ_α(r_1)...ρ_α(r_k)⟩,  (20)
where k = 0, 1, ..., and α = A, B. According to Ref. [24] it is convenient to associate expression (20) with a graph containing k wavy legs, which are associated with Q α (r i ). An example of graphs with k = 1, 2, 3 is shown in Fig. 1. The continuous lines are associated with the propagator (2) for a polymer blend in bulk and (4) for a polymer blend in the presence of a hard wall, respectively. Consequently, the series (19) is associated with a product of n A lines for A polymers and n B lines for B polymers containing an arbitrary number of wavy legs in each line. Note that below we will consider the mean value (19) in the thermodynamic limit n A → ∞, n B → ∞, V → ∞ under the condition that the monomer densities computed using the one-polymer Green's function are constant:
⟨ρ_α(r)⟩_0 = N_α n_α/V ≡ ρ_α.  (21)
The corresponding density-density correlator reads
⟨ρ_α(r_2)ρ_α(r_1)⟩_0 = (1/2) ρ_α S_αα(r_1 − r_2).
It is convenient to introduce the diagonal matrix S_αβ such that the Fourier transforms of the diagonal elements, S_αα(k), are the bulk structure factors of the αth component, which are given by
S_αα(k) = ρ_α N_α g(k²R²_{g,α}),  (22)

with g(y) = (2/y²)[exp(−y) + y − 1] being the Debye function.
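Numerically, the Debye function should be evaluated with care at small argument, where exp(−y) + y − 1 suffers from cancellation; a minimal sketch using a short series there:

```python
import math

def debye(y):
    """Debye function g(y) = (2/y^2)(exp(-y) + y - 1) from Eq. (22)."""
    if y < 1e-6:                 # series branch: avoids cancellation at small y
        return 1.0 - y / 3.0 + y**2 / 12.0
    return 2.0 / y**2 * (math.exp(-y) + y - 1.0)

assert abs(debye(1e-9) - 1.0) < 1e-9            # g(0) = 1: S_aa(k -> 0) = rho_a N_a
assert abs(debye(200.0) - 2.0 / 200.0) < 1e-3   # g(y) ~ 2/y for k R_g >> 1
print(debye(1.0))  # ≈ 0.7358
```

The two asserted limits are the ones used below: g → 1 at small wave number and g ≈ 2/y for k R_g ≫ 1, which leads to Eq. (29).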
In order to carry out the integrations over the two-component field Q(r) when treating a blend in the bulk [24], one performs a partial summation of the series (19) (the latter can be carried out only in the thermodynamic limit) by taking into account only the lines with one and two insertions (wavy legs in Fig. 1) in one polymer line. As a result one obtains the expression
exp[iQ·(Φ − ⟨ρ⟩_0) − (1/2) Q·S·Q],  (23)

where ⟨ρ_α⟩_0 is the average monomer density (21). The integration over Q for a polymer blend in the bulk is easily performed in Fourier space and results in

exp[−(1/2) δΦ·S^{−1}·δΦ],  (24)

where δΦ(r) = Φ(r) − ⟨ρ⟩_0. The expression obtained after performing the integrations over Q(r) can be written as exp(−H{δΦ}), with H{δΦ} being the Ginzburg-Landau functional [5,24]. According to Ref. [24], the functional integration over δΦ in Eq. (17) yields the monomer density ⟨ρ_α(r)⟩ as a series which can be associated with Feynman graphs similar to those in the theory of polymer solutions in good solvent (see, for example, Ref. [23]), with the difference that the bare potentials are replaced by the effective ones
V_eff = (V^{−1} + S)^{−1}.  (25)
The lowest-order corrections to the monomer density are depicted in Fig. 2. The external lines in these graphs are associated with the expression
V_ext = (V + S^{−1})^{−1} S^{−1},  (26)

which can be written in the form V^{−1}V_eff. The continuous lines denote the bare bulk one-polymer Green's functions (2).
C. Screened effective potentials in an incompressible polymer blend
We will now consider in more detail the properties of the effective potential (25). We remind the reader that the quantities S, V, and V_eff in Eq. (25) are matrices. The elements of the matrix V_eff are explicitly given by

V^eff_AA(k) = R(k)[−V + 2VχS_BB + χ²S_BB],
V^eff_AB(k) = V^eff_BA(k) = −R(k)[V + χ],  (27)
V^eff_BB(k) = R(k)[−V + 2VχS_AA + χ²S_AA],

where for the sake of simplicity we have introduced the notation

R(k) = [−1 − V S_AA − V S_BB + 2VχS_AA S_BB + χ²S_AA S_BB]^{−1}.  (28)
The behavior of the effective potentials in polymer blends was studied in Refs. [25,26,27]. In the following we will consider an incompressible and athermic polymer blend, which in the formalism under consideration is described in the limit V → ∞ and χ → 0. The effective potentials (27) simplify in this limit to
V^eff_αβ(k) = 1/(S_AA + S_BB).
Using the explicit expression (22) for the structure factor we obtain for large polymer chains

V^eff_αβ(k) = (1/12) k² / (ρ_A/l_A² + ρ_B/l_B²).  (29)
As follows from Eq. (29), the expansion in powers of the effective potentials is in fact an expansion in inverse powers of the density. We will now consider in more detail the properties of the external potentials (26) associated with the external lines in graphs a, b, and c in Fig. 2, which are explicitly given by

V^ext_AA(k) = −R(k)[1 + V S_BB],
V^ext_AB(k) = R(k)[S_AA(V + χ)],
V^ext_BA(k) = R(k)[S_BB(V + χ)],
V^ext_BB(k) = −R(k)[1 + V S_AA].
In the case of athermic polymer blends the following identities hold
V^ext_AA(k) − V^ext_AB(k) = 1,   V^ext_BB(k) − V^ext_BA(k) = 1.  (30)
For incompressible and athermic polymer blends V ext αβ (k) simplify to
V^ext_AA(k) = S_BB/(S_AA + S_BB),   V^ext_AB(k) = −S_AA/(S_AA + S_BB).
For large polymer chains we finally get
V^ext_AA(k) = (ρ_B/l_B²)/(ρ_A/l_A² + ρ_B/l_B²),   V^ext_AB(k) = −(ρ_A/l_A²)/(ρ_A/l_A² + ρ_B/l_B²),

and similarly for V^ext_BA(k) and V^ext_BB(k). Note that in this limit the external lines are independent of the wave number k. Therefore, in real space the external potentials are local and are given by Dirac δ-functions in this limit.
D. Collective description of the polymer mixture in the presence of a hard wall
We will now consider the collective description of a polymer blend in the presence of a hard wall. In contrast to the collective description of the polymer blend in the bulk outlined in Sec. II B, we now have to use, instead of the free propagator given by Eq. (2), a propagator fulfilling an appropriate boundary condition. In a theory based on the statistical-mechanical description of single polymer chains, the boundary conditions should coincide with those of single polymers, i.e., be the Dirichlet boundary condition (3). Since the behavior of a polymer chain in solution and in a polymer melt in the presence of a wall may be quite different, one can expect that the one-polymer Green's functions in solution and in melt may obey different boundary conditions. A consistent statistical-mechanical theory of polymer melts should be able, in principle, to derive the boundary condition for the one-polymer Green's function appropriate for the melt. We will show here that, using a partial summation of graphs, it is possible to reformulate the description of the polymer blend in terms of the effective one-polymer Green's function. We will argue that the latter should obey the reflecting boundary condition.
In order to introduce the collective description in the presence of the wall, we perform the same steps as in the bulk, arrive at the expression (18), and expand further ⟨exp(−iQ·ρ)⟩_0 in a Taylor series as given in Eq. (20). In contrast to the bulk, the continuous lines are associated with bare one-polymer Green's functions obeying the Dirichlet boundary condition (3). The fields Φ_α(r) and Q_α(r) are defined in the whole space, as in the bulk formalism. The mean density obtained by using the absorbing boundary condition is given by Eq. (10) multiplied by the factor n_α/V. The computation of the density-density correlation function ⟨ρ_α(r_1)ρ_α(r_2)⟩_0 (no summation over α) for a polymer blend in the presence of a hard wall gives
⟨ρ_α(r_2)ρ_α(r_1)⟩_0 = (1/2) ρ_α ∫_0^N ds_2 ∫_0^{s_2} ds_1 ⟨δ[z_2 − z(s_2)] δ[z_1 − z(s_1)] δ[r_{2,∥} − r_∥(s_2)] δ[r_{1,∥} − r_∥(s_1)]⟩_0 = (1/2) ρ_α Σ_{k_∥} exp(ik_∥·(r_{2,∥} − r_{1,∥})) S_αα(z_2, z_1, k_∥, N),  (31)
where the Laplace transform of the diagonal element of the structure factor is given by
S_αα(z_2, z_1, k_∥, p) = (2/p²) [G_0(z_2 − z_1, p + x) − G_0(z_2 + z_1, p + x)],  (32)
and where the notation x = k_∥²a² has been introduced. The nondiagonal elements of the matrix S_αβ are zero. Note that mean values of density products are zero if one of the z_i is negative, so that the expression (32) applies only for positive z_1 and z_2.
For a polymer blend in the presence of a wall the translationally invariant part of the structure factor (32) is defined only in the half space, so that the integration over Q, which requires the inversion of S_αβ, is not so straightforward. In this case we separate the density-density correlator into two parts according to
⟨ρ_α(r_2)ρ_α(r_1)⟩_0 = ⟨ρ_α(r_2)ρ_α(r_1)⟩_0b + [⟨ρ_α(r_2)ρ_α(r_1)⟩_0 − ⟨ρ_α(r_2)ρ_α(r_1)⟩_0b] = ⟨ρ_α(r_2)ρ_α(r_1)⟩_0b + ⟨ρ_α(r_2)ρ_α(r_1)⟩_0s,  (33)
and perform a partial summation by taking into account in every line only the first term in Eq. (33). Proceeding in this way we fix the reference state to be that of the bulk far from the wall. The price to pay is that the second term in Eq. (33) has to be taken into account as a vertex with two insertions, which is shown in Fig. 3. The summation over lines with one and two insertions in one line results exactly in the expression given by Eq. (23), with the average density now given by Eq. (10). The terms in the series (19) with more than two fields Q(r) along one line [and two fields corresponding to the second term in Eq. (33)] can be obtained from Eq. (23), and consequently Eq. (24), as derivatives with respect to δΦ(r). To compute the concentration profile according to Eq. (17) one should perform the integration over the field Φ(r). While after the integrations over Q the series (19) depends on δΦ(r) = Φ(r) − ⟨ρ⟩, the interaction part of the free energy (14) has the form F_int = (1/2) Φ·V·Φ. Rewriting the latter in terms of δΦ yields
F_int = (1/2) δΦ·V·δΦ + δΦ·V·⟨ρ⟩,  (34)
where the linear term in δΦ has the same form as interaction with an external field in the formalism of Φ 4 theory. For an incompressible and athermic polymer blend, which we consider in the present work, the linear term vanishes. Similar to the consideration in bulk the expression obtained after performing integrations over Q(r) can be written as exp(−H{δΦ}) with H{δΦ} being a Ginzburg-Landau Hamiltonian including the surface terms. However, in contrast to the effective surface Hamiltonian used in many studies [1] the corresponding terms are not localized at the surface only [28]. The integration over the field Φ(r) can be performed in the same way as for bulk.
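The rewriting behind Eq. (34) is the elementary shift identity (1/2)Φ·V·Φ = (1/2)δΦ·V·δΦ + δΦ·V·⟨ρ⟩ + const, valid for a symmetric interaction matrix V, with the δΦ-independent constant dropped; a finite-dimensional numerical sketch (arbitrary random test data):

```python
import random

random.seed(0)
n = 4
# symmetric interaction matrix V and a constant background <rho>
V = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i, n):
        V[i][j] = V[j][i] = random.uniform(-1, 1)
rho = [random.uniform(0, 2) for _ in range(n)]
phi = [random.uniform(0, 2) for _ in range(n)]
dphi = [phi[i] - rho[i] for i in range(n)]

def quad(x, M, y):
    """Bilinear form x . M . y."""
    return sum(x[i] * M[i][j] * y[j] for i in range(n) for j in range(n))

lhs = 0.5 * quad(phi, V, phi)
# Eq. (34) plus the dropped additive constant (1/2) <rho>.V.<rho>
rhs = 0.5 * quad(dphi, V, dphi) + quad(dphi, V, rho) + 0.5 * quad(rho, V, rho)
assert abs(lhs - rhs) < 1e-12
print("shift identity behind Eq. (34) holds")
```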
The collective description developed above is based on the concept of the effective potential, which takes into account the screening of monomer-monomer interactions in a melt. The effect of the wall, however, is taken into account as in the case of dilute polymers, via the Dirichlet boundary condition for the one-polymer Green's function, and leads to an inhomogeneous monomer density for distances up to the gyration radius. In a melt, on the contrary, the density is expected to be rather homogeneous at distances z < R_g. This is the result of the interplay of the interaction with the wall and the incompressibility of the polymer melt. In a polymer solution (which is a liquid and as such is incompressible) the entropic repulsion from the wall favors the presence of solvent molecules at the wall. In the melt this is no longer the case, because the place of monomers being repelled from the wall will be occupied by monomers belonging to another polymer which at that moment are not, or are less, repelled from the wall. Due to this, the melt density, similar to the total density of the solution, will not tend to zero on approaching the surface. We expect that the effect of the wall on the behavior of the polymer melt can be formulated in terms of a renormalized one-polymer Green's function, which should guarantee the uniformity of the density and, accordingly, should obey a boundary condition different from the Dirichlet one. We now show that, indeed, the partial summation of graphs including insertions into continuous lines enables one to formulate the description of a polymer melt in terms of the effective one-polymer Green's function. We will consider for simplicity the renormalization of the bare one-polymer Green's functions in the expression for the concentration profile

ρ_α(z) = n_α ∫_0^{N_α} ds ∫_0^∞ dz′ G(z′, z, N_α − s) ∫_0^∞ dz″ G(z, z″, s),  (35)

where the bare one-polymer propagator obeys the Dirichlet boundary condition. The graphs b and c in Fig. 2 contribute to the bare Green's functions in (35).

Using property (30) and expressing the external potential V^ext_AA as 1 + V^ext_AB, we can divide this contribution into two parts. The first one is given by graphs b and c, in which the external line is associated with 1 and which renormalizes the one-polymer propagators in (35). The second part, with the external line associated with V^ext_AB, together with the graph a, describes the fluctuation corrections to the concentration profile. The renormalization to first order can be extended to higher orders, with the result that the bare continuous lines are replaced by effective ones associated with the effective one-polymer Green's function. This procedure corresponds to the reduction of the whole set of graphs to the skeleton graphs, i.e., the graphs without insertions into internal lines. The only exceptions are the graphs b and c in Fig. 2, which are due to the recasting of V^ext_AA. The renormalization of one-polymer graphs due to insertions into the internal lines can be represented using the Dyson equation

G_r^{−1} = G^{−1} − Σ,  (36)
where Σ is the self-energy, which takes into account insertions along the chain. Note that G, etc., in Eq. (36) are matrices with respect to spatial coordinates. Examples of graphs contributing to Σ are given in Fig. 4. As a result of the partial summation of graphs taking into account the insertions into internal lines according to Eq. (36) the lowest-order contribution to the density profile (35) changes to
ρ_α(z) = n_α ∫_0^{N_α} ds ∫_0^∞ dz′ G_{r,α}(z′, z, N_α − s) × ∫_0^∞ dz″ G_{r,α}(z, z″, s).  (37)
The fluctuation corrections to Eq. (37) are given by the skeleton graphs in Figs. 2 and 5. As a result of the partial summation the bare one-polymer propagators G are replaced by the effective ones G r,α . Equation (36) with Σ given as an infinite set of graphs is the basis of the self-consistent computation of the effective one-polymer Green's function in the polymer blend under presence of a hard wall. The solution of this equation is a difficult task which goes beyond the scope of the present article.
Fortunately, the form of G_{r,α} in a polymer fluid can be found from general arguments, avoiding the direct solution of Eq. (36). According to the above discussion we expect that the density profile in an incompressible fluid in the presence of a neutral wall will be uniform. On the other hand, the density profile without taking the fluctuations into account is given by Eq. (37). As we have shown in Sec. II A, the computation of the density using the one-polymer Green's function obeying the reflecting boundary condition gives a homogeneous density. Due to this we identify the effective one-polymer propagators G_{r,α} with those obeying the reflecting boundary condition. Deviations from the Silberberg hypothesis in thin polymer films were studied recently in Refs. [29,30,31,32]. Figure 4 shows that the contributions to the self-energy do not reduce to an effective potential, as was assumed in Ref. [13] in the approach based on the self-consistent field theory. While the first graph in Fig. 4 takes into account the monomer-monomer interactions along one polymer, the second graph (and higher-order graphs) take into account monomer-monomer interactions between different polymers. Since the graphs contributing to the self-energy take into account the many-particle interactions characteristic of a melt, we expect that the effective one-polymer Green's function G_{r,α} obeys the boundary conditions appropriate for an incompressible liquid, i.e., the reflecting boundary conditions. This conclusion is supported by the following argument. In polymer melts, similar to semidilute polymer solutions, the relevant quantity governing the properties of the system is the number of monomers between two subsequent cross-links along the chain, which for polymer melts is of order unity, instead of the chain length N. Consequently, in the polymer melt the effect of the wall on the monomers that are close to the wall will be similar to that on solvent molecules in solution.
However, the monomers are classical objects, which are described in the relaxational regime. For a single monomer (or a solvent molecule), whose dynamics is described by the Langevin equation, the steady-state distribution function is given by the Boltzmann distribution exp[−U(z)/k_BT] and is therefore constant in the space between two walls. This makes clear that the Dirichlet boundary condition is irrelevant in dense polymeric systems.
It is well known that a polymer configuration corresponds to the trajectory of a quantum particle at imaginary times. Accordingly, the problem of the boundary condition in polymer melts is expected to have its counterpart in quantum fluids in the presence of a neutral boundary. While the wave functions of single particles obey the Dirichlet boundary condition at the wall, the density of the fluid is not required to be zero at the wall [33].
III. COMPUTATION OF THE EXCESS MONOMER CONCENTRATION
The skeleton graphs in Fig. 2 give the fluctuational part of the density profile. The free end of the external line is associated with the argument z of the monomer density (due to the symmetry along the wall the monomer density does not depend on r_∥). The explicit calculation shows that the one-loop graphs in which the external line is located outside the loop (graphs b and c in Fig. 2) are negligible for large N. The leading contribution is due to the graph a and the related graph, which describes the effect of B polymers on the concentration of A polymers. These graphs are shown in more detail in Fig. 5.
We will now consider the computation of the concentration of, say, the component A in the presence of a hard wall. We will assume that the statistical segment length of polymer A is larger than that of polymer B, l_A > l_B, so that polymer A is stiffer. The contribution to the excess concentration to lowest order in powers of the effective potentials is given by the graphs in Fig. 5. To carry out the calculations it is convenient to consider the Laplace transform with respect to the contour length N. The analytical expression associated with the first graph in Fig. 5 is given by
−(ρ_A V^ext_AA)/(8π³ N_A) ∫ d²q_∥ ∫ dq V^eff_AA(q_∥² + q²) × [q²a² e^{−2z√(p+x)/a} + 2qa√(p+x) sin(qz) e^{−z√(p+x)/a} + (p + x)] / [p²(p + x)(p + x + q²a²)²],  (38)

where p is the Laplace conjugate to N and x = q_∥²a². The analytical expression for the second graph in Fig. 5 is obtained from Eq. (38) using the replacements

V^ext_AA → V^ext_AB,   V^eff_AA → V^eff_BB,   ρ_A → ρ_B,   N_A → N_B.
Note that the factor −1 is due to the fact that V and V_eff appear with a minus sign in the exponential of the statistical weight of polymer configurations. The k² dependence of the effective potentials (29) leads to a divergence of the integrals over the wave vector in Eq. (38) at the upper limit of integration. However, for finite V the effective potentials approach their bare values at large k, so that the integral converges at the upper limit of integration. Therefore, for finite V the effective potentials are screened only for lengths larger than the local length

l_c ≈ V^{−1/2} (ρ_A/l_A² + ρ_B/l_B²)^{−1/2},

which is obtained from the explicit expressions of the effective potentials (27). The derivation shows that this length is the same for both polymers. We expect that for finite V the polymer blend can be considered as incompressible only for lengths larger than l_c. In order to simplify the integration over the wave vector in Eq. (38) we use the athermic and incompressible limit of the effective potentials (29), but restrict the integration to wave vectors smaller than the cutoff value Λ ≃ l_c^{−1}. Inspection of Eq. (38) shows that it (together with the expression associated with the graphs with the external line outside the loop) contains a z-independent contribution to the concentration. The straightforward computation yields the renormalization of the bulk monomer concentration as
ρ̄_A = ρ_A [1 + (1 − 2) (3ρ_B Λ/(4π)) l_A² l_B² (ρ_A l_B² + ρ_B l_A²)^{−2} (1/l_B² − 1/l_A²)].  (39)

The factor 2 in Eq. (39) accounts for graphs similar to the graphs b and c in Fig. 2, but with the external lines on the right side of the interaction line. Note that the mass divergences [23] of the graphs b and c are omitted, which implies the regularization of expression (16) with respect to the mass divergences at the outset. Equation (39) shows that even in the bulk the packing effects change the bare density of the constituents: the concentration of the stiffer polymer becomes smaller. Without incorporating the possibility of local nematic ordering, which is not taken into account in the model of a Gaussian polymer chain, polymers with larger statistical segment length are expected to have smaller density. Note that the renormalization of the bulk composition is local, and the comparison of Eq. (39) with the corresponding expression for ρ̄_B shows that the total density of the blend does not change. Although the renormalization of the bulk composition given by Eq. (39) is somewhat unexpected, its necessity can be explained qualitatively as follows. The density of an incompressible liquid at given T and V is determined by the interactions between the molecules, and cannot be chosen arbitrarily as in gaslike systems. Thus, in the application of the coarse-grained model under consideration to a polymer blend, Eq. (39) describes the renormalization of the bare concentrations towards their concentrations in the polymer melt, which are determined by the monomer-monomer interactions.
The z-dependent part of Eq. (38) gives the excess monomer concentration as a function of the distance to the wall. The integration over the wave vector yields the expression

$$-\frac{V_0}{8a^5p^2}\,\pi z\left[e^{-(2z/a)\sqrt{p}}\left(a-z\sqrt{p}\right)-e^{-(2z/a)\sqrt{p+a^2\Lambda^2}}\left(a-\frac{zp}{\sqrt{p+a^2\Lambda^2}}\right)\right]-\frac{V_0\Lambda}{2a^4\pi^2p^2}\left[\Gamma_0\!\left(\frac{2z}{a}\sqrt{p}\right)-\Gamma_0\!\left(\frac{2z}{a}\sqrt{p+a^2\Lambda^2}\right)\right],\tag{40}$$
where $\Gamma_\alpha(x)=\int_x^\infty dt\,t^{\alpha-1}e^{-t}$ is the incomplete Gamma function, and the notation

$$V_0=\frac{1}{12}\,\rho_A\rho_B N_A l_B^2\,\frac{1}{\left(\rho_A/l_A^2+\rho_B/l_B^2\right)^2}$$
is introduced. To obtain the excess density one should add to expression (40), which is associated with the first graph in Fig. 5, the corresponding expression associated with the second graph in Fig. 5. We first compute the excess concentration of the stiffer (A) polymer at the surface, δρ_A(z = 0). To that end we put z = 0 in Eq. (40), take into account the second graph in Fig. 5, and perform the inverse Laplace transform. For large N we obtain the result
$$\delta\rho_A(z=0)=\frac{3}{4\pi^2}\,\Lambda\,\frac{\rho_A\rho_B\,l_A^2 l_B^2}{(\rho_A l_B^2+\rho_B l_A^2)^2}\left[\frac{1}{l_B^2}\ln\!\left(a_B^2\Lambda^2N_B\right)-\frac{1}{l_A^2}\ln\!\left(a_A^2\Lambda^2N_A\right)\right].\tag{41}$$
If both polymers have the same gyration radius $R_g=a\sqrt{N}$, Eq. (41) simplifies to
$$\delta\rho_A(z=0)=\frac{3}{4\pi^2}\,\Lambda\,\frac{\rho_A\rho_B\,l_A^2 l_B^2}{(\rho_A l_B^2+\rho_B l_A^2)^2}\left(\frac{1}{l_B^2}-\frac{1}{l_A^2}\right)\ln\!\left(\Lambda^2R_g^2\right).\tag{42}$$
The excess concentration at the wall for a polymer blend differing only in the degrees of polymerization is derived from Eq. (41) as

$$\delta\rho_A(z=0)=\frac{3}{4\pi^2}\,\Lambda\,l^2\,\frac{\rho_A\rho_B}{(\rho_A+\rho_B)^2}\ln\frac{N_B}{N_A}.$$

The latter shows that the shorter polymers are present in excess at the wall. Notice that the excess concentration depends logarithmically on the number of segments N. The contribution to the excess concentration at z = 0 associated with graphs b and c in Fig. 2 reads

$$\delta\rho_A(z=0)=-\frac{1}{8(6\pi)^{3/2}}\,\frac{l_A^2\rho_A}{(\rho_A l_B^2+\rho_B l_A^2)^2}\left[\frac{l_B^5\rho_A}{\sqrt{N_A}}\ln\!\left(\frac{4}{9}a_A^2\Lambda^2N_A\right)+\frac{l_A^5\rho_B}{\sqrt{N_B}}\ln\!\left(\frac{4}{9}a_B^2\Lambda^2N_B\right)\right].\tag{43}$$
Due to the factor N −1/2 the latter vanishes for large N . Note that for conformationally asymmetric polymers of the same gyration radius the sign of Eq. (43) is opposite to that of Eq. (42). The increase of δρ A (z = 0) with N agrees qualitatively with the results of numerical simulations and calculations using the integral equation theory [4].
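As a quick numerical check of the sign of Eq. (42), the expression can be evaluated directly. The parameter values below are illustrative assumptions (loosely following the values quoted in the figure captions), not results taken from the paper:

```python
import math

def excess_at_wall_equal_rg(l_a, l_b, rho_a, rho_b, lam, r_g):
    """Evaluate Eq. (42): excess concentration of the A polymer at the
    wall for two polymers with equal gyration radius r_g and
    statistical segment lengths l_a, l_b (cutoff lam = Lambda)."""
    prefactor = 3.0 / (4.0 * math.pi ** 2) * lam
    coupling = (rho_a * rho_b * l_a ** 2 * l_b ** 2
                / (rho_a * l_b ** 2 + rho_b * l_a ** 2) ** 2)
    return (prefactor * coupling
            * (1.0 / l_b ** 2 - 1.0 / l_a ** 2)
            * math.log(lam ** 2 * r_g ** 2))

# illustrative values: A is the stiffer polymer (l_a > l_b), equal composition
delta = excess_at_wall_equal_rg(l_a=1.5, l_b=1.0, rho_a=0.5, rho_b=0.5,
                                lam=1.0 / 1.55, r_g=100.0)
# delta > 0: the stiffer polymer is enriched at the wall, as stated above
```

Swapping the roles of the two polymers (l_a < l_b) flips the sign, consistent with the qualitative picture that the stiffer component is favored at the wall.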
To compute δρ_A(z) for arbitrary z one should perform the inverse Laplace transform of Eq. (40). Since it cannot be performed analytically, we have used a numerical routine (Durbin's method) for the inverse Laplace transform in Mathematica. The results of the numerical calculation of the excess concentration of stiffer polymers δρ_A(z) for different values of the degree of polymerization of the more flexible polymer are shown in Fig. 6. It shows that an increase of N_B results in an increase of the excess concentration of the A polymer. For N_B < N_A the concentration of A polymers is still in excess in the vicinity of the wall, but becomes lower than in the bulk at intermediate distances, i.e., the B polymers are in excess at these distances. These results are in agreement with numerical simulations and computations using the integral equation theory [4]. Figure 7 shows the result of the computation of the excess concentration of the shorter polymers in a polymer blend consisting of chemically identical polymers, which differ only in their degrees of polymerization. Shorter polymers are present in excess in the vicinity of the wall. This finding is in qualitative agreement with the result predicted in Ref. [34] and observed in Refs. [14] and [35,36,37]. The excess of shorter polymers in the case under consideration is compatible with the excess of solvent at the wall in a polymer solution; the latter corresponds to the limit in which the degree of polymerization of the shorter polymers tends to unity. However, to describe this limit one has to take into account the higher-order terms in the perturbation series for the concentration profile.
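The numerical inversion step can also be sketched outside Mathematica. The snippet below uses the Gaver-Stehfest algorithm (a generic stdlib illustration, not the Durbin routine the authors used), demonstrated on a transform with a known inverse:

```python
import math

def stehfest_coefficients(n):
    """Gaver-Stehfest weights V_k for even n."""
    half = n // 2
    v = []
    for k in range(1, n + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * math.factorial(2 * j)
                  / (math.factorial(half - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        v.append((-1) ** (k + half) * s)
    return v

def invert_laplace(F, t, n=14):
    """Approximate f(t) from its Laplace transform F(p):
    f(t) ~ (ln 2 / t) * sum_k V_k F(k ln 2 / t)."""
    ln2_t = math.log(2.0) / t
    v = stehfest_coefficients(n)
    return ln2_t * sum(vk * F((k + 1) * ln2_t) for k, vk in enumerate(v))

# check on F(p) = 1/(p + 1), whose inverse transform is f(t) = exp(-t)
approx = invert_laplace(lambda p: 1.0 / (p + 1.0), t=1.0)
# approx is close to exp(-1) ~ 0.3679
```

Stehfest works well for smooth, non-oscillatory originals such as the monotonic profiles considered here; for oscillatory inverses a contour-based method (Talbot, Durbin) is preferable.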
Note that both cases considered above ($l_A \neq l_B$, $R_{gA} = R_{gB}$ and $l_A = l_B$, $N_A \neq N_B$) follow from the general formula (40).
We now give a qualitative explanation of the different behavior of the polymers in the blend under the influence of a hard wall. A single polymer in a dilute solution obeys the Dirichlet boundary condition. As a consequence of the boundary condition, the number of configurations available to the polymer chain decreases as the distance to the wall decreases. This results in an entropic repulsion of the polymer from the wall, and is responsible for the vanishing of the density at the wall. Accordingly, the solvent molecules are favored in the vicinity of the wall with respect to the polymer monomers. A simple calculation using the distribution function obeying the Dirichlet boundary condition shows that the force acting on the free end of a polymer at a given distance to the wall is controlled by the gyration radius of the polymer, $R_g = a\sqrt{N}$.

A completely different behavior takes place in the case of incompressible polymer melts, where the entropic repulsion from the wall is balanced by the melt pressure, with the consequence that the density is uniform. However, there is a difference in the behavior of the polymers in the vicinity of the wall for a melt composed of different polymers. Consider first a polymer blend composed of polymers which differ only in their degrees of polymerization. In a layer of thickness equal to the gyration radius of the longer polymers, the longer polymer experiences the entropic force from the wall while the shorter polymer does not. The longer polymer therefore moves away from the wall, and the vacated space is occupied by shorter ones, so that the total density remains constant. The asymmetry in the behavior of the polymers in the vicinity of the wall appears even in a polymer melt composed of identical polymers. According to the above argument, the monomers of a polymer coil which has contacts with the wall are disfavored with respect to the ends of polymer coils which do not have contacts with the wall.
Due to this the polymer ends are expected to be present in excess in the vicinity of the wall. The effect of the distribution of polymer ends on the surface tension was studied in Ref. [38]. A quantitative study of the distribution of polymer ends using the self-consistent field theory was performed in Ref. [39].
For polymers with different statistical segment lengths but the same gyration radius, the difference in the behavior in the vicinity of the wall can be explained qualitatively as follows. The monomer density of a polymer coil is given by $\rho_c = N/R_g^3 = a^{-2}/R_g$, while the surface density of a coil is $\rho_s = \rho_c R_g = a^{-2}$. Therefore, the surface density $\rho_s$ of the stiffer polymer is smaller. It is natural to expect that the repulsive effect of the wall on a coil is proportional to $\rho_s$, so the repulsive effect of the wall is stronger for the more flexible polymer. This is the reason that the monomers of stiffer polymers are favored in the vicinity of the wall. The surface enrichment $\delta\rho_A$ is expected to be proportional to the difference of the surface densities, i.e., $\delta\rho_A \sim \rho_s^B - \rho_s^A$, which agrees with our quantitative result (42). According to this qualitative consideration, the difference in surface densities $\rho_s^B - \rho_s^A$ is the driving force of the conformational-asymmetry effect. Since the monomers within a layer of thickness $R_g$ are affected by the wall, we expect the excess concentration to depend on $R_g$. However, the logarithmic dependence on $R_g$ in Eq. (42) is difficult to derive from such hand-waving arguments alone. Note that in the above computation of the excess concentration δρ_A(z) we have taken into account the lowest-order correction in the series in powers of the effective potentials. The effective potentials, according to Eq. (29), are inversely proportional to the density, so the perturbation expansion in powers of the effective potentials is a series in inverse powers of the density. However, since the polymer melt has a fixed density, the inverse density is not a small parameter. The magnitude of the first-order correction can be controlled by considering polymers having the same gyration radius and small differences between l_A and l_B, or polymers with small differences between N_A and N_B for l_A = l_B. However, it is not clear without explicit computations whether the second-order term is smaller than the first-order one under the above conditions. From a general point of view one would expect the following bounds on the total effect of the perturbation series. As already mentioned above, for polymers differing only in their degrees of polymerization the whole perturbation series should, in the limit N_A ≪ N_B, recover the behavior in polymer solutions, where the polymer concentration tends to zero on approaching the surface. For polymers differing in flexibility, the concentration of the stiffer polymer at the wall cannot exceed the total bulk density of the polymer blend; in other words, the concentration of the more flexible polymer cannot be negative. This determines the upper limit of applicability of our results given by Eqs. (41) and (42).
IV. CONCLUSIONS
To summarize, we have generalized Edwards' collective description of dense polymer systems in terms of effective potentials to polymer blends in the presence of a surface. Using this formalism we have studied an incompressible athermic polymer blend of conformationally asymmetric polymers, which differ in their statistical segment lengths, in the presence of a hard wall. We have computed the excess concentrations of the constituents to first order in powers of the effective potentials. We find that stiffer polymers are in excess in the vicinity of the surface, and that the concentration excess at the surface depends logarithmically on the degrees of polymerization. For polymer blends differing only in degrees of polymerization, the shorter polymers are in excess at the wall. Our results are in agreement with the numerical results available in the literature. The present method can be applied in a straightforward way to study the behavior of polymer blends and copolymer melts in the presence of selective surfaces, the dimensions of polymer molecules in the melt, the distribution of polymer ends, etc.
PACS numbers: 61.25.Hq, 68.47.Pe, 83.80.Tc.
FIG. 1: Examples of graphs associated with expression (20).

FIG. 2: Examples of graphs contributing to the monomer concentration: graphs a and b are first order and c is second order in V_eff. After renormalization the continuous line is associated with the effective propagator. Graph d, with only one insertion of V_eff, is identically zero after renormalization of internal lines.

FIG. 3: Vertex with two insertions generated by the second term of Eq. (33).

FIG. 4: The lowest-order graphs contributing to the self-energy. The continuous lines are associated with the effective one-polymer propagator G_{r,α}.

FIG. 5: The Feynman diagrams giving the leading contribution to the excess monomer concentration.

FIG. 6: (Color online) Concentration profile of A polymers as a function of the distance to the surface for different values of N_B, with l_A = 1.5, l_B = 1, Λ^{-1} = 1.55, ρ_A = ρ_B = 0.5. Continuous line: N_A = N_B = 10^4; dashes: N_B = 5×10^4; dots: N_B = 2×10^3. The inset shows the concentration profile in the vicinity of the surface as a function of the distance measured in units of l_A.

FIG. 7: (Color online) Concentration profile of A polymers as a function of the distance to the surface for different values of N_B, with l_A = l_B = 1.5, N_A = 10^4, Λ^{-1} = 1.55, ρ_A = ρ_B = 0.5. Continuous line: N_B = 2×10^4; dashes: N_B = 5×10^4.
Acknowledgments

We would like to thank H. Angerman and A. Johner for useful discussions. Financial support from the Deutsche Forschungsgemeinschaft, SFB 418, is gratefully acknowledged.
[1] K. Binder, Acta Polymerica 46, 204 (1995).
[2] G.H. Fredrickson, Macromolecules 20, 2535 (1987).
[3] K. Binder, in Phase Transitions and Critical Phenomena, edited by C. Domb and J.L. Lebowitz (Academic, New York, 1983).
[4] A. Yethiraj, S.K. Kumar, A. Hariharan, and K.S. Schweizer, J. Chem. Phys. 100, 4691 (1994).
[5] L. Leibler, Macromolecules 13, 1602 (1980).
[6] E. Helfand and Y. Tagami, J. Chem. Phys. 56, 3592 (1972).
[7] K.F. Freed, J. Chem. Phys. 103, 3230 (1995).
[8] M.D. Foster, M. Sikka, N. Singh, F.S. Bates, S.K. Satija, and C.F. Majkrzak, J. Chem. Phys. 96, 8605 (1992).
[9] M. Sikka, N. Singh, A. Karim, and F.S. Bates, Phys. Rev. Lett. 70, 307 (1993).
[10] G.H. Fredrickson and J.P. Donley, J. Chem. Phys. 97, 8941 (1992).
[11] A. Yethiraj, Phys. Rev. Lett. 74, 2018 (1995).
[12] S.K. Kumar, A. Yethiraj, K.S. Schweizer, and F.A.M. Leermakers, J. Chem. Phys. 103, 10332 (1995).
[13] D.T. Wu, G.H. Fredrickson, and J.-P. Carton, J. Chem. Phys. 104, 6387 (1996).
[14] D.G. Walton and A.M. Mayes, Phys. Rev. E 54, 2811 (1996).
[15] J.P. Donley, D.T. Wu, and G.H. Fredrickson, Macromolecules 30, 2167 (1997).
[16] O.N. Tretinnikov and K. Ohta, Langmuir 14, 915 (1998).
[17] S. Tripathi and W.G. Chapman, Phys. Rev. Lett. 94, 087801 (2005).
[18] A. Yethiraj and C.K. Hall, J. Chem. Phys. 95, 3749 (1991).
[19] S.F. Edwards, Proc. Phys. Soc. 88, 265 (1966).
[20] S.F. Edwards, J. Phys. A 8, 1670 (1975).
[21] P.G. de Gennes, Rep. Prog. Phys. 32, 187 (1969).
[22] A. Silberberg, J. Chem. Phys. 48, 2835 (1968).
[23] J. des Cloizeaux and G. Jannink, Polymers in Solution, Their Modeling, and Structure (Oxford University Press, Oxford, 1990).
[24] S. Stepanow, Macromolecules 28, 8233 (1995).
[25] M.G. Brereton and T.A. Vilgis, J. Phys. A 50, 245 (1989).
[26] T.A. Vilgis and R. Borsali, Macromolecules 23, 3172 (1990).
[27] I.Y. Erukhimovich et al., Comput. Theor. Polym. 8, 133 (1998).
[28] A.A. Fedorenko and S. Stepanow (unpublished).
[29] A.N. Semenov and A. Johner, Eur. Phys. J. E 12, 469 (2003).
[30] A. Cavallo, M. Müller, J.P. Wittmer, A. Johner, and K. Binder, J. Phys.: Condens. Matter 17, 1697 (2005).
[31] N. Rehse, C. Wang, M. Hund, M. Geoghegan, R. Magerle, and G. Krausch, Eur. Phys. J. E 4, 69 (2001).
[32] W. Zhao, M.H. Rafailovich, J. Sokolov, L.J. Fetters, R. Plano, M.K. Sanyal, S.K. Sinha, and B.B. Sauer, Phys. Rev. Lett. 70, 1453 (1993).
[33] I.M. Khalatnikov, An Introduction to the Theory of Superfluidity (W. A. Benjamin, New York, 1965).
[34] A. Hariharan, S.K. Kumar, and T.P. Russel, Macromolecules 23, 3584 (1990).
[35] T.F. Schaub, G.J. Kellog, A.M. Mayes, R. Kulasekere, J.F. Ankner, and H. Kaiser, Macromolecules 29, 3982 (1996).
[36] P.P. Hong, F.J. Boerio, and S.D. Smith, Macromolecules 27, 596 (1994).
[37] I. Hopkinson, F.T. Kiff, R.W. Richards, S. Affrossman, M. Hartshorne, R.A. Pethrick, H. Munro, and J.R.P. Webster, Macromolecules 28, 627 (1995).
[38] P.G. de Gennes, C. R. Acad. Sci. 307, Serie II, 1841 (1988).
[39] D.T. Wu, G.H. Fredrickson, J.-P. Carton, A. Ajdari, and L. Leibler, J. Polym. Sci.: Part B 33, 2373 (1995).
Title: Cycle 23 Variation in Solar Flare Productivity
Authors: Hugh Hudson, Lyndsay Fletcher, Jim McTiernan
Journal: Solar Physics (Springer)
Date: 24 January 2014
DOI: 10.1007/s11207-013-0384-7
arXiv: 1401.6474 (https://arxiv.org/pdf/1401.6474v1.pdf)
Corpus ID: 119111425
Keywords: Solar cycle, Flares

Abstract: The NOAA listings of solar flares in cycles 21-24, including the GOES soft X-ray magnitudes, enable a simple determination of the number of flares each flaring active region produces over its lifetime. We have studied this measure of flare productivity over the interval 1975-2012. The annual averages of flare productivity remained approximately constant during cycles 21 and 22, at about two reported M or X flares per region, but then increased significantly in the declining phase of cycle 23 (the years 2004-2005). We have confirmed this by using the independent RHESSI flare catalog to check the NOAA events listings where possible. We note that this measure of solar activity does not correlate with the solar cycle. The anomalous peak in flare productivity immediately preceded the long solar minimum between cycles 23 and 24.
Introduction
The unusual behavior of solar activity during the sunspot minimum of 2008 has excited much interest. In general solar activity exhibits long-term variability, extending to time scales exceeding that of the Hale cycle. Albregtsen and Maltby (1981) found that the umbra/photosphere brightness ratio varied with phase in the solar cycle; more recently Penn and Livingston (2006) have noted a systematic change in umbral magnetic field intensity as well (cf. Watson et al., 2011). The latter discovery spans the maximum of cycle 23, leading up to the unexpectedly extended minimum between cycles 23 and 24 (see, e.g., papers in IAU Symposium No. 286; Mandrini and Webb, 2012). This "extended minimum" had precedents, but not within the modern era (roughly speaking, beginning with the introduction of the F10.7 index by Covington in 1947; see, e.g., Tapping, 1987). Accordingly much new information has surfaced, ranging from variations away from a supposed basal level of total solar irradiance in the minima (Fröhlich, 2011) to an unprecedented level of cosmic-ray flux (Mewaldt et al., 2010). The various observations suggest the existence of heretofore unknown properties of the solar magnetic field and its variation, on both global and local scales.
In this paper we report another effect: the variation of the flare productivity of a given active region. Here we use this term simply to mean the number of flares per active region (cf. Abramenko, 2005), rather than anything to do with a region's magnetic structure. There is an extensive literature identifying flare occurrence with local properties of an active region, including both intrinsic properties such as helicity injection during flux emergence (e.g., Heyvaerts and Priest, 1984; Rust, 1994; Low, 1996) and global properties such as the coronal environment of an active region (e.g., Török and Kliem, 2005; Dalla et al., 2007; Jing et al., 2010). Without prejudging the observational evidence for such processes, we have simply studied the NOAA and RHESSI (Reuven Ramaty High Energy Solar Spectroscopic Imager) databases on flare occurrence as seen in soft X-rays by the GOES (Geostationary Operational Environmental Satellite) detectors; see Wheatland (2001) for background information on this approach. The number of major flares per active region remained approximately constant during cycles 21 and 22, but then exhibited a distinct variation towards the end of cycle 23, as described here. For the initial period of cycle 24, up to the time of writing at the end of 2012, the flare productivity appears to have returned to its prior levels.
NOAA Database
The primary source of information for our assessment of flare productivity is the NOAA "events" database. Since 1991 a version of these data has been directly available via SolarSoft (Freeland and Handy, 1998); the earlier databases used here began in 1975 and were obtained directly from NOAA Web archives. We have adopted the GOES soft X-ray classification as a standard reference and accumulate statistics separately for the C and (M,X) ranges. Note that the soft X-ray photometers on the GOES spacecraft have differed slightly from one to another over the years, but cross-calibration has generally been possible (e.g., White et al., 2005). We return to the issue of database reliability in Section 3. Figure 1 shows the total content of these databases, plotting all C-, M-, and X-class flare positions vs. time. These records extend from 1 September 1975 through 11 December 2012, and (redundantly) contain 82,344 total entries of all GOES classes. We screened these by eliminating redundancy, by taking only events for which the NOAA region number was listed, and by removing some obvious outliers in heliolatitude. This last step was purely cosmetic, since all of the 86 flares thus eliminated were B-class events and our work here is with C-class and above. Finally we eliminated events with heliolongitude outside the range ±75°, in order to minimize the effects of solar tilt at the extreme limb. This resulted in a sample of (20,143, 3,791, and 345) flares of (C, M, and X) class, respectively.
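The screening and per-region bookkeeping described above amount to filtering the event list and grouping by NOAA region number. A minimal sketch (the event records, region numbers, and classes below are made up for illustration, not drawn from the NOAA files):

```python
from collections import Counter

# toy event records: (GOES class string, NOAA region number or None)
events = [
    ("M1.0", 10486), ("X1.2", 10486), ("C3.4", 10486),
    ("M2.0", 10488), ("C1.1", None), ("M5.0", 10486),
]

# screening step: keep only events with a listed region number
located = [(cls, region) for cls, region in events if region is not None]

# count (M,X)-class flares per region, as in the productivity measure
mx_counts = Counter(region for cls, region in located if cls[0] in "MX")

# mean flare productivity over regions with at least one M/X flare
mean_productivity = sum(mx_counts.values()) / len(mx_counts)
# region 10486 -> 3 flares, region 10488 -> 1 flare, mean = 2.0
```

Binning the located events by year before grouping gives the one-year averages plotted in the paper's Figure 2.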
We have looked at the flare productivity by region by simply plotting the number of flares identified with a given flaring region as one-year averages (Figure 2). We plot C- and (M,X)-classes separately because of the systematic undercounting at C-class: the higher background produced by a major flare makes it harder to detect a concurrent minor one, or one that occurs during the gradual decay of such an event. Wheatland (2001) discusses this effect in detail and terms it "obscuration." Our simple distinction does not eliminate the bias completely because it depends on the flare occurrence pattern, which is definitely not random. However the arbitrary break point between GOES C- and M-class flares allows a qualitative distinction in the solar-cycle pattern of flare productivity to emerge. Figure 2 shows that the region productivity of (M,X)-class flares remained near or below two flares per region, but with a significant increase in 2004-2005. Similar increases did not occur at the corresponding phases of cycles 21 or 22. This anomalous behavior is formally significant at above the 3σ level. In the first two annual means of cycle 24, the flare productivity returned to earlier levels. Figure 3 gives a different view of the flare productivity by region, in the form of histograms of productivity values for individual regions. These amplify the results found in Figure 2 by showing the distribution in flare productivity of the most productive regions. They reveal a strong flattening of the distribution of flare productivity in 2004-2005, which confirms the conclusion drawn from the mean flare productivities. Table 1 lists the ten most flare-productive regions in the second half of the cycle 23 maximum, with peak sunspot areas and magnetic classifications on the date of maximum area. "MSH" refers to the peak group sunspot area listed, in millionths of the solar hemisphere, and the magnetic classification refers to the time of this peak area.
As expected from previous studies of flaring patterns (e.g., Gaizauskas, 1982; Zirin and Liggett, 1987; Schrijver, 2007), these regions were all complex and mostly (8/10) classified as having δ configurations. They were not necessarily the regions with the largest areas, though, underscoring the requirement for other factors (e.g., magnetic complexity) in flare productivity. These regions produced 29 of the 30 X-class flares reported during this interval.

Figure 2. Flare productivity for individual active regions, shown on the left for C-class events, and on the right for M and X. The striking systematic variation in the lower-left panel is an artifact due to the obscuration of weaker flares by stronger ones (Wheatland, 2001). Years with fewer than ten regions have been omitted for clarity here.
RHESSI Observations
The databases we have used have been generated outside our control, and probably contain both random and systematic errors. The early databases appear to have had some keyboard-entry errors as well. We have therefore sought to check the recent data against the RHESSI flare catalog (available through SolarSoft or as a text file at http://hesperia.gsfc.nasa.gov/hessidata/dbase/hessi_flare_list.txt). This provides an independent location of each event, done entirely through an automated pipeline reduction that identifies RHESSI flare events and tabulates their properties. The positions are unambiguous and come from direct imaging in higher-energy X-rays than GOES is sensitive to. The hard X-ray sources are physically smaller and thus better defined for this purpose. The RHESSI data began in 2002, and so we can check the annual mean flare productivity in the same manner as for the NOAA database. Note that the original NOAA flare listings derived positions from Hα flare associations. We find, in Figure 5, a confirmation of the NOAA flare-productivity anomaly in cycle 23 by use of this independent database. We note also that the RHESSI flare identifications show an increase in flare productivity at the C-class level, beginning roughly with the October 2003 events and continuing through the extended solar minimum period. We do not think that this effect could be seen in the lower-left panel of Figure 2, both because of the miscounting problem and because of the lower sensitivity of GOES as compared with RHESSI during quiet times. This suggests that a more comprehensive study of the RHESSI hard X-ray flare statistics, including new data from cycle 24, will be interesting.
Conclusions
The NOAA database, as checked by the RHESSI data after 2002, shows longterm variability of the flare productivity. Specifically, active regions in 2004-2005 had flare productivities (as we define them) about twice as large as those at other times. The most flare-productive regions immediately prior to this epoch were significantly less productive in general, with no regions in the first half of cycle 23 producing more than 10 M or X-class flares. This limit is strikingly smaller than that of prior or subsequent intervals. These effects do not appear to depend repeatably on solar-cycle phase, since the 2004-2005 increase was unique in the time interval since 1975. Wheatland (2001) had noted the existence of significant variations in the occurrence rates in individual active regions, and the pattern we have found seems consistent with that. We do not have any speculations as to the origin of this effect, but its timing just prior to the "anomalous" cycle minimum between cycles 23 and 24 suggests a possible relationship. On the plausible idea that flare occurrence results from twists imposed on the solar magnetic field prior to eruption, this effect might provide a clue to the dynamo action in the solar interior. The lack of a solar-cycle dependence in the flare productivity of active regions deserves mention. Except for the surge in 2004-2005, the number of (M, X)-class flares, in regions that produce them, appears to remain in the range 1-3 flares per region at all other times. This is consistent with the idea that the ability of an active region to produce an energetic flare is intrinsic to its own structure, rather than to interactions with other structures.
Databases now being produced, such as that from RHESSI and the "Latest Events" catalog of S. Freeland (http://www.lmsal.com/solarsoft/last events/) offer much superior metadata, and it would be possible to extend the latter to cover global event occurrence by use of positions from the STEREO (Solar Terrestrial Relations Observatories) data (e.g., Kaiser et al., 2008). This would help to clarify the heliographic biases present in event identifications inevitably present in any Earth-based observational material. We suggest revisiting this question in about 2020, when the statistics for cycle 24 will be complete.
Figure 1. The full NOAA database, as extracted from the pre-SolarSoft files (blue) and the SolarSoft database (red), the latter taken as definitive. The vertical lines mark the years 2004-2005.

Figure 3. Histograms of flare productivity, comparing cycles 21, 22, and 23 (left; the orange solid line is cycle 21, the blue dotted line cycle 22, and the red dashed line cycle 23), and two epochs in cycle 23 (right; blue dashed line for data prior to 2004, red solid line for 2004-2005). The histograms have been equalized by normalizing to the number of events in cycle 21 on the left, and to the number of events in the epoch prior to 2004, in cycle 23, on the right.

Figure 4. Histogram of region areas (2004-2005), with vertical lines showing the maximum areas of the most flare-productive regions during the latter half of cycle 23.

Figure 4 compares the areas of the flare-productive regions with the histogram of areas for all region reports in the latter half of cycle 23.

Figure 5. Numbers of flares and mean flare productivities as inferred from the RHESSI flare catalog: left, C-class; right, (M,X)-class. The lower-right panel of this Figure can be compared with the lower-right panel of Figure 2.
Table 1. The most flare-productive active regions in the latter half of cycle 23.

NOAA   Date                Area [MSH]  Hemisphere  Class   M    X
10536  7 January 2004         980      S           βγδ     1    1
10635  20 June 2004           550      S           βγ      1    1
10649  17 July 2004           530      S           βγδ    10    6
10652  22 July 2004          2010      N           βγδ    16    1
10656  11 August 2004        1360      S           βγδ    20    1
10696  5 November 2004        910      N           βγδ    11    2
10720  15 January 2005       1630      N           βδ     15    5
10786  8 July 2005            420      N           βγδ     5    1
10808  13 September 2005     1430      S           βγδ    19    9
10822  18 November 2005       810      S           βγ      5    1
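For reference, the (M, X) counts in Table 1 can be tallied directly (a small sketch; the list below is transcribed from the table, and the helper name is invented):

```python
# Rows transcribed from Table 1: (NOAA number, area [MSH], hemisphere,
# number of M-class flares, number of X-class flares).
REGIONS = [
    (10536,  980, "S",  1, 1), (10635,  550, "S",  1, 1),
    (10649,  530, "S", 10, 6), (10652, 2010, "N", 16, 1),
    (10656, 1360, "S", 20, 1), (10696,  910, "N", 11, 2),
    (10720, 1630, "N", 15, 5), (10786,  420, "N",  5, 1),
    (10808, 1430, "S", 19, 9), (10822,  810, "S",  5, 1),
]

def totals(regions):
    """Total M- and X-class flare counts over the listed regions."""
    return (sum(r[3] for r in regions), sum(r[4] for r in regions))
```

The ten regions together account for 103 M-class and 28 X-class flares.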
[Figure 4 axes: region area [MSH] (abscissa, 10-10000) versus number of region-days 2004-2005 (ordinate, 1-1000).]
SSL, University of California, Berkeley, CA, USA; email: [email protected] 2 School of Physics and Astronomy, University of Glasgow, UK
Acknowledgements Authors Hudson and McTiernan acknowledge support from NASA under Contract NAS5-98033 for RHESSI. Author Fletcher was supported by STFC rolling grant ST/I001808/1 and by the EC-funded FP7 project HESPE (FP7-2010-SPACE-1-263086).
Relationship between magnetic power spectrum and flare productivity in solar active regions. V I Abramenko, 10.1086/431732Astrophys. J. 629Abramenko, V.I.: 2005, Relationship between magnetic power spectrum and flare productivity in solar active regions. Astrophys. J. 629, 1141 -1149. doi:10.1086/431732.
Solar cycle variation of sunspot intensity. F Albregtsen, P Maltby, 10.1007/BF00167551Solar Phys. 71Albregtsen, F., Maltby, P.: 1981, Solar cycle variation of sunspot intensity. Solar Phys. 71, 269 -283. doi:10.1007/BF00167551.
Flare productivity of newly-emerged paired and isolated solar active regions. S Dalla, L Fletcher, N A Walton, 10.1051/0004-6361:20077177Astron. Astrophys. 468Dalla, S., Fletcher, L., Walton, N.A.: 2007, Flare productivity of newly-emerged paired and isolated solar active regions. Astron. Astrophys. 468, 1103 -1108. doi:10.1051/0004-6361:20077177.
Data analysis with the SolarSoft system. S L Freeland, B N Handy, 10.1023/A:1005038224881Solar Phys. 182Freeland, S.L., Handy, B.N.: 1998, Data analysis with the SolarSoft system. Solar Phys. 182, 497 -500. doi:10.1023/A:1005038224881.
Total solar irradiance: What have we learned from the last three cycles and the recent minimum?. C Fröhlich, 10.1007/s11214-011-9780-1Space Sci. Rev. Fröhlich, C.: 2011, Total solar irradiance: What have we learned from the last three cycles and the recent minimum? Space Sci. Rev.. doi:10.1007/s11214-011-9780-1.
The relation of solar flares to the evolution and proper motions of magnetic fields. V Gaizauskas, 10.1016/0273-1177(82)90175-2Adv. Space Res. 2Gaizauskas, V.: 1982, The relation of solar flares to the evolution and proper motions of magnetic fields. Adv. Space Res. 2, 11 -30. doi:10.1016/0273-1177(82)90175-2.
Coronal heating by reconnection in DC current systems -A theory based on Taylor's hypothesis. J Heyvaerts, E R Priest, Astron. Astrophys. 137Heyvaerts, J., Priest, E.R.: 1984, Coronal heating by reconnection in DC current systems -A theory based on Taylor's hypothesis. Astron. Astrophys. 137, 63 -78.
Free magnetic energy and flare productivity of active regions. J Jing, C Tan, Y Yuan, B Wang, T Wiegelmann, Y Xu, H Wang, 10.1088/0004-637X/713/1/440Astrophys. J. 713Jing, J., Tan, C., Yuan, Y., Wang, B., Wiegelmann, T., Xu, Y., Wang, H.: 2010, Free magnetic energy and flare productivity of active regions. Astrophys. J. 713, 440 -449. doi:10.1088/0004-637X/713/1/440.
The STEREO mission: An introduction. M L Kaiser, T A Kucera, J M Davila, St, O C Cyr, M Guhathakurta, E Christian, 10.1007/s11214-007-9277-0Space Sci. Rev. 136Kaiser, M.L., Kucera, T.A., Davila, J.M., St. Cyr, O.C., Guhathakurta, M., Christian, E.: 2008, The STEREO mission: An introduction. Space Sci. Rev. 136, 5 -16. doi:10.1007/s11214-007-9277-0.
. R P Lin, B R Dennis, G J Hurford, D M Smith, A Zehnder, P R Harvey, D W Curtis, D Pankow, P Turin, M Bester, A Csillaghy, M Lewis, N Madden, H F Van Beek, M Appleby, T Raudorf, J Mctiernan, R Ramaty, E Schmahl, R Schwartz, S Krucker, R Abiad, T Quinn, P Berg, M Hashii, R Sterling, R Jackson, R Pratt, R D Campbell, D Malone, D Landis, C P Barrington-Leigh, S Slassi-Sennou, C Cork, D Clark, D Amato, L Orwig, R Boyle, I S Banks, K Shirey, A K Tolbert, D Zarro, F Snow, K Thomsen, R Henneck, A Mchedlishvili, P Ming, M Fivian, J Jordan, R Wanner, J Crubb, J Preble, M Matranga, A Benz, H Hudson, R C Canfield, G D Holman, C Crannell, T Kosugi, A G Emslie, N Vilmer, J C Brown, C Johns-Krull, M Aschwanden, T Metcalf, A Conway, 10.1023/A:1022428818870The Reuven Ramaty High-Energy Solar Spectroscopic Imager (RHESSI). Solar Phys. 210Lin, R.P., Dennis, B.R., Hurford, G.J., Smith, D.M., Zehnder, A., Harvey, P.R., Curtis, D.W., Pankow, D., Turin, P., Bester, M., Csillaghy, A., Lewis, M., Madden, N., van Beek, H.F., Appleby, M., Raudorf, T., McTiernan, J., Ramaty, R., Schmahl, E., Schwartz, R., Krucker, S., Abiad, R., Quinn, T., Berg, P., Hashii, M., Sterling, R., Jackson, R., Pratt, R., Campbell, R.D., Malone, D., Landis, D., Barrington-Leigh, C.P., Slassi-Sennou, S., Cork, C., Clark, D., Amato, D., Orwig, L., Boyle, R., Banks, I.S., Shirey, K., Tol- bert, A.K., Zarro, D., Snow, F., Thomsen, K., Henneck, R., McHedlishvili, A., Ming, P., Fivian, M., Jordan, J., Wanner, R., Crubb, J., Preble, J., Matranga, M., Benz, A., Hudson, H., Canfield, R.C., Holman, G.D., Crannell, C., Kosugi, T., Emslie, A.G., Vilmer, N., Brown, J.C., Johns-Krull, C., Aschwanden, M., Metcalf, T., Conway, A.: 2002, The Reuven Ramaty High-Energy Solar Spectroscopic Imager (RHESSI). Solar Phys. 210, 3 -32. doi:10.1023/A:1022428818870.
Solar activity and the corona. B C Low, 10.1007/BF00146338Solar Phys. 167Low, B.C.: 1996, Solar activity and the corona. Solar Phys. 167, 217 -265. doi:10.1007/BF00146338.
Mandrini, C.H., Webb, D.F. (eds.): 2012, Comparative Magnetic Minima: Characterizing Quiet Times in the Sun and Stars, IAU Symp. 286.
Record-setting cosmic-ray intensities in. R A Mewaldt, A J Davis, K A Lave, R A Leske, E C Stone, M E Wiedenbeck, W R Binns, E R Christian, A C Cummings, G A De Nolfo, M H Israel, A W Labrador, T T Von Rosenvinge, 10.1088/2041-8205/723/1/L1Astrophys. J. Lett. 723Mewaldt, R.A., Davis, A.J., Lave, K.A., Leske, R.A., Stone, E.C., Wiedenbeck, M.E., Binns, W.R., Christian, E.R., Cummings, A.C., de Nolfo, G.A., Israel, M.H., Labrador, A.W., von Rosenvinge, T.T.: 2010, Record-setting cosmic-ray intensities in 2009 and 2010. Astrophys. J. Lett. 723, L1 -L6. doi:10.1088/2041-8205/723/1/L1.
Temporal changes in sunspot umbral magnetic fields and temperatures. M J Penn, W Livingston, 10.1086/508345Astrophys. J. Lett. 649Penn, M.J., Livingston, W.: 2006, Temporal changes in sunspot umbral magnetic fields and temperatures. Astrophys. J. Lett. 649, L45 -L48. doi:10.1086/508345.
Spawning and shedding helical magnetic fields in the solar atmosphere. D M Rust, 10.1029/94GL00003Geophys. Res. Lett. 21Rust, D.M.: 1994, Spawning and shedding helical magnetic fields in the solar atmosphere. Geophys. Res. Lett. 21, 241 -244. doi:10.1029/94GL00003.
A characteristic magnetic field pattern associated with all major solar flares and its use in flare forecasting. C J Schrijver, 10.1086/511857Astrophys. J. Lett. 655Schrijver, C.J.: 2007, A characteristic magnetic field pattern associated with all major solar flares and its use in flare forecasting. Astrophys. J. Lett. 655, L117 -L120. doi:10.1086/511857.
Recent solar radio astronomy at centimeter wavelengths -The temporal variability of the 10.7-cm flux. K F Tapping, 10.1029/JD092iD01p00829J. Geophys. Res. 92Tapping, K.F.: 1987, Recent solar radio astronomy at centimeter wavelengths - The temporal variability of the 10.7-cm flux. J. Geophys. Res. 92, 829 -838. doi:10.1029/JD092iD01p00829.
Confined and ejective eruptions of kink-unstable flux ropes. T Török, B Kliem, 10.1086/462412Astrophys. J. Lett. 630Török, T., Kliem, B.: 2005, Confined and ejective eruptions of kink-unstable flux ropes. Astrophys. J. Lett. 630, L97 -L100. doi:10.1086/462412.
Evolution of sunspot properties during solar cycle 23. F T Watson, L Fletcher, S Marshall, 10.1051/0004-6361/201116655Astron. Astrophys. 53314Watson, F.T., Fletcher, L., Marshall, S.: 2011, Evolution of sunspot properties during solar cycle 23. Astron. Astrophys. 533, A14. doi:10.1051/0004-6361/201116655.
Rates of flaring in individual active regions. M S Wheatland, Solar Phys. 203Wheatland, M.S.: 2001, Rates of flaring in individual active regions. Solar Phys. 203, 87 -106.
Updated expressions for determining temperatures and Emission measures from GOES soft X-ray measurements. S M White, R J Thomas, R A Schwartz, 10.1007/s11207-005-2445-zSolar Phys. 227White, S.M., Thomas, R.J., Schwartz, R.A.: 2005, Updated expressions for determining tem- peratures and Emission measures from GOES soft X-ray measurements. Solar Phys. 227, 231 -248. doi:10.1007/s11207-005-2445-z.
Delta spots and great flares. H Zirin, M A Liggett, 10.1007/BF00147707Solar Phys. 113Zirin, H., Liggett, M.A.: 1987, Delta spots and great flares. Solar Phys. 113, 267 -281. doi:10.1007/BF00147707.
|
[] |
[
"Strong pair correlation in small metallic nanoclusters: the energy spectrum",
"Strong pair correlation in small metallic nanoclusters: the energy spectrum"
] |
[
"Yurii N Ovchinnikov \n) L.D.Landau Institute for Theoretical Physics\nRussian Academy of Sciences\n117334MoscowRussia\n\nMax-Planck Institute for Physics of Complex Systems\nD-01187DresdenGermany\n",
"Vladimir Z Kresin \nLawrence Berkeley Laboratory\nUniversity of California at Berkeley\n94720CA\n"
] |
[
") L.D.Landau Institute for Theoretical Physics\nRussian Academy of Sciences\n117334MoscowRussia",
"Max-Planck Institute for Physics of Complex Systems\nD-01187DresdenGermany",
"Lawrence Berkeley Laboratory\nUniversity of California at Berkeley\n94720CA"
] |
[] |
The electronic shell structure in small metallic nanoclusters leads to high level degeneracy, which is strongly beneficial for the appearance of pair correlation. This results in a high value of T c as well as in the appearance of a superconducting gap which causes a strong modification of the energy spectrum. The electronic energy spectrum becomes strongly temperature dependent. Consequently, specific experiments to demonstrate the presence of pair correlation can be proposed.This paper is concerned with the superconducting state of small metallic nanoclusters (N ≅ 10 2 -10 3 , where N is the number of the delocalized electrons).The appearance of pair correlation in such clusters was described by us in[1]. In this paper we focus on the nanocluster energy spectrum. It will be shown below that the energy gap parameter drastically affects the spectrum and gives it a strong temperature dependence. This fact allows us to propose a specific experiment to detect the presence of pair correlation.
|
10.1140/epjb/e2005-00349-2
|
[
"https://arxiv.org/pdf/cond-mat/0508294v1.pdf"
] | 119,419,964 |
cond-mat/0508294
|
bfb21fa1ee7823b89475e50844629357cb59c1ed
|
Strong pair correlation in small metallic nanoclusters: the energy spectrum
Yurii N Ovchinnikov
) L.D.Landau Institute for Theoretical Physics
Russian Academy of Sciences
117334MoscowRussia
Max-Planck Institute for Physics of Complex Systems
D-01187DresdenGermany
Vladimir Z Kresin
Lawrence Berkeley Laboratory
University of California at Berkeley
94720CA
Strong pair correlation in small metallic nanoclusters: the energy spectrum
1
The electronic shell structure in small metallic nanoclusters leads to high level degeneracy, which is strongly beneficial for the appearance of pair correlation. This results in a high value of T c as well as in the appearance of a superconducting gap which causes a strong modification of the energy spectrum. The electronic energy spectrum becomes strongly temperature dependent. Consequently, specific experiments to demonstrate the presence of pair correlation can be proposed.This paper is concerned with the superconducting state of small metallic nanoclusters (N ≅ 10 2 -10 3 , where N is the number of the delocalized electrons).The appearance of pair correlation in such clusters was described by us in[1]. In this paper we focus on the nanocluster energy spectrum. It will be shown below that the energy gap parameter drastically affects the spectrum and gives it a strong temperature dependence. This fact allows us to propose a specific experiment to detect the presence of pair correlation.
The shell structure of electronic states in clusters, similar to that in nuclei and atoms, was discovered in [2]; see, e.g., the reviews [3,4]. A remarkable feature of many metallic nanoclusters is that their shape, and consequently their energy spectrum, depend strongly on the number of delocalized electrons N. "Magic" clusters, which contain completely filled electronic shells, are characterized by a spherical shape. If the highest occupied shell (HOS) is not completely full, the cluster undergoes a Jahn-Teller distortion, so that its shape becomes ellipsoidal.
Because of the shell structure, the cluster energy spectrum is very different from that expected in a model of equally spaced energy levels. For "magic" clusters this leads to a high degeneracy of the HOS. For example, for nanoparticles with N=168 (in this case the orbital momentum of the HOS is L=7) the degeneracy is g=2(2L+1)=30. Such high degeneracy is favorable for pairing.
The pairing picture is similar to that in nuclei [5], see the review [6]. The importance of shell structure for pairing in nanoclusters was indicated in [7], and especially in [8].
If the shell is slightly incomplete, it is still realistic for it to have a relatively small level splitting, and the impact of pairing remains strong. Remarkably, pairing in nanoclusters leads to the possibility to observe a superconducting state with T C much higher than that in bulk samples. Qualitatively, such high values of T c are due to the high degeneracy of the HOS, i.e., there is a sharp peak in the density of states at the Fermi level; this is similar to the picture introduced in [9].
Pairing is caused by the electron-vibrational interaction. The equation for the pairing order parameter has the form [1]:
$$\Delta(\omega_n)\,Z=\frac{\lambda T}{\nu V}\sum_{\omega_{n'}}\sum_{s}\frac{\tilde\Omega^{2}}{\tilde\Omega^{2}+(\omega_n-\omega_{n'})^{2}}\cdot\frac{\Delta(\omega_{n'})}{\omega_{n'}^{2}+\Delta^{2}(\omega_{n'})+\xi_s^{2}}\qquad(1)$$
Here $\omega_n=(2n+1)\pi T$; we employ the method of thermodynamic Green's functions (see, e.g., [10]); $\tilde\Omega$ is the characteristic vibrational frequency; $\xi_s=E_s-\mu$ is the electronic energy referred to the chemical potential; the index "s" labels different energy levels; V is the cluster volume; $\lambda=\eta\nu$ is the bulk coupling constant [11]; $\nu$ is the density of states; $\eta=\langle I\rangle^{2}/(M\tilde\Omega^{2})$ is the Hopfield parameter, with $\langle I\rangle$ the electron-ion matrix element averaged over the states involved in the pairing; and Z is the renormalization function (we shall not write out the explicit expression for Z).
The values of T c for several clusters, e.g., for In, Nb, Zn, were calculated in our paper [1]. Here we consider T c for Ga and Cd nanoclusters.
Indeed, both types of metallic clusters have been observed to display clear shell structure (see, e.g., [12], [13]). Subsequently, we focus on the gap parameter and its temperature dependence; this problem was not discussed in [1]. The evaluation of the spectrum is interesting for its own sake, and, in addition, will allow us to propose an interesting experiment (see below).
Eq. (1) can be written in the following dimensionless form:
$$\varphi(x_n)=\delta\sum_{n'}K(x_n,x_{n'})\,\varphi(x_{n'});\qquad n,n'>0\qquad(2)$$
Here
$$K(x_n,x_{n'})=\tilde\lambda\sum_s\bigl(f_{+}+f_{-}-4x_n^{2}\,\delta_{nn'}\,f_{+}f_{-}\bigr)\bigl[x_{n'}^{2}+\chi_s^{2}+\varphi^{2}(x_{n'})\bigr]^{-1}$$
$$f_{\pm}=\bigl[1+(x_n\pm x_{n'})^{2}\bigr]^{-1};\qquad\tilde\lambda=\lambda/(2\pi\tilde\Omega\nu V)\qquad(3)$$
$$x_n=\omega_n\tilde\Omega^{-1};\quad\varphi(x_n)=\Delta(x_n)\tilde\Omega^{-1};\quad\delta=2\pi T\tilde\Omega^{-1};\quad\chi_s=\xi_s\tilde\Omega^{-1}$$
Eqs. (1)-(3) are valid for neutral clusters as well as for ions. Note that for neutral clusters $\tilde\lambda$ can be written in the form $\tilde\lambda=\lambda\,\varepsilon_F(2\pi N)^{-1}$, with $\varepsilon_F=E_F\tilde\Omega^{-1}$.
For "magic" clusters Eq. (3) contains a summation over different complete
shells, so that 2 − > g j j ∑ s ∑ ,
where g j is the degeneracy of the j th shell. If the shell is incomplete, the label "s" corresponds to the projection of the angular momentum.
The position of the chemical potential is determined by conservation of the total number of electrons which can be expressed by the relation
$$N=2T\sum_n\sum_s G(\omega_n,s)\,\exp(i\omega_n\tau)\Big|_{\tau\to+0}\qquad(4)$$
where the thermodynamic Green's function G(ω n ,s) is
$$G(\omega_n,s)=-(i\omega_n+\xi_s)\bigl[\omega_n^{2}+\Delta^{2}(\omega_n)+\xi_s^{2}\bigr]^{-1}\qquad(5)$$
At $T=T_c$ one should put $\varphi=0$ in expression (3) (cf. [1]), so that $T_c$ can be calculated from the equation
$$\mathrm{Det}\,\bigl|1-\delta K(x_n,x_{n'})\bigr|=0\qquad(6)$$
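As a rough numerical illustration (not the authors' computation), condition (6) can be read as the largest eigenvalue of the matrix $\delta K$ reaching unity. The toy sketch below assumes a single level at the Fermi energy ($\chi_s=0$), a truncated Matsubara sum, and an arbitrary coupling $\tilde\lambda=0.2$; all function names and parameter values are invented for illustration:

```python
import numpy as np

def delta_K(delta, n_max=64, chi=(0.0,), lam_t=0.2):
    """Matrix delta*K(x_n, x_n') of the linearized gap equation
    (phi -> 0 in Eq. (3)); x_n = (n + 1/2)*delta, since
    delta = 2*pi*T/Omega and omega_n = (2n + 1)*pi*T."""
    chi = np.asarray(chi, dtype=float)
    x = (np.arange(n_max) + 0.5) * delta
    # sum over levels s of [x_{n'}^2 + chi_s^2]^{-1}, at phi = 0
    level_sum = np.array([np.sum(1.0 / (xm**2 + chi**2)) for xm in x])
    K = np.empty((n_max, n_max))
    for i, xn in enumerate(x):
        f_plus = 1.0 / (1.0 + (xn + x) ** 2)
        f_minus = 1.0 / (1.0 + (xn - x) ** 2)
        row = f_plus + f_minus
        row[i] -= 4.0 * xn**2 * f_plus[i] * f_minus[i]  # delta_{nn'} term
        K[i] = lam_t * row * level_sum
    return delta * K

def pairing_strength(delta, **kw):
    """Largest eigenvalue of delta*K; Eq. (6) is satisfied -- i.e.
    T = T_c -- where this eigenvalue crosses 1 as delta is lowered."""
    return float(np.max(np.linalg.eigvals(delta_K(delta, **kw)).real))
```

With this reading, $T_c$ corresponds to the dimensionless temperature $\delta$ at which `pairing_strength(delta)` equals 1, which can be located by bisection: the eigenvalue exceeds 1 at low temperature and falls below 1 at high temperature.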
At $T=0$ K the summation over n in Eq. (2) can be replaced by integration. Eq. (4) can be written in the form
$$N=2\sum_s\Bigl\{\bigl[1+\exp(-E_s/T)\bigr]^{-1}u_s+\bigl[1+\exp(E_s/T)\bigr]^{-1}v_s\Bigr\}\qquad(7)$$
$$u_s,v_s=0.5\,(1\mp\xi_s/E_s);\qquad E_s=\bigl[\xi_s^{2}+(E_s^{0})^{2}\bigr]^{1/2}$$
$E_s^{0}$ is determined by the equation $E_s^{0}=\Delta\bigl(i[\xi_s^{2}+(E_s^{0})^{2}]^{1/2}\bigr)$, or [see Eq. (3)] $\varepsilon_s^{0}=\varphi\bigl[i((\varepsilon_s^{0})^{2}+\chi_s^{2})^{1/2}\bigr]$.
An analysis of Eq. (2) at $T=0$ K allows one to determine the order parameter $\Delta(\omega)$, which enters the expression for the thermodynamic Green's function. As is known, the retarded Green's function, whose poles correspond to the energy spectrum, is the analytical continuation of $G(\omega,s)$. As an example, we consider clusters with N=168. As was mentioned above, this choice is determined by the large value of the angular momentum of the complete shell (L=7), and therefore by the high degeneracy. It is also very important that for this N the energy spacing between the HOS and the lowest unoccupied shell (LUS)
is relatively small. Both of these factors are favorable for pairing.
As an example, consider Ga56 clusters (each Ga atom has 3 valence electrons). Using the method described above, we obtain for the critical temperature the value $T_c\approx140$ K. This greatly exceeds that for the bulk. Clusters with partially unoccupied shells undergo a Jahn-Teller shape distortion which splits the degenerate level. On the other hand, removal of electrons from the HOS strongly affects the position of the chemical potential, and this factor turns out to be favorable for pairing. The best scenario would correspond to nanoclusters with slightly incomplete shells (e.g., with N=166) and small shape deviations from sphericity. In this case one expects weak level splitting. For example, the HOS becomes a set of close levels classified by the projection of their angular momentum m. The picture of splitting is similar to that in atomic nuclei (cf. [14]). To calculate the magnitude of the splitting, one can use the following expression [15]:
$$\delta E_L^{m}=-2E_L^{(0)}A\Bigl\{-1+3(2L+1)^{-1}\Bigl[\frac{(L+1)^{2}-m^{2}}{2L+3}+\frac{L^{2}-m^{2}}{2L-1}\Bigr]\Bigr\}\qquad(8)$$
where A is the deformation parameter. An explicit expression for the deformation parameter can be found [16] by minimizing the total energy $\delta E=\delta E_{\mathrm{el}}+\delta E_{\mathrm{def}}$, where $\delta E_{\mathrm{el}}$ is described by Eq. (8) and $\delta E_{\mathrm{def}}=3A^{2}V(c_{11}-c_{12})$, with $c_{11}$ and $c_{12}$ the elastic constants (see, e.g., [17]), and V the volume. For N=166 the highest occupied states correspond to L=7, $|m|\le7$, and we obtain $A=0.55\,E_H/[V(c_{11}-c_{12})]$.
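Equation (8) is easy to evaluate directly. A small sketch (the function name is invented; A and $E_L^{(0)}$ are left as placeholder inputs since their physical values depend on the cluster):

```python
def level_splitting(L, m, A=1.0, E0=1.0):
    """Jahn-Teller splitting of the |m|-sublevel of a shell with
    angular momentum L, per Eq. (8); A is the deformation parameter
    and E0 stands for E_L^(0) (both set to 1 as placeholders)."""
    bracket = (-1.0 + 3.0 / (2 * L + 1)
               * (((L + 1) ** 2 - m ** 2) / (2 * L + 3)
                  + (L ** 2 - m ** 2) / (2 * L - 1)))
    return -2.0 * E0 * A * bracket
```

Two sanity checks follow from the form of Eq. (8): the splitting depends only on $|m|$, and the shifts summed over $m=-L,\dots,L$ vanish, so the deformation preserves the shell's center of gravity.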
With the use of Eqs. (6)-(8) one can calculate Tc for the case of a slightly incomplete shell. For example, for Cd clusters with N=166 we obtain Tc≈90K.
For analogous Zn clusters we have Tc≈120 K. A detailed calculation will be described elsewhere.
Let us now turn to the evaluation of the gap parameter and the energy spectrum. As was mentioned above, the gap parameter is described by Eq. (2). The solution is determined by the parameters α, β, and µ, which can be calculated by an iterative method with expression (9) as a trial function.
The values of α, β 0 and µ can be obtained by minimization of the quantity <Φ 2 -Φ 1 >/<Φ 1 >; Φ 1 and Φ 2 correspond to the first and second iterations. As a result, we obtain for Zn clusters (N=168) α=6.8×10 -2 , β 0 =0.9. For analogous Ga clusters we obtain α=7×10 -2 , β 0 =0.7.
Note also that in the absence of pair correlation the smallest excitation energy $\Delta E_{\min;0}$ (the HOS-LUS interval) at $T=0$ K is equal to $\Delta E=E_L-E_H$ (L ≡ LUS, H ≡ HOS). The chemical potential is located between the HOS and LUS, and its position is described by the parameter $\tilde\mu$ such that $\mu=E_H+\tilde\mu(E_L-E_H)$. In the absence of pairing $\tilde\mu=0.5$ at $T=0$ K.
Pairing has a strong effect on the spectrum ($\Delta E_{\min;0}\to\Delta E_{\min;p}$; the label "p" stands for pairing). The value of $\Delta E_{\min;p}$ is determined by the relation
$$\Delta E_{\min;p}=U_L+U_H\qquad(10)$$
Here
$$U_L=\bigl[\tilde\Omega^{2}(\varepsilon_L^{0})^{2}+\tilde\mu^{2}(E_L-E_H)^{2}\bigr]^{1/2},\qquad U_H=\bigl[\tilde\Omega^{2}(\varepsilon_H^{0})^{2}+(1-\tilde\mu)^{2}(E_L-E_H)^{2}\bigr]^{1/2}\qquad(10')$$
We obtain $\Delta E_{\min;p}\approx95$ meV, so that $\Delta E_{\min;p}$ is noticeably larger than $\Delta E_{\min;0}$.
A similar influence of pairing on the excitation energy is found for clusters of other metals such as Ga and Cd. For example, for Ga clusters we obtain $\Delta E_{\min;0}\approx103$ meV, $\Delta E_{\min;p}\approx120$ meV. For Cd clusters $\Delta E_{\min;0}\approx74$ meV, $\Delta E_{\min;p}\approx84.5$ meV. Detailed calculations for other metallic nanoclusters will be described elsewhere.
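The increase of the excitation threshold follows directly from Eqs. (10)-(10'), since $U_L\ge\tilde\mu(E_L-E_H)$ and $U_H\ge(1-\tilde\mu)(E_L-E_H)$. A sketch of the computation (the value of $\tilde\Omega$ is not quoted for every metal in the text, so it is an assumed input here, in the same energy units as the level spacing):

```python
from math import hypot

def delta_E_min(eps0_L, eps0_H, mu_tilde, dE0, Omega):
    """Minimum excitation energy with pairing, Eqs. (10)-(10').
    dE0 = E_L - E_H is the HOS-LUS spacing without pairing; Omega is
    the characteristic vibrational energy in the same units as dE0."""
    U_L = hypot(Omega * eps0_L, mu_tilde * dE0)
    U_H = hypot(Omega * eps0_H, (1.0 - mu_tilde) * dE0)
    return U_L + U_H
```

For any positive gap parameters the result strictly exceeds $\Delta E_{\min;0}=E_L-E_H$, and it reduces to $\Delta E_{\min;0}$ when $\varepsilon_L^{0}=\varepsilon_H^{0}=0$.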
The effect of pairing on the spectrum is much stronger for clusters with slightly incomplete shells, such as those, e.g., with N=166. As mentioned above, Eq. (2) can also be used to evaluate the energy gap parameter near $T_c$. In this region $\Phi_n\ll x_n$, so that an additional term $\propto\Phi_n^{2}$ should be kept. The calculation leads to the following Ginzburg-Landau expression for the thermodynamic potential:
$$\Omega_s=a\bigl[-\tau\beta^{2}+(2C)^{-1}\beta^{4}\bigr]\qquad(11)$$
Here $\tau=1-T/T_c$, and hence $\beta^{2}=C\tau$. For example, after long but straightforward calculations one obtains the following parameter values for N=168: a = 0.6 eV, C = 2.7 (for Ga); a = 0.2 eV, C = 1.3 (for Cd); a = 0.76 eV, C = 2.9 (for Zn).
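Minimizing Eq. (11) over β reproduces $\beta^{2}=C\tau$: setting $d\Omega_s/d\beta=0$ gives $\beta(-2\tau+2\beta^{2}/C)=0$, i.e. $\beta^{2}=C\tau$ at the nontrivial minimum. A one-line numerical check, using the quoted Ga parameters as defaults:

```python
def gl_potential(beta, tau, a=0.6, C=2.7):
    """Ginzburg-Landau thermodynamic potential of Eq. (11);
    the Ga parameters a = 0.6 eV, C = 2.7 are used as defaults."""
    return a * (-tau * beta ** 2 + beta ** 4 / (2.0 * C))
```

Evaluating the potential at $\beta=\sqrt{C\tau}$ and at nearby values confirms that this point is a minimum.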
Based on Eq. (11), it is possible to estimate the role of fluctuations (cf. [18,19]). It is worth noting that the large values of $T_c$ and the gap parameter lead to a relatively small coherence length, comparable with the cluster size; the situation is similar to that in the high-$T_c$ cuprates. A straightforward calculation shows that the broadening of the transition is of the order of $\delta T_c/T_c\approx5$-$9\%$. A width of this magnitude noticeably exceeds that of bulk superconductors, but is still relatively small.
The phenomenon of pair correlation discussed here is promising for the creation of high-$T_c$ tunneling networks. It would probably require a special method of growing isolated clusters in a matrix without a strong disturbance of their shapes and spectra (see, e.g., [20]).
Let us discuss the fundamental question of possible manifestations of pair correlations in small nanoclusters and the possibility of their experimental observation. The phenomenon can manifest itself in odd-even effects for cluster spectra and in their magnetic properties. Such an effect has been observed in [21], but for much larger particles (N≈10 4 -10 5 ). The effect described here is caused by shell structure which results in high values of T c .
The following experiment can be proposed. As described above, pairing results in a strong temperature dependence of the excitation spectrum. At $T>T_c$ the minimum excitation energy is given by $\Delta E_{\min;0}=E_L-E_H$. At $T<T_c$, pairing modifies $\Delta E_{\min}$, and at low temperatures, close to $T=0$ K, the excitation energy strongly exceeds that in the region $T>T_c$. This shift is especially dramatic for clusters with slightly unoccupied shells. Such a change in the excitation energy may be observed experimentally and would represent a strong manifestation of pair correlation. Generating beams of isolated metallic clusters at different temperatures (see, e.g., [22]) in combination with mass-spectrometric size selection would allow one to focus on clusters of specific size at various temperatures. A measurement of the energy spectrum, in particular a determination of $\Delta E_{\min}$, for example by the photoelectron spectroscopy technique (see, e.g., [23]), would reveal a strong temperature dependence of the spectrum. For example, for Ga clusters (N=168, $T_c\approx140$ K) one should observe a large difference in $\Delta E_{\min}$ between the low-temperature region near $T=0$ K and $T>T_c\approx140$ K. For Cd clusters with N=166 a large difference should be observed between spectra in the low-temperature region and at $T>T_c\approx90$ K. The use of Ga or Cd nanoclusters for such experiments looks reasonable, because these materials are superconducting and, as mentioned above, the shell structure of their electronic states has been confirmed experimentally. An experiment of this type would be both realistic and informative.
Based on Eqs. (6) and (7), one can calculate the value of $T_c$. Eqs. (2) and (7) allow us to evaluate the gap parameter at $T=0$ K. In addition, based on the general equation (1), one can investigate the temperature dependence of the gap. One can see directly from Eqs. (1)-(3) that the values of these quantities for specific clusters are determined by the following parameters: $\tilde\Omega$, N, λ, $E_F$, $\xi_s$. These are known from experimental measurements or from calculations.
For the bulk, $T_c\approx1.1$ K. The high shell degeneracy is the crucial factor leading to such an increase in $T_c$. We have used the following parameter values: $\tilde\Omega=325$ K, N=168, $\lambda_b\approx0.4$, $E_F=10.4$ eV, $E_H\approx12.6$ eV. For Cd nanoclusters ($\tilde\Omega=209$ K, N=168, $\lambda_b\approx0.38$, $E_F=7.47$ eV, $E_H\approx9$ eV) we obtain $T_c\approx73.5$ K (bulk $T_c^b\approx0.56$ K).
over n replaced by integration over x. The solution can be sought in
$\varepsilon_L^{0}$ and $\varepsilon_H^{0}$ are the gap parameters for the L and H states [see Eqs. (2), (3)]. An analysis based on Eqs. (2), (4) and the values of α, β (see above) leads to the following results for Zn (N=168): $\varepsilon_H^{0}=1.1$; $\varepsilon_L^{0}=1.55$; $\tilde\mu=0.63$. In the absence of pair correlation the square-well model gives $\Delta E_{\min;0}\approx70$ meV. Pairing leads to a noticeable increase in the magnitude of $\Delta E$, in accordance with Eqs. (7) and (8).
For some clusters of this kind one may expect to see only a small deviation from sphericity and consequently a small degree of energy-level splitting. Because the uppermost level of the set formed by the splitting of the HOS is not fully occupied, the absorption edge $\Delta E_{\min;0}$ is not large. For example, an estimate based on Eq. (8) leads to a value of ~7 meV for the Cd cluster (N=166). Here pairing will lead to a drastic change in the spectrum because of the formation of an energy gap. For example, a calculation of the energy gap for such Cd clusters leads to a threshold value of $\Delta E_{\min;p}\approx41$ meV. Such a significant effect of pairing can be detected experimentally (see below).
The authors are very grateful to J. Friedel
. Y Ovchinnikov, V Kresin, Eur. Phys. J. B. 455Y.Ovchinnikov and V.Kresin, Eur. Phys. J. B 45, 5 (2005)
. W Knight, K Clemenger, W Heer, W Saunders, M Chou, M L Cohen, Phys. Rev. Lett. 522141W.Knight, K.Clemenger, W.de Heer, W.Saunders, M.Chou, M. L.Cohen, Phys. Rev. Lett. 52, 2141 (1984).
. W A De Heer, Rev. Mod. Phys. 65611W. A. de Heer, Rev. Mod. Phys. 65, 611 (1993)
V V Kresin, W D Knight, Pair Correlations in Many-Fermion Systems. V.Z.Kresin, Ed. (PlenumNew York245V.V.Kresin and W.D.Knight, in Pair Correlations in Many- Fermion Systems, V.Z.Kresin, Ed. (Plenum, New York, 1998), p.245
. A Bohr, B Mottelson, D Pines, Phys.Rev. 110936A.Bohr, B.Mottelson, D.Pines, Phys.Rev.110,936 (1958);
. S Belyaev, Mat.Fys.Medd. S.Belyaev, Mat.Fys.Medd.
. Dan, Selsk, 31131Dan.Selsk.31, 131 (1959);
. A , Nucl.Phys. 13655A.Migdal, Nucl.Phys. 13, 655(1959)
The Nuclear Many-Body Problem. P Ring, P Schuck, SpringerNew YorkP.Ring, P.Schuck,The Nuclear Many-Body Problem, Springer, New York (1980)
Novel Superconductivity. W Knight, S.Wolf and V.KresinPlenum47New YorkW.Knight, in "Novel Superconductivity",S.Wolf and V.Kresin, Eds., Plenum, New York, 1987, p.47
. J Friedel, J.Phys. 2959J.Friedel, J.Phys. 2, 959 (1992).
. J Labbe, S Barisic, J Friedel, Phys.Rev.Lett. 191039J.Labbe,S.Barisic, and J.Friedel, Phys.Rev.Lett. 19, 1039(1967)
A Abrikosov, L Gor'kov, I Dzyaloshinski, Methods of Quantum Field Theory in Statistical Physics. New YorkDoverA. Abrikosov, L.Gor'kov, and I.Dzyaloshinski, Methods of Quantum Field Theory in Statistical Physics (Dover, New York, 1975)
. W Mcmillan, Phys. Rev. 167331W.McMillan, Phys. Rev. 167, 331 (1968)
. M Pellarin, B Baguenard, C Bordas, M Broyer, J Lerme, J Vialle, Phys.Rev. 4817645M.Pellarin, B.Baguenard, C.Bordas, M.Broyer, J.Lerme, J.Vialle, Phys.Rev.B48,17645 (1993);
. B Baguenard, M Pellarin, C Bordas, J Lerme, J Vialle, M Broyer, Chem. Phys. Letters. 20513B.Baguenard, M.Pellarin, C.Bordas, J.Lerme, J.Vialle, M.Broyer, Chem. Phys. Letters 205, 13 (1993)
. I Katakuse, T Ichihara, Y Fujita, T Matsuo, T Sakurai, H Matsuda, Int.J. of Mass Spectrometry and Ion Processes. 69109I.Katakuse, T.Ichihara, Y.Fujita, T.Matsuo, T.Sakurai,H.Matsuda, Int.J. of Mass Spectrometry and Ion Processes 69,109 (1986);
. M Ruppel, K Rademann, Chem. Phys. Letters. 197280M.Ruppel and K.Rademann, Chem. Phys. Letters 197,280(1992)
A , Qualitative Methods in Quantum Theory. New YorkPergamon PressA.Migdal, Qualitative Methods in Quantum Theory, Pergamon Press, New York (2000)
L Landau, E Lifshitz, Quantum Mechanics. Ch.VI ,Pergamon, New YorkL.Landau and E.Lifshitz, Quantum Mechanics, Ch.VI ,Pergamon, New York (1988)
. Y Ovchinnikov, V V Kresin, preprintY.Ovchinnikov and V.V.Kresin (preprint)
Single Crystal Elastic Constants and Calculated Aggregate Properties. G Simmons, H Wang, MIT PressCambridgeG.Simmons and H.Wang, Single Crystal Elastic Constants and Calculated Aggregate Properties, MIT Press, Cambridge (1971)
M Tinkham, Introduction to Superconductivity. McGraw-Hill, New YorkM.Tinkham, Introduction to Superconductivity, McGraw-Hill, New York (1996)
A Larkin, A Varlamov, Theory of Fluctuations in Superconductors. New YorkOxford Univ. PressA.Larkin and A.Varlamov, Theory of Fluctuations in Superconductors, Oxford Univ. Press, New York (2004)
. L Adams, B Lang, A Goldman, cond-mat/0502559L.Adams, B.Lang, A.Goldman, cond-mat/0502559
. M Tinkham, J Hergenrother, J Lu, Phys. Rev. B. 5112649M.Tinkham, J.Hergenrother, J. Lu, Phys. Rev. B 51, 12649 (1995)
. R Moro, X Xu, S Yin, W De Heer, Science. 3001265R.Moro, X.Xu, S.Yin, W.de Heer, Science,300,1265 (2003)
. G Wrigge, M Hoffman, B V Issendorff, Eur.Phys.J. D. 2423G.Wrigge, M. Astruk Hoffman, B.v. Issendorff, Eur.Phys.J. D 24,23(2003)
|
[] |
[
"Enhanced Nearest Neighbor Classification for Crowdsourcing",
"Enhanced Nearest Neighbor Classification for Crowdsourcing"
] |
[
"Jiexin Duan .email:[email protected]. \nDepartment of Statistics\nPurdue University\n\n",
"Xingye Qiao [email protected] \nDepartment of Mathematical Sciences\nBinghamton University\n\n",
"Guang Cheng [email protected]. \nDepartment of Statistics\nUCLA\nBinghamton University\nState\n\nDepartment of Statistics\nUniversity of New York\n13902BinghamtonNY\n\nUniversity of California\n90095Los AngelesCA\n",
") "
] |
[
"Department of Statistics\nPurdue University\n",
"Department of Mathematical Sciences\nBinghamton University\n",
"Department of Statistics\nUCLA\nBinghamton University\nState",
"Department of Statistics\nUniversity of New York\n13902BinghamtonNY",
"University of California\n90095Los AngelesCA"
] |
[] |
In machine learning, crowdsourcing is an economical way to label a large amount of data. However, the noise in the produced labels may deteriorate the accuracy of any classification method applied to the labelled data. We propose an enhanced nearest neighbor classifier (ENN) to overcome this issue. Two algorithms are developed to estimate the worker quality (which is often unknown in practice): one is to construct the estimate based on the denoised worker labels by applying the kNN classifier to the expert data; the other is an iterative algorithm that works even without access to the expert data. Other than strong numerical evidence, our proposed methods are proven to achieve the same regret as its oracle version based on high-quality expert data. As a technical by-product, a lower bound on the sample size assigned to each worker to reach the optimal convergence rate of regret is derived.
| null |
[
"https://arxiv.org/pdf/2203.00781v1.pdf"
] | 247,218,497 |
2203.00781
|
1b26bf6cabcc7962bfa884f51c6c90dbe87ca803
|
Enhanced Nearest Neighbor Classification for Crowdsourcing
26 Feb 2022
Jiexin Duan .email:[email protected].
Department of Statistics
Purdue University
Xingye Qiao [email protected]
Department of Mathematical Sciences
Binghamton University
Guang Cheng [email protected].
Department of Statistics
UCLA
Binghamton University
State
Department of Statistics
University of New York
13902BinghamtonNY
University of California
90095Los AngelesCA
)
Enhanced Nearest Neighbor Classification for Crowdsourcing
26 Feb 2022. Keywords: crowdsourcing, nearest neighbor classification, regret analysis, worker quality. *Senior Financial Modeler, Moody's Analytics, Inc., Newark, CA 94560
In machine learning, crowdsourcing is an economical way to label a large amount of data. However, the noise in the produced labels may deteriorate the accuracy of any classification method applied to the labelled data. We propose an enhanced nearest neighbor classifier (ENN) to overcome this issue. Two algorithms are developed to estimate the worker quality (which is often unknown in practice): one is to construct the estimate based on the denoised worker labels by applying the kNN classifier to the expert data; the other is an iterative algorithm that works even without access to the expert data. Other than strong numerical evidence, our proposed methods are proven to achieve the same regret as its oracle version based on high-quality expert data. As a technical by-product, a lower bound on the sample size assigned to each worker to reach the optimal convergence rate of regret is derived.
Introduction
In light of the need for large amounts of labeled data as training sets, machine learning researchers pay increasing attention to crowdsourcing services such as the Amazon Mechanical Turk (AMT). In crowdsourcing, many independent and relatively inexpensive workers produce labels that collectively determine a solution by aggregating these crowd opinions. Ideally, the ground truth labels are inferred from these noisy labels. In the literature, many methods (Dawid and Skene, 1979; Raykar et al., 2009; Whitehill et al., 2009) build probabilistic models for the crowdsourcing process and then derive the labels using Expectation-Maximization-type algorithms (Dempster et al., 1977). Recently, classifiers that predict the labels for future observations directly from the crowdsourcing data have also been proposed (Dekel and Shamir, 2009; Wauthier and Jordan, 2011; Kajino et al., 2012a).
A commonly recognized challenge to classification using crowdsourcing data is the low quality of the workers (Sheng et al., 2008;Wauthier and Jordan, 2011). Previous proposals heavily depend on prior knowledge of the ground truth distribution (Raykar et al., 2010;Yan et al., 2010). Additionally, many methods (Kajino et al., 2012b;Wang and Zhou, 2015) require the availability of the so-called expert data, whose labels are generated by ground truth distribution. To overcome the issue of low-quality workers, in this article, we propose a nonparametric classification method based on crowdsourcing data that requires neither the expert data nor prior knowledge of the ground truth distribution.
The nearest neighbor (NN) classifier (Fix and Hodges Jr, 1951; Cover and Hart, 1967) is among the conceptually simplest and most prevalent classification methods. Its statistical properties have been studied in Devroye et al. (1994); Samworth (2012); Chaudhuri and Dasgupta (2014). To the best of our knowledge, there is no theoretical study on how NN classifiers work with crowdsourcing data.
We propose a new NN classifier for crowdsourcing data that overcomes the noise in the low-quality worker labels. Our major contribution is the investigation of a type of crowdsourcing method where the worker label data are first enhanced (hence dubbed "ENN") and then a test data prediction is made through a weighting scheme that aggregates the enhanced labels. This concise enhancement effort can substantially reduce the noise in worker labels, and it has the potential to generalize to methods other than the NN classifier.
As the second contribution, we derive an asymptotic expansion of the regret of the ENN classifier. This technical result is a nontrivial extension of Samworth (2012). Specifically, we enhance the noisy worker data with different quality and sizes, which leads to remainder terms that must be bounded in a nontrivial way. With carefully chosen weights, the regret of ENN achieves the same optimal regret as the "oracle" optimal weighted nearest neighbor (OWNN) classifier trained on expert data (Samworth, 2012), in terms of both the rate of convergence and the multiplicative constant. Here, we define an "oracle" classifier as the classifier trained on an expert data set of the same sample size. Cannings et al. (2020) analyzed a special case with only one worker sample, and they assume that the Bayes classifier given noisy labels predicts as well as its ground truth version. This unrealistic assumption is not required in our analysis because of the use of the enhancing technique.
Our proposed ENN requires quantifying the worker quality, which is often unknown in practice. Our third contribution is the development of two estimators for the worker quality. One method (ENN2) constructs the estimators based on the denoised worker labels through applying kNN classifier to the expert data. Unlike previous worker quality estimation methods, which had no statistical guarantee, ENN2 is proven to achieve the same regret as ENN with known worker quality. The other method (ENN3) uses ENN to estimate the worker quality in an iterative manner, and works well even without access to the expert data.
In summary, we have made the following contributions: (1) A denoising enhancement to the worker data labels, which can be easily extended to other classifiers.
(2) A solid theoretical study of the statistical guarantee for the crowdsourcing data classification.
(3) Repetition of instances is not required, lowering the cost of label collection.
(4) Expert data is not required for ENN and ENN3, which is more practical for crowdsourcing data.
The rest of this article is organized as follows. Section 2 introduces the setting and notations. The asymptotic expansion form for the regret is presented in Section 3, followed by some comparisons between ENN and the oracle WNN. Section 4 focuses on the estimation of worker quality. Section 5 and Section 6 include numerical experiments and some concluding discussions.
Preliminaries
Consider s workers and n instances in the crowdsourcing problem. Let J_j ⊆ {1, . . . , n} be the index set of instances that the j-th worker has labeled, and n_j = |J_j| the number of instances the j-th worker has labeled. In total, the crowdsourcing data has N = \sum_{j=1}^{s} n_j observations. P^j, defined on R^d × {0, 1}, represents the joint distribution of the labeled data from the j-th worker, and P^0 denotes the ground truth distribution. We observe data from s workers,
$$\mathcal{D}^C = \bigcup_{j=1}^{s} \mathcal{D}^j, \qquad \mathcal{D}^j = \{(X_i^j, Y_i^j)\}_{i \in J_j} \overset{iid}{\sim} P^j.$$
Here Y_i^j is the label tagged by the j-th worker to the i-th instance. Denote the probability that an instance is labeled as class r by worker j as π_r^j := P^j(Y = r), and the conditional distribution of X^j given Y^j = r as \bar{P}_r^j for r = 0, 1. Hence, the marginal distribution of X by worker j is
$$\bar{P}^j = \pi_1^j \bar{P}_1^j + (1 - \pi_1^j)\bar{P}_0^j.$$
As the instances are randomly assigned to workers, we assume all worker data and the ground truth data share the same marginal distribution, i.e., \bar{P}^j = \bar{P} for j = 0, . . . , s. Given x, the probability that worker j would label it as class 1 (i.e., the regression function) is defined as
$$\eta^j(x) = P^j(Y = 1 \mid X = x), \qquad j \in \{0, . . . , s\}.$$
To model the labeling process, we assume the well-known two-coin model (Raykar et al., 2010; Kajino et al., 2012b). The sensitivity and the specificity of worker j are defined as a_j = P^j(Y^j = 1 | Y^0 = 1) and b_j = P^j(Y^j = 0 | Y^0 = 0), respectively. Therefore, the j-th worker's regression function and the ground truth regression function are related by
$$\eta^j(x) = a_j \eta^0(x) + (1 - b_j)(1 - \eta^0(x)). \qquad (1)$$
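To make the two-coin model concrete, here is a minimal simulation sketch (the function and variable names are ours, not from the paper): each true positive is kept with probability a_j and each true negative with probability b_j.

```python
import numpy as np

def two_coin_labels(y_true, a_j, b_j, rng):
    """Simulate one worker's labels under the two-coin model:
    P(Y^j = 1 | Y^0 = 1) = a_j (sensitivity),
    P(Y^j = 0 | Y^0 = 0) = b_j (specificity)."""
    u = rng.random(len(y_true))
    flip_pos = (y_true == 1) & (u > a_j)   # positives mislabelled w.p. 1 - a_j
    flip_neg = (y_true == 0) & (u > b_j)   # negatives mislabelled w.p. 1 - b_j
    return np.where(flip_pos | flip_neg, 1 - y_true, y_true)

rng = np.random.default_rng(0)
y0 = rng.integers(0, 2, size=100000)                 # ground truth labels
yj = two_coin_labels(y0, a_j=0.9, b_j=0.8, rng=rng)  # one worker's noisy labels
sens = (yj[y0 == 1] == 1).mean()   # empirical sensitivity, close to 0.9
spec = (yj[y0 == 0] == 0).mean()   # empirical specificity, close to 0.8
```

With a large sample the empirical sensitivity and specificity concentrate around a_j and b_j.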
A worker who always gives labels based on η^0(x) (i.e., a_j = b_j = 1) is called an expert. Our goal is to design classifiers φ: R^d → {0, 1}, based on crowdsourced data, that minimize the classification risk R(φ) = P^0(φ(X) ≠ Y) under the ground truth distribution P^0. The theoretical minimizer of R(φ) is the so-called Bayes classifier φ*(x) = 1{η^0(x) ≥ 1/2}, with the corresponding Bayes risk R(φ*). For any classifier \hat{φ}_n := Ψ(D) obtained by following a classification procedure Ψ given the data D, its regret is defined as
$$\mathrm{Regret}(\Psi) = E_{\mathcal{D}}[R(\hat{\phi}_n)] - R(\phi^*),$$
where E_D is taken with respect to the distribution of the data D.
We now introduce a general weighted nearest neighbor (WNN) classifier. For a query point x, let (X_{(1)}, Y_{(1)}), (X_{(2)}, Y_{(2)}), . . . , (X_{(n)}, Y_{(n)}) be the sequence of observations in ascending order of distance to x, and denote by w_{ni} the (non-negative) weight assigned to the i-th neighbor of x, with \sum_{i=1}^{n} w_{ni} = 1. Define the WNN estimate of η^0(x) as
$$S_{n,w_n}(x) := \sum_{i=1}^{n} w_{ni} Y_{(i)},$$
so that the WNN prediction is \hat{φ}_{n,w_n}(x) = 1{S_{n,w_n}(x) ≥ 1/2}, where w_n denotes the weight vector. When w_{ni} = k^{-1} for 1 ≤ i ≤ k, and 0 for i > k, WNN reduces to the standard kNN classifier, denoted \hat{φ}_{n,k}(x). Denote the WNN classifier on the expert data of size N and on the crowdsourcing data D^C of the same size by \hat{φ}^0_{N,w_N}(x) and \hat{φ}^C_{N,w_N}(x), respectively. Proposition 1 in Samworth (2012) provides an asymptotic expansion of the WNN regret on the expert data.
Proposition 1. (Asymptotic Regret for WNN) Assume (A1)-(A4) stated in Appendix S.I. For each β ∈ (0, 1/2), we have, uniformly for w_N ∈ W_{N,β},
$$\mathrm{Regret}(\hat{\phi}^0_{N,w_N}) = \left\{ B_1 \sum_{i=1}^{N} w_{Ni}^2 + B_2 \left( \sum_{i=1}^{N} \frac{\alpha_i w_{Ni}}{N^{2/d}} \right)^2 \right\} \{1 + o(1)\}, \qquad (2)$$
as N → ∞, where α_i = i^{1+2/d} − (i − 1)^{1+2/d}. The constants B_1, B_2 and the set W_{N,β} are defined in Appendix S.II.
Enhanced crowdsourcing classification
In this section, we propose an enhanced version of the nearest neighbor classifier (ENN) and further prove that ENN and its oracle counterpart share the same asymptotic regret, provided the weight in each worker is carefully chosen. After a transformation of (1), we have
$$\eta^j(x) - \frac{a_j - b_j}{2} - \frac{1}{2} = (a_j + b_j - 1)\left(\eta^0(x) - \frac{1}{2}\right). \qquad (3)$$
Equation (3) shows that the worker data and the ground truth distribution can have different decision boundaries (the set of x with η^0(x) or η^j(x) = 1/2). This assumption is weaker than those in Cai and Wei (2019), which require the same decision boundaries. For example, for points on the ground truth decision boundary η^0(x) = 1/2, we have η^j(x) = (a_j − b_j)/2 + 1/2. This means there exists a bias (a_j − b_j)/2 in the j-th worker data when her sensitivity and specificity differ. When a_j > b_j, worker j would, with a higher probability, label the instance as class 1 rather than class 0. In addition, the deviation from the decision boundary, η^0(x) − 1/2, is scaled by a multiplicative factor a_j + b_j − 1, which is always smaller than 1 for non-expert data, so instance x is more difficult to classify when a_j + b_j < 2, since it is effectively closer to the decision boundary. Therefore, it is necessary to enhance the labels for better performance. Motivated by another transformation of (1),
$$\tilde{\eta}^j(x) := \frac{\eta^j(x) + b_j - 1}{a_j + b_j - 1} = \eta^0(x),$$
we can derive the enhanced labels adjusted by the worker quality. We propose to enhance the labels according to (4) in Algorithm 1, which removes the noise due to worker quality. The main idea of ENN in Algorithm 1 is straightforward: (1) the enhanced labels are derived to account for the noise in the worker data;
(2) a local WNN regression estimator is obtained based on the data for each worker with enhanced labels;
(3) the final classifier is an outcome of the weighted voting over the s local WNN predictions.
Remark 1. In Algorithm 1, if the worker quality is unknown, we can estimate it by Algorithm 2 or Algorithm 3 to be stated later. Note that S E n j ,wn j (x) may be negative in a worker dataset with small size under an extreme marginal distribution of X. However, its negative value does not affect its contribution in (5) for decision making.
Our first main result, Theorem 1, gives an asymptotic expansion for the regret of ENN. Note that neither the variance nor the bias term depends on the worker quality a_j and b_j.
Theorem 1. (Asymptotic Regret for ENN) Assume the same conditions as in Proposition 1. We have, uniformly for w_{n_j} ∈ W_{n_j,β}, for β ∈ (0, 1/2) and a_j + b_j > 1, as n_j → ∞,
$$\mathrm{Regret}(\hat{\phi}^E_{n_j,s,w_{n_j}}) = \left\{ B_1 \sum_{j=1}^{s} \left( \frac{n_j}{N} \right)^2 \sum_{i=1}^{n_j} w_{j,i}^2 + B_2 \left( \sum_{j=1}^{s} \frac{n_j}{N} \sum_{i=1}^{n_j} \frac{\alpha_i w_{j,i}}{n_j^{2/d}} \right)^2 \right\} \{1 + o(1)\}. \qquad (6)$$
Remark 2. The condition a_j + b_j > 1 means a worker gives labels with more than 50% correctness on average. Otherwise, we consider this worker an adversary who should be dropped.
Algorithm 1 Enhanced Nearest Neighbor with crowdsourced data (ENN)
Input: Crowdsourced data {D^j}_{j=1}^s, weight vectors w_{j,i}, worker sensitivity a_j and specificity b_j, and query x. Output: ENN prediction.
1: for j = 1 to s do
2: Enhance the labels of the j-th worker data:
$$\tilde{Y}^j_{(i)} = \frac{Y^j_{(i)} + b_j - 1}{a_j + b_j - 1}. \qquad (4)$$
3: Compute the local WNN estimator S^E_{n_j,w_{n_j}}(x) = \sum_{i=1}^{n_j} w_{j,i} \tilde{Y}^j_{(i)}.
4: end for
5: Weighted voting of local WNN estimators:
$$\hat{\phi}^E_{n_j,s,w_j}(x) = 1\left\{ \sum_{j=1}^{s} W_j S^E_{n_j,w_j}(x) \ge 1/2 \right\}, \qquad (5)$$
where the worker weight is W_j = n_j/N.
6: return \hat{\phi}^E_{n_j,s,w_{n_j}}(x).
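The steps of Algorithm 1 can be sketched in code as follows (all names are ours; worker_data, quality and weights are assumed to be parallel lists, and a_j + b_j > 1 is assumed for every worker):

```python
import numpy as np

def enn_predict(worker_data, quality, weights, x):
    """Sketch of ENN (Algorithm 1). worker_data: list of (X_j, Y_j) arrays;
    quality: list of (a_j, b_j); weights: list of per-worker weight vectors w_j."""
    N = sum(len(Yj) for _, Yj in worker_data)
    score = 0.0
    for (Xj, Yj), (aj, bj), wj in zip(worker_data, quality, weights):
        order = np.argsort(np.linalg.norm(Xj - x, axis=1))   # sort by distance to x
        y_enh = (Yj[order] + bj - 1.0) / (aj + bj - 1.0)     # enhanced labels, eq. (4)
        S_j = np.dot(wj, y_enh)                              # local WNN estimator
        score += (len(Yj) / N) * S_j                         # worker weight W_j = n_j/N
    return int(score >= 0.5)                                 # weighted voting, eq. (5)

# Toy data: worker 1 is an expert (a=b=1); worker 2 has a=0.8, b=0.9.
w1 = (np.array([[0.0], [0.1], [4.0]]), np.array([1, 1, 0]))
w2 = (np.array([[0.05], [3.9], [4.0]]), np.array([1, 0, 0]))
data = [w1, w2]
quality = [(1.0, 1.0), (0.8, 0.9)]
weights = [np.array([0.5, 0.5, 0.0]), np.array([1.0, 0.0, 0.0])]
pred1 = enn_predict(data, quality, weights, np.array([0.05]))  # near the class-1 points
pred0 = enn_predict(data, quality, weights, np.array([3.95]))  # near the class-0 points
```

Note that an enhanced label can lie outside [0, 1]; as stated in Remark 1, this does not affect the thresholding in (5).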
In contrast with Proposition 1, the first term in the asymptotic regret of ENN in Theorem 1 is reduced by a factor of (n j /N ) 2 , while the squared bias term becomes the weighted average of bias from each worker data.
We know that the minimal asymptotic regret of the oracle kNN ('oracle' means the classifier is obtained from the expert data with size N; we use K to denote the number of neighbors, emphasizing its global nature) is achieved when
$$K = K^* := \left( \frac{dB_1}{4B_2} \right)^{d/(d+4)} N^{4/(d+4)}$$
(Samworth, 2012). Consider a variant of ENN in which kNN is trained on each worker data set, dubbed ENN(k). An intuitive choice for k, the number of local neighbors for each local kNN classifier, is (n_j/N)K^*, so that globally about K^* neighbors are used. Theorem 1 implies that the optimal local choice of k_j in ENN(k) (which gives rise to the same regret as the optimal oracle kNN) is indeed this intuitive choice. Given the weight vector, Theorem 2 affords an asymptotic regret comparison between ENN and the oracle WNN, as implied by Proposition 1 and Theorem 1. Theorem 2 says that given an oracle WNN which uses the expert data only, one can find an ENN with matching regret. It is encouraging that this can be done without incurring any regret loss, whether in the rate or in the multiplicative constant.
Theorem 2. (Asymptotic Regret Comparison between ENN and Oracle WNN) Assume the conditions in Theorem 1. Given an oracle WNN classifier with weights w_N on an expert data set of size N, denoted \hat{φ}^0_{N,w_N}(x), there exists an ENN classifier with weights w_{n_j} on the crowdsourcing data such that, as n_j → ∞,
$$\frac{\mathrm{Regret}(\hat{\phi}^E_{n_j,s,w_{n_j}})}{\mathrm{Regret}(\hat{\phi}^0_{N,w_N})} \longrightarrow 1,$$
uniformly for w_{n_j} ∈ W_{n_j,β} and w_N ∈ W_{N,β} satisfying
$$\sum_{j=1}^{s} \left( \frac{n_j}{N} \right)^2 \sum_{i=1}^{n_j} w_{j,i}^2 \Big/ \sum_{i=1}^{N} w_{Ni}^2 \longrightarrow 1, \quad \text{and} \qquad (7)$$
$$\sum_{j=1}^{s} \frac{n_j}{N} \sum_{i=1}^{n_j} \frac{\alpha_i w_{j,i}}{n_j^{2/d}} \Big/ \sum_{i=1}^{N} \frac{\alpha_i w_{Ni}}{N^{2/d}} \longrightarrow 1. \qquad (8)$$
Theorem 2 says if the local weights for ENN are chosen to align with the oracle weights according to (7) and (8), then ENN can achieve the same regret as the oracle WNN.
As an illustration, we show how to find the local weights by applying Theorem 2 to the OWNN method, the best oracle WNN method due to Samworth (2012), whose global weights are defined as
$$w_i^*(N, m^*) = \begin{cases} \dfrac{1}{m^*}\left[1 + \dfrac{d}{2} - \dfrac{d\alpha_i}{2(m^*)^{2/d}}\right], & i = 1, . . . , m^*, \\ 0, & i = m^* + 1, . . . , N, \end{cases} \qquad (9)$$
where
$$m^* = \left\{ \frac{d(d+4)}{2(d+2)} \right\}^{d/(d+4)} \left( \frac{B_1}{B_2} \right)^{d/(d+4)} N^{4/(d+4)}.$$
According to (7) and (8), the local weights in the optimal ENN (which achieves the same OWNN regret convergence rate N^{−4/(d+4)}) should be set as w^*_{j,i} := w^*_i(n_j, l^*_j), where
$$l_j^* = (n_j/N)\, m^*. \qquad (10)$$
Interestingly, the above scaling factor is the same as that in the case of ENN(k) discussed earlier.
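The OWNN weights in (9) and the local scaling l*_j = (n_j/N)m* can be computed directly; a sketch (ownn_weights is our name; by construction the weights sum to one, since \sum_{i \le m} α_i telescopes to m^{1+2/d}):

```python
import numpy as np

def ownn_weights(n, m, d):
    """OWNN weight vector of eq. (9): nonzero for the first m neighbors, zero after."""
    i = np.arange(1, n + 1)
    alpha = i ** (1 + 2 / d) - (i - 1) ** (1 + 2 / d)     # alpha_i
    w = (1.0 / m) * (1 + d / 2 - d * alpha / (2 * m ** (2 / d)))
    w[i > m] = 0.0
    return w

# Global oracle weights and the matched local weights for one worker.
N, n_j, d, m_star = 20000, 4000, 4, 500        # illustrative values only
w_global = ownn_weights(N, m_star, d)
l_star = int(round((n_j / N) * m_star))        # l*_j = (n_j / N) m*, eq. (10)
w_local = ownn_weights(n_j, l_star, d)
```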
Corollary 1 summarizes the above findings, and further discovers, in (ii), the lower bound for the size of each worker data in ENN.
Corollary 1. (Optimal ENN) Suppose the conditions in Theorem 1 hold.
(i) If n_j/N^{d/(d+4)} → ∞, the asymptotic minimum regret of ENN is achieved by setting w^*_{j,i} = w^*_i(n_j, l^*_j), with l^*_j defined in (10) and w^*_i(·, ·) as in (9). In addition, as n_j → ∞,
$$\mathrm{Regret}(\hat{\phi}^E_{n_j,s,w^*_{n_j}})/\mathrm{Regret}(\hat{\phi}^0_{N,w^*_N}) \to 1.$$
(ii) If n_j = O(N^{d/(d+4)}), then uniformly for w_{n_j} ∈ W_{n_j,β},
$$\liminf_{n_j \to \infty} \mathrm{Regret}(\hat{\phi}^E_{n_j,s,w_{n_j}})/\mathrm{Regret}(\hat{\phi}^0_{N,w^*_N}) \to \infty.$$
The bound on n_j in (ii) makes sense: if the size of each worker data set is too small, the bias and variance become too large. In the special case that all n_j are equal, we have the sharp bound n/N^{d/(d+4)} → ∞ (i.e., s = o(N^{4/(d+4)})). This result is the same as the one for W-DiNN in Duan et al. (2020).
Estimation of worker quality
We propose two methods to estimate worker quality, a j and b j . One method requires access to a set of expert data, and is proven to achieve the same statistical guarantee as if a j and b j were known. The other method applies ENN to estimate the worker quality in an iterative manner, and it works well even without access to the expert data.
In Algorithm 2, we estimate the worker quality by applying the kNN classifier on a set of expert data to relabel each worker data. The new labels are used as the substitutions for ground truth to estimate the worker quality.
Algorithm 2 ENN2 with worker quality estimation (expert data required)
Input: Crowdsourcing data {D^j}_{j=1}^s, where D^s is an expert data set with a_s = b_s = 1; local weight vectors w_{j,i}. Output: ENN2; estimated worker sensitivity \hat{a}_j and specificity \hat{b}_j, for j ∈ {1, . . . , s}.
1: for j = 1 to s − 1 do
2: Derive predicted labels \hat{φ}_{n_s,k}(X^j_i) for all X^j_i in D^j using kNN (k = n_s^{4/(d+4)}) on the expert data D^s.
3: Estimate the worker quality:
$$\hat{a}_j = \frac{\sum_{i=1}^{n_j} 1\{\hat{\phi}_{n_s,k}(X_i^j) = 1, Y_i^j = 1\}}{\sum_{i=1}^{n_j} 1\{\hat{\phi}_{n_s,k}(X_i^j) = 1\}}, \qquad \hat{b}_j = \frac{\sum_{i=1}^{n_j} 1\{\hat{\phi}_{n_s,k}(X_i^j) = 0, Y_i^j = 0\}}{\sum_{i=1}^{n_j} 1\{\hat{\phi}_{n_s,k}(X_i^j) = 0\}}.$$
4: end for
5: Derive \hat{φ}^{E2}_{n_j,s,w_{n_j}}(x) using Algorithm 1 with a_j = \hat{a}_j and b_j = \hat{b}_j (j ∈ {1, . . . , s}), where \hat{a}_s = \hat{b}_s = 1.
6: return \hat{φ}^{E2}_{n_j,s,w_{n_j}}(x); \hat{a}_j and \hat{b}_j, for j ∈ {1, . . . , s}.
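Step 3 of Algorithm 2 only needs the predicted surrogate labels and the observed worker labels; a minimal sketch (estimate_quality is our name):

```python
import numpy as np

def estimate_quality(pred, obs):
    """Estimate (a_j, b_j) as in step 3 of Algorithm 2: among points the surrogate
    classifier predicts as 1 (resp. 0), the fraction the worker also labelled
    1 (resp. 0)."""
    pred, obs = np.asarray(pred), np.asarray(obs)
    a_hat = (obs[pred == 1] == 1).mean()
    b_hat = (obs[pred == 0] == 0).mean()
    return a_hat, b_hat

# Worker agrees with the surrogate on 3 of 4 predicted positives and 3 of 4 negatives.
a_hat, b_hat = estimate_quality([1, 1, 1, 1, 0, 0, 0, 0],
                                [1, 1, 1, 0, 0, 0, 0, 1])
```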
Plugging the estimated worker quality \hat{a}_j and \hat{b}_j from Algorithm 2 into Algorithm 1, we obtain the ENN with estimated worker quality (ENN2), which enjoys a similar statistical guarantee as ENN. Theorem 3 gives an asymptotic expansion formula for the regret of the ENN classifier, given weight vector w_{n_j}, based on the estimated \hat{a}_j and \hat{b}_j from Algorithm 2. Specifically, when the size of the expert data is of higher order than each worker data set, ENN2 achieves the same asymptotic regret as the ENN of Theorem 1 with known worker quality.
Theorem 3. (Asymptotic Regret for ENN with estimated worker quality) Assume the same conditions as in Theorem 1 and n_j/n_s = o(1) for j ∈ {1, . . . , s − 1}. Then for each β ∈ (0, 1/2) and a_j + b_j > 1, as n_j → ∞,
$$\mathrm{Regret}(\hat{\phi}^{E2}_{n_j,s,w_{n_j}}) = \left\{ B_1 \sum_{j=1}^{s} \left( \frac{n_j}{N} \right)^2 \sum_{i=1}^{n_j} w_{j,i}^2 + B_2 \left( \sum_{j=1}^{s} \frac{n_j}{N} \sum_{i=1}^{n_j} \frac{\alpha_i w_{j,i}}{n_j^{2/d}} \right)^2 \right\} \{1 + o(1)\},$$
uniformly for w_{n_j} ∈ W_{n_j,β}.
Remark 3. In Theorem 3, the assumption n j /n s = o(1) for j ∈ {1, . . . , s − 1} is used to bound the order of remainder terms due to worker quality estimation.
Algorithm 3 ENN3 with worker quality estimation (expert data not required)
Input: Crowdsourcing data {D^j}_{j=1}^s, local weight vectors w_{j,i}, and stopping criterion c. Output: ENN3; estimated worker sensitivity \hat{a}_j and specificity \hat{b}_j, for j ∈ {1, . . . , s}.
1: Initialization: a_j = b_j = 1, for j ∈ {1, . . . , s}.
2: for l = 1, 2, . . . do
3: for j = 1 to s do
4: Derive predicted labels \hat{φ}^E_{n_j,s,w_{n_j}}(X^j_i) for all X^j_i in D^j using ENN in Algorithm 1.
5: Estimate the worker quality:
$$\hat{a}_j = \frac{\sum_{i=1}^{n_j} 1\{\hat{\phi}^E_{n_j,s,w_{n_j}}(X_i^j) = 1, Y_i^j = 1\}}{\sum_{i=1}^{n_j} 1\{\hat{\phi}^E_{n_j,s,w_{n_j}}(X_i^j) = 1\}}, \qquad \hat{b}_j = \frac{\sum_{i=1}^{n_j} 1\{\hat{\phi}^E_{n_j,s,w_{n_j}}(X_i^j) = 0, Y_i^j = 0\}}{\sum_{i=1}^{n_j} 1\{\hat{\phi}^E_{n_j,s,w_{n_j}}(X_i^j) = 0\}}.$$
6: end for
7: Compute ∆ = \frac{1}{2s} \sum_{j=1}^{s} (|\hat{a}_j − a_j| + |\hat{b}_j − b_j|).
8: Update: a_j = \hat{a}_j, b_j = \hat{b}_j for j ∈ {1, . . . , s}.
9: if ∆ ≤ c then
10: break
11: end if
12: end for
13: Derive \hat{φ}^{E3}_{n_j,s,w_{n_j}}(x) using Algorithm 1 with a_j = \hat{a}_j and b_j = \hat{b}_j, for j ∈ {1, . . . , s}.
14: return \hat{φ}^{E3}_{n_j,s,w_{n_j}}(x); \hat{a}_j and \hat{b}_j, for j ∈ {1, . . . , s}.
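Algorithm 3 alternates between predicting with the current quality estimates and re-estimating (a_j, b_j) from the agreement between predictions and observed labels. The sketch below is ours and, for brevity, uses a uniform k-neighbor local rule inside ENN rather than a general weight vector:

```python
import numpy as np

def enn3_quality(worker_data, k=3, tol=0.02, max_iter=20):
    """Iterative worker-quality estimation without expert data (Algorithm 3 sketch)."""
    s = len(worker_data)
    N = sum(len(Y) for _, Y in worker_data)
    a, b = np.ones(s), np.ones(s)                      # step 1: initialize at 1
    for _ in range(max_iter):
        a_new, b_new = a.copy(), b.copy()
        for j, (Xj, Yj) in enumerate(worker_data):
            # step 4: ENN prediction at each point of worker j's data
            pred = np.empty(len(Yj), dtype=int)
            for i, x in enumerate(Xj):
                score = 0.0
                for m, (Xm, Ym) in enumerate(worker_data):
                    order = np.argsort(np.linalg.norm(Xm - x, axis=1))[:k]
                    y_enh = (Ym[order] + b[m] - 1) / (a[m] + b[m] - 1)
                    score += (len(Ym) / N) * y_enh.mean()
                pred[i] = int(score >= 0.5)
            pos, neg = pred == 1, pred == 0            # step 5: agreement rates
            if pos.any():
                a_new[j] = (Yj[pos] == 1).mean()
            if neg.any():
                b_new[j] = (Yj[neg] == 0).mean()
        delta = (np.abs(a_new - a).sum() + np.abs(b_new - b).sum()) / (2 * s)
        a, b = a_new, b_new
        if delta <= tol:                               # steps 7-11: stopping rule
            break
    return a, b

# Worker 1 is perfect; worker 2 flips one label in each well-separated cluster.
X1 = np.array([[0.0], [0.2], [0.4], [2.0], [2.2], [2.4]]); Y1 = np.array([1, 1, 1, 0, 0, 0])
X2 = np.array([[0.1], [0.3], [0.5], [2.1], [2.3], [2.5]]); Y2 = np.array([1, 1, 0, 0, 0, 1])
a_hat, b_hat = enn3_quality([(X1, Y1), (X2, Y2)])
```

On this toy example the procedure recovers (a_1, b_1) = (1, 1) for the perfect worker and (a_2, b_2) = (2/3, 2/3) for the noisy one.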
Algorithm 2 requires the size of the expert data to be of higher order than the other worker data sets. This generally does not hold, as expert data may not exist or may be relatively small in practice. Therefore, we propose a more practical algorithm that does not need expert data to estimate the worker quality. Specifically, we apply Algorithm 1 to derive predicted labels for each worker as substitutes for the ground truth labels, and update the worker quality iteratively. The main idea of this estimation procedure (summarized in Algorithm 3) is straightforward:
(1) initialize all a j and b j with 1;
(2) derive predicted labels for each worker data by applying ENN on the crowdsourcing data;
(3) update a j and b j by comparing observed labels and predicted labels for the j-th worker; (4) iterate until convergence.
There are several advantages to this procedure. Unlike most previous methods, it does not need expert data to estimate worker quality. It also converges quickly in practice if we choose a suitable stopping criterion, such as 2%.
Numerical studies
In this section, we check the accuracy of the ENN methods using simulations and real examples. All experiments are conducted in R environment on HPC clusters with two 12-core Intel Xeon Gold Skylake processors and two 10-core Xeon-E5 processors, with memory between 96 and 128 GB.
Simulations
In the simulated studies, we compare ENN methods with naive kNN, oracle kNN, and oracle OWNN from different aspects. Here, naive kNN denotes kNN classifiers on the original crowdsourcing data directly, and oracle kNN denotes classifiers run on the expert data with size N . In comparing ENN(k) (kNN is trained at each worker data) with the oracle kNN, we aim to verify the main results in Theorem 2, namely, the ENN can attain the same performance as the oracle method. In comparing the ENN methods with optimal local weights and the oracle OWNN method, we aim to verify the sharpness of upper bound on the number of workers in Corollary 1. It is verified by showing that the difference in performance between the ENN methods and the oracle OWNN deviates when the theoretical upper bound is exceeded.
Three settings are considered for the ground truth distribution. Simulation 1 allows a relatively easy classification task, Simulation 2 examines the bimodal effect, and Simulation 3 combines bimodality with dependence between variables. In Simulation 1, N = \sum_{j=1}^{5} n_j = 20000 and d = 4, 6, 8. Under the ground truth distribution, the two classes are generated as \bar{P}^0_1 ∼ N(0_d, I_d) and \bar{P}^0_0 ∼ N((2/\sqrt{d}) 1_d, I_d), with the class probability π^0_1 = P(Y = 1) = 1/3. The worker data are generated with (1) under the different settings of quality and size in Table 1. Simulation 2 has the same setting as Simulation 1, except that both classes are bimodal, with \bar{P}^0_1 ∼ 0.5N(0_d, I_d) + 0.5N(3_d, 2I_d) and \bar{P}^0_0 ∼ 0.5N(1.5_d, I_d) + 0.5N(4.5_d, 2I_d). Simulation 3 has the same setting as Simulation 2, except \bar{P}^0_1 ∼ 0.5N(0_d, Σ) + 0.5N(3_d, 2Σ) and \bar{P}^0_0 ∼ 0.5N(1.5_d, Σ) + 0.5N(4.5_d, 2Σ), with π^0_1 = 1/2 and Σ the Toeplitz matrix whose (1, j)-th entry is 0.6^{j−1}. When comparing the kNN methods, the number of neighbors K in the oracle kNN is chosen as K = N^{0.7}. The number of local neighbors in ENN(k) is chosen as k_j = (n_j/N)K, as suggested by Theorem 2. These k values are truncated at 1, since we cannot have a fraction of an observation. In comparing with the oracle OWNN method, the m^* parameter in OWNN is tuned using cross-validation. The parameter l_j in ENN for each worker data set is chosen as l^*_j = (n_j/N)m^*, as stated in Corollary 1. The test set is independently generated with 1000 observations under the ground truth distribution for both comparisons. We repeat the simulation 1000 times for each quality setup and each d. We compare our proposed ENN methods with two benchmark NN classifiers (naive kNN and oracle kNN) in the first part of the simulations. We consider ten groups of worker quality setups. Table 1 illustrates the setup of the worker data, with remarks commenting on the purpose of each setup. There is no expert data in setups 1-5, while worker 5 is an expert in setups 6-10. All methods are run on each setup, except that ENN2 is not applicable to setups 1-5.
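As a concrete reference, Simulation 1's ground-truth sampler can be written as follows (simulate1 is our name; worker labels would then be drawn from the sampled Y via the two-coin model (1)):

```python
import numpy as np

def simulate1(n, d, rng):
    """Simulation 1 ground truth: class 1 ~ N(0_d, I_d),
    class 0 ~ N((2/sqrt(d)) 1_d, I_d), with class probability P(Y = 1) = 1/3."""
    y = (rng.random(n) < 1 / 3).astype(int)
    X = rng.standard_normal((n, d))       # start from N(0, I_d)
    X[y == 0] += 2 / np.sqrt(d)           # shift class-0 points by (2/sqrt(d)) 1_d
    return X, y

rng = np.random.default_rng(1)
X, y = simulate1(20000, 4, rng)
```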
The comparison between the risks of the four methods (two kNN and two ENN) on crowdsourcing data with no expert (setups 1-5) is reported in Figure 1. For all quality setups, the risk is similar between ENN1(k) and the oracle kNN, while ENN3(k) has a small gap with both. Naive kNN has a significantly larger risk as the original worker data contains some noise due to low worker quality. These verify the main results in Section 3 and Section 4. Therefore, ENN1 and ENN3 can achieve a similar performance as if the entire training data are labelled by an expert. Similar conclusions can be made for setups 6-10, shown in Figure S1. Moreover, if there exists expert data with a large size, ENN2(k) performs well even with the worker quality unobserved. Table S1 and Table S2 show good estimation accuracy of the worker quality based on ENN2 and ENN3, respectively.
On the other hand, we apply a special worker data setup (five expert data sets of equal size) for the comparison with the oracle OWNN. Since this comparison is meant to verify the sharp upper bound for γ in the optimal weight setting (Corollary 1), we carefully tune the weights in the oracle OWNN method in order to reach optimality. Figure 2 shows the comparison of risks for the ENN and oracle OWNN methods. Our focus here is when the ENN method starts to have significantly worse performance than the oracle OWNN, and the answer lies in the upper bounds in Corollary 1. For simplicity, we set d = 4, which leads to an upper bound of 4/(d + 4) = 0.5 for the ENN method. This upper bound is shown as vertical lines in Figure 2. Specifically, ENN has almost the same performance as the OWNN method for γ ≤ 0.4. However, ENN does not perform well enough for γ ≥ 4/(d + 4) = 0.5 when compared to OWNN. These observations verify the results in Corollary 1.
Real examples
In this section, we empirically check the accuracy of ENN compared with four benchmark methods: the naive kNN and naive OWNN methods, and two existing crowdsourcing methods (denoted DS and LFC). The DS method (Dawid and Skene, 1979) applies a confusion matrix and the EM algorithm to the labels of the training set to estimate the true labels; based on the updated labels from DS, we apply kNN on the testing set for prediction. The LFC method (Raykar et al., 2010) combines a two-coin logistic model with the EM algorithm. We use benchmark data sets Fire (Abid and Izeboudjen, 2019), Ionosphere (Sigillito et al., 1989), Musk1 (Dietterich et al., 1997), and Breast (Street et al., 1993), with worker labels generated according to the quality setups in Table 1. The test sample sizes are set as (total sample size)/5. Parameters in the naive kNN and OWNN are tuned using cross-validation, and the parameters k_j in ENN(k) for each worker data set are set using the bridging formula stated in our theorems. The empirical risk is calculated over 1000 replications.
In Figure 3, we compare the empirical risk (test error) of ENN3(k) relative to LFC, DS, naive OWNN, and naive kNN. From Figure 3, we see that ENN3(k) outperforms the other four benchmark methods in all cases. Both the naive kNN and OWNN methods perform significantly worse under the different quality setups, especially under setup 3, which has a lower level of worker quality. The ENN method significantly improves this poor-quality case. Lastly, we note that a larger sample size N yields more stable ENN performance across the quality setups: a larger sample size increases the estimation accuracy of the worker quality, so the enhancing effect of ENN on the noisy worker data improves the performance.
Discussions
There are a couple of interesting directions to be pursued in the future. The first two are extensions to the multicategory classification problem and high-dimensional data. The third direction is related to a realistic attack paradigm named adversarial examples that received a lot of recent attention (Szegedy et al., 2013;Papernot et al., 2016). Some worker data may contain adversarial examples in practice, which might violate our quality assumption a j + b j > 1. It leaves us to wonder how to take advantage of the quality-related nature of ENN to detect and deal with adversarial samples. In addition, it is also an interesting direction to explore strategies to relax the assumption that worker quality does not depend on the feature vector.
Supplementary Materials
S.I Appendix 1: Assumptions (A1) -(A4)
For a smooth function g, we write ġ(x) for its gradient vector at x. The following conditions are assumed throughout this paper.
(A1) The set R ⊂ R^d is a compact d-dimensional manifold with boundary ∂R.
(A2) The set S = {x ∈ R : η^0(x) = 1/2} is nonempty. There exists an open subset U_0 of R^d containing S such that: (1) η^0 is continuous on U \ U_0, with U an open set containing R; (2) the restrictions of the conditional distributions of X, \bar{P}^0_1 and \bar{P}^0_0, to U_0 are absolutely continuous with respect to Lebesgue measure, with twice continuously differentiable Radon-Nikodym derivatives f^0_1 and f^0_0.
(A3) There exists ρ > 0 such that \int_{R^d} \|x\|^\rho d\bar{P}(x) < ∞. In addition, for sufficiently small δ > 0, \inf_{x \in R} \bar{P}(B_\delta(x))/(a_d \delta^d) ≥ C_0 > 0, where a_d = π^{d/2}/Γ(1 + d/2), Γ(·) is the gamma function, and C_0 is a constant independent of δ.
(A4) For all x ∈ S, we have η̇^0(x) ≠ 0, and for all x ∈ S ∩ ∂R, we have ∂̇η^0(x) ≠ 0, where ∂η^0 is the restriction of η^0 to ∂R.
S.II Appendix 2: Definitions of a_0(x), B_1, B_2, W_{n_j,β} and W_{N,β}

For a smooth function g: R^d → R, denote by g_m(x) its m-th partial derivative at x and by g_{mk}(x) the (m, k)-th element of its Hessian matrix at x. Let c_{m,d} = \int_{v: \|v\| \le 1} v_m^2 \, dv and \bar{f} = π^0_1 f^0_1 + (1 − π^0_1) f^0_0. Define
$$a_0(x) = \frac{\sum_{m=1}^{d} c_{m,d}\left\{\eta^0_m(x)\bar{f}_m(x) + \tfrac{1}{2}\eta^0_{mm}(x)\bar{f}(x)\right\}}{a_d^{1+2/d}\, d\, \bar{f}(x)^{1+2/d}}.$$
Moreover, define two distribution-related constants
$$B_1 = \int_S \frac{\bar{f}(x)}{4\|\dot{\eta}^0(x)\|} \, d\mathrm{Vol}^{d-1}(x), \qquad B_2 = \int_S \frac{\bar{f}(x)}{\|\dot{\eta}^0(x)\|}\,[a_0(x)]^2 \, d\mathrm{Vol}^{d-1}(x),$$
where Vol^{d−1} is the natural (d − 1)-dimensional volume measure that S inherits as a subset of R^d. Under Assumptions (A1)-(A4) in Appendix S.I, B_1 and B_2 are finite, with B_1 > 0 and B_2 ≥ 0, the latter with equality only when a_0(x) = 0 on S. In addition, for β > 0, we define W_{n_j,β} as the set of w_j satisfying:
(w.1) \sum_{i=1}^{n_j} w_{j,i}^2 ≤ n_j^{−β};
(w.2) n_j^{−4/d} (\sum_{i=1}^{n_j} α_i w_{j,i})^2 ≤ n_j^{−β}, where α_i = i^{1+2/d} − (i − 1)^{1+2/d};
(w.3) n_j^{2/d} \sum_{i=k_2^j+1}^{n_j} w_{j,i} / \sum_{i=1}^{n_j} α_i w_{j,i} ≤ 1/\log n_j, with k_2^j = n_j^{1−β};
(w.4) \sum_{i=k_2^j+1}^{n_j} w_{j,i}^2 / \sum_{i=1}^{n_j} w_{j,i}^2 ≤ 1/\log n_j;
(w.5) \sum_{i=1}^{n_j} w_{j,i}^3 / (\sum_{i=1}^{n_j} w_{j,i}^2)^{3/2} ≤ 1/\log n_j.
When n_j in (w.1)-(w.5) is replaced by N, we obtain the set W_{N,β}.

S.III Appendix 3: Additional numerical results

Table S1 and Table S2 illustrate the comparison of true and estimated worker quality based on Algorithm 2 (ENN2) and Algorithm 3 (ENN3), respectively. Figure S1 shows the comparison of risks for setups 6-10.

[Tables S1 and S2: columns a_j, \hat{a}_j, b_j, \hat{b}_j for workers 1-5 in each simulation; the numeric entries were lost in extraction.]
S.IV Proof of Theorem 1
For simplicity, we omit w_{n_j} in subscripts such as \hat{φ}^E_{n_j,s,w_j} and S^j_{n_j,w_j}. Write \bar{P}^0 = π^0_1 \bar{P}^0_1 − (1 − π^0_1)\bar{P}^0_0. We have
$$\mathrm{Regret}(\hat{\phi}^E_{n_j,s}) = E[R(\hat{\phi}^E_{n_j,s})] - R(\phi^*)$$
$$= \int_R \pi_1^0 \left[ P(\hat{\phi}^E_{n_j,s}(x) = 0) - 1\{\phi^*(x) = 0\} \right] d\bar{P}_1^0(x) + \int_R (1 - \pi_1^0) \left[ P(\hat{\phi}^E_{n_j,s}(x) = 1) - 1\{\phi^*(x) = 1\} \right] d\bar{P}_0^0(x)$$
$$= \int_R \left[ P(\hat{\phi}^E_{n_j,s}(x) = 0) - 1\{\eta^0(x) < 1/2\} \right] d\bar{P}^0(x).$$
Without loss of generality, we consider the $j$-th worker data of $D^C$: $D_j=\{(X_i^j,Y_i^j),\,i=1,\dots,n_j\}$. Given $X=x$, we define $(X_{(i)}^j,Y_{(i)}^j)$ such that $\|X_{(1)}^j-x\|\le\|X_{(2)}^j-x\|\le\dots\le\|X_{(n_j)}^j-x\|$.
Denote the estimated regression function on the j-th enhanced worker data as
$$S^E_{n_j}(x)=\sum_{i=1}^{n_j}w_{j,i}\tilde Y^j_{(i)},\qquad\text{where }\ \tilde Y^j_{(i)}=\frac{Y^j_{(i)}+b_j-1}{a_j+b_j-1}$$
is the enhanced label. Denote the weighted average of the estimated regression functions from the $s$ worker data sets as
$$S^E_{n_j,s}(x)=\sum_{j=1}^{s}W_jS^E_{n_j}(x)=\sum_{j=1}^{s}W_j\sum_{i=1}^{n_j}w_{j,i}\tilde Y^j_{(i)}.$$
We can also write $S^E_{n_j,s}(x)$ as
$$S^E_{n_j,s}(x)=\sum_{j=1}^{s}\sum_{i=1}^{n_j}W_jw_{j,i}\tilde Y^j_{(i)}=\sum_{l=1}^{N}w_{Nl}\tilde Y_l,$$
where $N=\sum_{j=1}^{s}n_j$,
$$\{\tilde Y_1,\tilde Y_2,\dots,\tilde Y_N\}=\{\tilde Y^1_{(1)},\tilde Y^1_{(2)},\dots,\tilde Y^1_{(n_1)},\dots,\tilde Y^s_{(1)},\tilde Y^s_{(2)},\dots,\tilde Y^s_{(n_s)}\},$$
$$\{w_{N1},w_{N2},\dots,w_{NN}\}=\{W_1w_{1,1},W_1w_{1,2},\dots,W_1w_{1,n_1},\dots,W_sw_{s,1},W_sw_{s,2},\dots,W_sw_{s,n_s}\}.$$
The ENN classifier is defined as $\hat\phi^E_{n_j,s}(x)=\mathbf 1\{S^E_{n_j,s}(x)\ge 1/2\}$. Since $\mathbb P(\hat\phi^E_{n_j,s}(x)=0)=\mathbb P(S^E_{n_j,s}(x)<1/2)$, the regret of ENN becomes
$$\mathrm{Regret}(\hat\phi^E_{n_j,s})=\int_{\mathcal R}\big[\mathbb P\big(S^E_{n_j,s}(x)<1/2\big)-\mathbf 1\{\eta^0(x)<1/2\}\big]\,d\check P^0(x).$$
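To make the construction above concrete, here is a minimal Python sketch of the ENN score and classifier (a didactic toy, not the authors' implementation; one-dimensional features, absolute-value distance, and the particular worker tuples and weight vectors are illustrative assumptions):

```python
def enhance_labels(y, a_j, b_j):
    # Enhanced label: tildeY = (Y + b_j - 1) / (a_j + b_j - 1),
    # which removes the two-coin label noise in expectation (Lemma 1).
    return [(yi + b_j - 1.0) / (a_j + b_j - 1.0) for yi in y]

def enn_score(x, workers, weights):
    """S^E(x) = sum_j W_j sum_i w_{j,i} * tildeY^j_{(i)}, with W_j = n_j / N.

    `workers` is a list of (X_j, Y_j, a_j, b_j) tuples (1-d features here);
    `weights` is a list of per-worker weight vectors w_j, ordered by
    distance rank and allowed to be shorter than n_j (implicit zeros).
    """
    N = sum(len(Xj) for Xj, _, _, _ in workers)
    score = 0.0
    for (Xj, Yj, a_j, b_j), w_j in zip(workers, weights):
        W_j = len(Xj) / N
        ytil = enhance_labels(Yj, a_j, b_j)
        # sort the j-th worker's points by distance to the query x
        order = sorted(range(len(Xj)), key=lambda i: abs(Xj[i] - x))
        score += W_j * sum(w * ytil[order[r]] for r, w in enumerate(w_j))
    return score

def enn_classify(x, workers, weights):
    # phi^E(x) = 1{ S^E(x) >= 1/2 }
    return 1 if enn_score(x, workers, weights) >= 0.5 else 0
```

With a perfect worker ($a_j=b_j=1$) the enhanced labels reduce to the raw labels and the rule is an ordinary weighted nearest neighbor classifier.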
In the expert data, denote the boundary $\mathcal S=\{x\in\mathcal R:\eta^0(x)=1/2\}$, and for $x\in\mathbb R^d$ let $\mathrm{dist}(x,\mathcal S)=\inf_{x_0\in\mathcal S}\|x-x_0\|$. For $\epsilon>0$, we will focus on the tubular set
$$\mathcal S^\epsilon=\Big\{x_0+t\frac{\dot\eta^0(x_0)}{\|\dot\eta^0(x_0)\|}:x_0\in\mathcal S,\ |t|<\epsilon\Big\}.$$
Let $\mu^j_{n_j}(x)=\mathbb E\{S^E_{n_j}(x)\}$, $[\sigma^j_{n_j}(x)]^2=\mathrm{Var}\{S^E_{n_j}(x)\}$, and $\epsilon_{n_j}=n_j^{-\beta/(4d)}$. Denote $s^2_{n_j}=\sum_{i=1}^{n_j}w^2_{j,i}$ and $t_{n_j}=n_j^{-2/d}\sum_{i=1}^{n_j}\alpha_iw_{j,i}$. From Lemma 2, we have, uniformly for $w_{n_j}\in W_{n_j,\beta}$,
$$\sup_{x\in\mathcal S^{\epsilon_{n_j}}}|\mu^j_{n_j}(x)-\eta^0(x)-a^0(x)t_{n_j}|=o(t_{n_j}),\qquad \sup_{x\in\mathcal S^{\epsilon_{n_j}}}\Big|[\sigma^j_{n_j}(x)]^2-\frac14 s^2_{n_j}\Big|=o(s^2_{n_j}).$$
Let $\mu_{n_j,s}(x)=\mathbb E\{S^E_{n_j,s}(x)\}$ and $\sigma^2_{n_j,s}(x)=\mathrm{Var}\{S^E_{n_j,s}(x)\}$. We have
$$\mu_{n_j,s}(x)=\sum_{j=1}^{s}W_j\mu^j_{n_j}(x),\qquad \sigma^2_{n_j,s}(x)=\sum_{j=1}^{s}(W_j)^2[\sigma^j_{n_j}(x)]^2.$$
Denote $\epsilon_{n_j,s}=\min_{j\in\{1,\dots,s\}}\{\epsilon_{n_j}\}$, $s^2_{n_j,s}=\sum_{j=1}^{s}W_j^2s^2_{n_j}$ and $t_{n_j,s}=\sum_{j=1}^{s}W_jt_{n_j}$. From Lemma 3, we have, uniformly for $w_{n_j}\in W_{n_j,\beta}$, the approximations (S.1) and (S.2). We organize our proof in three steps. In
Step 1, we decompose the integral over $\mathcal R\cap\mathcal S^{\epsilon_{n_j,s}}$ as an integral along $\mathcal S$ and an integral in the perpendicular direction; in Step 2, we focus on the complement set $\mathcal R\setminus\mathcal S^{\epsilon_{n_j,s}}$; Step 3 combines the results and applies a normal approximation in $\mathcal S^{\epsilon_{n_j,s}}$ to yield the final conclusion.
Step 1: For $x_0\in\mathcal S$ and $t\in\mathbb R$, denote $x_0^t=x_0+t\dot\eta^0(x_0)/\|\dot\eta^0(x_0)\|$. Denote $\psi^0=\pi_1^0f_1^0-(1-\pi_1^0)f_0^0$ and $\bar f=\pi_1^0f_1^0+(1-\pi_1^0)f_0^0$ as the Radon-Nikodym derivatives, with respect to Lebesgue measure, of the restrictions of $\check P^0$ and $\bar P^0$ to $\mathcal S^{\epsilon_{n_j,s}}$ for large $n_j$, respectively. Similar to Samworth (2012), we consider a change of variable from $x$ to $x_0^t$. By the theory of integration on manifolds and Weyl's tube formula (Gray, 2004), we have, uniformly for $w_{n_j}\in W_{n_j,\beta}$,
$$\int_{\mathcal R\cap\mathcal S^{\epsilon_{n_j,s}}}\big[\mathbb P\big(S^E_{n_j,s}(x)<1/2\big)-\mathbf 1\{\eta^0(x)<1/2\}\big]\,d\check P^0(x)=\int_{\mathcal S}\int_{-\epsilon_{n_j,s}}^{\epsilon_{n_j,s}}\psi^0(x_0^t)\big[\mathbb P\big(S^E_{n_j,s}(x_0^t)<1/2\big)-\mathbf 1\{t<0\}\big]\,dt\,d\mathrm{Vol}^{d-1}(x_0)\,\{1+o(1)\}.$$
Step 2: Bound the contribution to regret from $\mathcal R\setminus\mathcal S^{\epsilon_{n_j,s}}$. We show that
$$\sup_{w_{n_j}\in W_{n_j,\beta}}\int_{\mathcal R\setminus\mathcal S^{\epsilon_{n_j,s}}}\big[\mathbb P\big(S^E_{n_j,s}(x)<1/2\big)-\mathbf 1\{\eta^0(x)<1/2\}\big]\,d\check P^0(x)=o(s^2_{n_j,s}+t^2_{n_j,s}).$$
Applying Hoeffding's inequality to $S^E_{n_j,s}(x)$, uniformly for $w_{n_j}\in W_{n_j,\beta}$ and $x\in\mathcal R\setminus\mathcal S^{\epsilon_{n_j,s}}$, we have
$$\begin{aligned}\big|\mathbb P\big(S^E_{n_j,s}(x)<1/2\big)-\mathbf 1\{\eta^0(x)<1/2\}\big|&\le\exp\Big(\frac{-2(\mu_{n_j,s}(x)-1/2)^2}{\sum_{l=1}^N(w_{Nl}-0)^2}\Big)\le\exp\Big(\frac{-2(c_{10}N^{-\beta/(4d)}/4)^2}{\sum_{j=1}^s(W_j)^2s^2_{n_j}}\Big)\\&\le\exp\Big(\frac{-c_{10}^2N^{-\beta/(2d)}}{8\sum_{j=1}^s(W_j)^2n_j^{-\beta}}\Big)=\exp\Big(\frac{-c_{10}^2N^{-\beta/(2d)}}{8\sum_{j=1}^s(n_j/N)^2n_j^{-\beta}}\Big)\\&=\exp\Big(\frac{-c_{10}^2N^{-\beta/(2d)}}{8N^{-\beta}\sum_{j=1}^s(n_j/N)^{2-\beta}}\Big)\le\exp\Big(\frac{-c_{10}^2N^{-\beta/(2d)}}{8N^{-\beta}}\Big)=o(s^2_{n_j,s}+t^2_{n_j,s}).\end{aligned}$$
The second inequality holds by Lemma 4, where $c_{10}$ is a positive constant. The last inequality holds by the generalized mean inequality and $\beta\in(0,1/2)$. This completes Step 2.
Step 3: In the end, we will show
$$\int_{\mathcal S}\int_{-\epsilon_{n_j,s}}^{\epsilon_{n_j,s}}\psi^0(x_0^t)\big[\mathbb P\big(S^E_{n_j,s}(x_0^t)<1/2\big)-\mathbf 1\{t<0\}\big]\,dt\,d\mathrm{Vol}^{d-1}(x_0)=B_1s^2_{n_j,s}+B_2t^2_{n_j,s}+o(s^2_{n_j,s}+t^2_{n_j,s}).$$
Applying a Taylor expansion, we have, for $x_0\in\mathcal S$,
$$\psi^0(x_0^t)=\psi^0(x_0)+\dot\psi^0(x_0)^T(x_0^t-x_0)+o(\|x_0^t-x_0\|)=\dot\psi^0(x_0)^T\frac{\dot\eta^0(x_0)}{\|\dot\eta^0(x_0)\|}t+o(t)=\|\dot\psi^0(x_0)\|t+o(t),$$
where the second equality holds by the definition of $x_0^t$ (note that $\psi^0(x_0)=0$ for $x_0\in\mathcal S$), and the third equality holds by Lemma 5. Hence,
$$\int_{\mathcal S}\int_{-\epsilon_{n_j,s}}^{\epsilon_{n_j,s}}\psi^0(x_0^t)\big[\mathbb P\big(S^E_{n_j,s}(x_0^t)<1/2\big)-\mathbf 1\{t<0\}\big]\,dt\,d\mathrm{Vol}^{d-1}(x_0)\tag{S.3}$$
$$=\int_{\mathcal S}\int_{-\epsilon_{n_j,s}}^{\epsilon_{n_j,s}}t\|\dot\psi^0(x_0)\|\big[\mathbb P\big(S^E_{n_j,s}(x_0^t)<1/2\big)-\mathbf 1\{t<0\}\big]\,dt\,d\mathrm{Vol}^{d-1}(x_0)\,\{1+o(1)\}.$$
Next, we decompose
$$\int_{\mathcal S}\int_{-\epsilon_{n_j,s}}^{\epsilon_{n_j,s}}t\|\dot\psi^0(x_0)\|\big[\mathbb P\big(S^E_{n_j,s}(x_0^t)<1/2\big)-\mathbf 1\{t<0\}\big]\,dt\,d\mathrm{Vol}^{d-1}(x_0)\tag{S.4}$$
$$=\int_{\mathcal S}\int_{-\epsilon_{n_j,s}}^{\epsilon_{n_j,s}}t\|\dot\psi^0(x_0)\|\Big[\Phi\Big(\frac{1/2-\mu_{n_j,s}(x_0^t)}{\sigma_{n_j,s}(x_0^t)}\Big)-\mathbf 1\{t<0\}\Big]\,dt\,d\mathrm{Vol}^{d-1}(x_0)+R_{11}.$$
Let $Z_l=(w_{Nl}\tilde Y_l-w_{Nl}\mathbb E[\tilde Y_l])/\sigma_{n_j,s}(x)$ and $V=\sum_{l=1}^NZ_l$.
Note that $\mathbb E(Z_l)=0$, $\mathrm{Var}(Z_l)<\infty$, and $\mathrm{Var}(V)=1$. The nonuniform Berry-Esseen theorem (Grigor'eva and Popov, 2012) implies that there exists a constant $c_{11}>0$ such that
$$\big|\mathbb P(V\le By)-\Phi(y)\big|\le\frac{c_{11}A}{B^3(1+|y|^3)},$$
where $A=\sum_{l=1}^N\mathbb E|Z_l|^3$ and $B=\big(\sum_{l=1}^N\mathbb E|Z_l|^2\big)^{1/2}$. In the case of ENN, we have
$$A=\sum_{l=1}^N\mathbb E\Big|\frac{w_{Nl}\tilde Y_l-w_{Nl}\mathbb E[\tilde Y_l]}{\sigma_{n_j,s}(x)}\Big|^3\le\sum_{l=1}^N\frac{16|w_{Nl}|^3}{s^3_{n_j,s}}=\frac{16\sum_{l=1}^Nw^3_{Nl}}{s^3_{n_j,s}},\qquad B=\Big(\sum_{l=1}^N\mathrm{Var}(Z_l)\Big)^{1/2}=\mathrm{Var}(V)^{1/2}=1.$$
Denoting $c_{12}=16c_{11}$, we have
$$\sup_{x_0\in\mathcal S}\ \sup_{t\in[-\epsilon_{n_j,s},\,\epsilon_{n_j,s}]}\Big|\mathbb P\Big(\frac{S^E_{n_j,s}(x_0^t)-\mu_{n_j,s}(x_0^t)}{\sigma_{n_j,s}(x_0^t)}\le y\Big)-\Phi(y)\Big|\le\frac{\sum_{l=1}^Nw^3_{Nl}}{s^3_{n_j,s}}\cdot\frac{c_{12}}{1+|y|^3}.$$
Similar to Samworth (2012), by (S.1) and (S.2), there exist constants $c_{13},c_{14}>0$ such that, uniformly for $w_n\in W_{n,\beta}$,
$$\inf_{x_0\in\mathcal S}\ \inf_{c_{13}t_{n_j,s}\le|t|\le\epsilon_{n_j,s}}\Big|\frac{1/2-\mu_{n_j,s}(x_0^t)}{\sigma_{n_j,s}(x_0^t)}\Big|\ge\frac{c_{14}|t|}{s_{n_j,s}}.$$
Therefore, we have
$$\begin{aligned}\int_{-\epsilon_{n_j,s}}^{\epsilon_{n_j,s}}|t|\,\|\dot\psi^0(x_0)\|\Big|\mathbb P\big(S^E_{n_j,s}(x_0^t)<1/2\big)-\Phi\Big(\frac{1/2-\mu_{n_j,s}(x_0^t)}{\sigma_{n_j,s}(x_0^t)}\Big)\Big|\,dt&\le\int_{|t|\le c_{13}t_{n_j,s}}|t|\,\|\dot\psi^0(x_0)\|\,\frac{c_{12}\sum_{l=1}^Nw^3_{Nl}}{s^3_{n_j,s}}\,dt\\&\quad+\int_{c_{13}t_{n_j,s}\le|t|\le\epsilon_{n_j,s}}|t|\,\|\dot\psi^0(x_0)\|\,\frac{c_{12}\sum_{l=1}^Nw^3_{Nl}}{s^3_{n_j,s}}\cdot\frac{1}{1+(c_{14}|t|/s_{n_j,s})^3}\,dt\\&=o(s^2_{n_j,s}+t^2_{n_j,s}).\end{aligned}$$
The inequality above leads to $|R_{11}|=o(s^2_{n_j,s}+t^2_{n_j,s})$. Next, we decompose
$$\int_{\mathcal S}\int_{-\epsilon_{n_j,s}}^{\epsilon_{n_j,s}}t\|\dot\psi^0(x_0)\|\Big[\Phi\Big(\frac{1/2-\mu_{n_j,s}(x_0^t)}{\sigma_{n_j,s}(x_0^t)}\Big)-\mathbf 1\{t<0\}\Big]\,dt\,d\mathrm{Vol}^{d-1}(x_0)\tag{S.5}$$
$$=\int_{\mathcal S}\int_{-\epsilon_{n_j,s}}^{\epsilon_{n_j,s}}t\|\dot\psi^0(x_0)\|\Big[\Phi\Big(\frac{-2t\|\dot\eta^0(x_0)\|-2a^0(x_0)t_{n_j,s}}{s_{n_j,s}}\Big)-\mathbf 1\{t<0\}\Big]\,dt\,d\mathrm{Vol}^{d-1}(x_0)+R_{12}.$$
Denote $r=t/s_{n_j,s}$ and $r_{x_0}=\dfrac{-a^0(x_0)t_{n_j,s}}{\|\dot\eta^0(x_0)\|s_{n_j,s}}$. According to Lemma 3, for a sufficiently small $\epsilon\in(0,\inf_{x_0\in\mathcal S}\|\dot\eta^0(x_0)\|)$ and a large $n_j$, for all $w_{n_j}\in W_{n_j,\beta}$, $x_0\in\mathcal S$ and $r\in[-\epsilon_{n_j,s}/s_{n_j,s},\epsilon_{n_j,s}/s_{n_j,s}]$, similar to Samworth (2012), we have
$$\Big|\frac{1/2-\mu_{n_j,s}(x_0^{rs_{n_j,s}})}{\sigma_{n_j,s}(x_0^{rs_{n_j,s}})}-\big[-2\|\dot\eta^0(x_0)\|(r-r_{x_0})\big]\Big|\le2\epsilon(|r|+t_{n_j,s}/s_{n_j,s}).$$
In addition, when $|r|\le t_{n_j,s}/s_{n_j,s}$,
$$\Big|\Phi\Big(\frac{1/2-\mu_{n_j,s}(x_0^{rs_{n_j,s}})}{\sigma_{n_j,s}(x_0^{rs_{n_j,s}})}\Big)-\Phi\big(-2\|\dot\eta^0(x_0)\|(r-r_{x_0})\big)\Big|\le1,$$
and when $t_{n_j,s}/s_{n_j,s}<|r|<\epsilon_{n_j,s}/s_{n_j,s}$,
$$\Big|\Phi\Big(\frac{1/2-\mu_{n_j,s}(x_0^{rs_{n_j,s}})}{\sigma_{n_j,s}(x_0^{rs_{n_j,s}})}\Big)-\Phi\big(-2\|\dot\eta^0(x_0)\|(r-r_{x_0})\big)\Big|\le2\epsilon(|r|+t_{n_j,s}/s_{n_j,s})\,\phi\big(\|\dot\eta^0(x_0)\||r-r_{x_0}|\big),$$
where $\phi$ is the density function of the standard normal distribution. Therefore, we have
$$\begin{aligned}&\int_{-\epsilon_{n_j,s}}^{\epsilon_{n_j,s}}|t|\,\|\dot\psi^0(x_0)\|\Big|\Phi\Big(\frac{1/2-\mu_{n_j,s}(x_0^t)}{\sigma_{n_j,s}(x_0^t)}\Big)-\Phi\Big(\frac{-2t\|\dot\eta^0(x_0)\|-2a^0(x_0)t_{n_j,s}}{s_{n_j,s}}\Big)\Big|\,dt\\&=\|\dot\psi^0(x_0)\|s^2_{n_j,s}\int_{-\epsilon_{n_j,s}/s_{n_j,s}}^{\epsilon_{n_j,s}/s_{n_j,s}}|r|\Big|\Phi\Big(\frac{1/2-\mu_{n_j}(x_0^{rs_{n_j,s}})}{\sigma_{n_j}(x_0^{rs_{n_j,s}})}\Big)-\Phi\big(-2\|\dot\eta^0(x_0)\|(r-r_{x_0})\big)\Big|\,dr\\&\le\|\dot\psi^0(x_0)\|s^2_{n_j,s}\Big[\int_{|r|\le t_{n_j,s}/s_{n_j,s}}|r|\,dr+2\epsilon\int_{-\infty}^{\infty}|r|(|r|+t_{n_j,s}/s_{n_j,s})\,\phi\big(\|\dot\eta^0(x_0)\||r-r_{x_0}|\big)\,dr\Big]\\&=o(s^2_{n_j,s}+t^2_{n_j,s}).\end{aligned}$$
The inequality above leads to $R_{12}=o(s^2_{n_j,s}+t^2_{n_j,s})$.
By (S.3), (S.4) and (S.5), we have
$$\int_{\mathcal S}\int_{-\epsilon_{n_j,s}}^{\epsilon_{n_j,s}}\psi^0(x_0^t)\big[\mathbb P\big(S^E_{n_j,s}(x_0^t)<1/2\big)-\mathbf 1\{t<0\}\big]\,dt\,d\mathrm{Vol}^{d-1}(x_0)\tag{S.6}$$
$$=\int_{\mathcal S}\int_{-\epsilon_{n_j,s}}^{\epsilon_{n_j,s}}t\|\dot\psi^0(x_0)\|\Big[\Phi\Big(\frac{-2t\|\dot\eta^0(x_0)\|-2a^0(x_0)t_{n_j,s}}{s_{n_j,s}}\Big)-\mathbf 1\{t<0\}\Big]\,dt\,d\mathrm{Vol}^{d-1}(x_0)+o(s^2_{n_j,s}+t^2_{n_j,s}).$$
Finally, after substituting $t=us_{n_j,s}/2$ in (S.6), we have, up to an $o(s^2_{n_j,s}+t^2_{n_j,s})$ difference,
$$\begin{aligned}\mathrm{Regret}(\hat\phi^E_{n_j,s})&=\frac{s^2_{n_j,s}}{4}\int_{\mathcal S}\int_{-\infty}^{\infty}\|\dot\psi^0(x_0)\|\,u\Big[\Phi\Big(-\|\dot\eta^0(x_0)\|u-\frac{2a^0(x_0)t_{n_j,s}}{s_{n_j,s}}\Big)-\mathbf 1\{u<0\}\Big]\,du\,d\mathrm{Vol}^{d-1}(x_0)\\&=\frac{s^2_{n_j,s}}{2}\int_{\mathcal S}\int_{-\infty}^{\infty}\|\dot\eta^0(x_0)\|\bar f(x_0)\,u\Big[\Phi\Big(-\|\dot\eta^0(x_0)\|u-\frac{2a^0(x_0)t_{n_j,s}}{s_{n_j,s}}\Big)-\mathbf 1\{u<0\}\Big]\,du\,d\mathrm{Vol}^{d-1}(x_0)\tag{S.7}\\&=B_1s^2_{n_j,s}+B_2t^2_{n_j,s}\tag{S.8}\\&=B_1\sum_{j=1}^{s}\Big(\frac{n_j}{N}\Big)^2\sum_{i=1}^{n_j}w^2_{j,i}+B_2\Bigg[\sum_{j=1}^{s}\frac{n_j}{N}\frac{\sum_{i=1}^{n_j}\alpha_iw_{j,i}}{n_j^{2/d}}\Bigg]^2.\end{aligned}$$
(S.7) holds by Lemma 5, and (S.8) can be calculated by Lemma 6. This completes the proof of Theorem 1.
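The closing regret expression can be evaluated numerically for any weight choice. A minimal Python sketch (the constants $B_1$, $B_2$ and the weight vectors passed in are illustrative placeholders, not quantities estimated from data; each weight vector is assumed to be padded to length $n_j$ with zeros):

```python
def asymptotic_regret(B1, B2, d, worker_weights):
    """Evaluate  B1 * sum_j (n_j/N)^2 sum_i w_{j,i}^2
               + B2 * [ sum_j (n_j/N) sum_i alpha_i w_{j,i} / n_j^{2/d} ]^2,
    where alpha_i = i^{1+2/d} - (i-1)^{1+2/d}."""
    N = sum(len(w) for w in worker_weights)  # total sample size
    variance, bias = 0.0, 0.0
    for w in worker_weights:
        n_j = len(w)
        alpha = [i ** (1 + 2 / d) - (i - 1) ** (1 + 2 / d)
                 for i in range(1, n_j + 1)]
        variance += (n_j / N) ** 2 * sum(wi ** 2 for wi in w)
        bias += (n_j / N) * sum(a * wi for a, wi in zip(alpha, w)) / n_j ** (2 / d)
    return B1 * variance + B2 * bias ** 2
```

For example, a single worker with uniform 2-NN weights over $n_j=4$ points and $d=2$ gives variance term $0.5$ and bias term $0.5$.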
S.V Proof of Theorem 2
From Theorem 1 and Proposition 1, we have, for large $n_j$,
$$\frac{\mathrm{Regret}(\hat\phi^E_{n,s,w_n})}{\mathrm{Regret}(\hat\phi^0_{N,w_N})}=\frac{\Big[B_1\sum_{j=1}^s\big(\frac{n_j}{N}\big)^2\sum_{i=1}^{n_j}w^2_{j,i}+B_2\Big(\sum_{j=1}^s\frac{n_j}{N}\frac{\sum_{i=1}^{n_j}\alpha_iw_{j,i}}{n_j^{2/d}}\Big)^2\Big]\{1+o(1)\}}{\Big[B_1\sum_{i=1}^Nw^2_{Ni}+B_2\Big(\frac{\sum_{i=1}^N\alpha_iw_{Ni}}{N^{2/d}}\Big)^2\Big]\{1+o(1)\}}\to1,\quad\text{as }n_j\to\infty.$$
The last equality holds by (7) and (8). This completes the proof of Theorem 2.
S.VI Proof of Corollary 1
Denote $a_n\succeq b_n$ if $b_n=O(a_n)$, $a_n\succ b_n$ if $b_n=o(a_n)$, and $a_n\asymp b_n$ if $a_n\succeq b_n$ and $b_n\succeq a_n$. To find the optimal value of (6), we write its Lagrangian as
$$L(w_{n_j})=\Bigg[\sum_{j=1}^{s}\frac{n_j}{N}\frac{\sum_{i=1}^{n_j}\alpha_iw_{j,i}}{n_j^{2/d}}\Bigg]^2+\lambda\sum_{j=1}^{s}\Big(\frac{n_j}{N}\Big)^2\sum_{i=1}^{n_j}w^2_{j,i}+\sum_{j=1}^{s}\nu_j\Big(\sum_{i=1}^{n_j}w_{j,i}-1\Big),\qquad\text{where }\lambda=B_1/B_2.$$
Since all the weights are nonnegative, we denote $l^*_j=\max\{i:w^*_{j,i}>0\}$. Setting the derivative of $L(w_{n_j})$ to be 0, we have
$$\frac{\partial L(w_{n_j})}{\partial w_{j,i}}=2\frac{n_j}{N}\frac{\alpha_i}{n_j^{2/d}}\Bigg[\sum_{j=1}^{s}\frac{n_j}{N}\frac{\sum_{i=1}^{l^*_j}\alpha_iw_{j,i}}{n_j^{2/d}}\Bigg]+2\lambda\Big(\frac{n_j}{N}\Big)^2w_{j,i}+\nu_j=0.\tag{S.9}$$
Dividing both sides of (S.9) by $n_j/N$, we have
$$2\frac{\alpha_i}{n_j^{2/d}}\Bigg[\sum_{j=1}^{s}\frac{n_j}{N}\frac{\sum_{i=1}^{l^*_j}\alpha_iw_{j,i}}{n_j^{2/d}}\Bigg]+2\lambda\frac{n_j}{N}w_{j,i}+\tilde\nu_j=0,\tag{S.10}$$
where $\tilde\nu_j=\nu_j/\frac{n_j}{N}$. (i) Summing (S.10) from $i=1$ to $l^*_j$, and (ii) multiplying (S.10) by $\alpha_i$ and then summing from $i=1$ to $l^*_j$, we have
$$2n_j^{-2/d}(l^*_j)^{1+2/d}\Bigg[\sum_{j=1}^{s}\frac{n_j}{N}\frac{\sum_{i=1}^{l^*_j}\alpha_iw_{j,i}}{n_j^{2/d}}\Bigg]+2\lambda\frac{n_j}{N}+\tilde\nu_jl^*_j=0,\tag{S.11}$$
$$2n_j^{-2/d}\sum_{i=1}^{l^*_j}\alpha_i^2\Bigg[\sum_{j=1}^{s}\frac{n_j}{N}\frac{\sum_{i=1}^{l^*_j}\alpha_iw_{j,i}}{n_j^{2/d}}\Bigg]+2\lambda\frac{n_j}{N}\sum_{i=1}^{l^*_j}\alpha_iw_{j,i}+\tilde\nu_j(l^*_j)^{1+2/d}=0.\tag{S.12}$$
(iii) Multiplying (S.11) by $(l^*_j)^{2/d}$ and then subtracting (S.12), we have
$$n_j^{-2/d}\Big[(l^*_j)^{1+4/d}-\sum_{i=1}^{l^*_j}\alpha_i^2\Big]\Bigg[\sum_{j=1}^{s}\frac{n_j}{N}\frac{\sum_{i=1}^{l^*_j}\alpha_iw_{j,i}}{n_j^{2/d}}\Bigg]+\lambda\frac{n_j}{N}(l^*_j)^{2/d}-\lambda\frac{n_j}{N}\sum_{i=1}^{l^*_j}\alpha_iw_{j,i}=0.\tag{S.13}$$
Multiplying (S.13) by $n_j^{-2/d}$ and summing over $j$, we have
$$\Bigg[\sum_{j=1}^{s}n_j^{-4/d}\Big((l^*_j)^{1+4/d}-\sum_{i=1}^{l^*_j}\alpha_i^2\Big)-\lambda\Bigg]\Bigg[\sum_{j=1}^{s}\frac{n_j}{N}\frac{\sum_{i=1}^{l^*_j}\alpha_iw_{j,i}}{n_j^{2/d}}\Bigg]+\lambda\sum_{j=1}^{s}\frac{n_j}{N}n_j^{-2/d}(l^*_j)^{2/d}=0.$$
Therefore, we have
$$\sum_{j=1}^{s}\frac{n_j}{N}\frac{\sum_{i=1}^{l^*_j}\alpha_iw_{j,i}}{n_j^{2/d}}=\frac{\lambda\sum_{j=1}^{s}\frac{n_j}{N}n_j^{-2/d}(l^*_j)^{2/d}}{\lambda-\sum_{j=1}^{s}n_j^{-4/d}\big[(l^*_j)^{1+4/d}-\sum_{i=1}^{l^*_j}\alpha_i^2\big]}.\tag{S.14}$$
Plugging (S.14) back into (S.11), we have
$$\tilde\nu_j=-\frac{2\lambda n_j^{-2/d}(l^*_j)^{2/d}\sum_{j=1}^{s}\frac{n_j}{N}n_j^{-2/d}(l^*_j)^{2/d}}{\lambda+\sum_{j=1}^{s}n_j^{-4/d}\big[\sum_{i=1}^{l^*_j}\alpha_i^2-(l^*_j)^{1+4/d}\big]}-2\lambda\frac{n_j}{N}(l^*_j)^{-1}.\tag{S.15}$$
Plugging (S.14) and (S.15) back into (S.10), we have
$$w^*_{j,i}=\frac{1}{l^*_j}+\frac{\big[(l^*_j)^{2/d}-\alpha_i\big](N/n_j)n_j^{-2/d}\sum_{j=1}^{s}\frac{n_j}{N}n_j^{-2/d}(l^*_j)^{2/d}}{\lambda+\sum_{j=1}^{s}n_j^{-4/d}\big[\sum_{i=1}^{l^*_j}\alpha_i^2-(l^*_j)^{1+4/d}\big]}.\tag{S.16}$$
Here $w^*_{j,i}$ is decreasing in $i$, since $\alpha_i$ is increasing in $i$ and $\lambda+\sum_{j=1}^{s}n_j^{-4/d}\big[\sum_{i=1}^{l^*_j}\alpha_i^2-(l^*_j)^{1+4/d}\big]>0$ from Lemma 7.
Next we solve for $l^*_j$. According to the definition of $l^*_j$, we only need to find the largest $l$ such that $w^*_{j,l}>0$. Using the results from Lemma 7, solving this reduces to finding the $l^*_j$ such that
$$\Big(1+\frac2d\Big)(l^*_j-1)^{2/d}\le\frac{\lambda+\frac{4}{d(d+4)}\sum_{j=1}^{s}n_j^{-4/d}(l^*_j)^{1+4/d}\{1+O(\frac{1}{l^*_j})\}}{l^*_j(N/n_j)n_j^{-2/d}\sum_{j=1}^{s}\frac{n_j}{N}n_j^{-2/d}(l^*_j)^{2/d}}+(l^*_j)^{2/d}\le\Big(1+\frac2d\Big)(l^*_j)^{2/d}.\tag{S.17}$$
Dividing both sides of (S.17) by $(l^*_j)^{2/d}$, we have, for large $n_j$,
$$\Big(1+\frac2d\Big)\Big(\frac{l^*_j-1}{l^*_j}\Big)^{2/d}\le\frac{\lambda+\frac{4}{d(d+4)}\sum_{j=1}^{s}n_j(l^*_j/n_j)^{1+4/d}\{1+O(\frac{1}{l^*_j})\}}{N(l^*_j/n_j)^{1+2/d}\sum_{j=1}^{s}\frac{n_j}{N}n_j^{-2/d}(l^*_j)^{2/d}}+1\le1+\frac2d.\tag{S.18}$$
By the squeeze theorem, the value of $l^*_j/n_j$ does not depend on $j$. Therefore, (S.18) can be simplified to
$$\Big(1+\frac2d\Big)\Big(\frac{l^*_j-1}{l^*_j}\Big)^{2/d}\le\frac{\lambda}{N(l^*_j/n_j)^{1+4/d}}+\frac{4}{d(d+4)}\Big\{1+O\Big(\frac{1}{l^*_j}\Big)\Big\}+1\le1+\frac2d.$$
Therefore, for large $n_j$, we have
$$l^*_j=\Big\{\frac{d(d+4)}{2(d+2)}\Big\}^{\frac{d}{d+4}}\lambda^{\frac{d}{d+4}}\,n_j\,N^{-\frac{d}{d+4}}=\Big\{\frac{d(d+4)}{2(d+2)}\Big\}^{\frac{d}{d+4}}\Big(\frac{B_1}{B_2}\Big)^{\frac{d}{d+4}}n_j\,N^{-\frac{d}{d+4}},\qquad\sum_{i=1}^{l^*_j}\alpha_iw_{j,i}\asymp(l^*_j)^{2/d},\qquad\sum_{i=1}^{l^*_j}w^2_{j,i}\asymp\frac{1}{l^*_j}.$$
Due to Assumption (w.1) in Section S.II, we have $l^*_j\to\infty$ as $n_j\to\infty$. When $n_j\succ N^{d/(d+4)}$, plugging $l^*_j$ and (S.27) into (S.16) yields the optimal weight, and $\mathrm{Regret}(\hat\phi^E_{n_j,s,w^*_{n_j}})/\mathrm{Regret}(\hat\phi^0_{N,w^*_N})\to1$.
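The closed-form cutoff $l^*_j$ above is easy to compute. A small Python sketch (the ratio $B_1/B_2$ is an illustrative input; rounding to the nearest integer and flooring at 1 are implementation choices, not part of the derivation):

```python
def optimal_neighbors(d, B1_over_B2, n_sizes):
    """l*_j = {d(d+4)/(2(d+2))}^{d/(d+4)} * (B1/B2)^{d/(d+4)} * n_j * N^{-d/(d+4)},
    computed for each worker sample size n_j, with N = sum_j n_j."""
    N = sum(n_sizes)
    c = (d * (d + 4) / (2.0 * (d + 2))) ** (d / (d + 4))
    lam = B1_over_B2 ** (d / (d + 4))
    return [max(1, round(c * lam * n_j * N ** (-d / (d + 4))))
            for n_j in n_sizes]
```

With a single worker ($s=1$, $n_1=N$) the formula reduces to the familiar $k^*\asymp N^{4/(d+4)}$ scaling of the optimal weighted nearest neighbor classifier, and $l^*_j$ is proportional to $n_j$ across workers.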
Denote $H(w_{n_j})$ as the Hessian matrix of $L(w_{n_j})$. We have
$$\frac{\partial^2L(w_{n_j})}{\partial w^2_{j,i}}=2\Big(\frac{n_j}{N}\Big)^2\Big(\frac{\alpha_i}{n_j^{2/d}}\Big)^2+2\lambda\Big(\frac{n_j}{N}\Big)^2,\qquad\frac{\partial^2L(w_{n_j})}{\partial w_{j,i}\partial w_{j',i'}}=2\frac{n_j}{N}\frac{n_{j'}}{N}\frac{\alpha_i}{n_j^{2/d}}\frac{\alpha_{i'}}{n_{j'}^{2/d}},\quad\text{if }(j,i)\neq(j',i').$$
For any nonzero vector $X_{l^*}=(x_{1,1},\dots,x_{1,l^*_1},\dots,x_{s,1},\dots,x_{s,l^*_s})^T$, we have
$$X_{l^*}^TH(w_n)X_{l^*}=\sum_{j=1}^{s}\sum_{j'=1}^{s}2\frac{n_j}{N}\frac{n_{j'}}{N}\sum_{i=1}^{l^*_j}\sum_{i'=1}^{l^*_{j'}}\frac{\alpha_i}{n_j^{2/d}}\frac{\alpha_{i'}}{n_{j'}^{2/d}}x_{j,i}x_{j',i'}+2\lambda\sum_{j=1}^{s}\Big(\frac{n_j}{N}\Big)^2\sum_{i=1}^{l^*_j}x^2_{j,i}=2\Bigg[\sum_{j=1}^{s}\frac{n_j}{N}\sum_{i=1}^{l^*_j}\frac{\alpha_i}{n_j^{2/d}}x_{j,i}\Bigg]^2+2\lambda\sum_{j=1}^{s}\Big(\frac{n_j}{N}\Big)^2\sum_{i=1}^{l^*_j}x^2_{j,i}>0.$$
Therefore, $H(w_{n_j})$ is positive definite, and this verifies that the above optimal value achieves the global minimum. Next, we analyze the case of $n_j=O(N^{d/(d+4)})$. Due to Assumption (w.1) in Section S.II, we have $l^*_j\to\infty$ as $n_j\to\infty$. Therefore, we have, as $n_j\to\infty$,
$$\Bigg[\sum_{j=1}^{s}\frac{n_j}{N}\frac{\sum_{i=1}^{n_j}\alpha_iw_{j,i}}{n_j^{2/d}}\Bigg]^2\asymp\Bigg[\sum_{j=1}^{s}\frac{n_j}{N}\frac{(l^*_j)^{2/d}}{n_j^{2/d}}\Bigg]^2\succ\Bigg[\sum_{j=1}^{s}\frac{n_j}{N}\Big(\frac{1}{N^{d/(d+4)}}\Big)^{2/d}\Bigg]^2=N^{-4/(d+4)}.$$
Samworth (2012) showed that
$$\mathrm{Regret}(\hat\phi_{N,w^*_N})\asymp N^{-4/(d+4)}.\tag{S.19}$$
Therefore, applying (S.19), we have, as $n_j\to\infty$,
$$\frac{\mathrm{Regret}(\hat\phi^E_{n_j,s,w_{n_j}})}{\mathrm{Regret}(\hat\phi_{N,w^*_N})}\asymp\frac{B_1\sum_{j=1}^s\big(\frac{n_j}{N}\big)^2\sum_{i=1}^{n_j}w^2_{j,i}+B_2\Big[\sum_{j=1}^s\frac{n_j}{N}\frac{\sum_{i=1}^{n_j}\alpha_iw_{j,i}}{n_j^{2/d}}\Big]^2}{N^{-4/(d+4)}}\succeq\frac{B_2\Big[\sum_{j=1}^s\frac{n_j}{N}\frac{\sum_{i=1}^{n_j}\alpha_iw_{j,i}}{n_j^{2/d}}\Big]^2}{N^{-4/(d+4)}}\to\infty.$$
This completes the proof of Corollary 1.
S.VII Proof of Theorem 3
In this section, we apply similar notations as those in Section S.IV. For the sake of simplicity, we omit $w_{n_j}$ in the subscript of such notations as $\hat\phi^{E2}_{n_j,s,w_{n_j}}$ and $S^{E2}_{n_j,s,w_{n_j}}$. We have
$$\mathrm{Regret}(\hat\phi^{E2}_{n_j,s})=\int_{\mathcal R}\big[\mathbb P\big(\hat\phi^{E2}_{n_j,s}(x)=0\big)-\mathbf 1\{\eta^0(x)<1/2\}\big]\,d\check P^0(x).$$
Denote the estimated regression function on the $j$-th enhanced worker data with estimated worker quality as
$$S^{E2}_{n_j}(x)=\sum_{i=1}^{n_j}w_{j,i}\bar Y^j_{(i)},\qquad\text{where }\bar Y^j_{(i)}=\frac{Y^j_{(i)}+\hat b_j-1}{\hat a_j+\hat b_j-1}$$
is the enhanced label with estimated worker quality from Algorithm 2. Similarly, denote the weighted average of the estimated regression functions from the $s$ enhanced worker data sets as
$$S^{E2}_{n_j,s}(x)=\sum_{j=1}^{s}W_jS^{E2}_{n_j}(x)=\sum_{j=1}^{s}W_j\sum_{i=1}^{n_j}w_{j,i}\bar Y^j_{(i)}.$$
Therefore, the ENN2 classifier is defined as $\hat\phi^{E2}_{n_j,s}(x)=\mathbf 1\{S^{E2}_{n_j,s}(x)\ge1/2\}$.
S.VIII Lemmas
In this section, we provide some lemmas.
• Lemma 1-Lemma 6 are used for proving Theorem 1.
• Lemma 7 is used for proving Corollary 1.
• Lemma 8-Lemma 12 are used for proving Theorem 3.
Lemma 1. We have $\tilde\eta_j(x)=\eta^0(x)$ and $\tilde a_j(x)=a^0(x)$.
Proof of Lemma 1: First, we have
$$\tilde\eta_j(x)=\mathbb E_j(\tilde Y^j|X^j=x)=\mathbb E_j\Big(\frac{Y^j+b_j-1}{a_j+b_j-1}\Big|X^j=x\Big)=\frac{\mathbb E_j(Y^j|X^j=x)+b_j-1}{a_j+b_j-1}=\frac{\eta_j(x)+b_j-1}{a_j+b_j-1}=\frac{a_j\eta^0(x)+(1-b_j)(1-\eta^0(x))+b_j-1}{a_j+b_j-1}=\eta^0(x).$$
Next, we have
$$\tilde a_j(x)=\frac{\sum_{m=1}^{d}c_{m,d}\{\tilde\eta_{j,m}(x)\bar f_m(x)+\frac12\tilde\eta_{j,mm}(x)\bar f(x)\}}{a_d^{1+2/d}\,\bar f(x)^{1+2/d}}=\frac{\sum_{m=1}^{d}c_{m,d}\{\eta^0_m(x)\bar f_m(x)+\frac12\eta^0_{mm}(x)\bar f(x)\}}{a_d^{1+2/d}\,\bar f(x)^{1+2/d}}=a^0(x).$$
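The identity $\tilde\eta_j=\eta^0$ can also be checked numerically under the two-coin model (a quick sanity check; the particular values of $\eta^0$, $a_j$, $b_j$ below are arbitrary):

```python
def enhanced_regression(eta0, a_j, b_j):
    # Worker regression under the two-coin model, then the enhancement step:
    #   eta_j(x)       = a_j * eta0(x) + (1 - b_j) * (1 - eta0(x))
    #   tilde_eta_j(x) = (eta_j(x) + b_j - 1) / (a_j + b_j - 1)
    eta_j = a_j * eta0 + (1 - b_j) * (1 - eta0)
    return (eta_j + b_j - 1) / (a_j + b_j - 1)
```

The enhancement exactly inverts the affine distortion induced by the worker's sensitivity $a_j$ and specificity $b_j$, provided $a_j+b_j\neq1$.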
Lemma 2. Uniformly for $w_{n_j}\in W_{n_j,\beta}$, we have
$$\sup_{x\in\mathcal S^{\epsilon_{n_j}}}|\mu^j_{n_j}(x)-\eta^0(x)-a^0(x)t_{n_j}|=o(t_{n_j}),\qquad\sup_{x\in\mathcal S^{\epsilon_{n_j}}}\Big|[\sigma^j_{n_j}(x)]^2-\frac14s^2_{n_j}\Big|=o(s^2_{n_j}).$$
Proof of Lemma 2: We have
$$\mu^j_{n_j}(x)=\sum_{i=1}^{n_j}w_{j,i}\mathbb E_X[\tilde\eta_j(X^j_{(i)})]=\sum_{i=1}^{n_j}w_{j,i}\mathbb E_X[\eta^0(X^j_{(i)})]=\mu^0_{n_j}(x).\tag{S.23}$$
The second equality holds by Lemma 1. Similarly, we have
$$[\sigma^j_{n_j}(x)]^2=\sum_{i=1}^{n_j}w^2_{j,i}\big\{\mathbb E_X[\tilde\eta_j(X^j_{(i)})]-(\mathbb E_X[\tilde\eta_j(X^j_{(i)})])^2\big\}=\sum_{i=1}^{n_j}w^2_{j,i}\big\{\mathbb E_X[\eta^0(X^j_{(i)})]-(\mathbb E_X[\eta^0(X^j_{(i)})])^2\big\}=[\sigma^0_{n_j}(x)]^2.\tag{S.24}$$
In addition, Samworth (2012) showed that, uniformly for $w_n\in W_{n,\beta}$,
$$\sup_{x\in\mathcal S^{\epsilon_{n_j}}}|\mu^0_{n_j}(x)-\eta^0(x)-a^0(x)t_{n_j}|=o(t_{n_j}),\tag{S.25}$$
$$\sup_{x\in\mathcal S^{\epsilon_{n_j}}}\Big|[\sigma^0_{n_j}(x)]^2-\frac14s^2_{n_j}\Big|=o(s^2_{n_j}).\tag{S.26}$$
Therefore, applying (S.23), (S.24), (S.25) and (S.26), we have, uniformly for $w_{n_j}\in W_{n_j,\beta}$,
$$\sup_{x\in\mathcal S^{\epsilon_{n_j}}}|\mu^j_{n_j}(x)-\eta^0(x)-a^0(x)t_{n_j}|=o(t_{n_j}),\qquad\sup_{x\in\mathcal S^{\epsilon_{n_j}}}\Big|[\sigma^j_{n_j}(x)]^2-\frac14s^2_{n_j}\Big|=o(s^2_{n_j}).$$
Lemma 3. Uniformly for $w_{n_j}\in W_{n_j,\beta}$, we have
$$\sup_{x\in\mathcal S^{\epsilon_{n_j,s}}}|\mu_{n_j,s}(x)-\eta^0(x)-a^0(x)t_{n_j,s}|=o(t_{n_j,s}),\qquad\sup_{x\in\mathcal S^{\epsilon_{n_j,s}}}\Big|\sigma^2_{n_j,s}(x)-\frac14s^2_{n_j,s}\Big|=o(s^2_{n_j,s}).$$
Proof of Lemma 3: We have
$$\begin{aligned}\sup_{x\in\mathcal S^{\epsilon_{n_j,s}}}|\mu_{n_j,s}(x)-\eta^0(x)-a^0(x)t_{n_j,s}|&=\sup_{x\in\mathcal S^{\epsilon_{n_j,s}}}\Big|\sum_{j=1}^{s}W_j\mu^j_{n_j}(x)-\sum_{j=1}^{s}W_j\eta^0(x)-a^0(x)\sum_{j=1}^{s}W_jt_{n_j}\Big|\\&=\sup_{x\in\mathcal S^{\epsilon_{n_j,s}}}\Big|\sum_{j=1}^{s}W_j[\mu^j_{n_j}(x)-\eta^0(x)-a^0(x)t_{n_j}]\Big|\\&\le\sum_{j=1}^{s}W_j\sup_{x\in\mathcal S^{\epsilon_{n_j}}}|\mu^j_{n_j}(x)-\eta^0(x)-a^0(x)t_{n_j}|=o\Big(\sum_{j=1}^{s}W_jt_{n_j}\Big)=o(t_{n_j,s}).\end{aligned}$$
The last inequality holds by the triangle inequality and $\epsilon_{n_j,s}=\min_{j\in\{1,\dots,s\}}\{\epsilon_{n_j}\}$; the last second equality holds by Lemma 2.

Lemma 4. There exists a constant $c_{10}>0$ such that, for a sufficiently large $n_j$, and uniformly for $w_N\in W_{N,\beta}$, we have
$$\inf_{x\in\mathcal R\setminus\mathcal S^{\epsilon_{n_j,s}}}|\mu_{n_j,s}(x)-1/2|\ge c_{10}N^{-\beta/(4d)}/4.$$
Proof of Lemma 4: Samworth (2012) showed that there exists a constant $c_{10}>0$ such that, for a sufficiently large $N$, and uniformly for $w_N\in W_{N,\beta}$,
$$\inf_{x\in\mathcal R\setminus\mathcal S^{\epsilon_N}}|\mu_{n_j,s}(x)-1/2|\ge c_{10}\epsilon_N/4,\qquad\text{where }\epsilon_N=N^{-\beta/(4d)}.$$
As $\epsilon_N\le\epsilon_{n_j,s}=\min_{j\in\{1,\dots,s\}}\{\epsilon_{n_j}\}$, we have $\mathcal R\setminus\mathcal S^{\epsilon_{n_j,s}}\subseteq\mathcal R\setminus\mathcal S^{\epsilon_N}$, and therefore
$$\inf_{x\in\mathcal R\setminus\mathcal S^{\epsilon_{n_j,s}}}|\mu_{n_j,s}(x)-1/2|\ge c_{10}N^{-\beta/(4d)}/4.$$

Lemma 5. For $x_0\in\mathcal S$, we have $2\bar f(x_0)\|\dot\eta^0(x_0)\|=\|\dot\psi^0(x_0)\|$ and $\dot\psi^0(x_0)^T\dot\eta^0(x_0)=\|\dot\eta^0(x_0)\|\|\dot\psi^0(x_0)\|$.
Proof of Lemma 5: By $\eta^0=\mathbb P^0(Y^0=1|X^0=x)=\frac{\pi_1^0f_1^0}{\pi_1^0f_1^0+(1-\pi_1^0)f_0^0}$, we have
$$\dot\eta^0=\frac{\pi_1^0(1-\pi_1^0)(\dot f_1^0f_0^0-f_1^0\dot f_0^0)}{(\pi_1^0f_1^0+(1-\pi_1^0)f_0^0)^2}.$$
For $x_0\in\mathcal S$, $\pi_1^0f_1^0(x_0)=(1-\pi_1^0)f_0^0(x_0)=\frac12\bar f(x_0)$, so we have
$$\dot\eta^0(x_0)=\frac{\pi_1^0(1-\pi_1^0)(\dot f_1^0(x_0)f_0^0(x_0)-f_1^0(x_0)\dot f_0^0(x_0))}{[\pi_1^0f_1^0(x_0)+(1-\pi_1^0)f_0^0(x_0)]^2}=\frac{\frac12(\pi_1^0\dot f_1^0(x_0)-(1-\pi_1^0)\dot f_0^0(x_0))}{\bar f(x_0)}=\frac{\dot\psi^0(x_0)}{2\bar f(x_0)}.$$
Therefore, $2\bar f(x_0)\|\dot\eta^0(x_0)\|=\|\dot\psi^0(x_0)\|$ and
$$\dot\psi^0(x_0)^T\dot\eta^0(x_0)=2\bar f(x_0)\dot\eta^0(x_0)^T\dot\eta^0(x_0)=\|\dot\eta^0(x_0)\|\|\dot\psi^0(x_0)\|.$$
Next, we have
$$|R_{812}|=\Big|\mathbb E_j\Big[\frac{Y^j_{(i)}+\hat b_j-1}{\hat a_j+\hat b_j-1}-\frac{Y^j_{(i)}+b_j-1}{a_j+b_j-1}\Big]\Big|=\Big|\frac{1}{a_j+b_j-1}\mathbb E_j(\hat b_j-b_j)\Big|=O(t^2_{n_s}+s^2_{n_s})=o(t^2_{\tilde n}+s^2_{\tilde n}).\tag{S.30}$$
The last second equality holds by Lemma 9. Combining (S.29) and (S.30), we have
$$|R_{81}|=\sup_{x\in\mathcal S^{\epsilon_{n_j}}}\Big|\sum_{i=1}^{n_j}w_{j,i}\mathbb E_j(\bar Y^j_{(i)}-\tilde Y^j_{(i)})\Big|=o(t^2_{\tilde n}+s^2_{\tilde n}).\tag{S.31}$$
Therefore, combining (S.28) and (S.31), we have
$$\sup_{x\in\mathcal S^{\epsilon_{n_j}}}|\hat\mu^j_{n_j}(x)-\eta^0(x)-a^0(x)t_{n_j}|=o(t_{n_j}+t^2_{\tilde n}+s^2_{\tilde n})=o(\check t_{n_j}).$$
This completes the proof of Lemma 8.
Lemma 9. Given $\hat a$ and $\hat b$ derived from Algorithm 2, we have, uniformly for $w_{n_j}\in W_{n_j,\beta}$,
$$|\mathbb E_j(\hat a_j-a_j)|=O(t^2_{n_s}+s^2_{n_s}),\qquad|\mathbb E_j(\hat b_j-b_j)|=O(t^2_{n_s}+s^2_{n_s}).$$
Proof of Lemma 9: We decompose
$$\begin{aligned}|\mathbb E_j(\hat a_j-a_j)|&=\Bigg|\mathbb E_j\Bigg[\frac{\sum_{i=1}^{n_j}\mathbf 1\{\hat\phi_{n_s,k}(X^j_i)=1,Y^j_i=1\}}{\sum_{i=1}^{n_j}\mathbf 1\{\hat\phi_{n_s,k}(X^j_i)=1\}}\Bigg]-\frac{\mathbb E_j(\mathbf 1\{Y^j=1,Y^0=1\})}{\mathbb E^0(Y^0)}\Bigg|\\&=\Bigg|\mathbb E_j\Bigg[\frac{\frac1{n_j}\sum_{i=1}^{n_j}\mathbf 1\{\hat\phi_{n_s,k}(X^j_i)=1,Y^j_i=1\}\,\mathbb E^0(Y^0)-\frac1{n_j}\sum_{i=1}^{n_j}\mathbf 1\{\hat\phi_{n_s,k}(X^j_i)=1\}\,\mathbb E_j(\mathbf 1\{Y^j=1,Y^0=1\})}{\frac1{n_j}\sum_{i=1}^{n_j}\mathbf 1\{\hat\phi_{n_s,k}(X^j_i)=1\}\,\mathbb E^0(Y^0)}\Bigg]\Bigg|\\&\le c_9\Big|\mathbb E_j\Big(\tfrac1{n_j}\textstyle\sum_{i=1}^{n_j}\mathbf 1\{\hat\phi_{n_s,k}(X^j_i)=1,Y^j_i=1\}\Big)\mathbb E^0(Y^0)-\mathbb E_j\Big(\tfrac1{n_j}\textstyle\sum_{i=1}^{n_j}\mathbf 1\{\hat\phi_{n_s,k}(X^j_i)=1\}\Big)\mathbb E_j(\mathbf 1\{Y^j=1,Y^0=1\})\Big|\\&=c_9\big|\mathbb E_j(\mathbf 1\{\hat\phi_{n_s,k}(X^j)=1,Y^j=1\})\mathbb E^0(Y^0)-\mathbb E_j(\hat\phi_{n_s,k}(X^j))\mathbb E_j(\mathbf 1\{Y^j=1,Y^0=1\})\big|\\&\le c_9\big|\mathbb E_j(\mathbf 1\{\hat\phi_{n_s,k}(X^j)=1,Y^j=1\})\mathbb E^0(Y^0)-\mathbb E_j(\mathbf 1\{\hat\phi_{n_s,k}(X^j)=1,Y^j=1\})\mathbb E_j(\hat\phi_{n_s,k}(X^j))\big|\\&\quad+c_9\big|\mathbb E_j(\mathbf 1\{\hat\phi_{n_s,k}(X^j)=1,Y^j=1\})\mathbb E_j(\hat\phi_{n_s,k}(X^j))-\mathbb E_j(\hat\phi_{n_s,k}(X^j))\mathbb E_j(\mathbf 1\{Y^j=1,Y^0=1\})\big|\\&=R_{91}+R_{92},\end{aligned}$$
where $c_9$ is a positive constant.
Next, we have
$$\begin{aligned}|R_{91}|&=\big|\mathbb E_j(\mathbf 1\{\hat\phi_{n_s,k}(X^j)=1,Y^j=1\})\mathbb E^0(Y^0)-\mathbb E_j(\mathbf 1\{\hat\phi_{n_s,k}(X^j)=1,Y^j=1\})\mathbb E_j(\hat\phi_{n_s,k}(X^j))\big|\\&=\mathbb E_j(\mathbf 1\{\hat\phi_{n_s,k}(X^j)=1,Y^j=1\})\,\big|\mathbb E_j(\hat\phi_{n_s,k}(X^j))-\mathbb E^0(Y^0)\big|\\&\le\big|\mathbb E_j(\hat\phi_{n_s,k}(X^j))-\mathbb E^0(Y^0)\big|=\big|\mathbb E_X[\mathbb P(S^0_{n_s}(X)<1/2)-\mathbf 1\{\eta^0(X)<1/2\}]\big|=O(t^2_{n_s}+s^2_{n_s}).\tag{S.32}\end{aligned}$$
The above last equality holds by applying Proposition 1 for the kNN classifier on the $s$-th worker data. Next, we have
$$\begin{aligned}|R_{92}|&=\big|\mathbb E_j(\mathbf 1\{\hat\phi_{n_s,k}(X^j)=1,Y^j=1\})\mathbb E_j(\hat\phi_{n_s,k}(X^j))-\mathbb E_j(\mathbf 1\{Y^j=1,Y^0=1\})\mathbb E_j(\hat\phi_{n_s,k}(X^j))\big|\\&\le\big|\mathbb E_j(\mathbf 1\{\hat\phi_{n_s,k}(X^j)=1,Y^j=1\})-\mathbb E_j(\mathbf 1\{Y^j=1,Y^0=1\})\big|\\&=\big|\mathbb E_j(\mathbf 1\{\hat\phi_{n_s,k}(X^j)=1,Y^j=1,Y^0=0\}+\mathbf 1\{\hat\phi_{n_s,k}(X^j)=0,Y^j=1,Y^0=1\})\big|\\&\le\big|\mathbb E_j(\mathbf 1\{\hat\phi_{n_s,k}(X^j)=1,Y^0=0\}+\mathbf 1\{\hat\phi_{n_s,k}(X^j)=0,Y^0=1\})\big|\\&=\big|\mathbb E_j(\mathbf 1\{\hat\phi_{n_s,k}(X^j)=1\}-\mathbf 1\{Y^0=1\})\big|\\&=\big|\mathbb E_X[\mathbb P(S^0_{n_s}(X)<1/2)-\mathbf 1\{\eta^0(X)<1/2\}]\big|=O(t^2_{n_s}+s^2_{n_s}).\tag{S.33}\end{aligned}$$
The above last equality holds by applying Proposition 1 for the kNN classifier on the $s$-th worker data. Therefore, combining (S.32) and (S.33), we have $|\mathbb E_j(\hat a_j-a_j)|=O(t^2_{n_s}+s^2_{n_s})$. Similarly, we have $|\mathbb E_j(\hat b_j-b_j)|=O(t^2_{n_s}+s^2_{n_s})$.
This completes the proof of Lemma 9.
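The plug-in quality estimates analyzed above can be sketched in a few lines of Python (a toy version of the idea behind Algorithm 2, with a pilot classifier `phi_hat` standing in for the unobserved ground truth; the symmetric formula for $\hat b_j$ is our illustrative assumption, and no smoothing or empty-cell handling is included):

```python
def estimate_quality(phi_hat, Xj, Yj):
    """Plug-in estimates of the two-coin parameters for worker j:
       a_hat = #{phi_hat(X)=1, Y=1} / #{phi_hat(X)=1}
       b_hat = #{phi_hat(X)=0, Y=0} / #{phi_hat(X)=0}
    """
    pos = [y for x, y in zip(Xj, Yj) if phi_hat(x) == 1]
    neg = [y for x, y in zip(Xj, Yj) if phi_hat(x) == 0]
    a_hat = sum(pos) / len(pos)
    b_hat = sum(1 - y for y in neg) / len(neg)
    return a_hat, b_hat
```

Lemma 9 quantifies how the error of the pilot classifier propagates into these ratios: the bias of $\hat a_j$ and $\hat b_j$ is of the same order as the regret of the pilot rule.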
Lemma 10. Uniformly for $w_{n_j}\in W_{n_j,\beta}$, we have
$$\sup_{x\in\mathcal S^{\epsilon_{n_j}}}\Big|[\hat\sigma^j_{n_j}(x)]^2-\frac14\check s^2_{n_j}\Big|=o(\check s^2_{n_j}).$$
The above last equality holds by Lemma 2.
; Gottlieb et al. (2014); Gadat et al. (2016); Sun et al. (2016); Döring et al. (2017); Xue and Kpotufe (2017). See extensive surveys of k-NN classifiers in Devroye et al. (2013); Biau and Devroye (2015); Chen et al. (2018). Applications of the NN classifier in crowdsourcing data have been studied in Diab et al. (2012); Hwang and Lee (2012); Burrows et al. (2013); Li et al. (2019).
Figure 1: Risk (with standard error bar marked) of all methods (except ENN2) and the Bayes rule without expert data. The x-axis indicates different settings with worker quality. Top/middle/bottom: Simulation 1/2/3; left/middle/right: d = 4/6/8.
Figure 2: Risk of optimal ENN, oracle OWNN and the Bayes rule for different γ. Left/middle/right: Simulation 1/2/3, d = 4. The upper bound for the number of worker data in optimal ENN (γ = 4/(d + 4) = 1/2) is shown as a vertical line.

4000) to verify the sharp upper bound for the number of worker data. Under this setup, the upper bound simplifies to γ = 4/(d + 4) (s = N^γ).
ILPD (Ramana et al., 2012), Parkinson (Sakar et al., 2013), Biodeg (Mansouri et al., 2013), Retinopathy (Antal and Hajdu, 2014), and Spambase (Cranor and LaMacchia, 1998), from the UCI machine learning repository (Dua and Graff, 2017). Following Yan et al. (2010) and Raykar et al. (2010), we simulate five workers according to the two-coin model described in Section 2 with the quality setups 1-5 defined in
Figure 3: Risk (with standard error bar marked) of ENN3(k), LFC, DS, naive OWNN, and naive kNN on real data. The x-axis indicates different settings with worker quality. Dataset name, size and dimension are illustrated on top left.
Dietterich, T. G., Lathrop, R. H., and Lozano-Pérez, T. (1997), "Solving the multiple instance problem with axis-parallel rectangles," Artificial Intelligence, 89, 31-71.
Döring, M., Györfi, L., and Walk, H. (2017), "Rate of convergence of k-nearest-neighbor classification rule," The Journal of Machine Learning Research, 18, 8485-8500.
Dua, D. and Graff, C. (2017), "UCI Machine Learning Repository."
Duan, J., Qiao, X., and Cheng, G. (2020), "Statistical Guarantees of Distributed Nearest Neighbor Classification," Advances in Neural Information Processing Systems, 33.
Fix, E. and Hodges Jr, J. L. (1951), "Discriminatory analysis-nonparametric discrimination: consistency properties," Tech. rep., California Univ Berkeley.
Gadat, S., Klein, T., and Marteau, C. (2016), "Classification in general finite dimensional spaces with the K-nearest neighbor rule," The Annals of Statistics, 982-1009.
Gottlieb, L.-A., Kontorovich, A., and Nisnevitch, P. (2014), "Near-optimal sample compression for nearest neighbors," in Advances in Neural Information Processing Systems, pp. 370-378.
Gray, A. (2004), Tubes, Basel: Birkhäuser.
Grigor'eva, M. and Popov, S. (2012), "An upper bound for the absolute constant in the nonuniform version of the Berry-Esseen inequalities for nonidentically distributed summands," in Doklady Mathematics, Springer, vol. 86, pp. 524-526.
Hwang, K. and Lee, S.-Y. (2012), "Environmental audio scene and activity recognition through mobile-based crowdsourcing," IEEE Transactions on Consumer Electronics, 58, 700-705.
Kajino, H., Tsuboi, Y., and Kashima, H. (2012a), "A convex formulation for learning from crowds," in Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, pp. 73-79.
Kajino, H., Tsuboi, Y., Sato, I., and Kashima, H. (2012b), "Learning from crowds and experts," in Workshops at the Twenty-Sixth AAAI Conference on Artificial Intelligence.
Li, J., Yu, H., Zhang, L., and Wen, G. (2019), "Double weighted K-nearest voting for label aggregation in crowdsourcing learning," Multimedia Tools and Applications, 78, 33357-33374.
Mansouri, K., Ringsted, T., Ballabio, D., Todeschini, R., and Consonni, V. (2013), "Quantitative structure-activity relationship models for ready biodegradability of chemicals," Journal of Chemical Information and Modeling, 53, 867-878.
Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z. B., and Swami, A. (2016), "The limitations of deep learning in adversarial settings," in 2016 IEEE European Symposium on Security and Privacy (EuroS&P), IEEE, pp. 372-387.
Ramana, B. V., Babu, M. S. P., and Venkateswarlu, N. (2012), "A critical comparative study of liver patients from USA and INDIA: an exploratory analysis," International Journal of Computer Science Issues (IJCSI), 9, 506.
Figure S1: Risk (with standard error bar marked) of all methods and the Bayes rule, with expert data. The x-axis indicates different settings with worker quality. Top/middle/bottom: Simulation 1/2/3; left/middle/right: d = 4/6/8.
$$\sup_{x\in\mathcal S^{\epsilon_{n_j,s}}}|\mu_{n_j,s}(x)-\eta^0(x)-a^0(x)t_{n_j,s}|=o(t_{n_j,s}),\tag{S.1}$$
$$\sup_{x\in\mathcal S^{\epsilon_{n_j,s}}}\Big|\sigma^2_{n_j,s}(x)-\frac14s^2_{n_j,s}\Big|=o(s^2_{n_j,s}).\tag{S.2}$$
$$=o\Big(\sum_{j=1}^{s}(W_j)^2s^2_{n_j}\Big)=o(s^2_{n_j,s}).$$
The last inequality holds by the triangle inequality and $\epsilon_{n_j,s}=\min_{j\in\{1,\dots,s\}}\{\epsilon_{n_j}\}$. The last second equality holds by Lemma 2.
Lemma 6. (Sun et al., 2016) For any distribution function $G$ with finite second moment, constant $a$, and constant $b>0$, the integral $\int_{-\infty}^{\infty}u\{G(-bu-a)-\mathbf 1\{u<0\}\}\,du$ is finite; for $G=\Phi$ it equals $(1+a^2)/(2b^2)$.
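For the Gaussian case the closed form $(1+a^2)/(2b^2)$ is exactly the computation that collects $B_1$ (from the constant term) and $B_2$ (from the $a^2$ term) in (S.8); the explicit value stated here is our own verification, for $G=\Phi$ only. A quick numerical check by trapezoid quadrature:

```python
import math

def Phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def lemma6_integral(a, b, lo=-30.0, hi=30.0, n=120000):
    # Trapezoid approximation of  int u {Phi(-b*u - a) - 1(u < 0)} du;
    # the integrand decays like a Gaussian tail, so [-30, 30] suffices.
    h = (hi - lo) / n
    total = 0.0
    for k in range(n + 1):
        u = lo + k * h
        f = u * (Phi(-b * u - a) - (1.0 if u < 0 else 0.0))
        total += f if 0 < k < n else 0.5 * f
    return total * h
```

For example, $(a,b)=(0,1)$ gives $1/2$ and $(a,b)=(1,2)$ gives $1/4$, matching $(1+a^2)/(2b^2)$.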
Table 1: Quality and size setups for worker data.
Table S1: Comparison of true and estimated worker quality based on Algorithm 2.
Table S2: Comparison of true and estimated worker quality based on Algorithm 3.
https://www.mturk.com/mturk/welcome
In this paper, we assume that worker quality $a_j$ and $b_j$ are both constants, depending only on the unobserved ground-truth label, but not on $x$; i.e., worker $j$ has the same quality on all instances.

In the case of kNN, this means $k$ satisfies $\max(n^\beta,(\log n)^2)\le k\le\min(n^{1-\beta d/4},n^{1-\beta})$.
Appendix S.II. We remark that the first term in (2) can be viewed as the variance component of regret, and the second term the squared bias. By minimizing the asymptotic regret (2) over weights, Samworth (2012) obtained the optimal weighted nearest neighbor (OWNN) classifier.

$$\hat\phi^{E2}_{n_j,s}(x)=\mathbf 1\{S^{E2}_{n_j,s}(x)\ge1/2\}.$$
Since $\mathbb P(\hat\phi^{E2}_{n_j,s}(x)=0)=\mathbb P(S^{E2}_{n_j,s}(x)<1/2)$, the regret of ENN2 becomes
$$\mathrm{Regret}(\hat\phi^{E2}_{n_j,s})=\int_{\mathcal R}\big[\mathbb P\big(S^{E2}_{n_j,s}(x)<1/2\big)-\mathbf 1\{\eta^0(x)<1/2\}\big]\,d\check P^0(x).$$
Let $\hat\mu^j_{n_j}(x)=\mathbb E\{S^{E2}_{n_j}(x)\}$ and $[\hat\sigma^j_{n_j}(x)]^2=\mathrm{Var}\{S^{E2}_{n_j}(x)\}$. Denote $\check s^2_{n_j}=s^2_{n_j}+(s^2_{\tilde n}+t^2_{\tilde n})$ and $\check t_{n_j}=t_{n_j}+(s^2_{\tilde n}+t^2_{\tilde n})$, where $\tilde n=\max_{j\in\{1,\dots,s-1\}}n_j$. From Lemma 8 and Lemma 10, we have, uniformly for $w_{n_j}\in W_{n_j,\beta}$, the corresponding bounds with $\check t_{n_j}$ and $\check s^2_{n_j}$. Let $\hat\mu_{n_j,s}(x)=\mathbb E\{S^{E2}_{n_j,s}(x)\}$ and $\hat\sigma^2_{n_j,s}(x)=\mathrm{Var}\{S^{E2}_{n_j,s}(x)\}$. We have $\check s^2_{n_j,s}=\sum_{j=1}^{s}W_j^2\check s^2_{n_j}$ and $\check t_{n_j,s}=\sum_{j=1}^{s}W_j\check t_{n_j}$. From Lemma 11, we have, uniformly for $w_{n_j}\in W_{n_j,\beta}$, (S.20) and (S.21). Comparing (S.20) and (S.1), we find that $\hat\mu_{n_j,s}(x)$ and $\mu_{n_j,s}(x)$ have a similar property. In addition, comparing (S.21) and (S.2), $\hat\sigma^2_{n_j,s}(x)$ and $\sigma^2_{n_j,s}(x)$ also have a similar property. Therefore, after substituting $\mu_{n_j,s}(x)$ and $\sigma^2_{n_j,s}(x)$ by $\hat\mu_{n_j,s}(x)$ and $\hat\sigma^2_{n_j,s}(x)$ in Step 1, Step 2 and Step 3 of Section S.IV, we have, up to an $o(\check s^2_{n_j,s}+\check t^2_{n_j,s})$ difference, (S.22). Therefore, applying Lemma 12 and (S.22), we have the conclusion of Theorem 3. This completes the proof of Theorem 3.

Lemma 7. (Sun et al., 2016) Given

Proof of Lemma 8: First, we decompose. The above first equality holds by Lemma 2. Next, we have, where the last second equality holds by Lemma 9. The above last second inequality holds by (S.31). Therefore, we have. This completes the proof of Lemma 10.

Lemma 11. Uniformly for $w_{n_j}\in W_{n_j,\beta}$, we have $\sup_{x\in\mathcal S^{\epsilon_{n_j,s}}}$

Proof of Lemma 11: We have. The last inequality holds by the triangle inequality and $\epsilon_{n_j,s}=\min_{j\in\{1,\dots,s\}}\{\epsilon_{n_j}\}$. The last second equality holds by Lemma 10.

Lemma 12.
Uniformly for $w_{n_j}\in W_{n_j,\beta}$, we have $\check s^2_{n_j,s}+\check t^2_{n_j,s}=O(s^2_{n_j,s}+t^2_{n_j,s})$.

Proof of Lemma 12: We have
$$\begin{aligned}\check s^2_{n_j,s}+\check t^2_{n_j,s}&=\sum_{j=1}^{s}(W_j)^2s^2_{n_j}+\Big(\sum_{j=1}^{s}W_jt_{n_j}\Big)^2+\sum_{j=1}^{s}(W_j)^2(s^2_{\tilde n}+t^2_{\tilde n})+2\sum_{j=1}^{s}W_j(s^2_{\tilde n}+t^2_{\tilde n})t_{n_j}+\Big[\sum_{j=1}^{s}W_j(s^2_{\tilde n}+t^2_{\tilde n})\Big]^2\\&=s^2_{n_j,s}+t^2_{n_j,s}+O\Big(\sum_{j=1}^{s}(W_j)^2(s^2_{n_j}+t^2_{n_j})\Big)+O\Big(\sum_{j=1}^{s}W_j(s_{n_j}+t_{n_j})t_{n_j}\Big)+O\Big(\Big[\sum_{j=1}^{s}W_j(s_{n_j}+t_{n_j})\Big]^2\Big)\\&=O(s^2_{n_j,s}+t^2_{n_j,s}).\end{aligned}$$
The last second equality holds by $n_j/n_s=O(1)$, $\tilde n=\max_{j\in\{1,\dots,s-1\}}n_j$ and $t^2_{n_s}+s^2_{n_s}=O(s^2_{n_j}+t^2_{n_j})=o(s_{n_j}+t_{n_j})$.
Supervised learning from multiple experts: whom to trust when everyone lies a bit. V C Raykar, S Yu, L H Zhao, A Jerebko, C Florin, G H Valadez, L Bogoni, L Moy, Proceedings of the 26th Annual international conference on machine learning. the 26th Annual international conference on machine learningRaykar, V. C., Yu, S., Zhao, L. H., Jerebko, A., Florin, C., Valadez, G. H., Bogoni, L., and Moy, L. (2009), "Supervised learning from multiple experts: whom to trust when everyone lies a bit," in Proceedings of the 26th Annual international conference on machine learning, pp. 889-896.
Learning from crowds. V C Raykar, S Yu, L H Zhao, G H Valadez, C Florin, L Bogoni, L Moy, Journal of Machine Learning Research. 11Raykar, V. C., Yu, S., Zhao, L. H., Valadez, G. H., Florin, C., Bogoni, L., and Moy, L. (2010), "Learning from crowds." Journal of Machine Learning Research, 11.
|
[] |
[
"Ultrafast pump-probe dynamics in ZnSe-based semiconductor quantum-wells",
"Ultrafast pump-probe dynamics in ZnSe-based semiconductor quantum-wells"
] |
[
"Henni Ouerdane \nPhysics Department\nHeriot-Watt University\nEH14 4ASEdinburghUK\n",
"George Papageorgiou \nPhysics Department\nHeriot-Watt University\nEH14 4ASEdinburghUK\n",
"Ian Galbraith \nPhysics Department\nHeriot-Watt University\nEH14 4ASEdinburghUK\n",
"Ajoy K Kar \nPhysics Department\nHeriot-Watt University\nEH14 4ASEdinburghUK\n",
"Brian S Wherrett \nPhysics Department\nHeriot-Watt University\nEH14 4ASEdinburghUK\n"
] |
[
"Physics Department\nHeriot-Watt University\nEH14 4ASEdinburghUK",
"Physics Department\nHeriot-Watt University\nEH14 4ASEdinburghUK",
"Physics Department\nHeriot-Watt University\nEH14 4ASEdinburghUK",
"Physics Department\nHeriot-Watt University\nEH14 4ASEdinburghUK",
"Physics Department\nHeriot-Watt University\nEH14 4ASEdinburghUK"
] |
[] |
Pump-probe experiments are used as a controllable way to investigate the properties of photoexcited semiconductors, in particular, the absorption saturation. We present an experiment-theory comparison for ZnSe quantum wells, investigating the energy renormalization and bleaching of the excitonic resonances. Experiments were performed with spin-selective excitation and above-bandgap pumping. The model, based on the semiconductor Bloch equations in the screened Hartree-Fock approximation, takes various scattering processes into account phenomenologically. Comparing numerical results with available experimental data, we explain the experimental results and find that the electron spin-flip occurs on a time scale of 30 ps.
|
10.1364/josab.19.002022
|
[
"https://arxiv.org/pdf/0807.3092v1.pdf"
] | 118,517,467 |
0807.3092
|
af752cfdf662d7bd8cc66b48b83b74243fbebed5
|
Ultrafast pump-probe dynamics in ZnSe-based semiconductor quantum-wells
19 Jul 2008
Henni Ouerdane
Physics Department
Heriot-Watt University
EH14 4AS, Edinburgh, UK
George Papageorgiou
Physics Department
Heriot-Watt University
EH14 4AS, Edinburgh, UK
Ian Galbraith
Physics Department
Heriot-Watt University
EH14 4AS, Edinburgh, UK
Ajoy K Kar
Physics Department
Heriot-Watt University
EH14 4AS, Edinburgh, UK
Brian S Wherrett
Physics Department
Heriot-Watt University
EH14 4AS, Edinburgh, UK
I. INTRODUCTION
From an experimental point of view, one can investigate the optical properties of semiconductors by exciting carriers (by means of optical pumping or carrier injection) and measuring the absorption of a subsequent probe pulse. By comparison of this spectrum with the linear absorption spectrum, one obtains information on the influence of the excitation on the absorption phenomenon and insight into the electronic and optical properties of the electron-hole plasma. Interpretation of experimental results is, however, nontrivial, given the substantial influence of Coulomb and many-body effects, which give rise to a rich variety of broadening and energy renormalizations. Moreover, the time evolution of the initial electron-hole plasma makes the whole problem challenging, both theoretically and numerically. In this paper we present a model that can describe the time evolution of the nonequilibrium electron-hole system but that is also simple enough to account for many dynamical processes that occur, using different polarizations of the pump and probe beams.
Much previous theoretical work has focused on the study of the absorption phenomenon in semiconductor quantum wells in a quasi-equilibrium situation [1,2,3,4,5]. Here, based on the available experimental data, we move beyond such a quasi-equilibrium situation. We include six dynamical processes that lead eventually to a thermal quasi-equilibrium in the electron-hole plasma: relaxation of the hot carriers' distributions toward Fermi-Dirac distributions, thermalization among the carrier gases, plasma cooling, carrier spin-flip, scattering between the light- and heavy-hole bands, and recombination (both radiative and nonradiative). A true microscopic treatment accounting for all these many-body effects would be computationally prohibitive. Instead, we use a phenomenological approach to describe the time evolution of the hot electron-hole plasma. The absorption spectra are evaluated from the time-dependent semiconductor Bloch equations (SBE) in the screened Hartree-Fock approximation. In this work, as we focus on time scales over many picoseconds, we do not need to consider the full coherent dynamics involving the nonlinear scattering of pump light into the probe direction as a result of the nonlinear polarization interaction. The focus we choose allows and justifies the phenomenological treatment of the scattering processes for our qualitative analysis. Varying the delay between the pump and probe beams will allow us to obtain the time evolution of both the bleaching of the exciton peaks and the energy renormalizations, which, compared with experimental data, will give an estimate of the time scales of the scattering processes mentioned above. Simultaneously, as we show below, the dynamics of the electron-hole plasma (density, temperature, plasma screening, and distribution of each type of carrier in each spin state) can be monitored.

* Electronic address: [email protected]
The aim of this paper is to present our model for the time evolution of the electron-hole plasma created by spin-selective excitation in the absorption continuum and study its influence on absorption spectra. The experimental setup and results are described in Section 2. In Section 3, we present our theoretical model for the time evolution of the electron-hole plasma, including the semiconductor Bloch equations that have to be solved numerically together with the rate equations used. We discuss our numerical results, comparing them with experimental data, in Section 4.
II. ULTRAFAST PUMP AND PROBE EXPERIMENTS
A femtosecond laser system consisting of a Beamlok argon-ion (Ar+) laser, a Tsunami mode-locked Ti:Sapphire laser, a Merlin Q-switched Nd:YLF laser, a Spitfire pulsed Ti:Sapphire regenerative amplifier and an ultrafast kilohertz optical parametric amplifier (OPA) was used for the generation of the ultrafast pump pulses. The Ar+ laser and the Merlin laser were the excitation sources for the Tsunami laser and the Spitfire amplifier, respectively. The Tsunami output was fed to the Spitfire, where it was temporally stretched, amplified, and finally temporally compressed. The Spitfire output provided the pump beam for the frequency-conversion processes in the OPA. The overall system was capable of delivering a 1 kHz train of ∼ 150 fs pulses, and the wavelength was tuned to 459 nm (∼ 2.69 eV). For the generation of the white-light continuum, ∼ 5% of the Spitfire output (λ = 800 nm) was focused on a 10-mm-thick quartz cuvette containing deionized water. The pump pulse was used to excite the semiconductor sample, which was mounted on the cryostat and cooled to 4 K. The pump power, and therefore the carrier density, could be controlled by the use of a neutral density filter. The pump pulse power incident in the cryostat was 0.06 mW. The changes induced in the transmitted probe pulse energy were measured by an optical spectrum multichannel analyzer as a function of the time delay between the pump and the probe pulses. We selected a small portion of the white-light continuum before it fell onto the sample, using a glass microscope slide to monitor its stability. The spot-size radius of the probe beam was 190 µm, considerably smaller than that of the pump in order to probe a region of uniform photoexcited density. Both pump and probe beams were circularly polarized and independently controllable by λ/4 plates. Opposite circular (OCP) and same circular polarization (SCP) configurations were employed.
We provided the time resolution by delaying the white-light continuum pulses relative to the pump pulses. The experimental work was performed on a ZnSe/ZnCdSe multiple-quantum-well structure of twenty 4 nm-wide wells grown by molecular beam epitaxy on a GaAs substrate. The 20% Cd content in the wells produces a light-hole-heavy-hole exciton splitting of more than 30 meV.
The absorption spectra, Fig. 1, show that at early times, both the heavy-hole and light-hole exciton peaks are bleached but not shifted much. The broadening that is due to the interaction-induced dephasing plays an important role in both SCP and OCP, but for SCP the heavy-hole exciton peak is more bleached because of the Pauli blocking effect that reduces the oscillator strength. In contrast, the light hole exciton peak is more bleached in OCP. A detailed description and an interpretation of these experimental results are given below with our numerical analysis.
Increasing the delay between the pump and the probe beams suggests that many dynamical processes occur in the electron-hole plasma and change the shape of the absorption spectra. The dynamics of the absorption spectra is shown in Fig. 2, where both the bleaching (∆α/α) and the energy shift of the heavy-hole exciton peak are given as functions of the delay between the pump and the probe. Experimental data show an overall decay of the exciton peak bleaching as well as a convergence of the OCP and the SCP curves. They also show an initial blueshift at early times and an energy shift that brings the resonances back to the linear-spectrum exciton resonance. The energy shift exhibits the same type of behavior as the exciton bleaching: the OCP and SCP curves converge at later delays.
III. THEORETICAL MODEL
To describe and explain what we observe, we constructed a theoretical model that describes the time evolution of the electron-hole plasma and its influence on the absorption spectra.
A. Polarization dynamics
Inasmuch as the heavy-hole-light-hole band splitting is ∆E cs = 30 meV, we neglect the heavy-hole (hh) and light-hole (lh) coupling. The interband polarization equations [6] are:
$$\frac{\partial}{\partial t}\,p^{\lambda}_{k}(t) = -i\left(e_{e,k} + e_{\lambda,k}\right)p^{\lambda}_{k}(t) - i\left(n^{\sigma}_{e,k}(t) + n^{\sigma'}_{\lambda,k}(t) - 1\right)\omega^{\lambda}_{R,k}(t),\tag{1}$$
where the Rabi energies, ω λ R,k , are given by:
$$\omega^{\lambda}_{R,k}(t) = d^{\lambda}_{cv}\,E(t) + \sum_{q\neq k} V^{s}_{|k-q|}\,p^{\lambda}_{q}(t),\tag{2}$$
for λ = hh, lh. The energies e i,k are the renormalized energies evaluated in the static plasmon-pole approximation including the contribution of the pair continuum [1]:
$$e_{i,k} = \epsilon_{i,k} + \Sigma_{exc,i}(k) + \tfrac{1}{2}\,\Delta E_{CH},\qquad i = e,\ hh\ \text{and}\ lh,\tag{3}$$
with Σ exc,i the screened exchange self-energy, ∆E CH the Coulomb hole energy [7], and n σ c,k (t) the occupancy of the carrier of type c with spin σ at time t.
The temporal envelope of the probe field, E(t), is assumed to be Gaussian and the optical suceptibility χ(ω) is defined as:
$$\chi(\omega) = \frac{\hat{P}(\omega)}{\epsilon\,\hat{E}(\omega)},\tag{4}$$
where ǫ is the dielectric constant and P̂(ω) and Ê(ω) are the Fourier transforms of the polarization function P(t) = d_cv Σ_k p_k(t) and of the electric field E(t). The susceptibility is a complex function whose imaginary part is proportional to the absorption: α(ω) ∝ Im χ(ω) [1].
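To make Eq. (4) concrete, here is a minimal numerical sketch (not the authors' code): the susceptibility is obtained as the ratio of the Fourier transforms of the polarization and of the Gaussian probe field, and the absorption follows from its imaginary part. The damped-oscillator polarization standing in for the SBE solution, and all numerical values, are illustrative assumptions.

```python
import numpy as np

# Sketch of Eq. (4): chi(omega) = P(omega) / (eps * E(omega)).
# P(t) here is the linear response of a toy damped oscillator (standing
# in for the SBE solution of Eq. (1)); eps, omega_0 and gamma are
# illustrative numbers, not material parameters.

def susceptibility(P_t, E_t, dt, eps=8.8):
    """Return the FFT frequency grid and chi(omega) = P_w / (eps * E_w)."""
    P_w = np.fft.fft(P_t)
    E_w = np.fft.fft(E_t)
    w = 2.0 * np.pi * np.fft.fftfreq(len(P_t), d=dt)
    return w, P_w / (eps * E_w)

dt = 0.05
t = np.arange(0.0, 200.0, dt)
E_t = np.exp(-((t - 5.0) / 0.5) ** 2)            # Gaussian probe envelope
omega_0, gamma = 2.0, 0.1                        # toy resonance / dephasing
impulse = np.exp(-gamma * t) * np.sin(omega_0 * t)
P_t = dt * np.convolve(E_t, impulse)[: len(t)]   # causal linear response

w, chi = susceptibility(P_t, E_t, dt)
# alpha(omega) ~ Im chi(omega); look only where the probe has spectral weight
E_w = np.fft.fft(E_t)
ok = (np.abs(E_w) > 1e-3 * np.abs(E_w).max()) & (w > 0)
w_peak = w[ok][np.argmax(np.abs(chi.imag[ok]))]  # absorption resonance
```

Here `w_peak` recovers the toy resonance near ω₀; in the full calculation the same ratio of transforms yields the excitonic absorption spectra.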
To solve Eq. (1) one needs knowledge of the carrier distributions to evaluate the phase space filling factor and the plasma temperature and density to calculate the screened Coulomb potential energy V s q entering the definition of the Rabi frequency, Eq. (2). We neglect coherent polarization nonlinearities, as we are considering above bandgap pumping and time scales longer than the dephasing time.
B. Evolution of the carriers distributions
In our model the time evolution of the distribution function, n σ c,k (t), of a carrier c with a spin σ takes several dynamical processes into account. Relaxation of the carrier distributions, carrier spin-flip, recombination and light hole scattering yield the following system of six coupled differential equations (σ = ↑ or σ = ↓, indicating the two spin-states):
$$\frac{\partial n^{\sigma}_{lh,k}}{\partial t} = \frac{n^{eq,\sigma}_{lh,k} - n^{\sigma}_{lh,k}}{\tau_{eq}} + \frac{n^{therm,\sigma}_{lh,k} - n^{\sigma}_{lh,k}}{\tau_{therm}} + \frac{n^{\sigma'}_{lh,k} - n^{\sigma}_{lh,k}}{\tau^{lh}_{sf}} - \frac{n^{\sigma}_{lh,k}\,n^{\sigma}_{e,k}}{\tau^{e,lh}_{rad}} - \frac{n^{\sigma}_{lh,k}}{\tau^{lh}_{nr}} - \frac{n^{\sigma}_{lh,k}}{\tau_{lh}}\tag{5}$$

(the light-hole equation is shown; the electron and heavy-hole equations are built from the analogous terms).
These equations are numerically solved to yield the time dependence of the carrier distributions n σ c,k (t), whose values lie between 0 and 1. The various terms that enter into the above system are now described.
Intraband scattering (τ eq terms): The optical pumping creates a population of hot carriers. One of the fastest processes (i.e. subpicosecond time scale [8]) that occurs in each band is the rapid equilibration of these carriers: due to carrier-carrier scattering the initial hot carrier distributions evolve towards Fermi-Dirac quasi-equilibrium distributions.
Extensive work on this specific topic involving the quantum Boltzmann equation can be found in the literature [9,10,11,12].
Here we use a phenomenological approach to describe the time evolution of the hot carrier distributions, characterized by a relaxation time τ eq associated with intraband scattering. Thus, the quantities n eq,σ c,k in Eq. (5) are Fermi-Dirac distributions describing the quasi-equilibrium for each spin-polarized subsystem, which has the same carrier density and energy as the nonequilibrium distribution. Note that the intraband scattering changes neither the carrier densities nor the total kinetic energy.
Carrier thermalization (τ therm terms): A process that also influences the time evolution of the carrier distributions is the thermalization among carriers of different types. The scattering between electrons and heavy and light holes drives the initial carrier temperatures, T σ c , to a common quasi-equilibrium temperature, T eq , which can be different from the lattice temperature, T lat . To evaluate T eq , one needs to calculate the total plasma energy E tot and then compute the corresponding temperature T eq assuming a Fermi-Dirac distribution. To account for the thermalization process, we suppose that the time evolution of the distribution functions is characterized by a phenomenological time τ therm . Thus, the quantities n therm,σ c,k in Eq. (5) are Fermi-Dirac distributions that describe the quasi-equilibrium for each spin-polarized subsystem; they have the same carrier density but an energy corresponding to the common quasi-equilibrium temperature T eq .
Carrier spin-flip (τ sf terms): The spin-flip is a process by which the spin orientation of a carrier is reversed. Models that describe such a phenomenon have been proposed, e.g. Elliott-Yafet [13,14] and D'yakonov-Perel' [15], but the detailed mechanisms responsible for the spin-flip process are not yet well understood despite extensive studies [16,17,18]. In this work, we only consider the spin-flip phenomenologically, with the associated characteristic time τ sf that is of interest to us. The spin-flip introduces a coupling between the spin states σ and σ ′ for a given type of carrier c, which considerably complicates the solution of Eq. (5). The value of τ sf is poorly known, and one of our aims in using this model will be to extract this value from the experimental data.
Recombination (τ rad and τ nr terms): We distinguish here between radiative and nonradiative recombinations. Radiative recombination is a process that occurs on relatively long time scale compared to the others described above (τ rad = 1.6 ns in ZnSe). The total observed luminescence intensity I pl (t) is directly linked to the distribution functions of these carriers:
$$I^{\lambda}_{pl}(t) \propto \sum_{k} \left|d^{\lambda}_{cv}\right|^{2} n^{\sigma}_{e,k}(t)\,n^{\sigma'}_{\lambda,k}(t),\tag{6}$$
for the heavy- and light-hole optical transitions, where λ = lh or hh, and d cv is the dipole matrix element. For a detailed study of radiative recombination and the spontaneous emission rate, see Ref. [19].
Detailed calculations with which to evaluate τ rad can be found in Ref. [20]. The term τ nr on the right-hand side of Eq. (5) accounts for the nonradiative recombination, which is a faster process than the radiative recombination at high plasma density.
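Equation (6) amounts to a weighted k-sum of products of occupancies. The sketch below (illustrative, not the authors' code) uses toy occupancies and the 3:1 hh:lh weighting of |d_cv|² quoted later in Sec. III to compare the relative hh and lh luminescence intensities.

```python
import numpy as np

# Sketch of Eq. (6): I_pl ~ sum_k |d_cv|^2 n_e(k) n_h(k). The toy
# occupancies below and the 3:1 hh:lh weighting of |d_cv|^2 are
# illustrative stand-ins, not computed material values.

def i_pl(d2, n_e, n_h):
    """Relative spontaneous-emission intensity for one transition."""
    return d2 * np.sum(n_e * n_h)

k = np.linspace(0.0, 1.0, 200)
n_e = np.exp(-5.0 * k**2)             # toy electron occupancy
n_hh = np.exp(-8.0 * k**2)            # toy heavy-hole occupancy
n_lh = 0.1 * np.exp(-8.0 * k**2)      # light holes nearly depleted

I_hh = i_pl(3.0, n_e, n_hh)           # hh transition
I_lh = i_pl(1.0, n_e, n_lh)           # lh transition
```

With the light-hole band depleted, the hh luminescence dominates, as expected from Eq. (6).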
Heavy hole and light hole scattering (τ lh terms): Away from zone center, both heavy and light holes are mixtures of the bulk valence band states. This mixture enhances the intersubband scattering between the heavy- and light-hole bands. Many processes that involve other quasi-particles (heavy and light holes, phonons, excitons . . . ) can facilitate this type of scattering, and it is nontrivial to assess the relative importance of each of the processes. To avoid such complication we make a simple approximation, assuming that this scattering is spin independent, e.g., that |3/2, 1/2⟩_lh scatters equally into |3/2, 3/2⟩_hh. When the quasi-equilibrium between the heavy-hole and the light-hole bands is reached, their chemical potentials have to satisfy the relation µ lh = µ hh − ∆E cs because of the band splitting that is due to confinement and strain. Solving this equation at low temperature and for typical electron-hole plasma densities, i.e. of the order of 10 11 cm −2 or more, yields a negligible light-hole density: N lh /N ≈ 0. Hence we include a simple decay characterized by the time τ lh in Eqs. (5) to model the light-hole scattering.
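As a concrete illustration of how Eqs. (5) can be integrated, the sketch below steps the light-hole occupancy of a single k-state forward in time with the phenomenological times of Sec. IV. The quasi-equilibrium targets n_eq and n_therm are frozen at zero for brevity (in the full model they are Fermi-Dirac distributions recomputed from the instantaneous density and temperature); the initial occupancies are illustrative.

```python
# Sketch of the phenomenological rate equation (5) for the light-hole
# occupancy of one k-state and spin. Times in ps (values of Sec. IV);
# the targets n_eq and n_therm are held at 0 here for simplicity.

def dn_lh(n, n_flip, n_e, n_eq=0.0, n_therm=0.0,
          tau_eq=0.1, tau_therm=1.0, tau_sf=30.0,
          tau_rad=1600.0, tau_nr=30.0, tau_lh=0.5):
    return ((n_eq - n) / tau_eq            # intraband relaxation
            + (n_therm - n) / tau_therm    # thermalization
            + (n_flip - n) / tau_sf        # spin-flip coupling
            - n * n_e / tau_rad            # radiative recombination
            - n / tau_nr                   # nonradiative recombination
            - n / tau_lh)                  # scattering into the hh band

n_up, n_dn, n_e = 0.6, 0.0, 0.5            # toy initial occupancies
dt = 0.005                                 # ps, small vs the fastest time
for _ in range(int(5.0 / dt)):             # evolve for 5 ps
    d_up = dn_lh(n_up, n_dn, n_e)
    d_dn = dn_lh(n_dn, n_up, n_e)
    n_up += dt * d_up
    n_dn += dt * d_dn
```

With these parameters the light-hole occupancy is drained on the sub-picosecond τ_lh and τ_eq scales, mirroring the fast light-hole depletion discussed above.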
C. Plasma cooling
Intraband scattering dominates the fast carrier distribution relaxation but is not a process that dissipates energy. Thus the initial electron-hole plasma temperature is determined by the kinetic energy of the nonequilibrium distribution created by the femtosecond pump pulse. Depending on the energy of the excitation pulse, the effective plasma temperature can be well above the lattice temperature. The most important source of energy dissipation is the coupling of the electronic system with the lattice. The plasma cooling can be treated by solution of the quantum Boltzmann equation that describes the carrier-phonon scattering [11,12], but we restrict ourselves to a phenomenological approach. Hence, the loss of carriers' kinetic energy obeys a rate equation:
$$\frac{dE^{\sigma}_{tot,c}}{dt} = \frac{E^{eq,\sigma}_{tot,c} - E^{\sigma}_{tot,c}}{\tau_{cool}},\tag{7}$$
where E eq,σ tot,c is the total energy per unit area calculated from the quasi-equilibrium distribution n eq,σ c,E at the lattice temperature. Eq. (7) has to be solved numerically.
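Equation (7) is a plain exponential relaxation, so a forward-Euler integration can be checked against the closed-form solution. The initial and lattice energies below are illustrative; τ_cool = 1 ps is the value used in Sec. IV.

```python
import math

# Sketch of the cooling law, Eq. (7): dE/dt = (E_eq - E)/tau_cool.
# Energies in arbitrary units; tau_cool = 1 ps as in Sec. IV.

def cooled_energy(E0, E_eq, tau, t):
    """Closed-form solution: E(t) = E_eq + (E0 - E_eq) exp(-t/tau)."""
    return E_eq + (E0 - E_eq) * math.exp(-t / tau)

E0, E_eq, tau, dt = 5.0, 1.0, 1.0, 1e-3
E = E0
for _ in range(int(3.0 / dt)):       # forward Euler up to t = 3 ps
    E += dt * (E_eq - E) / tau

exact = cooled_energy(E0, E_eq, tau, 3.0)
```

The numerical energy tracks the exponential decay toward the lattice value, which is all Eq. (7) encodes.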
To evaluate the effective carrier temperatures at each point in time, we use the time evolution of the nonequilibrium energies E σ tot,c (t). We first calculate the quasi-equilibrium energies as an explicit function of temperature, E eq,σ c (T σ c ); then, equating E eq,σ c (T σ c ) and the nonequilibrium energies E σ tot,c (t) gives the effective temperature T σ c at time t:
$$E^{eq,\sigma}_{c}(T^{\sigma}_{c}) = \frac{m_{c}}{2\pi\hbar^{2}}\left(e^{2\pi\hbar^{2}\beta^{\sigma}_{c}N^{\sigma}_{c}/m_{c}} - 1\right)\int_{0}^{\infty}\frac{E\,dE}{e^{\beta^{\sigma}_{c}E} + e^{2\pi\hbar^{2}\beta^{\sigma}_{c}N^{\sigma}_{c}/m_{c}} - 1},\tag{8}$$
where β σ c = 1/k B T σ c . Knowledge of the distributions n σ c,k (t) and temperatures T σ c allows us to evaluate the time evolution of the Pauli blocking factor that enters into the equations of motion for the interband polarizations, Eq.(1).
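The inversion of Eq. (8) can be sketched as a bisection on the monotonic map T → E_eq(T). The units below are reduced (k_B = 1 and 2D density of states m_c/(2πℏ²) = 1), so the numerical values are illustrative only.

```python
import numpy as np

# Sketch of inverting Eq. (8): find the effective temperature T whose
# quasi-equilibrium 2D kinetic energy matches a given nonequilibrium
# energy. Reduced units: k_B = 1, g = m_c/(2*pi*hbar^2) = 1.

def e_eq(T, N, g=1.0, n_grid=20000, E_max=300.0):
    """Quasi-equilibrium kinetic energy per unit area, Eq. (8)."""
    A = np.expm1(N / (g * T))            # exp(2*pi*hbar^2*beta*N/m) - 1
    E = np.linspace(0.0, E_max, n_grid)
    x = np.clip(E / T, None, 700.0)      # avoid overflow in exp
    integrand = A * E / (np.exp(x) + A)
    return g * np.sum(integrand) * (E[1] - E[0])

def temperature_from_energy(E_target, N, lo=0.5, hi=100.0, n_iter=60):
    """Bisection on the monotonically increasing map T -> E_eq(T)."""
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        if e_eq(mid, N) < E_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

T_true = 7.0
E_target = e_eq(T_true, N=2.0)           # stands in for E_tot(t)
T_rec = temperature_from_energy(E_target, N=2.0)
```

Because E_eq(T) at fixed density is strictly increasing, the bisection recovers the temperature used to generate the target energy.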
D. Time evolution of the plasma screening
As the plasma temperature and the carrier densities evolve, the plasma screening changes. In this paper we are concerned with the time evolution of the electron-hole plasma, and we need to find a simple way to evaluate it in a nonequilibrium situation. The Lindhard formula, which is valid for both equilibrium and nonequilibrium situations [1], is our starting point:
$$\epsilon_{q}(\omega) = 1 - V_{q}\sum_{c,\sigma,k}\frac{n^{\sigma}_{c,k-q} - n^{\sigma}_{c,k}}{(\omega - i\delta) + e^{\sigma}_{c,k-q} - e^{\sigma}_{c,k}}.\tag{9}$$
For ease of notation we do not explicitly denote the time dependence of the various physical quantities defined above. We have to simplify Eq. (9), as it is not practical for numerical purposes because of its continuum of poles. We choose to work in the long-wavelength limit q → 0 [1]. Using a nonequilibrium distribution function we find:
$$\epsilon(q \to 0, \omega) = 1 - V_{q}\,\frac{q^{2}}{\omega^{2}}\sum_{c,\sigma}\frac{N^{\sigma}_{c}}{m_{c}},\tag{10}$$
which shows that the time dependence of the distribution functions n σ c,k due to the relaxation process does not affect the expression of ǫ(q → 0, ω): it has the same form as in quasi-equilibrium calculations. So, to treat the nonequilibrium problem for the screening, we can calculate the screening from the quasi-equilibrium formulas that one can find in Refs. [1,21]. The time dependence of ǫ(q → 0, ω) is contained in N σ c . To obtain analytical results for the plasma screening we make use of the static plasmon-pole approximation [1]; the screened Coulomb potential is then given by:
$$V^{s}_{q} = \frac{V_{q}}{\epsilon_{q}}.\tag{11}$$
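Equations (10) and (11) can be sketched directly in code. The units below are reduced (2πe²/ε_b = 1) and the finite-well form factor is set to 1, so the numbers are illustrative; the carrier masses are those quoted in Sec. IV.

```python
import numpy as np

# Sketch of Eqs. (10)-(11): long-wavelength dielectric function and the
# screened 2D Coulomb potential, in reduced units with form factor 1.

def v_bare(q):
    return 1.0 / q                            # bare 2D Coulomb potential

def eps_lw(q, omega, densities, masses):
    """Eq. (10): eps(q->0, omega) = 1 - V_q q^2/omega^2 sum_c N_c/m_c."""
    s = sum(N / m for N, m in zip(densities, masses))
    return 1.0 - v_bare(q) * q**2 / omega**2 * s

def v_screened(q, omega, densities, masses):
    return v_bare(q) / eps_lw(q, omega, densities, masses)   # Eq. (11)

# electrons and heavy holes, masses in units of m_0 as in Sec. IV
N = [0.03, 0.03]
m = [0.15, 0.6]
q = 0.1

# the zero of eps(q->0, omega) is the long-wavelength plasmon frequency
omega_pl = np.sqrt(v_bare(q) * q**2 * sum(Ni / mi for Ni, mi in zip(N, m)))
eps_at_pl = eps_lw(q, omega_pl, N, m)
```

Below this plasmon frequency Eq. (10) gives a negative ε; the static plasmon-pole approximation invoked in the text is what regularizes the static limit used for V_q^s.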
Note that our definition of the bare 2D Coulomb potential in a quantum well includes the form factor f_q to account for the finite well width w [2].

Depending on the pump-probe polarization configuration, we generate different couplings between the ground state and the various spin states |J, m_J⟩. In our analysis we include the two-fold degenerate conduction band |1/2, 1/2⟩_e and |1/2, −1/2⟩_e, the two-fold degenerate heavy-hole band |3/2, 3/2⟩_hh and |3/2, −3/2⟩_hh, and the two-fold degenerate light-hole band |3/2, 1/2⟩_lh and |3/2, −1/2⟩_lh. The selection rules for zinc-blende semiconductors are used [22], and the relative populations generated by optical pumping in the continuum are as depicted in Fig. 3. The ratio of 3 between the populations created from the heavy-hole and light-hole transitions comes from the ratio of the dipole matrix elements, which describe the relative strengths of these optical transitions [22].
In principle, the carrier dynamics during the pump process could be included in our model. However, since τ eq ∼ 100 fs and we are not interested in the early coherent regime, we simply assume an initial carrier density with nonthermal Gaussian distributions reflecting the spectral width and the location of the pump pulse. This assumption has no significant impact on our numerical results, as τ eq is of the order of 100 fs, and it saves considerable computational effort. In the SLP situation, electrons with a given spin σ are created from both the light- and heavy-hole transitions. So, the initial distribution for the spin-polarized electron gases in the SLP is the sum of two Gaussian distributions, each centered appropriately.
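The initial distributions just described can be sketched as follows: spin-resolved Gaussians in energy at the pump position, weighted 3:1 for the hh vs lh transition as in Fig. 3. The widths, amplitudes and the two Gaussian centers are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

# Sketch of the initial nonthermal distributions of Sec. III: Gaussians
# in energy near the pump (set 30 meV above the band edge), with
# spin-resolved weights following the 3:1 hh:lh transition strength.
# Widths, amplitudes and center offsets are illustrative assumptions.

E = np.linspace(0.0, 100.0, 512)              # kinetic-energy grid (meV)

def pump_gaussian(E, E0, width=10.0):
    return np.exp(-((E - E0) / width) ** 2)

# sigma- pump: hh transition -> one electron spin, lh -> the other; the
# two transitions deposit the electrons at slightly different kinetic
# energies for the same photon energy (centers below are illustrative)
n_e_up = 0.3 * pump_gaussian(E, E0=32.0)
n_e_dn = 0.1 * pump_gaussian(E, E0=22.0)

# linearly polarized pump (SLP): each spin is the sum of both Gaussians
n_e_lin = 0.5 * (n_e_up + n_e_dn)

ratio = n_e_up.sum() / n_e_dn.sum()           # ~3 for circular pumping
```

The 3:1 population ratio between the spin states is what later drives the SCP/OCP asymmetry of the exciton bleaching.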
IV. COMPARISON BETWEEN NUMERICAL AND EXPERIMENTAL RESULTS
To present and discuss the dynamics of the absorption spectra, it is useful first to describe the time evolution of the electron-hole plasma, which is influenced by the dynamical processes mentioned above. We are concerned with the interplay among the various dynamical processes included in our model and their influence on the time evolution of the carrier density, plasma energy and temperature. We distinguish two situations, namely, that in which excitation has been performed by circularly polarized light and that in which it has been performed by linearly polarized light. The case of circular polarization (CP) can describe both OCP and SCP situations: the dynamics of the carrier gas is the same; only the interaction with the probe field is different.
The lattice temperature is taken to be T lat = 77 K (we find that, in materials with strong Coulomb effects such as ZnSe, strongly degenerate cases at 4 K are numerically prohibitive owing to the large number of k-states required near the Fermi energy) and the initial plasma density N = 3 × 10 11 cm −2 . The effective masses we use are m e = m lh = 0.15m 0 and m hh = 0.6m 0 , where m 0 is the mass of the free electron. The dielectric constant is ǫ = 8.8 and the bandgap E g = 2.66 eV. The phenomenological parameters entering Eqs. (5) are: τ eq = 0.1 ps, τ cool = 1 ps, τ therm = 1 ps. These characteristic times give only an order of magnitude and are taken from Ref. [8]. The radiative recombination time τ rad = 1.6 ns is calculated for ZnSe parameters. We assume the nonradiative recombination to be density independent, with τ nr = 30 ps taken from experiments by fitting numerical results to experimental results (see Figs. 2 and 8). Inasmuch as we are pumping in the continuum, the excited valence band states are of a mixed heavy-hole-light-hole character and hence scatter efficiently. We chose τ lh = 0.5 ps for the light-hole density decay. The spin-flip times, τ sf = 30 ps, have also been chosen by comparing numerical results with experimental data. Various values for τ nr and τ sf have been tried numerically, and we estimate the margin of error for these parameters to be ±5 ps.
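For reference, the parameter set just listed can be collected in a small dictionary (a convenience sketch, not the authors' code):

```python
# Phenomenological parameters of Sec. IV gathered in one place
# (times in ps, masses in units of the free-electron mass m0); the
# +/- 5 ps margin quoted in the text applies to tau_nr and tau_sf.

params = {
    "tau_eq": 0.1,       # intraband relaxation
    "tau_therm": 1.0,    # thermalization among carrier gases
    "tau_cool": 1.0,     # plasma cooling to the lattice
    "tau_lh": 0.5,       # light-hole -> heavy-hole scattering
    "tau_sf": 30.0,      # carrier spin-flip (fitted to experiment)
    "tau_nr": 30.0,      # nonradiative recombination (fitted)
    "tau_rad": 1600.0,   # radiative recombination (1.6 ns)
    "m_e": 0.15, "m_lh": 0.15, "m_hh": 0.6,
    "eps_b": 8.8,        # background dielectric constant
    "E_g": 2.66,         # bandgap (eV)
    "T_lat": 77.0,       # lattice temperature (K)
    "N0": 3e11,          # initial plasma density (cm^-2)
}

fastest = min(params["tau_eq"], params["tau_lh"], params["tau_therm"])
```

The ordering of these times (τ_eq fastest, τ_rad slowest) is what sets the hierarchy of the dynamics discussed below.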
A. Time evolution of the carrier densities
First we study the time evolution of the carrier densities obtained by solution of Eqs. (5) and shown in Fig. 4. Electrons: The spin-flip process, with τ e sf = 30 ps, transfers population between the two spin-polarized electron gases. That is the reason why the initially less populated electron gas exhibits a maximum at early times (10 ps). Note that for SLP, the spin-flip process has no effect on the dynamics of the electron densities. Hence, the only contributor to the density decay in the SLP situation is recombination. Also note that the radiative recombination is too slow (1.6 ns) to have much influence on the fast population dynamics.
Heavy holes: Similar comments apply to the heavy-hole population dynamics, with τ hh sf = 30 ps. However, the fast intersubband scattering that drives the light holes into the heavy-hole band on a time scale given by τ lh = 0.5 ps has to be considered here. That explains the fast initial rise of both heavy-hole densities, before they start decaying. Note that the initial spin-down heavy-hole density is zero, unlike the electrons, whose populations in both spin states are non-zero from the beginning because of optical transitions from both the heavy- and light-hole bands.
Light holes: The time evolution of the light-hole populations can also be described similarly to that of the electron populations, with τ lh sf = 30 ps. However, as in the case of the heavy holes, one has to consider the intersubband scattering. As it is a very fast process, the population of light holes decreases on a very short time scale given by τ lh = 0.5 ps. After one picosecond the light-hole density is negligible (but non-zero, as heavy holes still scatter into the light-hole band). Note that in the case of the spin-down light-hole gas, the population always remains very low: the spin-flip time is comparatively too long to make any significant change.

B. Time evolution of the plasma energy

Electrons: The initial total energy of the spin-down electron gas in the CP case corresponds to a temperature above the lattice temperature and hence is a decreasing function of time because of the cooling process that occurs on a time scale given by τ e cool = 1 ps. One can observe that the energy of the electron gas with opposite spin is an increasing function of time before cooling down to the lattice energy. This is due to the thermalization process between carriers of different types and also to the fact that the density increases over the same amount of time because of the spin-flip, and hence increases the total energy. In the LP case, we only observe a cooling, as the spin-flip has no influence on the time evolution of the densities.
Heavy holes: The initial rise of the carrier densities for both heavy-hole gases as a result of light-hole scattering influences the behavior of the energy, which is an increasing function of time at early times. The thermalization process is also responsible for this rise, as the initial hole gases' energies are below the spin-down electron gas energy. In the case of the spin-down heavy-hole gas the increase occurs on a much longer time scale than for the opposite-spin heavy-hole gas. This is the result of the spin-flip process that increases the population of the spin-up heavy-hole gas in CP.
Light holes: The initial rise of the energy of the light-hole gases is only due to the thermalization process. As the light-hole density decay is fast, it takes only a few picoseconds for the light-hole gas energy to become negligible. The population of the spin-down light-hole gas remains very low; so does the energy.

C. Time evolution of the carrier temperatures

Electrons: The time evolution of the electron temperatures follows exactly the time evolution of the electron gas energies, but it converges quickly toward the lattice temperature, whereas the electron energies keep decreasing. This behavior is due to the fact that the electron densities also decrease by recombination, keeping the average electron kinetic energy constant.
Heavy holes: The heavy-hole gas energies increase at early times; so do the densities, because of the light-hole scattering. In CP, for the spin-down heavy-hole gas, despite this increase in energy, the average heavy-hole kinetic energy decreases. Hence the effective temperature decreases. Then, because of the thermalization with the electrons and the lattice, the temperature starts increasing. For the spin-up heavy-hole gas we observe a monotonic increase toward the lattice temperature as it gains energy from the thermalization processes with the other carriers and the lattice. However, as the densities starts decreasing, the energy of the heavy holes also starts decreasing. But, as in the case of electrons, there is compensation between the energy loss and the density decay that makes the average heavy-hole kinetic energy constant when it reaches the lattice temperature.
Light holes: The light-hole density decay is so fast compared with the plasma cooling that the average light-hole kinetic energy keeps increasing and the light-hole gas temperatures rise beyond the lattice temperature. In fact, the light-hole gases have no time to reach quasiequilibrium with the lattice. We stop calculating the light-hole gases' effective temperatures when their population is small enough and their temperature high enough to have no influence on the absorption spectra (after 1.5 ps).
D. Time evolution of the absorption spectra
In this section we present numerical solutions of Eqs. (1) and (5) and discuss the bleaching of the excitonic peaks as well as their energy shift. First, we compare the calculated absorption spectra in Fig. 7 with the experimental data in Fig. 1 for a given delay. The optical pumping is set 30 meV above the band edge, in the continuum, thus creating an initial unbound electron-hole plasma. The experimental and numerical spectra look similar, but one can observe a significant redshift of the OCP exciton peak that is not seen in the experimental data. This artificial redshift is mainly due to the screened Hartree-Fock approximation [23] and also to the screening model, which leads to an overestimation of the bandgap renormalization [7]. The heavy-hole exciton peak is more bleached in SCP than in OCP, and it is more blueshifted. This is due to the phase-space filling effect, which is more important in the SCP configuration. For the light-hole exciton, as in the experiment, we observe the opposite phenomenon: the OCP light-hole exciton peak is more bleached and blueshifted than the SCP light-hole exciton peak. This is because the spin-up electrons excited with the σ−-polarized pump, from the heavy-hole transition, occupy states that would be created by a light-hole transition with the σ+-polarized pump. So, in the OCP situation, the reduction of the oscillator strength owing to phase-space filling when one probes the light-hole transition is more important than it is for SCP. The SLP configuration can be seen as an intermediate case between OCP and SCP.

E. Heavy hole exciton peak dynamics

As mentioned above, we are concerned with the time evolution of the absorption spectra, so we computed both the bleaching and the energy shift of the exciton resonances as functions of the time delay between the pump and the probe. The numerical results in Fig. 8 show a decay of the heavy-hole exciton peak bleaching and energy shift for OCP, SLP and SCP.
The three curves converge after a few tens of picoseconds because of spin flip and recombination.

OCP: The initial state of the electron-hole plasma given in Fig. 3 shows that the only contribution to heavy-hole exciton bleaching is plasma screening, which lowers the Coulomb enhancement. The rapid equilibration of the carrier distributions, together with plasma cooling, increases the bleaching at early times, i.e., within the first 5 ps. Although in this case the carrier spin flip occurs on the same time scale as the recombination, the latter process is dominant and, with decreasing plasma density, plasma screening and phase-space filling become less important. Thus the amount of bleaching decreases. In terms of energy shift, the initial heavy-hole exciton peak is redshifted: the bandgap renormalization is not strong enough to compensate for the exciton binding energy, which remains large because of the absence of the Pauli blocking effect. The redshift becomes even more pronounced as the plasma screening is increased by the fast light-hole scattering. Then, because of the carrier spin flip and the recombination process, the heavy-hole exciton peak shifts toward the blue and saturates on a longer time scale (from 30 ps). This behavior is qualitatively different from what we observe in the experiments because of the initial large redshift: if the initial heavy-hole exciton peak were not redshifted, it would enter the blue region because of the increasing phase-space filling due to the spin flip and to plasma cooling; then, because of recombination, we would observe a shift toward the red, which would explain the presence of a maximum.
SCP: Besides plasma screening, phase-space filling contributes to the bleaching of the heavy-hole exciton peak. Because of thermalization and plasma cooling, we observe a slight increase for short (≤ 2 ps) pump-probe delays, but the carrier spin flip, together with the recombination process, then decreases the amount of bleaching: in SCP both processes contribute to weakening the Pauli blocking effect. The initial SCP heavy-hole exciton peak is blueshifted, even though the bandgap renormalization is more important than it is for OCP (the exchange term is nonzero). This is because the phase-space filling factor significantly decreases the exciton binding energy in the SCP case. Because of the fast relaxation of the distributions and the plasma cooling, the heavy-hole exciton blueshift is even more pronounced at early times (≤ 2 ps); then the spin flip and recombination make the heavy-hole exciton peak shift toward the red. In the SCP situation, carrier spin flip and recombination have the same effect on the exciton peak dynamics, whereas in the OCP case they tend to cancel each other out.
SLP: As for SCP, plasma screening and phase-space filling both contribute to the heavy-hole exciton peak bleaching. There is an initial small increase due to thermalization and plasma cooling, but after a few picoseconds the recombination process starts to influence the bleaching effectively. In SLP the spin-flip process plays no role in the dynamics of the bleaching. The behavior of the energy shift for SLP is similar to, but less dramatic than, the OCP behavior, as it is not influenced by the spin-flip process.
F. Light hole exciton peak dynamics
In light-hole transitions the contribution of the light-hole population quickly becomes negligible, as the light-hole density decays on a very short time scale, τ_lh = 0.5 ps. The behavior of the light-hole exciton peak bleaching and shift is dominated by the electrons. As mentioned above, the bleaching here is more important for OCP than for SCP. Because of the rapid light-hole density decay, the bleaching decreases at very early times (less than 1 ps), but it increases shortly after that because of electron thermalization and plasma cooling, as shown in Fig. 9. Then, after a few picoseconds, the recombination process dominates the behavior of the bleaching, which decreases monotonically with increasing pump-probe delay. The three curves converge in ∼30 ps. As far as the energy shift is concerned, the blueshift quickly becomes small because of the fast light-hole density decay.
Calculations made with longer characteristic times for the plasma thermalization and cooling (≥ 5 ps) led to results that are qualitatively quite different from the experimental data. In Fig. 2, the time evolution of the heavy-hole exciton bleaching is characterized by a simple decay in OCP and SCP, whereas our calculations show a maximum at short pump-probe delays that is due to increased Pauli blocking caused by plasma cooling. If the cooling characteristic time is increased, the numerical results show a plateau-like behavior for pump-probe delays of up to several picoseconds before the recombination becomes dominant. The early (≤ 5 ps) behavior shown in Fig. 8 for OCP is enhanced, and the SCP bleaching dynamics is qualitatively the same as that of OCP.
V. DISCUSSION AND CONCLUSION
We have presented a model that describes the time evolution of an initial hot electron-hole plasma and its influence on absorption spectra. We considered spin-selective excitation and included various dynamical processes in our model. Except for the energy shift of the OCP heavy-hole exciton, the numerical results are in good qualitative agreement with the experiment. According to our calculations, the electron spin populations equilibrate on a much longer time scale (30 ps) than the thermalization of the electron-hole plasma (1 ps).
This spin-flip time is consistent with the experimental values for the II-VI quantum wells discussed in Ref. [24]. One would need more-extensive experimental data to obtain more conclusive results for the heavy- and light-hole spin-flip times. We made additional numerical calculations to explore various spin-flip time regimes. The results show that, when the electron spin-flip time is short (1 ps), the OCP and SCP curves converge faster than when it is long, even with a short heavy-hole spin-flip time. This means that the electrons, which are lighter than the heavy holes, always dominate phase-space filling, even if the initial electron gas temperature is much higher than that of the heavy-hole gas (Fig. 6). This implies that our results are rather insensitive to the specific value of the heavy-hole spin-flip time. The light holes do not play a significant role in phase-space filling, as they decay on a very short time scale (0.5 ps). Plasma cooling enhances plasma screening, and together with the relaxation of the distributions it also enhances the Pauli blocking effect. We can observe the influence of these fast processes in the early behavior of the exciton peaks' bleaching and energy shift. The spin-flip process leads to opposite qualitative behavior of the bleaching dynamics, depending on the initial polarization configuration. As shown above, exciton peak bleaching and shift can either increase or decrease because of the spin-flip process, depending on the type of exciton (heavy or light) and on the polarization (OCP or SCP). The radiative recombination occurs on a time scale that is too long (∼1.6 ns in ZnSe) to have any effect on the fast bleaching and energy-shift dynamics (below 100 ps). However, the nonradiative recombination is fast enough that an overall decay can be observed on a time scale shorter than 100 ps.
As far as the energy shift is concerned, the model that we used for this study seems not to be good enough to describe qualitatively the OCP and SCP energy shifts. We make two comments on this problem: First, we have restricted ourselves to the screened Hartree-Fock level, neglecting additional correlation terms to account for broadening, which is described here by phenomenological damping. Inclusion of such terms has been shown to correct spurious shifts introduced by the screened Hartree-Fock approximation [23] in SLP. However, the static plasmon-pole approximation that we and many others have used is constructed in just such a way as to produce a constant exciton energy at the screened Hartree-Fock level in SLP. We do this by adding a term to the effective plasmon frequency to account for the effects of the pair continuum. As can be seen from Fig. 7, there is effectively no shift of SLP in our calculations; we also checked our numerics against published GaAs results. Making the natural extension to OCP and SCP, however, leads to extraneous shifts, even when we include this pair continuum term. Our conclusion is therefore that the static plasmon-pole approximation is inadequate for the treatment of spin-polarized populations. Of course one could alter the strength of the pair-continuum term in an ad hoc fashion to correct for this, but that correction would accomplish little. Within our model it is possible to control the values of the phenomenological parameters to obtain further insight into the complex interplay among various dynamical processes as more experimental data become available.
FIG. 2: Measured absorption spectra dynamics. Comparison of OCP and SCP heavy-hole exciton peaks' bleaching and shift.
FIG. 3: Selection rules for optical transitions achieved with polarized pump light.

We turn to the three pump-probe polarization configurations and the initial carrier distributions they produce as the starting point for the solution of Eqs. (5):
1. OCP, for which the pump and the probe beams are both circularly polarized, but in opposite senses;
2. SCP, for which the pump and the probe beams are circularly polarized in the same sense;
3. SLP, for which the pump and the probe beams are both linearly polarized.
Electrons: The processes responsible for the change in population are the electron spin flip and the recombination. The population of spin-down electrons decreases while the population of electrons with opposite spin increases. The carrier spin flip tends to create equal spin populations over a time scale given by τ_sf^e = 30 ps. Then the recombination, which occurs on the same time scale, τ_nr = 30 ps, becomes dominant and drives both populations toward zero.
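The population dynamics just described — spin flip equalizing the two spin populations on the scale τ_sf^e = 30 ps while recombination with τ_nr = 30 ps drives both toward zero — can be sketched with a simple pair of rate equations. This is an illustrative reduction, not the paper's full kinetic model of Eqs. (1) and (5); the function and variable names are ours.

```python
import numpy as np

# Characteristic times quoted in the text (ps)
TAU_SF = 30.0   # electron spin-flip time
TAU_NR = 30.0   # nonradiative recombination time

def evolve(n_dn, n_up, t_max=150.0, dt=0.01):
    """Euler integration of two coupled spin populations (arbitrary units).

    Spin flip transfers carriers between the spin species at a rate
    proportional to the population difference; recombination removes
    carriers from both species at the same rate.
    """
    for _ in range(int(t_max / dt)):
        flip = (n_dn - n_up) / (2.0 * TAU_SF)   # net spin-down -> spin-up flow
        n_dn += (-flip - n_dn / TAU_NR) * dt
        n_up += (+flip - n_up / TAU_NR) * dt
    return n_dn, n_up

# sigma- pumping initially populates only one spin species
n_dn, n_up = evolve(1.0, 0.0)
```

In this reduction the spin polarization n_dn − n_up decays with rate 1/τ_sf + 1/τ_nr while the total density decays with rate 1/τ_nr, reproducing the equalize-then-vanish behavior seen in Fig. 4.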
FIG. 4: Time evolution of the spin-polarized gas populations (10¹¹ cm⁻²) created by spin-selective excitation. Note the short time scale for the light-hole density decay.
FIG. 5: Time evolution of the spin-polarized gases' energy loss (eV/µm²). Note the very short time scale for the light-hole gases' energy loss.
FIG. 6: Time evolution of the spin-polarized gases' temperatures (K). Note the very short time scale for the light-hole temperature evolution.
FIG. 8: Calculated absorption spectra dynamics. Comparison of OCP, SLP and SCP heavy-hole exciton peaks' bleaching and shift.
FIG. 9: Calculated absorption spectra dynamics. Comparison of OCP, SLP and SCP light-hole exciton peaks' bleaching and shift.
[Figure: LHX (light-hole exciton) bleaching as a function of pump-probe delay (ps), comparing opposite circular, same linear and same circular polarisation.]
[1] H. Haug and S. W. Koch, Quantum Theory of the Optical and Electronic Properties of Semiconductors, World Scientific Publishing, Singapore (1993), and references therein.
[2] W. W. Chow and S. W. Koch, Semiconductor-Laser Fundamentals, Springer-Verlag, Berlin (1999).
[3] S. Schmitt-Rink, C. Ell and H. Haug, Phys. Rev. B 33, 1183-1189 (1986).
[4] R. Binder, I. Galbraith and S. W. Koch, Phys. Rev. B 44, 3031-3042 (1991).
[5] M. J. Snelling, P. Perozzo, D. C. Hutchings, I. Galbraith and A. Miller, Phys. Rev. B 49, 17160-17169 (1994).
[6] M. Lindberg and S. W. Koch, Phys. Rev. B 38, 3342-3350 (1988).
[7] C. Ell, R. Blank, S. Benner and H. Haug, J. Opt. Soc. Am. B 6, 2006-2012 (1989).
[8] J. Shah (editor), Hot Carriers in Semiconductor Nanostructures: Physics and Applications, Academic Press, San Diego (1992).
[9] R. Binder, D. Scott, A. E. Paul, M. Lindberg, K. Henneberger and S. W. Koch, Phys. Rev. B 45, 1107-1115 (1992).
[10] D. C. Scott, R. Binder and S. W. Koch, Phys. Rev. Lett. 69, 347-350 (1992).
[11] F. Jahnke and S. W. Koch, Appl. Phys. Lett. 67, 2278-2280 (1995).
[12] F. Jahnke and S. W. Koch, Phys. Rev. A 52, 1712-1727 (1995).
[13] R. J. Elliott, Phys. Rev. 96, 266-279 (1954).
[14] Y. Yaffet, Solid State Physics, Academic Press, New York (1963).
[15] M. I. D'Yakonov and V. I. Perel, Sov. Phys. JETP 33, 1053-1073 (1971).
[16] T. C. Damen, L. Viña, J. E. Cunningham and J. Shah, Phys. Rev. Lett. 67, 3432-3435 (1991).
[17] R. Ferreira and G. Bastard, Phys. Rev. B 43, 9687-9691 (1991).
[18] M. Potemski, E. Pérez, D. Martin, L. Viña, L. Gravier, A. Fisher and K. Ploog, Solid State Commun. 110, 163-168 (1999).
[19] M. Kira, F. Jahnke, W. Hoyer and S. W. Koch, Prog. Quantum Electron. 23, 189-279 (1999).
[20] E. Rosencher and B. Vinter, Optoélectronique, Thomson-CSF and Masson, Paris (1998).
[21] R. Zimmermann, Many-Particle Theory of Highly Excited Semiconductors, Teubner, Berlin (1988).
[22] G. Bastard, Wave Mechanics Applied to Semiconductor Heterostructures, Les Éditions de Physique (1988).
[23] F. Jahnke, M. Kira, S. W. Koch, G. Khitrova, E. K. Lindmark, T. R. Nelson, Jr., D. V. Wick, J. D. Berger, O. Lyngnes, H. M. Gibbs and K. Tai, Phys. Rev. Lett. 77, 5257-5260 (1996).
[24] D. Hägele, M. Oestreich, W. W. Rühle, J. Hoffmann, S. Wachter, H. Kalt, K. Ohkawa and D. Hommel, Physica B 272, 338-340 (1999).
Marginal likelihood for parallel series
Peter McCullagh ([email protected])
Department of Statistics, University of Chicago, Chicago, IL 60637, USA
Bernoulli 14(3), 2008. doi:10.3150/07-BEJ119. arXiv:0810.3978
Keywords: ancillary statistic; Bartlett identity; combination of information; decreasing Fisher information; group orbit; marginal likelihood; profile likelihood; random orthogonal matrix

Abstract: Suppose that k series, all having the same autocorrelation function, are observed in parallel at n points in time or space. From a single series of moderate length, the autocorrelation parameter β can be estimated with limited accuracy, so we aim to increase the information by formulating a suitable model for the joint distribution of all series. Three Gaussian models of increasing complexity are considered, two of which assume that the series are independent. This paper studies the rate at which the information for β accumulates as k increases, possibly even beyond n. The profile log likelihood for the model with k(k + 1)/2 covariance parameters behaves anomalously in two respects. On the one hand, it is a log likelihood, so the derivatives satisfy the Bartlett identities. On the other hand, the Fisher information for β increases to a maximum at k = n/2, decreasing to zero for k ≥ n. In any parametric statistical model, one expects the Fisher information to increase with additional data; decreasing Fisher information is an anomaly demanding an explanation.
Introduction
Let x_1, . . . , x_n be points in space or time. At each point x_i, the k-variate response Y(x_i) = (Y_i1, . . . , Y_ik) is measured. The values are recorded in matrix form Y = {Y_ir}, with one column for each of the k series and one row for each of the n points. Each series is a stationary autoregressive process with autocorrelation parameter β, and we aim to estimate this parameter as accurately as possible by pooling information from all k series.

Three Gaussian models are considered, all having moments of the form

E(Y_ir) = 0,    cov(Y_ir, Y_js) = Γ_ij Σ_rs,    (1)
with autocorrelation function Γ. The zero-mean assumption is inconsequential and is made for simplicity of notation. It can be replaced by a standard multivariate regression model (Section 3). The three model variants differ only in the assumptions made about the matrix Σ, which governs the variances and covariances of the k series. These are as follows:

Model I:   Σ = σ²I_k,
Model II:  Σ = diag{σ²_1, . . . , σ²_k},
Model III: Σ ∈ PD_k,

where PD_k is the space of k × k symmetric positive definite matrices. For each model, we study the profile log likelihood for β, show that it satisfies the Bartlett identities and study the rate of change of the Fisher information with k and n.

[This is an electronic reprint of the original article published by the ISI/BS in Bernoulli, 2008, Vol. 14, No. 3, 593-603. The reprint differs from the original in pagination and typographic detail.]
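As a concrete illustration of the moment structure (1), the following sketch simulates Y with cov(Y_ir, Y_js) = Γ_ij Σ_rs. The AR(1) form Γ_ij = β^|i−j|, the particular Σ and all variable names are our assumptions for the demonstration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_corr(n, beta):
    """AR(1) autocorrelation matrix Gamma_ij = beta**|i-j| (illustrative choice)."""
    idx = np.arange(n)
    return beta ** np.abs(idx[:, None] - idx[None, :])

def simulate_Y(n, k, beta, Sigma):
    """Draw an n x k matrix Y with cov(Y_ir, Y_js) = Gamma_ij * Sigma_rs."""
    A = np.linalg.cholesky(ar1_corr(n, beta))   # Gamma = A A'
    B = np.linalg.cholesky(Sigma)               # Sigma = B B'
    return A @ rng.standard_normal((n, k)) @ B.T

n, k, beta = 50, 3, 0.6
Sigma = np.array([[1.0, 0.3, 0.0],
                  [0.3, 2.0, 0.5],
                  [0.0, 0.5, 1.5]])             # model III: an arbitrary PD matrix
Y = simulate_Y(n, k, beta, Sigma)
```

Setting Sigma to σ²I_k or to a diagonal matrix recovers models I and II as special cases of the same simulator.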
Model III aims to accommodate correlations among the series in a simple and natural way, but for k ≥ 2n − 1, the number of parameters exceeds the number of observations. This simple counting argument suggests that we might encounter Neyman-Scott phenomena such as bias, inconsistency or inefficiency in the estimation of β (Neyman and Scott (1948)). The failure of profile likelihoods to satisfy the Bartlett identities is the chief explanation for Neyman-Scott phenomena, and the asymptotic bias can often be eliminated by a simple adjustment (Bartlett (1953, 1955), Patterson and Thompson (1971), Cox and Reid (1987), McCullagh and Tibshirani (1990)). The fact that the profile likelihood for β in models I-III satisfies the Bartlett identities suggests that Neyman-Scott phenomena should not arise. This intuition is correct for models I and II. However, the marginal likelihood for β in model III illustrates a new anomaly for k > n/2, namely, that the Fisher information can be increased by deleting one or more series.

Although it is sometimes natural, the separability assumption in (1) is very strong, even for version III. Stein (1999, 2005) is rightly critical of the use of separable covariances for either purely spatial or spatio-temporal processes. However, the product form of the covariance function is extremely convenient and widely used, and there do exist applications in which this assumption is reasonable. It occasionally happens in agricultural field trials that two observations are made on each plot, for example, yield of grain and yield of straw. Although the two yields are certainly correlated, there is good reason to expect that both processes have very similar spatial autocorrelation functions (McCullagh and Clifford (2006)). Mitchell et al. (2006) give further references to applications and develop a likelihood-ratio test for separability based on independent replicates of the matrix Y. The motivating example for this work arises in a non-spatial context, the estimation of a phylogenetic tree for n species from aligned sequences at multiple homologous loci. Under the model of neutral evolution, the phylogenetic relationship among species is the same at each locus, which implies (1). For further details, see Section 4.
Profile likelihood
The log likelihood for all three models is
l(Γ, Σ; Y) = −(1/2) log det(Γ ⊗ Σ) − (1/2) tr(Y′Γ⁻¹YΣ⁻¹)
           = −(k/2) log|Γ| − (n/2) log|Σ| − (1/2) tr(Y′Γ⁻¹YΣ⁻¹),

using the formula for the determinant of a Kronecker product (Harville (1997), page 350). For fixed Γ, the log likelihood for model III is maximized at Σ̂_Γ = Y′Γ⁻¹Y/n. The log likelihood for model II is maximized at diag(Σ̂_Γ) and the log likelihood for model I at tr(Σ̂_Γ)I_k/k. The profile log likelihood for Γ is
l_p(Γ; Y) =
    −(k/2) log|Γ| − (nk/2) log tr(Y′Γ⁻¹Y)        (Model I),
    −(k/2) log|Γ| − (n/2) log|diag(Y′Γ⁻¹Y)|      (Model II),
    −(k/2) log|Γ| − (n/2) log|Y′Γ⁻¹Y|            (Model III).    (2)
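The three branches of (2) can be evaluated directly. The sketch below is our own: the AR(1) autocorrelation used in the demo is an assumption for illustration, and the function names are ours.

```python
import numpy as np

def profile_loglik(Y, Gamma, model):
    """Profile log likelihood (2), with Sigma maximized out (k <= n for model III)."""
    n, k = Y.shape
    _, logdetG = np.linalg.slogdet(Gamma)
    M = Y.T @ np.linalg.solve(Gamma, Y)          # Y' Gamma^{-1} Y  (k x k)
    if model == "I":
        return -0.5 * k * logdetG - 0.5 * n * k * np.log(np.trace(M))
    if model == "II":
        return -0.5 * k * logdetG - 0.5 * n * np.sum(np.log(np.diag(M)))
    if model == "III":
        return -0.5 * k * logdetG - 0.5 * n * np.linalg.slogdet(M)[1]
    raise ValueError(model)

# illustrative AR(1) autocorrelation (an assumption; Gamma is general in the text)
def ar1_corr(n, beta):
    idx = np.arange(n)
    return beta ** np.abs(idx[:, None] - idx[None, :])

rng = np.random.default_rng(0)
Y = rng.standard_normal((12, 4))
lI, lII, lIII = (profile_loglik(Y, ar1_corr(12, 0.5), m) for m in ("I", "II", "III"))
```

A quick consequence worth checking numerically: when k = n, the model III value is the same for every Γ, anticipating the degeneracy discussed below.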
The assumption k ≤ n is necessary in model III to ensure that the matrix Y′Γ⁻¹Y is positive definite with probability one. The profile log likelihood for model II is a sum over the k series, the contribution of series r being

−(1/2) log|Γ| − (n/2) log(Y′_r Γ⁻¹ Y_r).
This is, in fact, the marginal log likelihood based on the standardized statistic Y_r/‖Y_r‖, where Y_r is the rth column of Y (Bellhouse (1978), Tunnicliffe-Wilson (1989), Cruddas, Reid and Cox (1989)). For a one-parameter model with derivative matrix D = dΓ/dβ, the derivative of the profile log likelihood is

2 ∂l_p/∂β =
    −k tr(WD) + nk tr(Y′AY)/tr(Y′WY)                     (Model I),
    −k tr(WD) + n Σ_{r=1}^{k} (Y′_r A Y_r)/(Y′_r W Y_r)   (Model II),
    −k tr(WD) + n tr((Y′WY)⁻¹ Y′AY)                       (Model III),

where W = Γ⁻¹ and A = WDW. The quadratic form tr(Y′WY) in model I is distributed as σ²χ²_{nk}, independently of the ratio tr(Y′AY)/tr(Y′WY) (Boos and Hughes-Oliver (1998)). The expected value of the ratio is the ratio of expected values, which is

E[tr(Y′AY)/tr(Y′WY)] = k tr(AΓ)/(nk) = tr(WD)/n.
It follows that the log likelihood derivative for model I has zero expectation. The same argument applied to each series leads to the same conclusion for model II. The argument for model III is superficially more complicated. For fixed Γ, the natural quadratic form Y′WY is a complete sufficient statistic for Σ, with expectation nΣ. The statistic tr((Y′WY)⁻¹Y′AY) is invariant under the group GL(R^k) of linear transformations Y → Yg acting by right composition. Hence, the distribution does not depend on Σ. By Basu's theorem (Basu (1955)), every ancillary statistic such as tr((Y′WY)⁻¹Y′AY) is independent of Y′WY. Consequently, if we transform to Z = W^{1/2}Y and condition on the event Y′WY = Z′Z = I_k, the columns of Z are orthonormal, the first k columns of a random orthogonal matrix, uniformly distributed with respect to Haar measure on the orthogonal group (Heiberger (1978), Stewart (1980), Diaconis and Shahshahani (1994)). Hence,

E[tr((Y′WY)⁻¹Y′AY)] = tr E[(Y′WY)⁻¹Y′AY] = tr E[Z′Γ^{1/2}AΓ^{1/2}Z] = k tr(ΓA)/n = k tr(WD)/n,

since ZZ′ is a random projection with rank k ≤ n and expectation kI_n/n. For all three models, the first derivative has zero expectation, so the elimination of Σ by maximization has not introduced a bias.
Similar, but more intricate, calculations for random orthogonal matrices, described in Appendix A, reveal that

var(∂l_p/∂β) = −E(∂²l_p/∂β²) =
    V k²/(2(nk + 2))               (Model I),
    V k/(2(n + 2))                 (Model II),
    V k(n − k)/(2(n − 1)(n + 2))   (Model III),

where V = n tr(WDWD) − tr²(WD). For model III, this formula holds only for k ≤ n. Thus, the second Bartlett identity is satisfied, and it follows from Appendix B that the Bartlett identities of all orders are satisfied. For small k, the Fisher information increases roughly in proportion to the number of series, all series contributing equally. If, in fact, the series are independent and identically distributed, the efficiency of model II relative to model I is (nk + 2)/(nk + 2k), which is fairly high, even for a large number of short series. For example, if n = 10, the relative efficiency decreases from 1.0 to 10/12 as k → ∞. For fixed k, the relative efficiency increases with n, presumably because the number of nuisance parameters in model II is fixed. It appears from these calculations that the additional flexibility of model II over model I comes at a fairly small cost, so II is likely to be preferred over I in most circumstances.
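A numerical sketch of these variance formulas (assuming the AR(1) family Γ_ij = β^|i−j|, so that D = dΓ/dβ is available in closed form; the implementation and names are ours) makes the model III anomaly visible: the information rises to a maximum at k = n/2 and vanishes at k = n.

```python
import numpy as np

def ar1_corr(n, beta):
    idx = np.arange(n)
    return beta ** np.abs(idx[:, None] - idx[None, :])

def fisher_info(n, k, beta, model):
    """Evaluate the variance formulas above for the AR(1) family."""
    m = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]).astype(float)
    D = m * beta ** np.clip(m - 1.0, 0.0, None)   # D = dGamma/dbeta elementwise
    W = np.linalg.inv(ar1_corr(n, beta))
    WD = W @ D
    V = n * np.trace(WD @ WD) - np.trace(WD) ** 2
    if model == "I":
        return V * k**2 / (2.0 * (n * k + 2))
    if model == "II":
        return V * k / (2.0 * (n + 2))
    if model == "III":
        return V * k * (n - k) / (2.0 * (n - 1) * (n + 2))
    raise ValueError(model)

n, beta = 10, 0.5
info_III = [fisher_info(n, k, beta, "III") for k in range(1, n + 1)]
# info_III is proportional to k(n - k): it peaks at k = n/2 and is zero at k = n
```

The model II information is exactly linear in k, consistent with the per-series additivity noted in the text.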
The most striking anomaly for large k is that the Fisher information for β in model III is monotone decreasing for k > n/2 and is reduced to zero for k ≥ n. For a conventional one-parameter model with distributions f_k(y_1, . . . , y_k; β), the Fisher information satisfies

FI_k = FI_{k−1} + var(∂ log f_k(y_k | y_1, . . . , y_{k−1}; β)/∂β) ≥ FI_{k−1},

so the Fisher information is necessarily non-decreasing in k. It is immaterial whether the components are scalars or vectors. This factorization argument also holds for marginal distributions based on residuals, that is, the REML likelihood for variance components or spatial autocorrelations. It also covers the marginal likelihood for models I and II, and conditional likelihoods of the type used to eliminate nuisance parameters in binary regression models. However, explicit Fisher information calculations for β in model III show that this seemingly impregnable argument may fail. The difficulty lies in the fact that the marginal distributions f_k of the maximal invariant in model III cannot be factored: f_{k−1} is not the marginal distribution of f_k under deletion of the last component (see Section 5).
Bearing in mind the stated goal of increasing precision by pooling information from all series, the third formulation is a complete success for small k. But it is a spectacular failure for large k because any information about β that is present in the first few series remains available even when further series are observed. The marginal likelihood with k ≥ n is constant and thus devoid of information, but the marginal likelihood based on any single series or pair of series is informative and the Fisher information is positive. A skeptical reader may consider the case k = n, where the matrix Y is invertible with probability one. Direct examination of (2) for model III shows that the term det(Y′WY) factors and that the log likelihood does not depend on the parameter. These conclusions are independent of the nature of the model for Γ.

If k < n, the log likelihood function for model III may be used for inference about β, either for computing a point estimate and standard error, for generating confidence intervals or for computing posterior intervals. However, if k > n/2, greater precision can be achieved by discarding a random subset of the series and applying the same model to the remainder. This counterintuitive behavior is easily verified by simulation.
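One such simulation sketch (ours, with AR(1) Γ and Σ = I_k; the latter is harmless because the model III statistic is invariant to Σ): the empirical variance of a finite-difference score at the true β estimates the Fisher information, and it is larger when only n/2 of the series are retained.

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta, h = 8, 0.5, 1e-3

def ar1_corr(m, b):
    idx = np.arange(m)
    return b ** np.abs(idx[:, None] - idx[None, :])

def lp3(Y, W, logdetG):
    """Model III profile log likelihood, eq. (2), with precomputed W = Gamma^{-1}."""
    n_, k_ = Y.shape
    return -0.5 * k_ * logdetG - 0.5 * n_ * np.linalg.slogdet(Y.T @ W @ Y)[1]

Wp, ldp = np.linalg.inv(ar1_corr(n, beta + h)), np.linalg.slogdet(ar1_corr(n, beta + h))[1]
Wm, ldm = np.linalg.inv(ar1_corr(n, beta - h)), np.linalg.slogdet(ar1_corr(n, beta - h))[1]
A = np.linalg.cholesky(ar1_corr(n, beta))

def score(Y):
    """Finite-difference score d l_p / d beta at the true beta."""
    return (lp3(Y, Wp, ldp) - lp3(Y, Wm, ldm)) / (2 * h)

scores_all, scores_half = [], []
for _ in range(2000):
    Y = A @ rng.standard_normal((n, n - 1))   # k = n - 1 = 7 independent series
    scores_all.append(score(Y))               # use all 7 series
    scores_half.append(score(Y[:, :4]))       # discard 3 of them, keep k = n/2

var_all, var_half = np.var(scores_all), np.var(scores_half)
# theory: the information is proportional to k(n - k), i.e. 7 versus 16 here
```

So the score based on the reduced data is genuinely more variable, exactly as the formula V k(n − k)/(2(n − 1)(n + 2)) predicts.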
Regression effects
The standard model with zero-mean Gaussian variables is easily extended to include linear models having non-zero mean. The simplest model of this form is the standard Gaussian multivariate regression model,

E(Y) = Xθ,    cov(Y) = Γ ⊗ Σ,    (3)

where the model matrix X is of order n × p with rank p ≤ n and θ is a parameter matrix of order p × k. This model with k = 2 occurs in field trials where the response is bivariate, for example, weight of grain and weight of straw on each plot (McCullagh and Clifford (2006)). The log likelihood based on residuals or X-contrasts (Patterson and Thompson (1971), Harville (1977)) is

ľ(Γ, Σ; Y) = (k/2) log Det(WQ) − (n/2) log|Σ| − (1/2) tr(Y′WQY Σ⁻¹),

where Q = I_n − X(X′WX)⁻¹X′W has rank n − p and Det(·) is the product of the non-zero eigenvalues. The profile log likelihood for Γ in model III is

ľ_p(Γ; Y) = (k/2) log Det(WQ) − (max(k, n − p)/2) log Det(Y′WQY).

All of the remarks made in the preceding section about the Fisher information hold for the profile residual likelihood, with n replaced by n − p and W by WQ.
Application to phylogenetics
The motivating example for this work comes from genetics, where sequence data are observed for n species at k homologous loci. In Kim and Pritchard (2007), n = 5 and the loci are highly conserved non-coding sequences numbering several thousand. In reality, the value at each locus is a sequence from the genetic alphabet, but we assume here for simplicity that this can be coded in such a way that Y_ir is a real number. For locus r, the covariance of Y_ir with Y_jr is σ_rr Γ_ij, where σ_rr is the site-specific mutation rate and Γ_ij is the length of the ancestral tree that is shared by the two species. Under neutral evolution, the genetic distance between species is constant, the same at each locus. Furthermore, the responses at different loci may be correlated due to their proximity on the genomes of one or more species. The natural Gaussian model is (3) with X = 1, the constant vector.
Our aim is to estimate the ancestral tree using one of the three variants of (3). The profile log likelihood function for model I is

l(Γ; Y) = (k/2) log Det(WQ) − ((n − 1)k/2) log tr(Y′WQY)
        = (k/2) log Det(WQ) − ((n − 1)k/2) log tr(WQS)
        = (k/2) log Det(WQ) + ((n − 1)k/4) log tr(WQD),
where S = YY′ is the observed inner product matrix and D_ij = S_ii + S_jj − 2S_ij is the observed squared distance between species. This expression is the log likelihood function on phylogenetic trees based on the marginal distribution of the squared distance matrix D (McCullagh (2008)).
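The move from S to D in the trace rests on Q annihilating constant vectors when X = 1: the diagonal terms of D then drop out of tr(WQD), leaving tr(WQD) = −2 tr(WQS). A quick numerical check of that cancellation, with variable names invented here:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 3
Y = rng.standard_normal((n, k))
S = Y @ Y.T                                            # inner products
D = np.diag(S)[:, None] + np.diag(S)[None, :] - 2 * S  # squared distances

R = rng.standard_normal((n, n))
Gamma = R @ R.T + n * np.eye(n)
W = np.linalg.inv(Gamma)
X = np.ones((n, 1))
Q = np.eye(n) - X @ np.linalg.solve(X.T @ W @ X, X.T @ W)

# Q1 = 0 and 1'WQ = 0, so the diagonal terms of D cancel in the trace:
assert np.allclose(np.trace(W @ Q @ D), -2 * np.trace(W @ Q @ S))
```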
If we wish to take account of locus-specific mutation rates, version II of the standard model is more appropriate. The profile log likelihood function for this model is

l(Γ; Y) = (k/2) log Det(WQ) − (n/2) Σ_{r=1}^k log(Y_r′WQY_r)
        = (k/2) log Det(WQ) + (n/4) Σ_{r=1}^k log tr(WQD_r),
which requires locus-specific squared distance matrices D_r(i, j) = (Y_ir − Y_jr)². Although formulation III appeared to be appropriate and natural for this application, the model with general Σ is a total failure because k is much larger than n and the profile log likelihood is uninformative.
Tractable models intermediate between II and III can be used to take account of correlations and to pool information more efficiently. The technique is illustrated here by the set of Markov matrices, that is, Σ is a Green's matrix of the form a_i b_j for i ≤ j and Σ^{−1} is a symmetric Jacobi, or tri-diagonal, matrix (Karlin (1968), Section 3.3). Let Y_0 = 0 and let Q_r be the orthogonal projection in R^n with kernel span(X, Y_{r−1}) and rank n − p − 1 for r > 1. Conditional on Y_1, . . . , Y_{r−1}, the residual log likelihood for Γ based on Q_rY_r/‖Q_rY_r‖ is

(1/2) log Det(WQ_r) − (rank(Q_r)/2) log(Y_r′WQ_rY_r).
The full log likelihood is the sum of k similar terms, and the derivatives have the same form as those for model II. Since Q_r is a random projection, the matrix V_r = n tr((WQ_rD)²) − tr²(WQ_rD) governing the conditional Fisher information is also random. No closed-form expression is available for the expected value, but symmetry considerations indicate that the total Fisher information is of order rank(Q_r), directly proportional to the number of series. The marginal likelihood for the series in reverse order is different, but the Fisher information is the same. Neither marginal likelihood coincides exactly with the profile likelihood.
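As a concrete instance of such a Green's matrix, a_i = i and b_j = 1 give Σ_ij = min(i, j), the covariance of a random walk, whose inverse is indeed tri-diagonal:

```python
import numpy as np

# Green's matrix with a_i = i and b_j = 1: Sigma[i, j] = min(i, j)
n = 6
idx = np.arange(1, n + 1)
Sigma = np.minimum.outer(idx, idx).astype(float)

P = np.linalg.inv(Sigma)
# The precision matrix is tri-diagonal: entries more than one step
# off the diagonal vanish.
assert np.max(np.abs(np.triu(P, 2))) < 1e-8
```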
Marginal likelihood and group orbits
In order to eliminate θ from the likelihood in the model Y ∼ N(Xθ, Γ), part of the data is ignored. The residual likelihood function is based on the statistic LY ∼ N(0, LΓL′), where L is any linear transformation with kernel X = span(X). To eliminate scalar constants, we also ignore scalar multiples and base the likelihood function on the reduced statistic Y/‖Y‖ or LY/‖LY‖. For k = 1, this technique gives a marginal log likelihood of

l(Γ; Y) = (1/2) log Det(WQ) − ((n − p)/2) log(Y′WQY),
where W = Γ^{−1} and Q = I − X(X′WX)^{−1}X′W has rank n − p. Note that l(αΓ; Y) = l(Γ; Y), so the marginal likelihood is constant on scalar multiples of Γ. Equivalent versions of this marginal likelihood function have been given by Bellhouse (1978, 1990), Cruddas, Reid and Cox (1989) and Tunnicliffe-Wilson (1989). The marginal log likelihood is based on the maximal invariant under the action of a certain group on the observation space Y ∈ R^n. The standard residual likelihood associated with the group of translations Y → Y + x with x ∈ X leads to the REML log likelihood
(1/2) log Det(WQ) − (1/2) Y′WQY.
The maximal invariant can be described in one of two ways, either in terms of X-contrasts or in terms of the group orbit which is the coset y + X. When the group is extended to include scalar multiplication, the maximal invariant is reduced and the marginal log likelihood is the function l(Γ; Y) shown above.
In the multivariate case Y ∼ N(Xθ, Γ ⊗ Σ), the regression parameter is eliminated, as above, by considering an arbitrary linear transformation L: R^n → R^n with kernel X and applying it to each of the columns of Y. The kernel is thus X^{⊕k}, the group orbits are cosets and the multivariate residual log likelihood is

(k/2) log Det(WQ) − (n/2) log|Σ| − (1/2) tr(Y′WQY Σ^{−1}).
If we now extend the group by linear transformations Y → Yg with g ∈ GL(R^k), the dependence on Σ vanishes and the marginal log likelihood is

(k/2) log Det(WQ) − (max(k, n − p)/2) log Det(Y′WQY)

(Appendix B). Since this is a log likelihood function, the Bartlett identities are automatically satisfied, as was observed in Section 2. The preceding remarks help to explain the anomalous behavior of the log likelihood under model III. For X = 0 and k = 1, each one-dimensional subspace excluding the origin is a group orbit in R^n, so there are as many orbits as there are points on the projective sphere in R^n. For general k, the observation space is R^{nk}, but a typical group orbit has dimension k², so the maximal invariant has dimension k(n − k), which is the factor governing the rate of increase of the Fisher information. For k ≥ n, there is one group orbit that has probability one, so the invariant statistic is degenerate and uninformative.
The preceding discussion suggests the following question. The action of the group GL(R^k) is such that the maximal invariant has a distribution independent of Σ. Can the same effect be achieved at less cost by a sub-group? The answer, which is a qualified "yes", is now illustrated by the sub-group UT_k of upper triangular transformations. Taking the series in the order given, the maximal invariant is constructed as follows. For each series Y_r, compute the residual after linear regression on both X and Y_1, . . . , Y_{r−1}, ignoring scalar multiples. The contribution to the log likelihood function from the series Y_r is

(1/2) log Det(WQ_r) − (rank(Q_r)/2) log(Y_r′WQ_rY_r),
where Q_r is the orthogonal projection in R^n with inner product matrix W = Γ^{−1} and null space span(X, Y_1, . . . , Y_{r−1}). The contribution to the Fisher information is non-negative, but zero for r ≥ n − p. The total log likelihood based on the maximal invariant under the upper triangular sub-group is thus

Σ_{r=1}^k [(1/2) log Det(WQ_r) − ((n − p − r + 1)/2) log(Y_r′WQ_rY_r)].
The group determines the order in which the series are taken, each order has a different maximal invariant and the log likelihood clearly depends on the order. No closed-form expressions are available for the Fisher information, but, by contrast with the behavior for GL(R^k), the Fisher information does not decrease with k.
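The sequential construction can be sketched directly; the function below sums the per-series contributions, with names invented here. As a consistency check, differences of log likelihoods at two values of Γ are unchanged when Y is replaced by Yg for upper triangular g, as expected of a function of the maximal invariant:

```python
import numpy as np

def pseudo_logdet(A, tol=1e-10):
    # log of Det(.): product of the non-zero eigenvalues
    w = np.linalg.eigvalsh((A + A.T) / 2)
    return float(np.sum(np.log(w[w > tol])))

def ut_loglik(Y, X, Gamma):
    """Sum over r of (1/2) log Det(WQ_r) - ((n-p-r+1)/2) log(Y_r' W Q_r Y_r),
    where Q_r projects off span(X, Y_1, ..., Y_{r-1})."""
    n, k = Y.shape
    p = X.shape[1]
    W = np.linalg.inv(Gamma)
    total = 0.0
    for r in range(k):                    # r here plays the role of r-1 in the text
        Z = np.hstack([X, Y[:, :r]])
        Q = np.eye(n) - Z @ np.linalg.solve(Z.T @ W @ Z, Z.T @ W)
        WQ = W @ Q
        yr = Y[:, r]
        total += 0.5 * pseudo_logdet(WQ) \
            - 0.5 * (n - p - r) * np.log(yr @ WQ @ yr)
    return total

rng = np.random.default_rng(2)
n, k, p = 7, 3, 1
X = np.ones((n, p))
Y = rng.standard_normal((n, k))
R1 = rng.standard_normal((n, n)); G1 = R1 @ R1.T + n * np.eye(n)
R2 = rng.standard_normal((n, n)); G2 = R2 @ R2.T + n * np.eye(n)
g = np.triu(rng.standard_normal((k, k)))
np.fill_diagonal(g, [1.5, 2.0, 0.7])

d1 = ut_loglik(Y, X, G1) - ut_loglik(Y, X, G2)
d2 = ut_loglik(Y @ g, X, G1) - ut_loglik(Y @ g, X, G2)
assert abs(d1 - d2) < 1e-6
```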
Acknowledgement

Support for this research was provided in part by NSF Grant DMS-03-05009.

Appendix A: Haar moments

Let H be a random orthogonal matrix uniformly distributed with respect to Haar measure on the orthogonal group of order n. The value in row r and column j is denoted by H_r^j, so the (r, j) component of H² is H_r^i H_i^j using the summation convention for repeated indices. By contrast, the (r, s) component of HH′ is H_r^j H_s^j = δ_rs, where δ_rs is the Kronecker symbol for the identity matrix.

Since −H has the same distribution as H, the moments and cumulants of odd order are zero. For n ≥ 2, the non-zero moments up to order four are

E(H_r^i H_s^j) = δ_rs δ^{ij}/n,
E(H_r^i H_s^j H_t^k H_u^l) = ((n + 1) δ_rs δ_tu δ^{ij} δ^{kl} [3] − δ_rs δ_tu δ^{ik} δ^{jl} [6])/(n(n − 1)(n + 2)).

Subscripts on the left-hand sides are in one-to-one alphabetic correspondence with superscripts. For a moment or cumulant of order k, the right-hand side is a sum over bi-partitions of {1, . . . , k}, that is, ordered pairs of partitions of subscripts and superscripts, all partitions having blocks of size two only. For example, the diagonal bi-partition (13|24, 13|24) appears in alphabetic form as δ_rt δ_su δ^{ik} δ^{jl}, while (12|34, 13|24) appears as δ_rs δ_tu δ^{ik} δ^{jl}. The coefficient depends only on the least upper bound of the two partitions.

Since there are three partitions of four elements into two blocks of size two, there are nine bi-partitions of {1, . . . , 4}, the three diagonal elements having one coefficient in the fourth moment and the six off-diagonal elements having a different coefficient. Likewise, there are 15 partitions of six elements into blocks of size two, so the sixth moment is a sum over 15² = 225 bi-partitions. The 15 diagonal pairs have a least upper bound with three blocks, a further 90 pairs have a least upper bound with two blocks and the remaining 120 pairs have a least upper bound with one block. Thus, there are three distinct coefficients in the sum over bi-partitions of {1, . . . , 6}.

It follows that

E(tr(H²)) = E(H_r^i H_i^r) = δ_ri δ^{ri}/n = 1,
E(tr²(H)) = E(H_r^r H_s^s) = δ_rs δ^{rs}/n = 1,
E(tr(H⁴)) = E(H_r^u H_s^r H_t^s H_u^t)
  = δ_rs δ_tu ((n + 1) δ^{ru} δ^{st} [3] − δ^{rt} δ^{su} [6])/(n(n − 1)(n + 2))
  = ((n + 1)(n² + 2n) − 2n(n + 2))/(n(n − 1)(n + 2)) = 1,
E(tr²(H²)) = E(H_r^s H_s^r H_t^u H_u^t)
  = δ_rs δ_tu ((n + 1) δ^{rs} δ^{tu} [3] − δ^{rt} δ^{su} [6])/(n(n − 1)(n + 2))
  = (3n²(n + 1) − 6n)/(n(n − 1)(n + 2)) = 3,

in agreement with more general formulae for moments of traces given by Diaconis and Shahshahani (1994).

Finally, for the variance or covariance of log likelihood derivatives under model III, let Z consist of the first k columns of H, so that indices r, s, . . . run from 1 to k ≤ n. Then

E tr(Z′AZ) = E(Z_r^i Z_r^j A_ij) = δ_rr δ^{ij} A_ij/n = k tr(A)/n,
E(tr(Z′AZ) tr(Z′BZ)) = E(Z_r^i Z_r^j A_ij Z_s^k Z_s^l B_kl) = A_ij B_kl E(Z_r^i Z_r^j Z_s^k Z_s^l)
  = (k(nk + k − 2) tr(A) tr(B) + 2k(n − k) tr(AB))/(n(n − 1)(n + 2))

and

cov(tr(Z′AZ), tr(Z′BZ)) = 2k(n − k)(tr(AB) − tr(A) tr(B)/n)/(n(n − 1)(n + 2)).

Appendix B: Distribution of the maximal invariant

Let Y be a random matrix of order n × k with density f(y) dy with respect to Lebesgue measure at y ∈ R^{nk}. In order to calculate the distribution of the maximal invariant under the action of GL(R^k) by right multiplication, we first observe that the action on the first k components is weakly transitive. For n = k, there are many group orbits, but for continuous distributions, there is a single orbit that has probability one. Under standard conditions, the matrix g̃ = Y_{(k)} consisting of the first k rows of Y has full rank, so the group element g̃^{−1} sends Y to a standard configuration or representative orbit element Z = Y g̃^{−1} in which the leading k rows are equal to I_k. The Jacobian of the transformation Y → (g̃, Z) is equal to |g̃|^{n−k}, so the marginal density of Z is

∫_{R^{k²}} f(zg) |g|^{n−k} dg.

Simplification of this expression for the Gaussian distribution with covariance (1) gives the marginal likelihood function in the form

|Γ|^{−k/2} |z′Γ^{−1}z|^{−n/2} ∝ |Γ|^{−k/2} |y′Γ^{−1}y|^{−n/2}.

In other words, the profile log likelihood (2) coincides with the marginal log likelihood based on the maximal invariant.
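The low-order identities E tr(H²) = 1 and E tr²(H) = 1 are easy to check by Monte Carlo, drawing H from Haar measure via the QR factorization with sign correction (cf. Heiberger (1978); Stewart (1980)):

```python
import numpy as np

def haar_orthogonal(n, rng):
    # Haar-distributed orthogonal matrix: QR of a Gaussian matrix,
    # with columns signed by the diagonal of R to remove the QR ambiguity
    Z = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    return Q * np.sign(np.diag(R))

rng = np.random.default_rng(3)
n, reps = 4, 10000
t2 = np.empty(reps)
t1sq = np.empty(reps)
for i in range(reps):
    H = haar_orthogonal(n, rng)
    t2[i] = np.trace(H @ H)
    t1sq[i] = np.trace(H) ** 2

# E tr(H^2) = 1 and E tr^2(H) = 1 for every n >= 2
assert abs(t2.mean() - 1.0) < 0.1
assert abs(t1sq.mean() - 1.0) < 0.1
```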
Bartlett, M.S. (1953). Approximate confidence intervals II: More than one unknown parameter. Biometrika 40 306-317. MR0059519

Bartlett, M.S. (1955). Approximate confidence intervals III: A bias correction. Biometrika 42 201-204. MR0070117

Basu, D. (1955). On statistics independent of a complete sufficient statistic. Sankhyā 15 227-380. MR0074745

Bellhouse, D.R. (1978). Marginal likelihoods for distributed lag models. Statist. Hefte 19 2-14. MR0494032

Bellhouse, D.R. (1990). On the equivalence of marginal and approximate conditional likelihoods for correlation parameters under a normal model. Biometrika 77 743-746. MR1086686

Boos, D.D. and Hughes-Oliver, J.M. (1998). Applications of Basu's theorem. American Statistician 52 218-221. MR1650407

Cox, D.R. and Reid, N. (1987). Parameter orthogonality and approximate conditional inference (with discussion). J. Roy. Statist. Soc. Ser. B 49 1-39. MR0893334

Cruddas, A.M., Reid, N. and Cox, D.R. (1989). A time series illustration of approximate conditional likelihood. Biometrika 76 231-237. MR1016013

Diaconis, P. and Shahshahani, M. (1994). On the eigenvalues of random matrices. J. Appl. Probab. 31 49-62. MR1274717

Harville, D.A. (1977). Maximum likelihood approaches to variance component estimation and to related problems (with discussion). J. Amer. Statist. Assoc. 72 320-340. MR0451550

Harville, D.A. (1997). Matrix Algebra from a Statistician's Perspective. New York: Springer. MR1467237

Heiberger, R.A. (1978). Algorithm AS 127: Generation of random orthogonal matrices. Appl. Statist. 27 199-206.

Karlin, S. (1968). Total Positivity. Stanford Univ. Press. MR0230102

Kim, S.Y. and Pritchard, J. (2007). Adaptive evolution of conserved non-coding elements in mammals. PLoS Genetics, e147. doi:10.1371/journal.pgen.0030147.eor

McCullagh, P. (2008). Marginal likelihood for distance matrices. Statistica Sinica. To appear.

McCullagh, P. and Tibshirani, R. (1990). A simple method for the adjustment of profile likelihoods. J. Roy. Statist. Soc. Ser. B 52 325-344. MR1064420

McCullagh, P. and Clifford, D. (2006). Evidence for conformal invariance of crop yields. Proc. Roy. Soc. A 462 2119-2143.

Patterson, H.D. and Thompson, R. (1971). Recovery of inter-block information when block sizes are unequal. Biometrika 58 545-554. MR0319325

Mitchell, M.W., Genton, M.G. and Gumpertz, M.L. (2006). A likelihood ratio test for separability of covariances. J. Multivariate Anal. 97 1025-1043. MR2276147

Neyman, J. and Scott, E.L. (1948). Consistent estimates based on partially consistent observations. Econometrica 16 1-32. MR0025113

Stein, M.L. (1999). Interpolation of Spatial Data: Some Theory for Kriging. New York: Springer. MR1697409

Stein, M.L. (2005). Space-time covariance functions. J. Amer. Statist. Assoc. 100 310-321. MR2156840

Stewart, G.W. (1980). The efficient generation of random orthogonal matrices with application to condition estimation. SIAM J. Numer. Anal. 17 403-409. MR0581487

Tunnicliffe-Wilson, G. (1989). On the use of marginal likelihood in time series model estimation. J. Roy. Statist. Soc. Ser. B 51 15-27. MR0984990
|
[] |
[
"Confirming EIS Clusters. Multi-object Spectroscopy",
"Confirming EIS Clusters. Multi-object Spectroscopy"
] |
[
"A Biviano \nIstituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy\n",
"M Ramella \nIstituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy\n",
"W Boschin \nIstituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy\n",
"Osservatorio Astronomico \nIstituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy\n",
"Di Trieste \nIstituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy\n",
"Italy S Bardelli \nIstituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy\n",
"Osservatorio Astronomico \nIstituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy\n",
"Di Bologna \nIstituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy\n",
"Italy M Scodeggio \nIstituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy\n",
"L N Da Costa \nIstituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy\n",
"L F Olsen \nIstituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy\n",
"M Nonino \nIstituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy\n",
"S Borgani \nIstituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy\n",
"M Girardi \nIstituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy\n"
] |
[
"Istituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy",
"Istituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy",
"Istituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy",
"Istituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy",
"Istituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy",
"Istituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy",
"Istituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy",
"Istituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy",
"Istituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy",
"Istituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy",
"Istituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy",
"Istituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy",
"Istituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy",
"Istituto Nazionale di Fisica Nucleare\nDipartimento di Astronomia, Università degli Studi di Trieste\nEuropean Southern Observatory\nGarching bei MünchenGermany, Italy, Italy"
] |
[
"Clustering at High Redshifts ASP Conference Series"
] |
Clusters of galaxies arise from exceptionally high peaks of the primordial fluctuation density field. Their properties as a function of redshift, z, are therefore highly sensitive to the nature of such cosmic fluctuations. It is therefore very important to have a sample of galaxy clusters covering as wide a redshift range as possible. Recently, Olsen et al. (1999) and Scodeggio et al. (1999) have identified clusters in 2D from the I-band images of the ESO Imaging Survey (EIS, see Renzini & da Costa 1997), using the matched filter algorithm of Postman et al. (1996). Very little is known on the performance of this algorithm at z ≥ 0.5, and many of the cluster candidates may not be real. Spectroscopic measurement of redshifts and confirmation of cluster candidates in the range 0.5 ≤ z ≤ 0.7 is possible with 4m-class telescopes. Here we report on preliminary results of new spectroscopic observations of six EIS candidate clusters. A complete description of our survey and results will be published in Ramella et al. (in preparation). The selected cluster candidates have estimated redshifts (from the matched filter algorithm) 0.5 ≤ z_mf ≤ 0.7. We observed these cluster fields with EFOSC2 at the 3.6 m ESO telescope at La Silla, in Multi-Object Spectroscopy mode, during two nights in February 1999, in average weather conditions and partial moonlight. In total we determined redshifts for 67 galaxies, covering the range 0.09 ≤ z ≤ 0.79 (plus a redshift for a serendipitously found QSO at z = 3.2), with an average z = 0.38. Magnitudes of these galaxies span the range 17.0 ≤ m_I ≤ 21.3, where m_I is the apparent magnitude in the I_c band (Nonino et al. 1999).
| null |
[
"https://arxiv.org/pdf/astro-ph/9910470v1.pdf"
] | 15,109,769 |
astro-ph/9910470
|
3fc669bcbe91cdb5830c095bccb6a06335cfee48
|
Confirming EIS Clusters. Multi-object Spectroscopy
2000
A Biviano
Istituto Nazionale di Fisica Nucleare
Dipartimento di Astronomia, Università degli Studi di Trieste
European Southern Observatory
Garching bei MünchenGermany, Italy, Italy
M Ramella
Istituto Nazionale di Fisica Nucleare
Dipartimento di Astronomia, Università degli Studi di Trieste
European Southern Observatory
Garching bei MünchenGermany, Italy, Italy
W Boschin
Istituto Nazionale di Fisica Nucleare
Dipartimento di Astronomia, Università degli Studi di Trieste
European Southern Observatory
Garching bei MünchenGermany, Italy, Italy
Osservatorio Astronomico
Istituto Nazionale di Fisica Nucleare
Dipartimento di Astronomia, Università degli Studi di Trieste
European Southern Observatory
Garching bei MünchenGermany, Italy, Italy
Di Trieste
Istituto Nazionale di Fisica Nucleare
Dipartimento di Astronomia, Università degli Studi di Trieste
European Southern Observatory
Garching bei MünchenGermany, Italy, Italy
Italy S Bardelli
Istituto Nazionale di Fisica Nucleare
Dipartimento di Astronomia, Università degli Studi di Trieste
European Southern Observatory
Garching bei MünchenGermany, Italy, Italy
Osservatorio Astronomico
Istituto Nazionale di Fisica Nucleare
Dipartimento di Astronomia, Università degli Studi di Trieste
European Southern Observatory
Garching bei MünchenGermany, Italy, Italy
Di Bologna
Istituto Nazionale di Fisica Nucleare
Dipartimento di Astronomia, Università degli Studi di Trieste
European Southern Observatory
Garching bei MünchenGermany, Italy, Italy
Italy M Scodeggio
Istituto Nazionale di Fisica Nucleare
Dipartimento di Astronomia, Università degli Studi di Trieste
European Southern Observatory
Garching bei MünchenGermany, Italy, Italy
L N Da Costa
Istituto Nazionale di Fisica Nucleare
Dipartimento di Astronomia, Università degli Studi di Trieste
European Southern Observatory
Garching bei MünchenGermany, Italy, Italy
L F Olsen
Istituto Nazionale di Fisica Nucleare
Dipartimento di Astronomia, Università degli Studi di Trieste
European Southern Observatory
Garching bei MünchenGermany, Italy, Italy
M Nonino
Istituto Nazionale di Fisica Nucleare
Dipartimento di Astronomia, Università degli Studi di Trieste
European Southern Observatory
Garching bei MünchenGermany, Italy, Italy
S Borgani
Istituto Nazionale di Fisica Nucleare
Dipartimento di Astronomia, Università degli Studi di Trieste
European Southern Observatory
Garching bei MünchenGermany, Italy, Italy
M Girardi
Istituto Nazionale di Fisica Nucleare
Dipartimento di Astronomia, Università degli Studi di Trieste
European Southern Observatory
Garching bei MünchenGermany, Italy, Italy
Confirming EIS Clusters. Multi-object Spectroscopy
Clustering at High Redshifts ASP Conference Series
2000
Clusters of galaxies arise from exceptionally high peaks of the primordial fluctuation density field. Their properties as a function of redshift, z, are therefore highly sensitive to the nature of such cosmic fluctuations. It is therefore very important to have a sample of galaxy clusters covering as wide a redshift range as possible.
Recently, Olsen et al. (1999) and Scodeggio et al. (1999) have identified clusters in 2D from the I-band images of the ESO Imaging Survey (EIS, see Renzini & da Costa 1997), using the matched filter algorithm of Postman et al. (1996). Very little is known on the performance of this algorithm at z ≥ 0.5, and many of the cluster candidates may not be real. Spectroscopic measurement of redshifts and confirmation of cluster candidates in the range 0.5 ≤ z ≤ 0.7 is possible with 4m-class telescopes.
Here we report on preliminary results of new spectroscopic observations of six EIS candidate clusters. A complete description of our survey and results will be published in Ramella et al. (in preparation). The selected cluster candidates have estimated redshifts (from the matched filter algorithm) 0.5 ≤ z_mf ≤ 0.7. We observed these cluster fields with EFOSC2 at the 3.6 m ESO telescope at La Silla, in Multi-Object Spectroscopy mode, during two nights in February 1999, in average weather conditions and partial moonlight.
In total we determined redshifts for 67 galaxies, covering the range 0.09 ≤ z ≤ 0.79 (plus a redshift for a serendipitously found QSO at z = 3.2), with an average z = 0.38. Magnitudes of these galaxies span the range 17.0 ≤ m_I ≤ 21.3, where m_I is the apparent magnitude in the I_c band (Nonino et al. 1999).
At the average estimated redshift of our candidate clusters, z ∼ 0.6, the EFOSC2 field-of-view covers 1.9 × 1.3 h_75^{−2} Mpc², roughly matching the typical size of clusters. Therefore, in searching for the redshift-space system that should correspond to the 2D EIS cluster, we consider the whole EFOSC2 field.
We start by defining as candidate galaxy systems, any set of two or more galaxies in an EFOSC2 field, contained within a suitable redshift range, ∆z. We use ∆z = 0.01×(1+z) (the (1+z) factor is the usual cosmological correction -see Danese et al. 1980). Then, we estimate the likelihoods of the detected systems. We compare the observed number of galaxies within each system against the number of system galaxies expected for a uniform galaxy distribution within our magnitude range. The luminosity function we use is that of Postman et al. (1996), which, for our purposes, should be close enough to the luminosity function of the EIS survey. Since field galaxies are inhomogeneously distributed, we calibrate the likelihoods of our systems by comparison with a real field galaxy sample (the Canada-France Redshift Survey, Lilly et al. 1995).
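The grouping rule is not spelled out beyond the window ∆z = 0.01 × (1 + z); one plausible reading is a one-dimensional linking of neighbouring redshifts, sketched here with an invented function name and illustrative redshift values:

```python
# Hypothetical sketch: link galaxies whose redshift gap to the previous
# member is at most dz = 0.01 * (1 + z), then keep groups of two or more.
def redshift_systems(redshifts, scale=0.01):
    zs = sorted(redshifts)
    systems, current = [], [zs[0]]
    for z in zs[1:]:
        if z - current[-1] <= scale * (1 + z):
            current.append(z)
        else:
            systems.append(current)
            current = [z]
    systems.append(current)
    # candidate systems contain two or more galaxies
    return [s for s in systems if len(s) >= 2]

print(redshift_systems([0.09, 0.44, 0.445, 0.45, 0.673, 0.676]))
# [[0.44, 0.445, 0.45], [0.673, 0.676]]
```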
We find 4 real systems (at the 94% confidence level) among the six EIS candidate clusters. Note that the non-detection of two of the candidate clusters does not prove the clusters do not exist. We may simply have not been observing deep enough. For the confirmed clusters, in only two cases is the spectroscopic mean redshift z in agreement with the matched-filter estimate (z = 0.673 vs. z_mf = 0.6, and z = 0.445 vs. z_mf = 0.5). In the other two, the spectroscopic redshift is significantly smaller (z = 0.129 vs. z_mf = 0.5, and z = 0.236 vs. z_mf = 0.5). Our spectroscopic results are supported by independent evidence coming from the analysis of the colour-magnitude diagrams of galaxies in these same cluster fields.
From our analysis we conclude that 2/3 of the candidate clusters we have observed are real, and half of them have z ≃ z mf . Our sample is extremely small, but if we take our results at face value, they imply that the EIS sample contains ≃ 50 clusters at 0.5 ≤ z ≤ 0.7.
Danese, L., De Zotti, G., di Tullio, G. 1980, A&A, 82, 322
Lilly, S.J., Le Fèvre, O., Crampton, D., Hammer, F., Tresse, L. 1995, ApJ, 455, 50
Nonino, M., Bertin, E., da Costa, L.N., et al. 1999, A&AS, 137, 51
Olsen, L.F., Scodeggio, M., da Costa, L.N., et al. 1999, A&A, 345, 681
Postman, M., Lubin, L.M., Gunn, J.E., et al. 1996, AJ, 111, 615
Renzini, A., da Costa, L.N. 1997, Messenger, 87, 23
Scodeggio, M., Olsen, L.F., da Costa, L.N., et al. 1999, A&AS, 137, 83
|
[] |
[
"LANGUAGE RECOGNITION USING RANDOM INDEXING",
"LANGUAGE RECOGNITION USING RANDOM INDEXING"
] |
[
"Aditya Joshi [email protected] ",
"Johan T Halseth [email protected] ",
"Pentti Kanerva [email protected] ",
"\nDepartment of Mathematics\nDepartment of Computer Science\nUniversity of California\n94720Berkeley BerkeleyCAUSA\n",
"\nRedwood Center for Theoretical Neuroscience University of California\nUniversity of California\n94720, 94720Berkeley Berkeley, Berkeley BerkeleyCA, CAUSA, USA\n"
] |
[
"Department of Mathematics\nDepartment of Computer Science\nUniversity of California\n94720Berkeley BerkeleyCAUSA",
"Redwood Center for Theoretical Neuroscience University of California\nUniversity of California\n94720, 94720Berkeley Berkeley, Berkeley BerkeleyCA, CAUSA, USA"
] |
[] |
Random Indexing is a simple implementation of Random Projections with a wide range of applications. It can solve a variety of problems with good accuracy without introducing much complexity. Here we demonstrate its use for identifying the language of text samples, based on a novel method of encoding letter n-grams into high-dimensional Language Vectors. Further, we show that the method is easily implemented and requires little computational power and space. As proof of the method's statistical validity, we show its success in a language-recognition task. On a difficult data set of 21,000 short sentences from 21 different languages, we achieve 97.8% accuracy, comparable to state-of-the-art methods.
| null |
[
"https://arxiv.org/pdf/1412.7026v2.pdf"
] | 17,269,653 |
1412.7026
|
10e46d9c0bdfca2eb409892bd4fa394ed7af90bb
|
LANGUAGE RECOGNITION USING RANDOM INDEXING
Aditya Joshi [email protected]
Johan T Halseth [email protected]
Pentti Kanerva [email protected]
Department of Mathematics
Department of Computer Science
University of California
94720Berkeley BerkeleyCAUSA
Redwood Center for Theoretical Neuroscience University of California
University of California
94720, 94720Berkeley Berkeley, Berkeley BerkeleyCA, CAUSA, USA
LANGUAGE RECOGNITION USING RANDOM INDEXING
Under review as a conference paper at ICLR 2015
Random Indexing is a simple implementation of Random Projections with a wide range of applications. It can solve a variety of problems with good accuracy without introducing much complexity. Here we demonstrate its use for identifying the language of text samples, based on a novel method of encoding letter n-grams into high-dimensional Language Vectors. Further, we show that the method is easily implemented and requires little computational power and space. As proof of the method's statistical validity, we show its success in a language-recognition task. On a difficult data set of 21,000 short sentences from 21 different languages, we achieve 97.8% accuracy, comparable to state-of-the-art methods.
INTRODUCTION
As humans who communicate through language, we have the fascinating ability to recognize unknown languages in spoken or written form, using simple cues to distinguish one language from another. Some unfamiliar languages, of course, might sound very similar, especially if they come from the same language family, but we are often able to identify the language in question with very high accuracy. This is because embedded within each language are certain features that clearly distinguish one from another, whether it be accent, rhythm, or pitch patterns. The same can be said for written languages, as they all have features that are unique. Recognizing the language of a given text is the first step in all sorts of language processing, such as text analysis, categorization, translation and much more.
As popularized by Shannon (1948), most language models use distributional statistics to explain structural similarities in various specified languages. The traditional method of identifying languages consists of counting individual letters, letter bigrams, trigrams, tetragrams, etc., and comparing the frequency profiles of different text samples. As a general principle, the more accurate you want your detection method to be, the more data you have to store about the various languages. For example, Google's recently open-sourced program called Chromium Compact Language Detector uses large language profiles built from enormous corpora of data. As a result, the accuracy of their detection, as seen through large-scale testing and in practice, is near perfect (McCandless (2011)).
High-dimensional vector models are popular in natural-language processing and are used to capture word meaning from word-use statistics. The vectors are called semantic vectors or context vectors.
Ideally, words with a similar meaning are represented by semantic vectors that are close to each other in the vector space, while dissimilar meanings are represented by semantic vectors far from each other. Latent Semantic Analysis is a well-known model that is explained in detail in Landauer & Dumais (1997). It produces 300-dimensional (more or less) semantic vectors from a singular value decomposition (SVD) of a matrix of word frequencies in a large collection of documents.
An alternative to SVD, based on Random Projections, was proposed by Papadimitriou et al. and Kaski (1998). Random Indexing (Kanerva et al. (2000); Sahlgren (2005)) is a simple and effective implementation of the idea. It has been used in ways similar to Mikolov et al.'s Continuous Bag-of-Words Model (CBOW; Mikolov & Dean (2013)) and has features similar to Locality-Sensitive Hashing (LSH) but differs from them in its use of high dimensionality and randomness. With the dimensionality in the thousands (e.g., D = 10,000), referred to as "hyperdimensional", it is possible to calculate useful representations in a single pass over the dataset with very little computing.
In this paper, we will present a way of doing language detection using Random Indexing, which is fast, highly scalable, and space efficient. We will also present some results regarding the accuracy of the method, even though this will not be the main goal of this paper and should be investigated further.
RANDOM INDEXING
Random Indexing stores information by projecting data onto vectors in a hyperdimensional space. There exist a huge number of different, nearly orthogonal vectors in such a space (Kanerva, 1988, p. 19). This lets us combine two such vectors into a new vector using well-defined vector-space operations, while keeping the information of the two with high probability. In our implementation of Random Indexing, we use a variant of the MAP (Multiply, Add, Permute) coding described in Levy & Gayler (2009) to define the hyperdimensional vector space. Vectors are initially taken from a D-dimensional space (with D = 10,000) and have an equal number of randomly placed 1s and −1s. Such vectors are used to represent the basic elements of the system, which in our case are the 26 letters of the alphabet and the (ASCII) Space. These vectors for letters are sometimes referred to as their Random Labels.
The binary operations on such vectors are defined as follows. Elementwise addition of two vectors A and B is denoted by A + B. Similarly, elementwise multiplication is denoted by A * B. A vector A is its own multiplicative inverse, A * A = 1, where 1 is the D-dimensional identity vector consisting of only 1s. Cosine angles are used to measure the similarity of two vectors. The cosine is defined as cos(A, B) = |A' * B'|, where A' and B' are the normalized vectors of A and B, respectively, and |C| denotes the sum of the elements in C.
Information from a pair of vectors A and B is stored and utilized in a single vector by exploiting the summation operation. That is, the sum of two separate vectors naturally preserves unique information from each vector because of the mathematical properties of the hyperdimensional space. To see this, note that cos(A, A) = 1, while for all B ≠ A, cos(A, B) < 1. The cosine of two random, unrelated vectors tends to be close to 0. Because of this, the vector B can easily be found in the vector A + B: cos(B, A + B) differs significantly from 0.
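These properties are easy to verify numerically. The following NumPy sketch (helper names are ours, not the authors' code) builds Random Labels as described above and checks that a summand remains recoverable from a sum while unrelated vectors stay nearly orthogonal:

```python
import numpy as np

D = 10000                      # "hyperdimensional": D in the thousands
rng = np.random.default_rng(0)

def random_label():
    """A Random Label: equal numbers of randomly placed +1s and -1s."""
    v = np.concatenate([np.ones(D // 2, dtype=int), -np.ones(D // 2, dtype=int)])
    rng.shuffle(v)
    return v

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

A, B, C = (random_label() for _ in range(3))

assert np.array_equal(A * A, np.ones(D, dtype=int))  # A is its own inverse
assert abs(cosine(A, B)) < 0.05      # random vectors: nearly orthogonal
assert cosine(B, A + B) > 0.5        # B is still "visible" in the sum A + B
assert abs(cosine(C, A + B)) < 0.05  # an unrelated vector C is not
```

With D = 10,000 the cosine of two unrelated random vectors has a standard deviation of roughly 1/√D = 0.01, which is why the 0.05 thresholds above are comfortably safe.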
For storing sequences of vectors, we use a random (but fixed throughout all our computations) permutation operation ρ of the vector coordinates. Hence, the sequence A-B-C is stored as the vector (ρ((ρA) * B)) * C = ρρA * ρB * C. This efficiently distinguishes the sequence A-B-C from, say, A-C-B, as can be seen by looking at their cosine (here c is the normalization factor):
    V1 = ρρA * ρB * C
    V2 = ρρA * ρC * B

    ⇒ cos(V1, V2) = c · |(ρρA * ρB * C) * (ρρA * ρC * B)|
                  = c · |ρρA * ρρA * ρB * ρC * C * B|
                  = c · |ρρ(A * A) * ρ(B * C) * (B * C)|
                  ≈ c · 0
since a random permutation ρV of a random vector V (here V = B * C) is uncorrelated with V.
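The same check for the sequence encoding: with a single fixed coordinate permutation ρ, the codes for A-B-C and A-C-B come out nearly orthogonal (again a sketch with our own helper names):

```python
import numpy as np

D = 10000
rng = np.random.default_rng(1)

def random_label():
    v = np.concatenate([np.ones(D // 2, dtype=int), -np.ones(D // 2, dtype=int)])
    rng.shuffle(v)
    return v

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

perm = rng.permutation(D)     # one fixed random permutation rho
rho = lambda v: v[perm]

A, B, C = (random_label() for _ in range(3))

V1 = rho(rho(A)) * rho(B) * C  # encodes the sequence A-B-C
V2 = rho(rho(A)) * rho(C) * B  # encodes the sequence A-C-B

# The two orderings are nearly orthogonal, as derived above.
assert abs(cosine(V1, V2)) < 0.05
```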
MAKING AND COMPARING OF TEXT VECTORS
We use the properties of hyperdimensional vectors to extract certain properties of text into a single vector. Kanerva (2014) shows how Random Indexing can be used for representing the contexts in which a word appears in a text, into that word's context vector. We show here how to use a similar strategy for recognizing a text's language by creating and comparing Text Vectors: the Text Vector of an unknown text sample is compared for similarity to precomputed Text Vectors of known language samples-the latter are referred to as Language Vectors.
Simple language recognition can be done by comparing letter frequencies of a given text to known letter frequencies of languages. Given enough text, a text's letter distribution will approach the letter distribution of the language in which the text was written. The phenomenon is called an "ergodic" process in Shannon (1948), as borrowed from similar ideas in physics and thermodynamics. This can be generalized to using letter blocks of different sizes. By a block of size n, we mean n consecutive letters in the text, so that a text of length m, padded with a Space at each end (as in the example below), would have m − n + 3 blocks. When the letters are taken in the order in which they appear in the text, they are referred to as sequences (of length n) or as n-grams.
As an example, the text "a brook" gives rise to the tetragrams "-a-b", "a-br", "-bro", "broo", "rook", and "ook-" (here "-" stands for Space). The frequencies of such letter blocks can be found for a text and compared to known frequencies for different languages. For texts in languages using the Latin alphabet of 26 letters (plus Space), like English, this would lead to keeping track of 27^4 = 531,441 different tetragram frequencies. For arbitrary alphabets of l letters, there would be (l + 1)^n n-grams to keep track of. These numbers grow quickly as the block size n increases.
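The block extraction, including the Space padding at both ends that the example uses, is a one-liner (the helper name is ours):

```python
def ngrams(text, n):
    """All blocks of n consecutive letters, after padding the text with
    one Space at each end (giving m - n + 3 blocks for a text of length m)."""
    padded = " " + text + " "
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

# The "a brook" example, with "-" written as an actual Space:
assert ngrams("a brook", 4) == [" a b", "a br", " bro", "broo", "rook", "ook "]
assert len(ngrams("a brook", 4)) == len("a brook") - 4 + 3
```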
The Random Indexing approach for doing language recognition is similar. A text's Text Vector is first calculated by running over all the blocks of size n within the text and creating an n-Gram Vector for each. An n-Gram Vector is created for the sequence of letters as described earlier.
As an example, if we encounter the block "grab", its vector is calculated by performing ρρρG * ρρR * ρA * B, where G, R, A and B are the Random Labels for g, r, a, and b; they are random D-dimensional vectors with half 1s and half −1s and they remain constant.
A text's Text Vector is now obtained by summing the n-Gram Vectors for all the blocks in the text. This is still a D-dimensional vector and can be stored efficiently. Language Vectors are made in exactly the same way, by making Text Vectors from samples of a known language and adding them into a single vector. Determining the language of an unknown text is done by comparing its Text Vector to all the Language Vectors. More precisely, the cosine angle measure d_cos between a language vector X and an unknown text vector V is defined as follows:
    d_cos(X, V) = X · V / (|X| |V|)
                = ( Σ_{i=1}^{D} x_i v_i ) / ( √(Σ_{j=1}^{D} x_j²) · √(Σ_{k=1}^{D} v_k²) )
If the cosine angle is high (close to 1), the block frequencies of the text are similar to the block frequencies of that language and thus, the text is likely to be written in the same language. Hence, the language that yields the highest cosine is chosen as the system's prediction/guess.
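Putting the pieces together, a toy end-to-end classifier might look like the sketch below. All names and the two miniature "language samples" are ours; real Language Vectors are trained on about 100,000 bytes of text per language:

```python
import numpy as np

D = 10000
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
rng = np.random.default_rng(42)

def random_label():
    v = np.concatenate([np.ones(D // 2, dtype=int), -np.ones(D // 2, dtype=int)])
    rng.shuffle(v)
    return v

labels = {ch: random_label() for ch in ALPHABET}  # Random Labels: 26 letters + Space
perm = rng.permutation(D)                          # the fixed permutation rho

def rho(v, k=1):
    for _ in range(k):
        v = v[perm]
    return v

def ngram_vector(block):
    """Bind a letter block into one vector: rho^(n-1)V1 * ... * rho(V_{n-1}) * Vn."""
    n = len(block)
    out = np.ones(D, dtype=int)
    for i, ch in enumerate(block):
        out = out * rho(labels[ch], n - 1 - i)
    return out

def text_vector(text, n=3):
    """Sum the n-Gram Vectors of all blocks of the cleaned, Space-padded text."""
    text = " " + "".join(c for c in text.lower() if c in ALPHABET) + " "
    v = np.zeros(D, dtype=int)
    for i in range(len(text) - n + 1):
        v += ngram_vector(text[i:i + n])
    return v

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "Language Vectors" from tiny samples:
language_vectors = {
    "eng": text_vector("the quick brown fox jumps over the lazy dog and runs home"),
    "spa": text_vector("el rapido zorro marron salta sobre el perro perezoso y corre"),
}

def classify(text):
    """Guess the language whose Language Vector has the highest cosine."""
    v = text_vector(text)
    return max(language_vectors, key=lambda name: cosine(language_vectors[name], v))
```

Even with such minute training samples, a query sharing many trigrams with one sample is pulled strongly toward that Language Vector, because each shared trigram contributes roughly D to the dot product while unrelated trigram pairs contribute only noise of order √D.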
COMPLEXITY
The outlined algorithm for Text Vector generation can be implemented efficiently. For generating a vector for an n-gram, n − 1 vector additions and permutations are performed. This takes time O(n · D). Looping over a text of m letters, O(m) n-Gram Vectors must be created and added together. This clearly implies an O(n · D · m) implementation. This can be improved to O(D · m) by noting that most of the information needed for creating the n-Gram Vector for the next block is already contained in the previous n-Gram Vector, and can be retrieved by removing the contribution from the letter that is now no longer in the block.
Figure 1: 10,000-dimensional Language Vectors for 23 languages roughly cluster based on the known relations between the languages. The Language Vectors were based on letter trigrams and were projected onto a plane using t-SNE (van der Maaten (2008)).

Say we have the n-Gram Vector A = ρ^(n−1)V_1 * ρ^(n−2)V_2 * ... * ρV_{n−1} * V_n for block number i, and now want to find the n-Gram Vector B for block i + 1. We remove from A the vector ρ^(n−1)V_1 by multiplying with its inverse (which is the vector itself), which we can do in O(D) time since ρ^(n−1) is just another (pre-calculated) permutation. Then we permute the result once using ρ and multiply that with the Letter Vector V_{n+1} for the new letter in the block. This gives us the new n-Gram Vector

    B = ρ(ρ^(n−1)V_1 * A) * V_{n+1}
      = ρ(ρ^(n−2)V_2 * ... * ρV_{n−1} * V_n) * V_{n+1}
      = ρ^(n−1)V_2 * ... * ρ^(2)V_{n−1} * ρV_n * V_{n+1}
and so we can create n-Gram Vectors for arbitrary size blocks without adding complexity. In practice it means that the text is processed in a single pass at about a 100,000 letters a second on a laptop computer.
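This constant-time-per-letter update is easy to sanity-check: the incrementally updated vector must equal the one computed from scratch for the next block (a sketch; helper names are ours):

```python
import numpy as np

D = 10000
rng = np.random.default_rng(7)

def random_label():
    v = np.concatenate([np.ones(D // 2, dtype=int), -np.ones(D // 2, dtype=int)])
    rng.shuffle(v)
    return v

perm = rng.permutation(D)

def rho(v, k=1):
    """Apply the fixed permutation rho k times."""
    for _ in range(k):
        v = v[perm]
    return v

n = 4
letters = [random_label() for _ in range(n + 1)]  # Letter Vectors V1 .. V5

def from_scratch(block):
    """A = rho^(n-1)V1 * rho^(n-2)V2 * ... * Vn, computed directly."""
    out = np.ones(D, dtype=int)
    for i, v in enumerate(block):
        out = out * rho(v, n - 1 - i)
    return out

A = from_scratch(letters[0:n])                      # n-Gram Vector for block i
# Block i+1 in O(D): strip rho^(n-1)V1 (its own inverse), permute once, bind V_{n+1}.
B_fast = rho(rho(letters[0], n - 1) * A) * letters[n]
assert np.array_equal(B_fast, from_scratch(letters[1:n + 1]))
```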
EXPERIMENTAL RESULTS
The algorithm outlined above was implemented in Python by Joshi & Halseth (2014), and used to create Language Vectors for 23 languages. Texts for the Language Vectors were taken from Project Gutenberg (Hart), where text is available in a number of languages, and from the Wortschatz Corpora (Quastoff et al. 2006), where large numbers of sentences in selected languages can be easily downloaded. Each Language Vector was based on about 100,000 bytes of text. Computing the Language Vectors corresponds to training the system and took about 1 second per language on a laptop computer.
Intuitively, Language Vectors within a language family should be closer to each other than vectors for unrelated languages. Indeed, the hyperdimensional Language Vectors roughly cluster in this manner, as seen in Figure 1.
To get an idea of how well the actual detection algorithm works, we tested the Language Vectors' ability to identify text samples from the Europarl Parallel Corpus, described in Nakatani. This corpus includes 21 languages with 1,000 samples of each, and each sample is a single sentence. Table 1 shows the result for n-gram sizes from 1 to 5 (n = 1 is the equivalent of comparing letter histograms). With tetragrams we were able to guess the correct language with 97.8% accuracy. Even when incorrect, the system usually chose a language from the same family, as seen from Table 2.
It is worth noting that 10,000-dimensional Language Vectors are keeping track of 531,441 possible tetragrams and easily accommodate pentagrams and beyond if needed. The method should be explored further, as explained in the Future Work section.
FUTURE WORK
Many adjustments can be made to improve the efficacy of Random Indexing on language detection. The results of this paper are based solely on letter trigrams and tetragrams. However, it is a simple matter to add into the Text Vectors single-letter frequencies and bigrams, for example. Also, the vector dimensionality can be reduced to several thousands without markedly affecting the results.
The arithmetic (algebra) of the operations with which Text Vectors are made-i.e., permutation, multiplication, and addition, and how they work together-make it possible to analyze the Language Vectors and find out, for example, what letters are most likely to follow "the". (In English it would be the Space, but what is the next most likely?) Notice that we don't need to contemplate such questions in advance and then design the data-gathering algorithm with that in mind. The information is in the vectors in a form that allows it to be retrieved with the arithmetic.
Because of the generality of Random Indexing on texts, any time series with a well-defined "alphabet" can be encoded using this scheme. In this way, we propose that our method can be used to do language detection in speech data, addressing our original problem.
CONCLUSION
We have described the use of Random Indexing for language identification. Random Indexing has been used in the study of semantic vectors since 2000 (Kanerva et al. (2000); Sahlgren (2005)), and for encoding problems in graph theory (Levy & Gayler (2009)), but only now for identifying source materials. It is based on simple operations on high-dimensional random vectors: on Random Labels with 0-mean components that allow weak signals to rise above noise as the data accumulate. The algorithm works in a single pass, in linear time, with limited memory, and thus is inherently scalable, and it produces vectors that are amenable to further analysis. The experiments reported in this paper were an easy task for a laptop computer.
Table 1: Percentage of sentences correctly identified as a function of n-gram size.

Table 2: The confusion matrix of language detection using 10,000-dimensional Language Vectors based on letter trigrams. Each row corresponds to the correct label and each column is the predicted label for the Europarl corpus detection test. The entry (i, j) is the number of sentences (out of 1,000) that language j was guessed for language i. A high-value diagonal shows the very high accuracy. [Table body not recoverable; row/column languages: Ell, Eng, Ita, Ces, Est, Spa, Nld, Por, Lav, Lit, Ron, Pol, Fra, Bul, Deu, Dan, Fin, Hun, Swe, Slk, Slv.]
ACKNOWLEDGMENTS

We would like to thank Bruno Olshausen, Mayur Mudigonda, and many others at the Redwood Center for Theoretical Neuroscience for insightful discussions and feedback.
Hart, M. Project Gutenberg. URL https://www.gutenberg.org.
Joshi, A. and Halseth, J.T. GitHub: Random indexing for languages, Python implementation, 2014. URL https://github.com/halseth/vs265_project_f14.
Kanerva, P. Sparse Distributed Memory. MIT Press, 1988.
Kanerva, P. Computing with 10,000-bit words. Proc. 52nd Annual Allerton Conference on Communication, Control, and Computing, 2014.
Kanerva, P., Kristoferson, J., and Holst, A. Random indexing of text samples for latent semantic analysis. Proc. 22nd Annual Conference of the Cognitive Science Society, p. 1036, 2000.
Kaski, S. Dimensionality reduction by random mapping: Fast similarity computation for clustering. Proc. IJCNN'98, International Joint Conference on Neural Networks, 1:413-418, 1998.
Landauer, T. and Dumais, S. A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction and representation of knowledge. Psychological Review, 104(2):211-240, 1997.
Levy, S.D. and Gayler, R.W. Lateral inhibition in a fully distributed connectionist architecture. Proceedings of the Ninth International Conference on Cognitive Modeling, 2009.
McCandless, M. Accuracy and performance of Google's compact language detector, 2011. URL http://blog.mikemccandless.com/2011/10/accuracy-and-performance-of-googles.html.
Mikolov, T., Chen, K., Corrado, G., and Dean, J. Efficient estimation of word representations in vector space. arXiv:1301.3781v3 [cs.CL], 2013, 12 pp.
Nakatani, S. langdetect is updated (added profiles of Estonian / Lithuanian / Latvian / Slovene, and so on). http://shuyo.wordpress.com/2011/09/29/langdetect-is-updatedadded-profiles-of-estonian-lithuanian-latvian-slovene-and-so-on/. [Online; accessed 16-December-2014].
Papadimitriou, C.H., Raghavan, P., Tamaki, H., and Vempala, S. Latent semantic indexing: A probabilistic analysis. Proc. 17th ACM Symp. on the Principles of Database Systems, pp. 159-168.
Quastoff, U., Richter, M., and Biemann, C. Corpus portal for search in monolingual corpora. Proceedings of the fifth international conference on Language Resources and Evaluation, LREC, pp. 1799-1802, 2006.
Sahlgren, M. An introduction to random indexing. Methods and Applications of Semantic Indexing Workshop at the 7th international conference on Terminology and Knowledge Engineering, 2005.
Shannon, C.E. A mathematical theory of communication. The Bell System Technical Journal, 1948.
van der Maaten, L. Visualizing high-dimensional data using t-SNE. Journal of Machine Learning Research, 9:2579-2605, 2008.
|
[
"https://github.com/halseth/vs265_project_f14."
] |
[
"Detection of a large fraction of atomic gas not associated with star-forming material in M17 SW",
"Detection of a large fraction of atomic gas not associated with star-forming material in M17 SW"
] |
[
"J P Pérez-Beaupuits \nMax-Planck-Institut für Radioastronomie\nAuf dem Hügel 6953121BonnGermany\n",
"J Stutzki \nI. Physikalisches Institut\nUniversität zu Köln\nZülpicher Straße 7750937KölnGermany\n",
"V Ossenkopf \nI. Physikalisches Institut\nUniversität zu Köln\nZülpicher Straße 7750937KölnGermany\n",
"M Spaans \nKapteyn Astronomical Institute\n9747 AVRijksuniversiteit Groningen, GroningenThe Netherlands\n",
"R Güsten \nMax-Planck-Institut für Radioastronomie\nAuf dem Hügel 6953121BonnGermany\n",
"H Wiesemeyer \nMax-Planck-Institut für Radioastronomie\nAuf dem Hügel 6953121BonnGermany\n"
] |
[
"Max-Planck-Institut für Radioastronomie\nAuf dem Hügel 6953121BonnGermany",
"I. Physikalisches Institut\nUniversität zu Köln\nZülpicher Straße 7750937KölnGermany",
"I. Physikalisches Institut\nUniversität zu Köln\nZülpicher Straße 7750937KölnGermany",
"Kapteyn Astronomical Institute\n9747 AVRijksuniversiteit Groningen, GroningenThe Netherlands",
"Max-Planck-Institut für Radioastronomie\nAuf dem Hügel 6953121BonnGermany",
"Max-Planck-Institut für Radioastronomie\nAuf dem Hügel 6953121BonnGermany"
] |
[] |
Context. The [C II] 158 µm line is one of the dominant coolants of the ISM, and an important probe with which to study the star formation process. Recent Herschel/HIFI and SOFIA/GREAT observations showed that assuming the total velocity-integrated intensity of this line is directly associated with the star-forming material is inadequate. Aims. We probe the column densities and masses traced by the ionized and neutral atomic carbon with spectrally resolved maps, and compare them to the diffuse and dense molecular gas traced by [C I] and low-J CO lines toward the star-forming region M17 SW. Methods. We mapped a 4.1 pc × 4.7 pc region in the [C I] 609 µm line using the APEX telescope, as well as the CO isotopologues with the IRAM 30m telescope. Because of the velocity-resolved spectra, we analyze the data based on velocity channel maps that are 1 km s^-1 wide. We correlate their spatial distribution with that of the [C II] map obtained with SOFIA/GREAT. Optically thin approximations were used to estimate the column densities of [C I] and [C II] in each velocity channel. Results. The distribution of the emission from the isotopologues ^13CO, C^17O, and C^18O resembles more closely that of the [C I] emission than that of the ^12CO emission. The spatial distribution of the [C I] and all CO isotopologues emission was found to be associated with that of [C II] in about 20%-80% of the mapped region, with the high correlation found in the central (15-23 km s^-1) velocity channels. Conclusions. The excitation temperature of [C I] ranges between 40 K and 100 K in the inner molecular region of M17 SW. Excitation temperatures up to 200 K are found along the ridge. Column densities in 1 km s^-1 channels between ∼10^15 cm^-2 and ∼10^17 cm^-2 were found for [C I]. Just ∼20% of the velocity range (∼40 km s^-1) that the [C II] line spans is associated with the star-forming material traced by [C I] and CO.
The total (integrated over the 0-40 km s^-1 velocity range) gas mass estimated from the [C II] emission gives a lower limit of ∼4.4×10^3 M_sun. A very large fraction of at least 64% of this mass is not associated with the star-forming material in M17 SW. We also found that about 36%, 17%, and 47% of the [C II] emission is associated with the H II, H I, and H_2 regimes, respectively. Comparisons with the H41α line show an ionization region mixed with the neutral and part of the molecular gas, in agreement with the clumped structure and dynamical processes at play in M17 SW. These results are also relevant to extra-galactic studies in which [C II] is often used as a tracer of star-forming material.
|
10.1051/0004-6361/201425020
|
[
"https://arxiv.org/pdf/1501.02735v1.pdf"
] | 10,860,744 |
1501.02735
|
78289b15d89f5f64383020c5de029b6910291aac
|
Detection of a large fraction of atomic gas not associated with star-forming material in M17 SW
January 13, 2015
J P Pérez-Beaupuits
Max-Planck-Institut für Radioastronomie
Auf dem Hügel 6953121BonnGermany
J Stutzki
I. Physikalisches Institut
Universität zu Köln
Zülpicher Straße 7750937KölnGermany
V Ossenkopf
I. Physikalisches Institut
Universität zu Köln
Zülpicher Straße 7750937KölnGermany
M Spaans
Kapteyn Astronomical Institute
9747 AVRijksuniversiteit Groningen, GroningenThe Netherlands
R Güsten
Max-Planck-Institut für Radioastronomie
Auf dem Hügel 6953121BonnGermany
H Wiesemeyer
Max-Planck-Institut für Radioastronomie
Auf dem Hügel 6953121BonnGermany
Detection of a large fraction of atomic gas not associated with star-forming material in M17 SW
January 13, 2015. Received / Accepted. Astronomy & Astrophysics manuscript no. 25020_printer, © ESO 2015. Keywords: galactic: ISM - galactic: individual: M17 SW - radio lines: galactic - molecules: CO - atoms: [C I], [C II]
Context. The [C II] 158 µm line is one of the dominant coolants of the ISM, and an important probe with which to study the star formation process. Recent Herschel/HIFI and SOFIA/GREAT observations showed that assuming the total velocity-integrated intensity of this line is directly associated with the star-forming material is inadequate. Aims. We probe the column densities and masses traced by the ionized and neutral atomic carbon with spectrally resolved maps, and compare them to the diffuse and dense molecular gas traced by [C I] and low-J CO lines toward the star-forming region M17 SW. Methods. We mapped a 4.1 pc × 4.7 pc region in the [C I] 609 µm line using the APEX telescope, as well as the CO isotopologues with the IRAM 30m telescope. Because of the velocity-resolved spectra, we analyze the data based on velocity channel maps that are 1 km s^-1 wide. We correlate their spatial distribution with that of the [C II] map obtained with SOFIA/GREAT. Optically thin approximations were used to estimate the column densities of [C I] and [C II] in each velocity channel. Results. The distribution of the emission from the isotopologues ^13CO, C^17O, and C^18O resembles more closely that of the [C I] emission than that of the ^12CO emission. The spatial distribution of the [C I] and all CO isotopologues emission was found to be associated with that of [C II] in about 20%-80% of the mapped region, with the high correlation found in the central (15-23 km s^-1) velocity channels. Conclusions. The excitation temperature of [C I] ranges between 40 K and 100 K in the inner molecular region of M17 SW. Excitation temperatures up to 200 K are found along the ridge. Column densities in 1 km s^-1 channels between ∼10^15 cm^-2 and ∼10^17 cm^-2 were found for [C I]. Just ∼20% of the velocity range (∼40 km s^-1) that the [C II] line spans is associated with the star-forming material traced by [C I] and CO.
The total (integrated over the 0-40 km s^-1 velocity range) gas mass estimated from the [C II] emission gives a lower limit of ∼4.4×10^3 M_sun. A very large fraction of at least 64% of this mass is not associated with the star-forming material in M17 SW. We also found that about 36%, 17%, and 47% of the [C II] emission is associated with the H II, H I, and H_2 regimes, respectively. Comparisons with the H41α line show an ionization region mixed with the neutral and part of the molecular gas, in agreement with the clumped structure and dynamical processes at play in M17 SW. These results are also relevant to extra-galactic studies in which [C II] is often used as a tracer of star-forming material.
Introduction
In order to advance our understanding of the ambient conditions of star formation, observations of large areas of known massive Galactic star-forming regions have been done over a wide range of wavelengths. Observations of low- and mid-J transitions of 12 CO toward several massive star-forming regions have shown that warm and dense gas is usually confined to narrow (< 1 pc) zones close to the ionization front (e.g., Harris et al. 1987; Graf et al. 1993; Yamamoto et al. 2001; Kramer et al. 2004, 2008; Pérez-Beaupuits et al. 2010). Although slow shocks and cloud-cloud collisions can be an important source of heating in high velocity wing objects like Orion, W51, and W49 (Jaffe et al. 1987), narrow mid-J 12 CO lines, as well as the parameters needed to explain the CO observations, favor photoelectric heating of the warm gas located beyond the H II region, driven by the UV radiation field emerging from an ionizing source: the so-called photon-dominated region (PDR). Standard 1-D steady-state PDR models predict a stratification of [C II], [C I], and CO, which is not observed in several sources (e.g., Keene et al. 1985; Genzel et al. 1988; Stutzki et al. 1988; Spaans & van Dishoeck 1997; Gerin & Phillips 1998; Yamamoto et al. 2001; Schneider et al. 2002; Röllig et al. 2011; Mookerjea et al. 2003; Pérez-Beaupuits et al. 2010).
Massive star-forming regions like the Omega Nebula M17, with an edge-on view (particularly in its southwest region), are ideal sources with which to study the clumpy structure of molecular clouds, as well as the chemical and thermodynamic effects of the nearby ionizing sources. The southwest region of M17 (M17 SW) concentrates molecular material in a clumpy structure. Models based on far-IR and submillimeter observations (Meixner et al. 1992) suggest that the distribution and intensity of the emission observed in the M17 SW complex can be explained with high density (n(H 2 ) ∼ 5 × 10 5 cm −3 ) clumps embedded in an interclump medium (n(H 2 ) ∼ 3 × 10 3 cm −3 ) and surrounded by a diffuse halo (n(H 2 ) ∼ 300 cm −3 ).
The central cluster of more than 100 stars that illuminates M17 SW is NGC 6618 (e.g., Lada et al. 1991; Hanson et al. 1997). The two components of the massive binary CEN1 (Kleinmann 1973; Chini et al. 1980) are part of the central cluster NGC 6618 and are separated by ∼1″.8. This source, originally classified as a double O or early B system by Kleinmann (1973), is actually composed of two O4 visual binary stars, named CEN 1a (NE component) and CEN 1b (SW component), and it appears to be the dominant source of photo-ionization in the whole M17 region (Hoffmeister et al. 2008).
Recent SOFIA/GREAT observations of the velocity-resolved [C II] spectra showed that a large fraction (>60%) of the [C II] emission, observed at the lower (<10 km s −1 ) and higher (>24 km s −1 ) velocity channels, is not associated with the star-forming material (denser and colder gas) traced by species like CO and [C I], which has an average line width of 5 to 10 km s −1 centered at V LSR = 20 km s −1 (Pérez-Beaupuits et al. 2012, 2013). Only the central narrow (1 km s −1 ) channel maps of the velocity-resolved [C II] spectra show a spatial association with other gas tracers (e.g., [C I] and 12 CO). The broader velocity range covered by the [C II] line with respect to the [C I] and 12 CO lines has to be associated with additional material, either lower density clumps or more diffuse, possibly ablated material, resulting in additional layers of ionized carbon gas within the telescope beam. The [C II] emission has been found to extend at least 1/4° on the sky (Russell et al. 1981), and ∼15 pc into the M17 SW molecular region. The spatial distribution of the [C II] emission (and abundance) in the southern region of M17 SW does not follow theoretical predictions of stratified or clumpy PDR models (Pérez-Beaupuits et al. 2012).
In earlier works, high resolution maps of high- and mid-J CO lines, the 3 P 2 → 3 P 1 fine-structure transition of [C I], and the [C II] 158 µm emission have been reported (Pérez-Beaupuits et al. 2010, 2012). In this study we present a new high resolution map of the 3 P 1 → 3 P 0 fine-structure transition of [C I], as well as maps of the J = 1 → 0 and J = 2 → 1 transitions of 12 CO and its isotopologues. In contrast to [C II] 158 µm (and [O I] 63 µm, not included in the present data set), PDR models predict that the intensity of the [C I] fine-structure lines does not have a strong dependence on the UV energy density (e.g., Hollenbach & Tielens 1999). Therefore, in a clumpy cloud irradiated by UV photons, the intensity of the [C I] emission is expected to be proportional to the number of photodissociation surfaces of clumps along the line of sight (e.g., Spaans 1996; Howe et al. 2000; Kramer et al. 2004). Since several velocity components along the line of sight can be found in a clumpy medium, we present our analysis and discussion of the new results based on velocity channel maps, showing the temperatures of the lines integrated over a narrow 1 km s −1 channel width. From them we estimate the excitation temperature and column density of [C I], as well as the column density of [C II] and the gas mass not associated with the star-forming material traced by [C I] and the CO isotopologues.
The organization of this article is as follows. In Sect. 2 we describe the observations. The maps of the observed lines are presented in Sect. 3. The excitation temperature and column densities, as well as mass estimates, are presented in Sect. 4. In Sect. 5 we estimate the [C II] emission not associated with other gas tracers. The conclusions and final remarks are presented in Sect. 6.
Observations
The APEX data
We used the higher frequency band of the dual channel receiver FLASH (Heyminck et al. 2006, hereafter FLASH-460) on the Atacama Pathfinder EXperiment (APEX 1 ; Güsten et al. 2006) during October 2009 to map the 3 P 1 → 3 P 0 609 µm (hereafter J = 1 → 0) fine-structure transition of [C I] at 492.161 GHz. The observed region covers about 6′.2 × 7′.2 (4.1 pc × 4.7 pc), compared to the 5′.3 × 4′.7 (3.4 pc × 3.0 pc) area previously mapped for [C I] 3 P 2 → 3 P 1 370 µm (hereafter J = 2 → 1) with CHAMP + (Pérez-Beaupuits et al. 2010). The [C I] J = 1 → 0 line was observed in on-the-fly (OTF) slews in RA (∼360 arcsec long). Because the beam size of APEX at 492 GHz is about 12″.7, the subsequent scans in Declination and RA were spaced 6″ apart.
The total power mode was used for the observations, nodding the antenna prior to each OTF and raster slew to an off-source position (180″, 0″), east of the star SAO 161357. This is used as the reference position (∆α = 0, ∆δ = 0) in the maps and throughout the paper, with RA(J2000)=18:20:27.64 and Dec(J2000)=-16:12:00.90. The OFF position at 180″ was determined to be clean, even in the [C I] J = 2 → 1 and mid-J 12 CO lines. The reference for continuum pointing was Sgr B2(N) and the pointing accuracy was better than 3″ for all the maps. The data were processed during the observations with the APEX real-time calibration software (Muders et al. 2006), assuming equal gains for the signal and image sidebands.
A fast Fourier transform spectrometer (FFTS), providing 1.5 GHz bandwidth and 2048 channels (Klein et al. 2012), was used for the [C I] J = 1 → 0 map. The on-source integration time per dump was 1 second for the OTF map of [C I] J = 1 → 0, and the average DSB system noise temperature of the FLASH-460 was about 810 K.
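As a consistency check, the native velocity resolution implied by the quoted FFTS figures (1.5 GHz over 2048 channels at 492.161 GHz) can be computed with a few lines; this is a sketch for illustration, not part of the original reduction pipeline:

```python
# Native channel width of the FFTS used for the [C I] J = 1 -> 0 map.
# Figures from the text: 1.5 GHz bandwidth, 2048 channels, 492.161 GHz.
C_KMS = 299792.458           # speed of light [km/s]

bandwidth_hz = 1.5e9
n_channels = 2048
nu0_hz = 492.161e9           # [C I] J = 1 -> 0 rest frequency [Hz]

dnu_hz = bandwidth_hz / n_channels      # ~732 kHz per channel
dv_kms = C_KMS * dnu_hz / nu0_hz        # corresponding velocity width

print(f"{dnu_hz / 1e3:.1f} kHz -> {dv_kms:.3f} km/s per channel")
```

The native resolution (∼0.45 km s −1 ) thus comfortably supports the 1 km s −1 channel maps used throughout the analysis.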
Observations toward Jupiter were performed during October 2009 to estimate the beam coupling efficiency (η c ≈ 0.59) of the FLASH-460, assuming a brightness temperature of 158 K for this planet at 492 GHz, as interpolated from data reported in Griffin et al. (1986).
The IRAM 30m data
We used four frequency setups of the broadband EMIR receivers (Carter et al. 2012) at the IRAM 30m telescope to map an area similar to the one mapped in [C I] with the APEX telescope. The 32 GHz signal bandwidth provided by the IF channels was used for the receiver bands E090 (3mm) and E230 (1.3mm), covering each sideband with 8 GHz bandwidth in single polarization. These setups allowed us to fully map all the CO isotopologues (in addition to many other molecules) in their J = 1 → 0 and J = 2 → 1 transitions. The beamwidths (FWHM) for the J = 1 → 0 transitions of 12 CO, 13 CO, C 18 O, and C 17 O are 22″.6, 23″.7, 23″.8, and 23″.2, respectively. We also detected, by serendipity, the hydrogen recombination lines H39α, H40α, and H41α (28″.3) in the 3mm band.

Fig. 1. Top: Color map of the velocity-integrated (in the range 0-40 km s −1 ) intensity of [C I] J = 1 → 0 in M17 SW. The peak emission is 240 K km s −1 . The contour levels are 25%, 50%, 75%, and 90% of the peak emission. Bottom: Color map of the integrated intensity of [C I] J = 2 → 1 (from Pérez-Beaupuits et al. 2010) convolved to the beam size (∼12″.7) of the [C I] J = 1 → 0 line, with a peak emission of 280 K km s −1 . The contour levels are as described above. The reference position (∆α = 0, ∆δ = 0) is as in Fig. 2.
The total region mapped of about 360″ × 300″ was covered with two long OTF maps of 360″ × 160″ (with an overlap of 20″ between them) and slews in RA (∼360 arcsec long) with steps of 4″ in Dec. The off-source reference position was observed for 10 seconds every two OTF subscans (rows). The on-source integration time per dump was 0.5 second.
In order to ensure atmospheric stability, we used the same nearby off-source reference position (345″, −230″) as for the [C II] map (Pérez-Beaupuits et al. 2012) obtained with the German REceiver for Astronomy at Terahertz frequencies (GREAT 2 ; Heyminck et al. 2012) on board the Stratospheric Observatory For Infrared Astronomy (SOFIA). From previous APEX observations (not reported here) of the 12 CO J = 3 → 2 and J = 4 → 3 lines, we know that the reference position (345″, −230″) is not free of CO emission. Hence, for all the EMIR frequency setups, we first did a deep observation at the position (345″, −230″) against a reference even farther away at (3600″, −1800″) which, in turn, was verified to be CO emission-free against the offset position at (4600″, −2800″). Then we added the flux from the reference position (with about two orders of magnitude lower rms than the OTF spectra) back into the spectra from the OTF maps.
Beam coupling efficiencies for each individual line were obtained from interpolation of the values given in the online table of the IRAM 30m efficiencies 3 . With these beam coupling efficiencies, and a forward efficiency η f (0.95 for EMIR090, 0.94 for EMIR230, and 0.93 for EMIR150), we converted all data to the main beam brightness temperature scale, T B = η f × T * A /η c . The reduction of these calibrated data, as well as the maps shown throughout the paper, were done using the GILDAS 4 package CLASS90.
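The T B = η f × T * A /η c conversion quoted above is a one-liner; the sketch below uses the forward efficiency for EMIR090 given in the text, while the coupling efficiency of 0.78 is an assumed, purely illustrative value (the real per-line values are interpolated from the IRAM 30m efficiency tables):

```python
# Main-beam brightness temperature from the antenna temperature:
# T_B = eta_f * T_A* / eta_c  (the conversion quoted in the text).
def main_beam_temperature(t_a_star, eta_f, eta_c):
    """Return T_B [K] given T_A* [K], forward and coupling efficiencies."""
    return eta_f * t_a_star / eta_c

# eta_f = 0.95 is the EMIR090 forward efficiency from the text;
# eta_c = 0.78 is an assumed, illustrative coupling efficiency.
t_b = main_beam_temperature(10.0, eta_f=0.95, eta_c=0.78)
```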
SOFIA [C II] observations
The [C II] 158 µm map was already reported in Pérez-Beaupuits et al. (2012), where a detailed explanation of the calibration and the OTF maps was given. Here we use the same data as in that paper, convolved to a larger beam and re-sampled to a 1 km s −1 channel width, as explained in Sect. 5.
The SOFIA data are publicly available in the section "Data Archive & Retrieval" of the SOFIA Data Cycle System 5 . All the data presented in this work will be available as FITS files at the Strasbourg astronomical Data Center (CDS 6 ).

Results

The [C I] integrated intensity maps

Figure 1 shows the maps of the intensity, integrated between 0 km s −1 and 40 km s −1 , of [C I] J = 1 → 0 (top) and J = 2 → 1 (bottom). Because the [C I] J = 2 → 1 map was convolved to the larger beam size (12″.7) of the J = 1 → 0 transition, its peak integrated intensity is ∼20 K km s −1 lower than the peak value previously reported in Pérez-Beaupuits et al. (2010). The peak integrated intensities of the maps shown here are 240 K km s −1 and 280 K km s −1 for the [C I] J = 1 → 0 and J = 2 → 1 lines, respectively.
These lines follow a similar spatial distribution and their respective peaks are located at about the offset position ∆α = −120″, ∆δ = 30″, approximately 0.88 pc (∼80″ at P.A. 90°) from the ridge. They both present extended emission, unlike the theoretically expected stratified PDR. A spatial association between 13 CO and [C I] was found by Keene et al. (1985) on a scale size of 3′ (∼2 pc). We confirm these results with higher spatial resolution. We also add that, when looking at the overall distribution, the C 17 O and C 18 O emissions are more similar to the 13 CO than to the 12 CO emission and, hence, they also show a spatial association with the [C I] integrated emission. This spatial association is discussed further in the next section.

Fig. 2. Color maps of the velocity-integrated intensity (in the range 0-40 km s −1 ) of the J = 1 → 0 and J = 2 → 1 transitions of 12 CO (top panels) and 13 CO (bottom panels) in M17 SW. The contour levels are 25%, 50% (thick contour), 75%, and 90% of the peak emission. All maps have been convolved to the larger beam (24″) of the C 18 O J = 1 → 0 line. The reference position (∆α = 0, ∆δ = 0), marked with a cross, corresponds to the SAO star 161357 at RA(J2000)=18:20:27.65 and Dec(J2000)=-16:12:00.91. The ultracompact H II region M17-UC1 and four H 2 O masers (Johnson et al. 1998) are marked by the black circle and plus symbols, respectively.

2 GREAT is a development by the MPI für Radioastronomie and the KOSMA/Universität zu Köln, in cooperation with the MPI für Sonnensystemforschung and the DLR Institut für Planetenforschung.
3 http://www.iram.es/IRAMES/mainWiki/Iram30mEfficiencies
4 http://www.iram.fr/IRAMFR/GILDAS
5 https://dcs.sofia.usra.edu/
6 http://cdsweb.u-strasbg.fr/
The CO integrated intensity maps
In order to compare them with the J = 1 → 0 transitions, and to increase the S/N of the J = 2 → 1 lines, all the maps (including the [C I] lines from Fig. 1) were convolved to the larger beam size (24″) of the C 18 O J = 1 → 0 line for the analysis presented in the next sections. Maps of the velocity-integrated intensity of the 12 CO emission, and of its isotopologues 13 CO, C 17 O, and C 18 O, are shown in Figs. 2 and 3.
The 12 CO integrated emission is more extended, and its bulk emission does not resemble that of its isotope lines. In order to verify this, we use the scatter plots (Fig. 4) of the velocity-integrated intensity of these tracers, and the corresponding correlation coefficient described in Appendix C. The scatter plots deviate from the theoretical straight line expected for well-correlated maps. The 13 CO/ 12 CO plots show a well-separated optically thin (low intensities) and an optically thick (high intensities) branch. It is interesting to note that the optically thick branch still has a relatively good correlation. This shows the integrated intensity growing via line broadening at lower densities, fully in line with Larson's law; i.e., the cloud size is inversely proportional to its density, and the velocity dispersion (or line broadening) is directly proportional to the cloud size; hence, the lower the density, the larger the cloud and the broader the lines.
We note, however, that even though the intensities of the 12 CO and 13 CO maps are very scattered (at the higher values) and show two branches with different slopes, the correlation coefficient is still relatively high, r xy = 0.94 − 0.96. These correlation coefficients are similar to those found between the J = 1 → 0 and J = 2 → 1 lines of 12 CO and [C I], which are much better correlated, as shown in Fig. 4. These scatter plots show values that are less scattered (supporting the similarity between the [C I] and the CO isotopologue lines mentioned above), while the correlation coefficient is similar to (or lower than) those found for maps with less similar spatial distributions (e.g., 12 CO and 13 CO J = 1 → 0). This means the correlation coefficient is not robust enough to discriminate between "well" correlated and "not so well" correlated maps and, hence, it must be used with caution. Another way of using this statistical measure is to define a threshold so as to consider only regions of the map with high intensities when computing the correlation coefficient, as described in Sect. 5.1.

Figure 5 shows the overlay between the two lowest energy transitions J = 1 → 0 and J = 2 → 1 of 12 CO and the [C II] 158 µm line, both in the total velocity-integrated intensity and in the 1 km s −1 channel maps. We note that the velocity-integrated intensity of [C II] does not follow the spatial distribution of the 12 CO emission. However, some spatial association between these lines can be seen in the channel maps, but only at the central 16-24 km s −1 velocity channels, where the bulk of the 12 CO emission is found.
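Appendix C is not reproduced in this excerpt, but the per-map Pearson coefficient r xy used for the scatter plots, with an optional intensity threshold, can be sketched as follows (toy maps for illustration, not the actual data):

```python
import numpy as np

def map_correlation(map_x, map_y, threshold_frac=0.0):
    """Pearson correlation coefficient r_xy between two intensity maps,
    optionally restricted to pixels above a fraction of each map's peak."""
    x = np.asarray(map_x, dtype=float).ravel()
    y = np.asarray(map_y, dtype=float).ravel()
    mask = (x > threshold_frac * x.max()) & (y > threshold_frac * y.max())
    return float(np.corrcoef(x[mask], y[mask])[0, 1])

# Toy maps: y is a noisy scaled copy of x, so r_xy should be close to 1.
rng = np.random.default_rng(0)
x = rng.random((32, 32))
y = 2.0 * x + 0.05 * rng.standard_normal((32, 32))
r = map_correlation(x, y)
```

Raising `threshold_frac` (e.g., to 1/3 of the peak, as done in Sect. 5.1) restricts the statistic to the bright emission and suppresses the low-intensity scatter.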
New results in M17 SW have shown that a large fraction of the [C II] emission is not associated with other species (e.g., 12 CO and [C I]) tracing the star-forming material when analyzed in narrow velocity channels (Pérez-Beaupuits et al. 2012, 2013). Hence, line-integrated intensity maps have to be interpreted with great care: part of the line-integrated emission, as in the case of [C II] here, may come from strong velocity components in one line that are barely traceable in another line, like C 18 O or 13 CO here, due to a different physical origin and/or very different excitation conditions. This obviously also strongly affects the interpretation of line-integrated intensity ratios between different tracers. In fact, we do not find strong spatial associations when comparing the velocity-integrated intensity maps.
Excitation and column density of [C I]
The critical density (n cr ∼ 10 3 cm −3 for collisions with o-/p-H 2 at 100 K; from the LAMDA 7 database, Schöier et al. 2005) and upper-level energies (E u ≈ 24 K for J = 1 → 0, and E u ≈ 62 K for J = 2 → 1) of [C I] enable us to trace the diffuse (n(H 2 ) ≤ 10 3 cm −3 ) ISM and estimate its temperature. We note, however, that this does not imply that [C I] traces only diffuse gas; it also traces denser gas, given the overall spatial association of the velocity-integrated intensities observed between [C I] and the CO isotopologues, as described in Sect. 3.1.
We first estimate the excitation temperature of [C I] from the ratio between the two transitions, assuming optically thin emission. From this excitation temperature, the column density of [C I] can be estimated as well, assuming optically thin emission and LTE conditions. Then we compare the results with a non-LTE estimate at representative offset positions. This is done in Sect. 4.2.
Excitation temperature of [C I]
We computed the ratio R = I([C I] 370 µm)/I([C I] 609 µm) between the intensities of the [C I] lines integrated in narrow (1 km s −1 ) velocity channels, for each channel map in the velocity range between 0 km s −1 and 40 km s −1 . We find observed values of 1 ≲ R ≲ 2 at the central velocity channels (16-24 km s −1 ). Ratios lower than unity (which indicate subthermal excitation) are also observed in our maps, but mostly at velocity channels <16 km s −1 and >24 km s −1 , where the intensity of both lines is below 30% of their respective peak channel intensities, which is about the noise level of the fainter [C I] 609 µm line.
The observed ratios match the values expected in a PDR environment with low density (≲ 10 3 cm −3 ) and relatively low radiation fields (G 0 ≲ 10 2 , with G 0 in units of the ambient interstellar radiation field, 1.2 × 10 −4 erg s −1 cm −2 sr −1 , Habing 1968), as shown by Meijerink et al. (2007, their Fig. 3). These densities are also in agreement with previous estimates by Meixner et al. (1992), who concluded that the [C II], [C I], and low-J CO lines emerge from an inter-clump medium, while the extended [C II] and [C I] emission emerges from a halo gas surrounding the clump and inter-clump material. However, the observed ratios are not exclusive to these ambient conditions, since they can also be found by extrapolating the values given by Meijerink et al. (2007, their Fig. 3) to higher densities (∼ 10 4 cm −3 ) and even lower radiation fields (G 0 ≲ 10), which are naturally attenuated by the larger column densities of denser clumps.
As mentioned in Sect. 3.1, the total integrated intensity of the [C I] emission resembles that of the optically thin isotope CO lines, rather than that of the 12 CO lines. The isotope lines have critical densities of n crit ∼ 10 4 cm −3 (at T K ≈ 100-200 K) and trace the compact emission from the denser molecular clouds. Because of self-pumping due to the large optical depths resulting from its high abundance, the 12 CO lines trace not only the denser molecular clouds, but also the more diffuse and extended emission beyond (i.e., east from) the ionization front. Hence, contrary to the general picture proposed by Meixner et al. (1992), we favor a scenario where part of the [C I] emission emerges from an inter-clump medium rather than from a more diffuse halo gas, and the other part is associated with the denser clumps traced by the isotope CO lines.
Following the formalism of Schneider et al. (2003, their Appendix A), which is valid in the optically thin limit and when both lines have similar optical depths, we can estimate the excitation temperature T ex of [C I] from the ratio R as T ex = 38.8/ln(2.11/R) K. Figure 6 shows the estimated T ex of [C I] in the same region mapped in [C II], for the velocity channels from 9 km s −1 to 29 km s −1 where the emission of both [C I] lines is above the 3σ detection level. The [C I] maps were first convolved to the larger beam (24″) of the C 18 O J = 1 → 0 map in order to increase the S/N.
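The T ex relation above is easy to tabulate. A small sketch (the observed channel ratios 1 ≲ R ≲ 2 are from the text; R ≈ 1.43 is simply the ratio that yields ∼100 K, the upper end quoted for the inner molecular region):

```python
import math

def tex_ci(ratio):
    """[C I] excitation temperature from R = I(370um)/I(609um), assuming
    LTE and optically thin emission (Schneider et al. 2003):
    T_ex = 38.8 / ln(2.11 / R)  [K]."""
    return 38.8 / math.log(2.11 / ratio)

t_low = tex_ci(1.0)    # ~52 K
t_high = tex_ci(1.43)  # ~100 K
```

Note that the formula diverges as R → 2.11 and turns negative beyond, consistent with the unphysical LTE temperatures obtained for the highest observed ratios.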
The excitation temperature ranges between ∼40 K and ∼100 K in the inner molecular region (i.e., southwest of the ionization front). This result is in agreement with a previous LTE estimate of the excitation temperature from mid-J 12 CO (Pérez-Beaupuits et al. 2010), with earlier estimates from [C I] J = 2 → 1 observations (e.g., Genzel et al. 1988) that indicated the [C I] emission arises from gas with a kinetic temperature of about 50 K, and with a multi-line NH 3 study (Güsten & Fiebig 1988) which showed different coexisting gas phases with kinetic temperatures between 30 K and a few 100 K, and up to about 275 K in the region traced by the VLA continuum arc (the northern ionization front). Even higher excitation temperatures are found at sparse locations northeast of the ionization front, and along the eastern edge of M17 SW at the channel bins (20-24 km s −1 ) close to the ionization front. Pointing offsets between the CHAMP + and FLASH observations (done at different observing periods), as well as differential couplings of the two respective beams just at the edge of the ridge, can mimic such temperature gradients. These effects could account for up to 30% of the line ratios, and they cannot be discarded. Hence, we limit the color scale of Fig. 6 to 200 K, which is a more reliable upper limit of the excitation temperature in the region mapped. Ratios >2, found at a few positions between the ionization front and the ionizing stars, lead to negative excitation temperatures in the optically thin and LTE approximation.
Optical depths and column density of [C I]
From the excitation temperature and the peak intensity of the [C I] J = 1 → 0 and J = 2 → 1 lines, the optically thin approximation also allows us to estimate the optical depths of both lines. Knowing the excitation temperature and the optical depth of the J = 1 → 0 line, the column density N([C I]) can be computed as well. For detailed formulae see Frerking et al. (1989) and Schneider et al. (2003, their Appendix A). We also assume that the sources fully cover our beam in the emitting regions (i.e., we use a beam filling factor of unity). Therefore, the quantities reported here correspond to beam (24″) averaged values. Figure 7 shows the velocity channel maps of the optical depths of the [C I] J = 1 → 0 and J = 2 → 1 lines, respectively. Optical depths τ ≤ 1 are observed in most of the regions mapped at all the velocity channels, except in a small region around the offset position (−130″, −10″) of the τ 2→1 line, at the central velocity channels (19-20 km s −1 and 20-21 km s −1 ), where the bulk of the [C I] emission is found.

Fig. 5. Top: Velocity-integrated intensity maps of 12 CO J = 1 → 0 (gray), [C II] 158 µm (red contour), and 12 CO J = 2 → 1 (green contour). The contour lines (from thin to thick) are 50%, 75%, and 90% of the respective peak emissions. The stars indicate the O and B ionizing stars (Beetz et al. 1976; Hanson et al. 1997). The reference position (∆α = 0, ∆δ = 0), marked with a cross, is as in Fig. 2. Bottom: Velocity channel maps (at 1 km s −1 width) of the same lines as above. Contours are 20%, 40%, 60%, 80%, and 100% of the respective peak emissions. All maps have been convolved with the largest beam of 24″ corresponding to the C 18 O J = 1 → 0 map.
A non-LTE excitation analysis using the Radex code (van der Tak et al. 2007) was used to test the optically thin assumption at two representative positions in the 19-20 km s −1 channel map: first at offset position (−130″, −10″), where τ 2→1 > 1 (cf. Fig. B.1), and then at offset position (−130″, −70″) (cf. Fig. B.2), where τ 2→1 < 1, as seen in Fig. 7. The [C I] line ratios and intensities observed at these positions can be reproduced with densities larger than 10 3 cm −3 (the critical density of [C I] J = 1 → 0 at T K between 100 K and 200 K) for kinetic temperatures below 500 K, and column densities per line width N/∆V = (4 − 7) × 10 17 cm −2 km −1 s, similar to the column densities obtained with the LTE method. Considering a line width of ∼10 km s −1 , the column densities we obtained are consistent with the peak column density and the moderate optical depth of about 2.5±0.7 derived by Genzel et al. (1988) from the total velocity-integrated intensity of [C I] J = 2 → 1, assuming a non-linear relation between the intensity of [C I] and the intensity of C 18 O.
Our observed [C I] intensities can also be reproduced with densities above ∼3×10 3 cm −3 (the critical density of [C I] J = 2 → 1 at T K = 100-200 K) and temperatures below 300 K. The [C I] J = 1 → 0 line is close to thermalized (T ex ≈ T K ) at both positions, for T K < 500 K and densities n(H 2 ) = 10 3 − 10 4 cm −3 , while τ is not much smaller than unity at both positions, just as in the LTE results. Although the optical depths of both [C I] lines are only marginally thin, they are very similar. Hence, we can still apply the optically thin approximation in all the regions of interest, at least for τ 1→0 , which is the opacity used to estimate the column density of [C I] following Schneider et al. (2003, their Eq. A.8). Since the optical depth is not much smaller than unity, we used the correction factor τ([C I])/(1 − exp(−τ([C I]))) to compute the column density of [C I].
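The size of this opacity correction is worth quantifying: for τ ≪ 1 the factor tends to unity, while for the marginally thin [C I] lines (τ ∼ 1) the optically thin estimate is low by almost 60%. A minimal sketch:

```python
import math

def opacity_correction(tau):
    """Multiplicative correction tau / (1 - exp(-tau)) applied to the
    optically thin column density estimate; tends to 1 for tau << 1."""
    return tau / (1.0 - math.exp(-tau))

f_thin = opacity_correction(0.01)   # ~1.005: thin limit, negligible
f_unity = opacity_correction(1.0)   # ~1.58: tau ~ 1, ~58% upward correction
```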
The velocity channel maps of the column density N([C I]) (cm −2 ) are shown in Fig. A.3. The values of N([C I]) range between 10 15 cm −2 and ∼10 17 cm −2 throughout the whole region mapped and among all the velocity channels. However, the bulk of the [C I] emission corresponds to column densities above 10 16 cm −2 in all the velocity channels. Column densities up to 10 17 cm −2 are reached only in the central velocity channels, between 17 km s −1 and 22 km s −1 , which correspond to the regions where the integrated intensity maps (Fig. 1) show a strong [C I] emission (≥ 50% of the peak).
Emission of [C II] not associated with other gas tracers
In Pérez-Beaupuits et al. (2013) we showed overlays of our previous 12 CO J = 2 → 1 map and the optical depth of H I from Brogan & Troland (2001) over the [C II] emission, also in channel maps of 1 km s −1 width. We showed that only at intermediate (10-24 km s −1 ) velocities does the [C II] emission present a strong spatial association with other molecular gas tracers. A strong spatial association is identified when the spatial distribution of a particular [C II] channel map is very similar, or adjacent, to that of another gas tracer (e.g., 12 CO J = 2 → 1 or [C I]). On the other hand, at lower (<10 km s −1 ) and higher (>24 km s −1 ) velocity channels, the [C II] emission is mostly not associated with the other tracers of diffuse and dense gas. We note that "not associated" in this sense does not mean that we deal with physically completely independent material. The overlay between the 12 CO J = 2 → 1 and [C II] velocity channel maps in Fig. 5 shows that in the outer velocity range the [C II] emission often shows halos and diffuse extensions around the denser clumps and filaments identifiable in the 12 CO channel maps, suggesting that the [C II] emission traces gas that has been ablating off the clump or filament surfaces; this [C II] emitting gas, however, is not visible in 12 CO, despite this association. If the [C II] emission were associated in all velocity channels with the more diffuse molecular gas (between 30 and 300 cm −3 , independent of temperature) traced by the 12 CO J = 1 → 0 line, the spatial association between [C II] and 12 CO J = 1 → 0 would be expected to be stronger than that between [C II] and 12 CO J = 2 → 1. However, like the 12 CO J = 2 → 1 and [C I] lines, the 12 CO J = 1 → 0 line shows a strong association with [C II] only in the central 10-24 km s −1 components (cf. Fig. 9).
This is another confirmation that the lower (<10 km s −1 ) and higher (>24 km s −1 ) velocity channels of the [C II] emission are not strongly associated with the bulk of the molecular gas.
Spatial correlation between the star-forming material and the [C II] emission
We quantify the spatial association observed at each velocity channel between the [C II] emission and other gas tracers (e.g., 12 CO J = 1 → 0, [C I] 609 µm) according to the procedure described in Appendix D. We use 10% of the global (i.e., among all the channels) peak emission of each tracer as a threshold to identify the pixels with significant emission to be used. This method allows us to estimate where, and over what fraction of the mapped region, the [C II] emission in each velocity channel map is spatially associated with other gas tracers (cf. first paragraph in Sect. 5).
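Appendix D is not reproduced in this excerpt; the per-channel association fraction it describes can be sketched as follows, using the 10%-of-global-peak threshold quoted above (toy maps for illustration):

```python
import numpy as np

def association_fraction(map_cii, map_tracer, threshold_frac=0.1):
    """Fraction of the significant [C II] pixels that also show significant
    emission from another tracer; each map is thresholded at a fraction of
    its own global peak (10% in the text). Sketch of the Appendix D measure."""
    cii = np.asarray(map_cii, dtype=float)
    tracer = np.asarray(map_tracer, dtype=float)
    sig_cii = cii > threshold_frac * cii.max()
    sig_tracer = tracer > threshold_frac * tracer.max()
    n_cii = int(sig_cii.sum())
    if n_cii == 0:
        return 0.0
    return float(np.logical_and(sig_cii, sig_tracer).sum()) / n_cii

# Toy channel maps: the tracer covers half of the [C II] emitting pixels.
cii = np.zeros((10, 10)); cii[:, :8] = 1.0
tracer = np.zeros((10, 10)); tracer[:, :4] = 1.0
frac = association_fraction(cii, tracer)   # 40 overlapping / 80 -> 0.5
```

Applied per 1 km s −1 channel, this yields the velocity distribution of the association fractions shown in Fig. 10.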
The correlation coefficient described in Appendix C, and histograms showing the velocity distribution of the spatial association between the [C II] emission and that of [C I] 609 µm, 12 CO, 13 CO, and C 18 O J = 2 → 1, are shown in Fig. 10. In this case the correlation coefficient r xy was computed using only the pixels with emission larger than 1/3 of the global peak emission, in order to consider only the optically thick branch of the CO scatter plots shown in Fig. 4, and to exclude the [C I] scatter at low intensities. Figure 10 then shows that the [C II] emission is associated with, and highly correlated with (r xy > 0.6), other gas tracers mostly at the central velocity channels between 15 km s −1 and 23 km s −1 . The fraction of the mapped region where the [C II] emission is associated with the [C I] 609 µm line is 30%-55% in the velocity range mentioned above, while it reaches 40%-80% with 12 CO, 35%-50% with 13 CO, and only 20%-45% with C 18 O J = 2 → 1.
The large range of velocity channels of [C II] emission not associated with other gas tracers is strong evidence of the inability of the total velocity-integrated [C II] line intensity (i.e., its total flux) to estimate its beam averaged abundance (i.e., its column density ratio in comparison to other tracers), and the cooling it provides to the molecular and star-forming gas traced by species like [C I] and CO. The assumption that the [C II] emission arises from the same spatial region as other tracers of diffuse and dense gas over the whole velocity range covered by the [C II] spectra was the basis for many previous studies, given the technology available at the time. In particular, the spectrometers on board NASA's Kuiper Airborne Observatory (KAO) had a spectral resolution of 80-175 km s −1 for the [C II] 158 µm line (e.g., Meixner et al. 1992), while the [C II] line spans only ∼40 km s −1 in our velocity-resolved spectra. Now we know that only ∼20% of that resolved velocity range is associated with the star-forming material traced by [C I] and CO. Hence, using the total velocity-integrated [C II] intensity can be misleading, since it may include (as in the case of M17 SW) emission from gas that is not really associated with the [C I] or the CO emitting gas. Therefore, the actual abundance of C + , and the cooling of the molecular gas due to [C II] emission associated with molecular gas, may be overestimated in several Galactic and extra-galactic environments.
At the central velocity channels (e.g., 21-22 km s −1 ) the [C II] line shows dips at positions where optically thin lines (e.g., C 18 O J = 2 → 1) show their peaks, as in the spectra of Fig. 11 at offset position (−60″, −30″). This spectral feature is present at several positions around this one, indicating a colder foreground layer with significant optical depth that absorbs the emission from a warmer background component at an adjacent velocity. Taking the main beam temperature at the 22.5 km s −1 channel (where the C 18 O J = 2 → 1 line drops sharply) as the continuum level (T C ∼ 65.3 K), and the temperature at the 21.5 km s −1 channel as the maximum absorption depth of the [C II] line (T L ∼ 47.4 K), we can estimate the optical depth of the absorbing layer as τ = −ln(1 − T L /T C ). Assuming that the absorbing layer completely covers the background component and that all the foreground [C II] atoms are in the ground state, we can estimate the absorbing column density following Eq. (3) in Neufeld et al. (2010) as
$$\int \tau\, \mathrm{d}v \;=\; \frac{A_{ul}\, g_u\, \lambda^{3}}{8\pi\, g_l}\, N(\mathrm{C^{+}}) \;=\; 7.2\times 10^{-18}\, \big[N(\mathrm{C^{+}})/\mathrm{cm^{-2}}\big]\ \mathrm{km\,s^{-1}}, \qquad (1)$$
where A ul = 2.3 × 10 −6 s −1 is the spontaneous radiative decay rate, g u = 4 and g l = 2 are the degeneracies of the upper and lower states, and λ = 157.741 µm (used in cm in Eq. 1) is the transition wavelength of the [C II] ground state. From this we estimate a foreground absorbing column density of N(C + ) ≈ 2 × 10 17 cm −2 . This is about a factor of two smaller than the column densities that we estimate for the background emitting [C II] gas (see Sect. 5.2). Since the absorbing layer affects only one or two of the central 1 km s −1 wide velocity channels, given the sharp profile of the C 18 O line, it does not affect any of our conclusions. In the following section we estimate in more detail the [C II] column density and atomic hydrogen mass not associated with the star-forming material traced by the [C I] and C 18 O lines by using an LTE approximation.
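The numbers in this paragraph can be chained together in a short script; the only assumption beyond the text is that the absorption acts over a single 1 km s −1 channel:

```python
import numpy as np

# Foreground-layer estimate using the values quoted in the text:
T_C = 65.3          # K, continuum level (channel at 22.5 km/s)
T_L = 47.4          # K, maximum absorption depth (channel at 21.5 km/s)

tau = -np.log(1.0 - T_L / T_C)     # optical depth of the absorbing layer
tau_dv = tau * 1.0                 # km/s; assume one 1 km/s wide channel
N_Cplus = tau_dv / 7.2e-18         # cm^-2, inverting Eq. (1)

print(f"tau = {tau:.2f}")                  # tau ~ 1.29
print(f"N(C+) = {N_Cplus:.1e} cm^-2")      # ~2e17 cm^-2, as in the text
```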
Column density of [C II] and mass of dissociated material
Because the spectra of [C II] and the other gas tracers have slightly different velocity resolutions, we have re-sampled all the spectra to 1 km s −1 resolution.
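A minimal sketch of such a re-sampling, using linear interpolation onto a common 1 km s −1 grid (the actual pipeline may use a flux-conserving rebinning instead; the Gaussian test line is purely illustrative):

```python
import numpy as np

def resample_spectrum(v_in, T_in, v_out):
    """Interpolate a spectrum onto a common velocity grid; channels
    outside the original coverage are set to zero."""
    return np.interp(v_out, v_in, T_in, left=0.0, right=0.0)

v_fine = np.arange(0.0, 40.0, 0.25)                    # finer input grid
T_fine = np.exp(-0.5 * ((v_fine - 20.0) / 3.0) ** 2)   # toy Gaussian line
v_common = np.arange(0.0, 40.0, 1.0)                   # shared 1 km/s grid
T_common = resample_spectrum(v_fine, T_fine, v_common)
```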
We use the [C I] 609 µm line as a tracer of the diffuse (n(H 2 ) crit ∼ 400 cm −3 at T K ∼100-200 K) molecular gas, and the optically thin C 18 O J = 2 → 1 line as a tracer of the denser (n(H 2 ) crit ∼ 9 × 10 3 cm −3 at T K ∼100-200 K) molecular gas in the star-forming material.
For each [C I] and C 18 O spectrum of the map we find the channel with the maximum intensity, and divide the corresponding [C II] channel by that maximum value to obtain the factors by which to multiply, separately, the [C I] and C 18 O spectra. This produces scaled-up [C I] and C 18 O spectra that match the original [C II] spectra at their respective peak channels. We then subtract the scaled-up [C I] line from the original [C II] spectra. The same is done independently with the C 18 O line, producing two residual [C II] spectra. Since we do not observe any absorption line profile in our spectra, we consider all channels with negative values as noise and set them to zero, in order to avoid unwanted boosting of the [C II] emission. If the maximum intensity of [C I] or C 18 O is higher than the intensity of the corresponding channel in the [C II] spectrum, no scaling up is done.
An example of this procedure, and the results for the spectra at the approximate offset position (−30″, −15″), is shown in Fig. 12. The shaded histogram corresponds to the residual spectra after the subtraction of the [C I] and C 18 O lines. For each channel we take the minimum intensity of the two residual [C II] spectra to produce a synthetic spectrum that represents the [C II] emission not associated with the star-forming material traced by [C I] and C 18 O. All these assumptions, in particular the scaling of the intensities, lead to conservative numbers (lower limits).
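The scale-and-subtract step described above can be sketched as follows (illustrative numpy, not the actual pipeline); negative residual channels are clipped to zero as in the text:

```python
import numpy as np

def residual_cii(T_cii, T_tracer):
    """Scale a tracer spectrum to match the [C II] spectrum at the
    tracer's peak channel, subtract it, and zero out negative channels
    (treated as noise).  If the tracer peak already exceeds the [C II]
    intensity at that channel, no scaling up is done."""
    k = int(np.argmax(T_tracer))
    scale = 1.0 if T_tracer[k] > T_cii[k] else T_cii[k] / T_tracer[k]
    return np.clip(T_cii - scale * T_tracer, 0.0, None)

# The synthetic non-associated spectrum is the channel-wise minimum of
# the two residuals, e.g.:
#   T_syn = np.minimum(residual_cii(T_cii, T_ci), residual_cii(T_cii, T_c18o))
```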
We can estimate the column density N(C + ) of the nonassociated [C II] gas from the synthetic residual [C II] spectra, following the high-temperature LTE limit, which is valid for temperatures well above 91 K and high densities,
$$N(\mathrm{C^{+}}) \;\approx\; \eta_c^{-1}\; I_{[\mathrm{C\,II}]}\; 6.3\times 10^{20}\ \mathrm{cm^{-2}}, \qquad (2)$$
obtained from a two-level system model as described in Schneider et al. (2003, their Eq. (A.5)), where η c is the beam filling factor, assumed to be unity since the [C II] emission in M17 SW is very extended, and I [CII] is the [C II] emission in units of erg cm −2 s −1 sr −1 . We note that with the two-level system expression, the estimated column density of [C II] increases if lower densities and/or lower temperatures are used. If we assume a gas temperature of 250 K and a density of 10 4 cm −3 , as estimated for the region, the full two-level expression (Schneider et al. 2003, their Eq. (A.4)) gives a 25% larger column density than the LTE approximation of Eq.
(2). Since the procedure described above subtracts the [C II] emission associated with most of the dense and diffuse molecular gas, the residual emission should be dominated by collisional excitation by atomic hydrogen and free electrons. In order to analyze how sensitive the [C II] column density is to the assumed temperature and density of the gas, we estimate N(C + ) for a range of temperatures and densities using Eq. (A.4) from Schneider et al. (2003), considering a filling factor η c of unity. We consider the critical densities n cr for free electrons (e − ), atomic hydrogen (H I), and molecular hydrogen (H 2 ), computed for each temperature according to the corresponding collisional deexcitation rate coefficients reported by Barinovs et al. (2005). Since the actual column density of [C II] also depends on the [C II] intensity (which is arbitrary for this analysis), in Fig. 13 we show only the ratio with respect to the N(C + ) obtained using the temperature (250 K) and density (n(H) = 10 4 cm −3 ) assumed above, to demonstrate the relative effect of using different ambient conditions. Although the critical density depends on the temperature, n cr changes by only a few percent between 200 K and 500 K. Therefore, we adopt 250 K as a high-temperature limit, since the temperature dependence of the exponential term in the two-level system approximation of N(C + ) is stronger.
When using a higher temperature and density, the estimated column density decreases by less than 20% when considering H I or H 2 as the collision partner. The column density of [C II] would increase by larger factors if lower densities and temperatures were used. When using electrons as the collision partners, N(C + ) saturates at densities above 100 cm −3 , and temperatures above 400 K. These results emphasize our point that the values of N(C + ) obtained with the LTE approximation (or with the temperature and density assumed above) should be regarded as lower limits, whether the LTE conditions are met or not in all the regions mapped.
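This sensitivity can be illustrated with a commonly used two-level form of the C + column density, N ∝ I × [1 + 0.5 e^(91.25 K/T) (1 + n cr /n)], whose high-temperature, high-density limit (a factor of 1.5) is absorbed into the 6.3×10 20 coefficient of Eq. (2). This is only a sketch: the exact expression of Schneider et al. (2003, Eq. (A.4)) and the adopted n cr may differ, so the ∼29% increase obtained below is merely of the same order as the 25% quoted in the text:

```python
import numpy as np

DELTA_E_K = 91.25   # K, energy of the [C II] fine-structure level over k_B

def two_level_factor(T_kin, n, n_cr):
    """Correction factor of the two-level C+ column density relative to
    the optically thin intensity (illustrative form; see lead-in)."""
    return 1.0 + 0.5 * np.exp(DELTA_E_K / T_kin) * (1.0 + n_cr / n)

# Ratio of N(C+) at (T=250 K, n(H)=1e4 cm^-3, n_cr ~ 3e3 cm^-3 for H I)
# to the high-T, high-n LTE limit used in Eq. (2):
ratio = two_level_factor(250.0, 1e4, 3e3) / 1.5
print(f"N(two-level)/N(LTE) = {ratio:.2f}")   # ~1.29
```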
The channel maps of the residual [C II] column density (not associated with the dense and halo molecular gas traced by C 18 O and [C I]), estimated with Eq. (2), are shown in Fig. 14. The channel maps between 19 km s −1 and 22 km s −1 are the most affected by the subtraction of the [C I] and C 18 O emission, confirming the strong spatial association found in the central velocity range (Sect. 5.1). The small self-absorption that we see affects the neighboring channels, but not the general picture. This is in line with the channel maps from 18-20 km s −1 , which still follow the structure of M17 SW. The column density of the [C II] gas not associated with the star-forming material (traced by [C I] and C 18 O) ranges between ∼10 14 cm −2 and ∼4×10 17 cm −2 over the whole region mapped, and among all the channels in the 0-40 km s −1 velocity range.
From the residual (and total) [C II] column density channel maps we can also compute the corresponding mass of the gas contained in the 24″ beam area of the channel maps, by assuming a gas-phase carbon abundance of X(C + /H) = 1.2 × 10 −4 (Wakelam & Herbst 2008, their Table 1) and complete ionization of the carbon, according to
$$M_{\mathrm{gas}} \;=\; 1.4\, m_{\mathrm{H}}\, \frac{N([\mathrm{C\,II}])}{X_{\mathrm{C^{+}/H}}}\, A_{\mathrm{beam}}, \qquad (3)$$
where A beam is the area (in cm 2 ) covered by the 24″ beam, m H is the atomic hydrogen mass (in g), and the factor 1.4 accounts for helium and a minor fraction of other heavier elements. The velocity distribution of the mass (obtained by adding up all the pixels of each velocity channel map computed from Eq. (3)) is shown in Fig. 15. The gas mass per velocity channel (squares) was estimated from the original [C II] spectra. The largest fraction of non-associated gas mass (circles) is found at the higher (25-33 km s −1 ) velocity channels.
When integrating the mass over the 0-40 km s −1 velocity range, we find a gas mass of ∼4.4×10 3 M ⊙ in the entire region mapped. This mass is a factor of ∼3 lower than the 1.45×10 4 M ⊙ found by Stutzki & Güsten (1990) from C 18 O observations, which trace the non-dissociated cloud core mass. We note that the mass obtained from the LTE-estimated column density of [C I] (cf. Fig. A.3), using the same Eq. (3), is about three orders of magnitude lower than the mass traced by [C II]. This is due to the lower intensity and smaller spatial extent of the [C I] emission throughout the mapped region and among all the velocity bins, compared to that of [C II]. This is in line with the fact that only a small fraction of carbon is expected to be in the atomic gas traced by the [C I] line.
Considering the mass estimated from the residual [C II] spectra, the gas mass from the non-associated [C II] emission is ∼2.8×10 3 M ⊙ . Thus, the estimated gas mass not associated with the star-forming material traced by [C I] and C 18 O corresponds to ∼64% of the total gas mass traced by the original [C II] emission. This still amounts to at least 19% of the C 18 O mass reported by Stutzki & Güsten (1990).
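Equation (3) can be evaluated per beam as in the sketch below. The distance to M17 (taken here as ≈2 kpc) is not stated in this section, and the beam is approximated as a filled 24″ circle, so both are assumptions of this sketch:

```python
import numpy as np

M_H = 1.6726e-24                  # g, hydrogen atom mass
M_SUN = 1.989e33                  # g, solar mass
X_CII = 1.2e-4                    # C+/H abundance (Wakelam & Herbst 2008)
D_CM = 2.0e3 * 3.0857e18          # assumed distance to M17: ~2 kpc, in cm

def gas_mass(N_cii, beam_arcsec=24.0):
    """Gas mass (in M_sun) in one beam from Eq. (3), approximating the
    beam as a filled circle of the given angular diameter."""
    diam_cm = beam_arcsec / 206265.0 * D_CM        # beam diameter in cm
    A_beam = np.pi * (diam_cm / 2.0) ** 2          # cm^2
    return 1.4 * M_H * (N_cii / X_CII) * A_beam / M_SUN

m = gas_mass(1e17)    # a pixel with N(C II) = 1e17 cm^-2 -> a few 0.1 M_sun
```

Summing such per-beam masses over all pixels and velocity channels gives totals of the order quoted in the text.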
A source of uncertainty to consider in our analysis is that we are assuming that the volumes of gas corresponding to each velocity channel are associated. In other words, a spatial association of the [C II] emission with [C I] and C 18 O emitting regions in the plane of the sky, does not ensure that they are really associated along the line of sight. However, the probability that the spatial association that we see in the central components of the 1 km s −1 channel maps is just a projection effect is minimal since we are using a relatively narrow velocity width. Hence, the fraction of non-associated mass quoted above should be considered a lower limit.
Since there is no evidence of fast shocks in M17 SW, other mechanisms that can produce [C II] emission with larger velocity dispersion than the denser molecular gas must be considered. For instance, the interaction with winds and outflows from the ionizing stars can lead to substantial excitation of the [C II] emitting gas. Hence, ablation (e.g., Castor et al. 1975; Weaver et al. 1977; Tenorio-Tagle 1979; Henley et al. 2012, and references therein) and probably slow shock interaction due to radiative pressure (e.g., Goodwin 1997; Krumholz et al. 2010; Dale & Bonnell 2011, and references therein) have to be considered to model and interpret the [C II] emission not associated with the star-forming material in M17 SW. However, [C II] may also be present in extended low-density gas around H II regions, produced by far-ultraviolet (FUV) photons (E > 11.26 eV, the first ionization potential of atomic carbon) from the ionizing stars. With such high-energy photons, low H 2 densities also mean higher abundances (i.e., densities) of H I and free electrons e − , which would then become equally important collision partners of [C II], with critical densities of about 3×10 3 cm −3 and 10 cm −3 , respectively, at 250 K (the higher the temperature, the higher the critical densities). This would compensate for the lower densities of H 2 and, most likely, keep the LTE assumption for [C II] valid. Therefore, warm C + could be present at lower densities (and higher gas temperatures) than assumed. If so, extreme-ultraviolet (EUV) photons should also produce [N II] emission if their energy is larger than 14.5 eV (the first ionization potential of nitrogen) and lower than 24.38 eV (the second ionization potential of carbon). In such a zone of EUV photons, the [C II] and [N II] emission should co-exist and show a high degree of spatial association. This can be checked observationally using the GREAT instrument on board SOFIA.
Zones with photon energies larger than 24.59 eV (the first ionization potential of helium) account for at least 10% of the gas in M17 SW (depending on the spatial resolution of the observations), as estimated from observations of the He + /H ratio (e.g., Peimbert et al. 1988; Tsivilev & Krasnov 1999). Between the two extremes of molecular and ionized hydrogen, the [C II] emission can also co-exist with neutral atomic hydrogen H I, as shown by Brogan & Troland (2001) in M17 SW. We discuss these gas phases in more detail in the following section.
[C II] in the three gas phases
As discussed above, there are basically three different regimes that contribute to the [C II] emission: the highly ionized gas where electrons dominate (H II), the atomic hydrogen layer (H I), and the molecular hydrogen gas (H 2 ) suffused with sufficient UV to keep CO dissociated and to ionize neutral carbon efficiently. Following the method described above, we include the high-resolution VLA map of the velocity-resolved optical depth, τ(H I), from Brogan & Troland (2001), convolved with a 24″ beam and re-sampled to 1 km s −1 channel width. The channel-by-channel spatial correlation between the [C II] emission and τ(H I) shown in Fig. 17 indicates that most of the [C II] emission associated with the H I gas is found at the lower (<20 km s −1 ) velocity channels.
In the previous section we estimated the [C II] column density and hydrogen mass of the gas not associated with the relatively compact and dense star-forming material traced by C 18 O J = 2 → 1 and [C I] 609 µm. The [C II] emission associated with the entire molecular gas phase, however, also comprises the diffuse and more extended H 2 gas. Therefore, we now use the 12 CO J = 1 → 0 line as the canonical tracer of H 2 . As 12 CO J = 1 → 0 is optically thick throughout large parts of the map, this will provide a lower limit for the [C II] emission from the diffuse molecular material.
In order to analyze the impact of these line tracers on the residual [C II] emission (and hence on the [C II] column density and associated gas mass), we tested three different combinations of gas tracers: (1) 12 CO J = 1 → 0 and τ(H I); (2) [C I] 609 µm, 12 CO J = 1 → 0, and τ(H I); and (3) C 18 O J = 2 → 1, 12 CO J = 1 → 0, and τ(H I). Model (3) is included for comparison, since C 18 O can complement 12 CO in regions where 12 CO is optically thick. The residual [C II] spectra at offset position (−30″, −15″), obtained after subtracting the scaled-up spectra of the three combinations mentioned above, are shown in Fig. 17 (from top to bottom). When subtracting (channel by channel) the maximum of the three synthetic lines from the original [C II] spectra, we obtain the residual [C II] emission that is mostly associated with the H II regime. In the three cases most of the residual [C II] emission is contained in the higher velocity channels. Using only the molecular gas tracers (i.e., excluding τ(H I)), we obtain a second residual [C II] spectrum that contains the [C II] emission associated mostly with the H II and H I gas. From these two residual spectra we can then estimate the residual [C II] emission, and the respective column densities using Eq. (2), associated with the three gas phases, following the procedure described in Appendix E.
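The bookkeeping behind this decomposition can be sketched as follows; the detailed procedure is in Appendix E (not reproduced here), so the exact split is an assumption of this sketch:

```python
import numpy as np

def decompose_cii(T_cii, res_all, res_mol):
    """Split a [C II] spectrum into three phases.  res_all is the
    residual after subtracting the scaled molecular tracers AND tau(H I)
    (attributed to H II); res_mol subtracts only the molecular tracers
    (so it contains H II + H I).  Differencing gives the per-channel
    contributions, each kept non-negative."""
    cii_hii = np.clip(res_all, 0.0, None)            # ionized phase
    cii_hi = np.clip(res_mol - res_all, 0.0, None)   # atomic phase
    cii_h2 = np.clip(T_cii - res_mol, 0.0, None)     # molecular phase
    return cii_hii, cii_hi, cii_h2
```

By construction (absent clipping) the three pieces add back up to the original [C II] spectrum, channel by channel.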
In Table 1 we summarize the fraction of the [C II] emission, averaged over the region mapped, associated with the three gas phases, as obtained from the three combinations of line tracers. Using [C I] 609 µm and 12 CO J = 1 → 0, combined with τ(H I), yields practically the same result, given the uncertainties, as using C 18 O J = 2 → 1 instead of [C I] 609 µm. This results from the high correlation observed between [C I] and C 18 O, although it is not a 1:1 match (probably driven by the sensitivity limit), as shown in Fig. 4. Thus, the slightly higher (∼1%) fraction of [C II] emission associated with H 2 gas found with model (2), compared with that of model (3), may indicate that [C I] traces at least part of the CO-dark molecular gas (Wolfire et al. 2010). We therefore chose model (2) as the most complete one, tracing all the gas regimes where [C II] emission can be found.
The velocity distribution of the fraction of averaged [C II] emission associated with the three gas phases obtained with model (2) is shown in Fig. 18. The [C II] emission associated with H I gas is mostly contained in the lower velocity (<20 km s −1 ) channels, while the [C II] emission associated with the ionized H II gas is contained mainly in the higher velocity bins (>25 km s −1 ), although part of it is also found in the <20 km s −1 velocity range. The central velocity channels (15-30 km s −1 ) contain most of the [C II] emission associated with the molecular H 2 gas. The corresponding velocity-channel maps of the [C II] emission associated with H II, H I, and H 2 are shown in Fig. E.1. These channel maps show that the [C II] emission, and therefore the column density, associated with the ionized gas peaks at the northeast corner of the mapped region, which coincides with the position of the ionizing sources (cf. Fig. 5). The fraction of [C II] column density (or [C II] emission) associated with the molecular gas regime is about 11% larger than the fraction (36%) found in the dense star-forming material. This is expected, since the 12 CO emission has a broader line profile and is also spatially more extended than C 18 O and [C I] (cf. Fig. 2 and Fig. 11).
We note that this method has uncertainties, and it gives only a first-order approximation of the [C II] emission associated with the three different regimes. The results presented in Table 1 should not be taken as a sharp distinction between the three gas regimes, since in reality the three gas phases can be mixed throughout the region mapped (we elaborate on this in the next sections). In particular, the [C II] emission associated with the atomic H I gas has a large uncertainty, as the optical depth τ(H I) is saturated over a significant part of the region along the molecular ridge. The saturated values were replaced by a lower limit, according to the continuum and rms level of the VLA spectra, as described in detail by Brogan & Troland (2001, their Sect. 3.3.2). Furthermore, τ(H I) obtained from H I in absorption traces atomic hydrogen in the foreground relative to the free-free emission from the H II region. This might introduce a bias, since τ(H I) traces only the cold H I gas, while part of the warmer atomic hydrogen can be mixed with the H 2 and H II gas phases. However, we consider that the warm mixed H I gas can be at least partially accounted for by the [C I] and 12 CO lines. Nevertheless, this is another uncertainty in our method.
The gas masses associated with the three gas phases could be estimated with Eq. (3), by using abundances of ionized carbon relative to the dominant hydrogen phase (and the corresponding mass of atomic or molecular hydrogen), i.e., X(C + /H 0 ) and X(C + /H 2 ). However, these abundances are not really known. Although they could be estimated from a clumpy PDR model, which would be the best model currently available for M17 SW because of its highly clumpy structure, the uncertainties in the values obtained for each gas phase would be very high, because the relative abundances strongly depend on the number of clumps, the clump sizes, and the ambient conditions of each clump, which we have not yet been able to constrain for M17 SW. In addition, knowing the actual density and temperature of the dominant collision partners in each gas regime would allow us to estimate (with non-LTE radiative transfer models) the actual [C II] emission associated with the three gas phases. We expect to estimate all these parameters in a follow-up work.
Comparison with radio recombination lines
Traditionally, radio recombination lines (RRLs) are considered probes of the gas conditions within ionized gas. However, the principal quantum levels are generally not in thermal equilibrium, and there are different competing mechanisms governing the line intensities (e.g., maser amplification of background radiation, and weakening due to underpopulation of the upper quantum levels relative to LTE conditions) as well as the broadening of the line widths (e.g., thermal, turbulence, and Stark effects; Griem 1967). Furthermore, the line shapes are the result of emission from regions with different ambient conditions along the line of sight, since the emitting gas is usually optically thin in the centimeter and millimeter regimes. All these effects make RRLs, in the centimeter wavelength range, more ambiguous probes of the ambient conditions in ionized gas than originally thought. At millimeter wavelengths, however, the Stark broadening is negligible (∆V ∼ c (λ/100 m) 5/3 ≤ 0.1 km s −1 for ν ≥ 90 GHz, cf. Gordon & Sorochenko 2002), leaving only thermal and turbulent broadening at play, and the RRLs tend to be optically thin and do not suffer significant departures from LTE conditions (for a review see, e.g., Gordon 1988; Gordon & Sorochenko 2002).
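The negligible Stark broadening at millimeter wavelengths can be verified numerically from the quoted scaling:

```python
C_KMS = 2.99792458e5   # speed of light, km/s

def stark_width(freq_ghz):
    """Stark-broadening scale, ΔV ~ c (λ / 100 m)^(5/3), with λ the
    line wavelength in meters (scaling quoted from Gordon & Sorochenko
    2002)."""
    lam_m = C_KMS * 1.0e3 / (freq_ghz * 1.0e9)   # c[m/s] / nu[Hz]
    return C_KMS * (lam_m / 100.0) ** (5.0 / 3.0)

dv = stark_width(90.0)   # ~0.01 km/s at 90 GHz, well below 0.1 km/s
```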
In our broadband (32 GHz IF bandwidth) OTF maps obtained with the IRAM 30m telescope (cf. Sect. 2), we detect a number of hydrogen RRLs in the 3 mm band: the H39α, H40α, and H41α lines at 106.74 GHz, 99.02 GHz, and 92.03 GHz, respectively (the beam size of the H41α line is 28″.3). All of them show very similar spatial distributions and line shapes. We improved the S/N by convolving the maps with a 30″ beam. Because of our short integration times no other recombination lines were detected, preventing further analysis of the excitation conditions of the ionized gas at our high spatial resolution. Our goal now is simply to verify whether the spatial distribution of the [C II] emission can be associated with the ionized gas traced by the hydrogen recombination lines. Figure 19 shows the H39α, [C II], and 12 CO J = 1 → 0 integrated intensities, as well as the τ(H I) optical depth overlaid (right panel) on the H41α map. The hydrogen recombination lines follow a relatively homogeneous distribution at about 45° along the ionization front, and they peak at the M17-UC1 ultracompact region, which is also surrounded by a number of embedded X-ray sources (Broos et al. 2007). We note that there are many more X-ray sources reported by Broos et al., several of them with a stellar counterpart, but for clarity of the figures we show here only those associated with the M17-UC1 region. The H I optical depth follows the distribution of the RRLs, but their peak emissions are not correlated. In fact, the velocity-integrated τ(H I) peaks around the same position as the 12 CO J = 1 → 0 line intensity. The [C II] emission, instead, covers the entire region mapped, and its intensity peaks between the ionized gas traced by H41α and the molecular gas traced by the 12 CO J = 1 → 0 line.

Fig. 19. Left - Velocity-integrated intensity maps of H41α (gray), H39α (red contour), [C II] 158 µm (green contour), and 12 CO J = 1 → 0 (blue contour). The contour lines (from thin to thick) are 10% (dashed line), 25%, 50%, 75%, and 90% of the respective peak emissions. The stars indicate the O and B ionizing stars (Beetz et al. 1976; Hanson et al. 1997). The reference position (∆α = 0, ∆δ = 0), marked with a cross, is as in Fig. 2. The ultracompact H II region M17-UC1 and four H 2 O masers (Johnson et al. 1998) are marked by the circle and "+" symbols, respectively. The small purple circles correspond to the heavily obscured (E median > 2.5 keV, A V ≥ 10 mag) population of X-ray sources around the M17-UC1 region (Fig. 10 in Broos et al. 2007; coordinates from the VizieR catalog). Right - Same as on the left, but with τ(H I) instead of H39α. All maps have been convolved with a 30″ beam to increase the S/N of the H41α map.

Figures 20 and 21 show the spectra of the H41α, [C II], and 12 CO J = 1 → 0 lines at two offset positions close to the ionizing sources, (30″, 100″) and (20″, 70″), and along a strip line at P.A. = 63°. Although fainter in the northeast region (Fig. 20), the H41α line shows emission only at the higher velocity range (>20 km s −1 ), coinciding with the velocity channels of the [C II] lines found to be associated mostly with the H II gas in Sect. 5.3. In the southwest region (Fig. 21) the H41α line is stronger, but its shape is asymmetric and much broader than that of any of the other lines we observed in M17 SW. It is hard to associate its line shape with any of the line structures observed in the [C II] line or in τ(H I). Since the H41α line is expected to be optically thin (i.e., no optical depth nor self-absorption effects), its line shape is most likely formed by several layers of ionized gas (with different V lsr ) along the line of sight, with each velocity component being affected by a combination of thermal and turbulence broadening.
From Fig. 18 we know that the velocity ranges of the peak [C II] (residual) emission found to be associated with the H II, H I, and H 2 gas are 32-33 km s −1 , 9-10 km s −1 , and 21-22 km s −1 , respectively. Figure 22 shows the corresponding channel maps of H41α, the original [C II] emission, τ(H I), and the 12 CO J = 1 → 0 line. We note that the spatial distribution of the H41α line does not change significantly over these three velocity ranges. At 32-33 km s −1 , the [C II] emission peaks in the northeast region, around the ionizing sources, while the H41α emission still peaks at the M17-UC1 region (cf. Fig. 19). At 21-22 km s −1 , the H41α peak is at the southeast region of the ionization front, while the distribution of the [C II] emission closely follows that of the 12 CO J = 1 → 0. In the velocity range 9-10 km s −1 , identified mostly with the H I gas regime, the emission from all the lines seems to emerge mostly from the region of the ionization front traced by the H41α line, indicating that the three gas regimes must be mixed at these velocities. The fact that the ionization and molecular dissociation fronts are mixed can be explained if advection is considered (e.g., Bertoldi & Draine 1996). This dynamical process would make the ionized region larger compared to the atomic region, and extended towards the molecular region. This, in turn, will lead to a larger contribution of the ionized gas to PDR diagnostics like [C II], as shown in models by Abel et al. (2005).
As previously noted by Brogan et al. (1999), and as seen in Figs. 20 and 21, the velocity structures of H I and [C II] are different. A narrow component of shocked H I gas (streaming towards us) in M17 was suggested by Brogan et al. (1999) in the range 11-17 km s −1 , where we also find [C II] and 12 CO components, but only at the edge of the ionization front (cf. Fig. 21, offset position (−30″, −15″)). In the 9-10 km s −1 velocity range, which is associated mostly with atomic gas, the H I optical depth peaks along the H41α line, since τ(H I) is estimated from H I in absorption. Actually, this channel map is part of the prominent τ(H I) velocity component in the 0-11 km s −1 range (cf. Fig. 20), which has corresponding associated [C II] and 12 CO J = 1 → 0 emission. This component in τ(H I) was not mentioned in the original papers by Brogan et al. (1999) and Brogan & Troland (2001). It can also be shocked ionized ([C II]), atomic (H I), and molecular (CO) gas streaming towards us.
The strong [C II] velocity component found around V lsr = 32 km s −1 (Fig. 20, top two panels) has no evident counterpart in the other atomic or molecular tracers. These differences, in particular with H I, may arise because the [C II] 158 µm emission tends to trace warmer gas than the H I absorption, which traces cold gas. The faint H41α emission around V lsr = 32 km s −1 , barely detected with S/N ∼ 3 in our OTF maps at offset position (20″, 70″), shows a line width (FWHM) of ∆V = 20 ± 3 km s −1 from a Gaussian fit. This line width is consistent with the thermal broadening (∆v G−thermal ∼ c × 7.16233 × 10 −7 (T/M) 1/2 , with T in K and M in amu, cf. Gordon & Sorochenko 2002, their Eq. 2.22) expected for the LTE electron temperature T * e = 10700 ± 700 K estimated towards M17 by Gordon (1989). At offset (30″, 100″) we find S/N < 3 for the H41α line. At offsets (20″, 70″) and (30″, 100″), the [C II] component around V lsr = 32 km s −1 has a Gaussian-fit FWHM of ∼5.5 ± 0.5 km s −1 , also consistent with the thermal broadening expected for the T * e quoted above. This indicates that the [C II] emission around V lsr = 32 km s −1 is associated with warm gas, probably shocked by the proximity of the ionizing sources, streaming away from us and from the bulk of the unshocked molecular gas found around the 20 km s −1 component. Higher S/N maps of the H41α and other RRLs are needed to confirm the detection of ionized gas at these positions. Maps of [N II] and the hydrogen recombination lines (in the ≥3 mm wavelength range), tracing the transitions between
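The thermal widths used in this comparison follow directly from the quoted expression; evaluating it for hydrogen and carbon at T * e = 10700 K is a quick consistency check (the carbon value, ∼6.4 km s −1 , is close to the measured 5.5 ± 0.5 km s −1 ):

```python
import numpy as np

def thermal_fwhm(T_K, mass_amu):
    """Gaussian thermal FWHM, Δv = c × 7.16233e-7 × (T/M)^(1/2),
    with T in K and M in amu (Gordon & Sorochenko 2002, Eq. 2.22)."""
    return 2.99792458e5 * 7.16233e-7 * np.sqrt(T_K / mass_amu)

dv_h = thermal_fwhm(10700.0, 1.0)     # hydrogen RRL: ~22 km/s
dv_c = thermal_fwhm(10700.0, 12.0)    # carbon ([C II]): ~6.4 km/s
```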
Comparison with other sources and implications for extragalactic studies
A study of the H II region S125 showed that up to 40% of the [C II] 158 µm line intensity, observed with the ISO-LWS, arises from the H II region (Aannestad & Emery 2003). Comparing and modeling the [C II] and [N II] emission obtained with Herschel/PACS, Bernard-Salas et al. (2012) found that at most 18% of the [C II] emission in the Orion Bar originated in the H II region. This is a factor of ∼2 lower than found for S125, and a factor of ∼3.6 lower than what we estimate for M17 SW. However, this is not a one-to-one comparison, since their study focused on a relatively small scale, covering mostly the dense PDR in the Orion Bar. Hence, most of the actual H II region (where more [C II] emission than quoted by them can be present) is not seen in their data set. Therefore, large scale maps of [C II] and other ionized, neutral, and molecular tracers like the ones we present are relevant not only to understand the full picture of the H II-PDR boundary, which is more complex than predicted by the classical 1-D models, but also to understand and properly interpret the observations of these diagnostic lines in extragalactic sources. As mentioned at the end of Sect. 5.2, and as shown in Sect. 5.4, the [C II] line can originate both in the PDR (molecular and neutral atomic gas) and in the H II region. When observed toward other galaxies, the [C II] emission collected in one beam (typically covering large scales) will naturally come mainly from several unresolved giant molecular clouds and H II regions, with different orientations with respect to the impinging radiation field along the line of sight. Thus, it is important to characterize in detail the [C II] contribution from these different environments. The [C II] emission is important in extragalactic studies for redshift determinations, and also to estimate the star formation rate (SFR) from the [C II] luminosity (e.g., Stacey et al. 1991; Meijerink et al. 2007; Luhman et al. 2003).
Recent results from 130 galaxies observed with Herschel/PACS by Sargsyan et al. (2014) confirm that [C II] traces the same starburst component of sources as measured with mid-infrared PAH and neon emission line diagnostics, and with the bolometric luminosities of reradiating dust from starbursts. Using a modified Cloudy code, Abel et al. (2005) estimated that about 30% of the [C II] emission in the starburst galaxy NGC 253, observed with the KAO (Carral et al. 1994), arises from the ionized medium. Using PACS data to map the FIR emission of the [C II] line in the spiral galaxy M33, and models of photo-ionization and photon-dominated regions, Mookerjea et al. (2011) found that between 20% and 30% of this emission comes from the H II region.
Compared with our lower limit estimates, these results for extragalactic sources are at least a factor of two lower than the values we find for M17 SW when considering only the [C II] emission not associated with the dense star-forming material (cf. Sect. 5.2). They are more than 6% lower than what we estimate for the [C II] emission associated mostly with the H II region (cf. Sect. 5.3). We believe that there are three main reasons for these differences:
(1) Their models of static geometries do not consider that their beams collect emission from many sources with diverse geometries (from edge-on to face-on, and many other orientations in between), systemic velocities, highly structured medium (that would allow FUV radiation to permeate the region ionizing and heating the gas on larger spatial scales), and dynamical processes like advection (as mentioned in Sect. 5.4) that would increase the contribution from the ionized medium.
(2) The models assume that the gas is in pressure equilibrium, so the equation of state is dominated by gas pressure, since turbulent and magnetic pressures are not included. If turbulent or magnetic pressure dominates instead, as is the case in M17 SW (Pellegrini et al. 2007), then the density law, and hence the collisional excitation of the PDR diagnostic lines throughout the H II and photon-dominated regions, will be different. In contrast, the velocity-resolved [C II] spectra obtained with SOFIA/GREAT towards M17 SW show that ∼80% of the velocity range covered by the [C II] line is associated with a mixture of ionized medium, neutral gas, and diffuse molecular gas, and only ∼20% of the velocity range is associated mostly with the dense (star-forming) molecular material (cf. Sects. 5.1 to 5.4). While this result is particular to M17 SW, we have found similar results in other regions (e.g., NGC 3603, in prep.), showing that this characteristic should be taken into account in PDR modeling of Galactic and extragalactic sources.
All the reasons mentioned above lead us to think that the fraction of [C II] emission coming from H II regions has been underestimated in previous studies. Their conclusions regarding the fraction of [C II] emission emerging from ongoing star formation, and their estimates of SFRs in comparison with other SFR diagnostics, may change. In order to judge the validity of our estimate, models of photo-ionization and photon-dominated regions that include dynamical processes and magnetic pressure are needed. In addition, a significant number of large scale maps of [C II] and other diagnostics of ionized gas toward Galactic sources are required to increase the statistical significance of our results. Large scale maps of many sources can be achieved with the upGREAT receiver arrays, the second generation receivers for the GREAT project, which will make mapping large scale regions more efficient. The upGREAT low frequency array (1.9-2.5 THz with 14 pixels) is currently scheduled for commissioning onboard SOFIA in May 2015. The upGREAT high frequency array (4.7 THz with 7 pixels) will follow about one year later. We expect to show the results of this long term study in follow-up work.
Conclusions
We used the dual channel DSB receiver FLASH on the APEX telescope to map (with 12.7″ resolution) a region of about 4.1 pc × 4.7 pc in the 3 P 1 → 3 P 0 609 µm (J = 1 → 0) fine-structure transition of [C I], towards the star-forming region M17 SW. We also used the broadband EMIR receivers on the IRAM 30m telescope to map a similar area of 360″ × 300″ in the 3 mm, 2 mm, and 1 mm bands.
We combine these data with the previously observed and published [C I] 3 P 2 → 3 P 1 370 µm and [C II] 158 µm data to discuss the physical properties of the dense interstellar medium in the M17 SW ridge.
Because of the complex structure of M17 SW and the availability of velocity-resolved spectra, we have performed all our analysis on 1 km s −1 -wide velocity channel maps.
Excitation and column densities of [C I]
Combining our earlier observation of the [C I] 3 P 2 → 3 P 1 370 µm (J = 2 → 1) fine-structure line with the new [C I] 3 P 1 → 3 P 0 609 µm data, we found that the R = I([C I] 369µm)/I([C I] 609µm) ratio is larger than unity in most of the regions mapped, in the central (10-24 km s −1 ) velocity channels where the bulk (>20%) of the [C I] emission is found. We estimated the excitation temperature T ex and column density of [C I] using an optically thin approximation and a non-LTE method. We found that T ex ranges between ∼40 K and ∼100 K in the inner region (i.e., southwest of the ionization front), while the [C I] column density ranges between ∼4×10 13 cm −2 and ∼10 17 cm −2 throughout the whole region mapped and among all the velocity channels. In the region where the [C I] emission is ≥ 50% of its peak integrated intensity, column densities up to 10 17 cm −2 are reached only in the central (17-22 km s −1 ) velocity channels.
Comparison of [C II], [C I], and CO
We used our recent SOFIA/GREAT velocity-resolved [C II] map to analyze the spatial and velocity association of the [C II] emission with the diffuse and molecular gas in M17 SW. For that we used the [C I] lines from APEX/FLASH and the 12 CO and isotope lines from IRAM30m/EMIR.
The [C II] emission was found to be associated with the other gas tracers in 20%-80% of the mapped region, but only in the central velocity channels (between 15 km s −1 and 23 km s −1 ), which means that only ∼20% of the velocity range (∼40 km s −1 ) that the [C II] line spans in our velocity-resolved spectra is associated with the star-forming material in M17 SW.
For the non-associated [C II] gas, in the 1 km s −1 wide channel maps we estimated column densities ranging from ∼10 14 cm −2 km −1 s to ∼5×10 17 cm −2 km −1 s across the region mapped. The largest fraction of non-associated atomic mass contained in the mapped region is found in the higher (25-33 km s −1 ) velocity channels. The total non-associated gas mass (integrated over the 0-40 km s −1 range) is ∼2.8×10 3 M ⊙ . This corresponds to a very large fraction (∼64%) of the total mass (∼4.4×10 3 M ⊙ ) traced by the [C II] emission. In other words, most of the gas traced by the [C II] emission in M17 SW is not associated with the star-forming material. The good match of the [C I] emission with the C 18 O J = 2 → 1 map indicates that both tracers arise from the same star-forming material.
Comparison of [C II], H I, and hydrogen recombination lines
When using the optical depth of H I in combination with the [C I] 609 µm and 12 CO J = 1 → 0 maps, we found that the [C II] emission associated with H I is contained in the lower (<20 km s −1 ) velocity channels, while the [C II] emission associated with the H II gas phase is found mostly at the higher velocity (>25 km s −1 ) channels. Most of the [C II] emission associated with the diffuse and dense molecular H 2 gas, is found in the central velocity channels (15-30 km s −1 ). From our preferred model (2), we found that 36.2%, 16.8%, and 47.0% of the [C II] emission is contained in the H II, H I, and H 2 regimes, respectively.
Overlays between [C II], τ(H I), 12 CO J = 1 → 0, and the H41α line at velocity ranges associated with the three gas regimes indicate that the H II region is mixed with the atomic and part of the molecular dissociation regions, in agreement with the highly clumped structure and dynamical processes at play in M17 SW.
Considering M17 SW as a proxy for active galaxies, these results are also relevant to extra-galactic studies in which [C II] is often used as a tracer of star formation rates. We have estimated a fraction of the [C II] emission not associated with the dense star-forming material, based on large maps of velocity-resolved spectra. Our estimates are up to two times larger than estimates done in extragalactic sources, based on velocity-integrated [C II] intensities. Other Galactic molecular clouds, for which velocityresolved [C II] maps are also available, will be analyzed in a follow-up work. With the enhanced observing capabilities of SOFIA's second generation heterodyne arrays, more sources can be observed in the future, increasing the statistics.
Appendix A: LTE Analysis of [C I]
Following Frerking et al. (1989) and Schneider et al. (2003, their Appendix A), the optical depths of the [C I] J = 1 → 0 and J = 2 → 1 lines can be estimated from the excitation temperature and the peak intensity of both lines, assuming a beam filling factor of unity. Knowing the excitation temperature and the optical depth of the J = 1 → 0 line, the column density N([C I]) can be computed as well. Figures A.1 and A.2 show the channel maps corresponding to the optical depths of both [C I] lines, while Fig. A.3 shows the channel maps of the [C I] column density, as estimated assuming LTE conditions.
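As an illustration of this LTE estimate, the sketch below derives T ex from the integrated-intensity ratio of the two lines and the total [C I] column density in the optically thin limit. The atomic constants are the standard LAMDA values for [C I]; the simple expressions used here are consistent with, but not identical to, the exact two-level formulas of Schneider et al. (2003), and the 30 K km s −1 input intensity is only an example value.

```python
import numpy as np

# [C I] atomic data (LAMDA values): 3P1-3P0 (492.16 GHz) and 3P2-3P1 (809.34 GHz)
NU10, NU21 = 492.161e9, 809.342e9      # line frequencies [Hz]
A10, A21 = 7.88e-8, 2.65e-7            # Einstein A coefficients [s^-1]
G = np.array([1.0, 3.0, 5.0])          # statistical weights of 3P0, 3P1, 3P2
E = np.array([0.0, 23.620, 62.462])    # level energies E/k [K]
H, K, C = 6.626e-34, 1.381e-23, 2.998e8

def tex_from_ratio(R):
    """Excitation temperature from R = I(2-1)/I(1-0) in the optically thin
    LTE limit, where R = c0 * exp(-(E2 - E1)/Tex)."""
    c0 = (NU10 / NU21) ** 2 * (A21 / A10) * (G[2] / G[1])
    return (E[2] - E[1]) / np.log(c0 / R)

def ncol_ci(i10_K_kms, tex):
    """Total [C I] column density [cm^-2] from the velocity-integrated
    J = 1 -> 0 intensity [K km/s], assuming optically thin LTE emission."""
    # column density in the upper (3P1) level of the 1-0 line
    n1 = 8 * np.pi * K * NU10**2 / (H * C**3 * A10) * i10_K_kms * 1e3  # [m^-2]
    q = np.sum(G * np.exp(-E / tex))                  # partition function
    return n1 * q / (G[1] * np.exp(-E[1] / tex)) * 1e-4  # [cm^-2]

tex = tex_from_ratio(1.08)   # ratio of order the values observed in M17 SW
print(tex)                   # ~60 K, within the 40-100 K range quoted above
print(ncol_ci(30.0, tex))    # a 30 K km/s line gives a few 1e17 cm^-2
```

A higher observed ratio maps to a warmer excitation temperature, consistent with R > 1 being found where the gas is warm.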
Appendix B: Non-LTE Excitation Analysis of [C I]
Following the procedure described in Pérez-Beaupuits et al. (2007, 2009), we use the radiative transfer code RADEX (van der Tak et al. 2007) to create a data cube containing the intensities, as well as the excitation temperatures and optical depths, of the two [C I] transitions as a function of kinetic temperature T K , number density n(H 2 ) (i.e., excitation conditions), and column density per line width N/∆V. The collision rates used were taken from the LAMDA database (Schöier et al. 2005).
We only used collisions with H 2 since this is the most abundant molecule. Other collision partners can be H and He. Although their collision cross sections are comparable, H 2 is about 5 times more abundant than He, and H is at least one order of magnitude less abundant than H 2 in the dense cores of molecular clouds (e.g., Meijerink & Spaans 2005). Hence, including H and He as additional collision partners would not produce a significant change in our results. We also assumed a homogeneous spherical symmetry in the clumps for the escape-probability formalism.
The original RADEX code was modified to include dust background emission as a diluted blackbody radiation field, as in Poelman & Spaans (2005) and Pérez-Beaupuits et al. (2009). The total background radiation is modeled as a composite between the cosmic background radiation (CMB), assumed to be a blackbody function at 2.73 K, and the diluted dust radiation estimated as τ dust × B(T dust ), where B(T dust ) is the Planck function and the dust continuum optical depth τ dust (λ) is defined by Hollenbach et al. (1991) as τ dust (λ) = τ 100µm (100µm/λ). We adopted an average dust background temperature T dust = 50 K and the high FIR opacity τ 100µm = 0.106 found by Meixner et al. (1992) in M17 SW.
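The composite background field can be sketched as follows. The Planck function, the 2.73 K CMB term, and the τ dust (λ) = τ 100µm (100 µm/λ) scaling with T dust = 50 K and τ 100µm = 0.106 follow the text; the evaluation frequency is chosen for illustration.

```python
import math

H, K, C = 6.626e-34, 1.381e-23, 2.998e8

def planck(nu_hz, t_k):
    """Planck specific intensity B_nu(T) [W m^-2 Hz^-1 sr^-1]."""
    return 2 * H * nu_hz**3 / C**2 / math.expm1(H * nu_hz / (K * t_k))

def background(nu_hz, t_dust=50.0, tau100=0.106):
    """Composite background: CMB blackbody plus diluted dust emission,
    with tau_dust(lambda) = tau_100um * (100 um / lambda)."""
    lam_um = C / nu_hz * 1e6                 # wavelength in microns
    tau = tau100 * (100.0 / lam_um)          # diluted dust optical depth
    return planck(nu_hz, 2.73) + tau * planck(nu_hz, t_dust)

# at the [C I] 609 um (492 GHz) frequency the diluted dust term dominates the CMB
nu = 492.16e9
print(background(nu), planck(nu, 2.73))
```

At FIR frequencies the diluted 50 K dust term exceeds the 2.73 K CMB term by roughly two orders of magnitude, which is why the dust background matters for the excitation of the [C I] lines.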
We assume that the emission collected by the beam has a homogeneous elliptical Gaussian distribution and that the coupling factor of our beam to the source distribution is unity, so we can compare directly with the output of RADEX, which is the Rayleigh-Jeans equivalent radiation temperature T R emitted by the source.
We explored all the possible excitation conditions within the given ranges that can lead to the observed radiation temperatures and the line ratios between the two [C I] lines. The line ratios and the peak temperature of the lower-J line involved in each ratio were used to constrain the excitation conditions. Including the rms of the observed spectra and the uncertainties in all the assumptions mentioned above, a 20% error on the ratios and peak temperatures is used to define a range of values for T K , n(H 2 ), and N/∆V within which the RADEX output is selected as a valid solution.
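The selection step can be sketched as follows. The 20% fractional tolerance and the (n(H 2 ), T K ) grid ranges follow the text, but the smooth model functions below are invented stand-ins for the actual RADEX output cube, and the "observed" values are arbitrary examples rather than the measured M17 SW values.

```python
import numpy as np

def select_solutions(model_ratio, model_tpeak, obs_ratio, obs_tpeak, tol=0.2):
    """Boolean mask of grid points whose modeled line ratio and peak
    temperature both match the observed values within a fractional
    tolerance (20% here, to absorb noise and calibration uncertainties)."""
    ok_ratio = np.abs(model_ratio / obs_ratio - 1.0) < tol
    ok_tpeak = np.abs(model_tpeak / obs_tpeak - 1.0) < tol
    return ok_ratio & ok_tpeak

# excitation-condition grid as quoted in the text
n_h2 = np.logspace(2, 4, 50)          # volume density [cm^-3]
t_k = np.linspace(10, 500, 50)        # kinetic temperature [K]
n_grid, t_grid = np.meshgrid(n_h2, t_k)

# hypothetical smooth stand-ins for the RADEX model cube (NOT real RADEX output)
model_ratio = 2.07 * np.exp(-38.8 / t_grid) * n_grid / (n_grid + 3e3)
model_tpeak = 30.0 * (1 - np.exp(-n_grid / 2e3)) * t_grid / (t_grid + 100.0)

# example "observed" values; the accepted region traces a curved locus in
# the (n, T_K) plane, illustrating the density-temperature dichotomy
mask = select_solutions(model_ratio, model_tpeak, obs_ratio=1.08, obs_tpeak=15.0)
print(mask.sum(), "grid points are consistent within the 20% tolerance")
```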
The volume density explored ranges between 10 2 cm −3 and 10 4 cm −3 , the kinetic temperature varies from 10 K to 500 K, and the column density per line width lies between 10 10 cm −2 km −1 s and 10 20 cm −2 km −1 s. Since the optical depth and the line intensities are proportional to the column density per line width N/∆V, we generated the RADEX data cube assuming the ∆V = 1 km s −1 of the velocity channels in the re-sampled spectra. In order to constrain the solutions, we fit the line ratio between the peak temperatures of the transitions and the radiation temperature of the lower transition (J = 1 → 0), which are R ∼ 1.08 and T R ≈ T mb = 31.9 K at offset position (−130″,−10″), and R ∼ 1.06 and T R ≈ T mb = 22.4 K at offset position (−130″,−70″). The curvature in the solutions depicts the dichotomy between the kinetic temperature and the density of the collision partner. In other words, solutions for the observed values can be found for higher temperatures and lower densities, but also for lower T K and higher n(H 2 ). The column density per line width does not change significantly along the solution curve, but it does change slightly across the curves, especially at lower (< 100 K) kinetic temperatures. This means that for a given kinetic temperature, N/∆V will show a small variation as a function of density.
Appendix C: Correlation between line tracers
In order to verify the apparent spatial correlation between two species, we use the correlation between a specific pixel value of a map and the same pixel in the map of another tracer. This can only be done for maps with the same dimensions and spatial resolutions. Therefore, we first convolved all the maps to the larger beam size (24″) of the C 18 O J = 1 → 0 line, and we used the SOFIA/GREAT map of the [C II] 158 µm line (which covers the smaller region) as a template to create all the [C I] and CO maps. The sample correlation coefficient commonly used to estimate r xy between the images X and Y is the Pearson's product-moment correlation coefficient (Pearson 1920; Rodgers & Nicewander 1988),
r_{xy} = \frac{\mathrm{cov}[X,Y]}{\sqrt{\mathrm{var}[X]\,\mathrm{var}[Y]}} = \frac{\sum_{ij}\,[X_{ij}-\bar{X}]\,[Y_{ij}-\bar{Y}]}{\sqrt{\sum_{ij}[X_{ij}-\bar{X}]^{2}\,\sum_{ij}[Y_{ij}-\bar{Y}]^{2}}} , \qquad (C.1)
where X_ij and Y_ij are the pixel values of two given maps or images (e.g., [C II] and 12 CO J = 1 → 0), and X̄ and Ȳ are the average pixel values of the respective maps. We note that the uncertainty of the correlation coefficient can be approximated as σ_r ∼ (1 − r_xy²)/√(n − 2), and since we have maps with n = 31 × 18 = 558 pixels, the uncertainty will always be between 10 −2 and 10 −3 . This correlation is applied to the velocity-integrated intensity maps, and to the 1 km s −1 width channel maps.
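Eq. (C.1) and the quoted uncertainty can be sketched in Python as follows; the synthetic 31 × 18 maps are invented test data, and only the formulas follow the text.

```python
import numpy as np

def map_correlation(x, y):
    """Pearson correlation coefficient r_xy between two channel maps
    (Eq. C.1), plus its approximate uncertainty (1 - r^2)/sqrt(n - 2)."""
    x, y = np.ravel(x), np.ravel(y)
    dx, dy = x - x.mean(), y - y.mean()
    r = np.sum(dx * dy) / np.sqrt(np.sum(dx**2) * np.sum(dy**2))
    sigma = (1 - r**2) / np.sqrt(x.size - 2)
    return r, sigma

# two synthetic 31 x 18 maps (n = 558 pixels, the map size quoted above)
rng = np.random.default_rng(0)
a = rng.random((31, 18))
b = 2.0 * a + 0.1 * rng.random((31, 18))   # strongly correlated with a
r, s = map_correlation(a, b)
print(r, s)
```

With n = 558 the uncertainty is indeed of order 10⁻³ or smaller for well-correlated maps.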
We note that because of the positive correlation between the intensities of different tracers, r xy ranges between 0 and 1. As proof of concept, the scatter plot and the associated correlation coefficient between the velocity-integrated intensity of several line tracers is shown in Fig. C.1. The application of the scatter plot to the channel maps is shown in Fig. C.2 for two test cases. We note that the correlation between the velocity-integrated intensity, as well as the channel maps, of the J = 1 → 0 and J = 2 → 1 transitions of 12 CO is very good, even at the faintest emission of the lower velocity (> 3.5 km s −1 ) channels, but the good correlation is lost at the higher velocity (> 28.5 km s −1 ) channels. In the case of the [C I] 370 µm (X-axis) versus 13 CO J = 2 → 1 (Y-axis), the pixel values are more scattered at the lower (< 16 km s −1 ) and higher (> 22 km s −1 ) velocity channels because the line intensities are fainter and, hence, those channel maps are more affected by noise.
Fig. 1. Top - Color map of the velocity integrated (in the range 0-40 km s −1 ) intensity of [C I] J = 1 → 0 in M17 SW. The peak emission is 240 K km s −1 . The contour levels are 25%, 50%, 75%, and 90% of the peak emission. Bottom - Color map of the integrated intensity of [C I] J = 2 → 1 (from Pérez-Beaupuits et al. 2010) convolved to the beam size (∼12.7″) of the [C I] J = 1 → 0 line, with a peak emission of 280 K km s −1 . The contour levels are as described above. The reference position (∆α = 0, ∆δ = 0) is as in Fig. 2.
A similar case is found when comparing the [C I] 370 µm line with the J = 2 → 1 transitions of the 13 CO and C 18 O lines (bottom panel in Fig. C.1).
Fig. 3. Color maps of the velocity integrated intensity (in the range 0-40 km s −1 ) of the J = 1 → 0 and J = 2 → 1 transitions of C 17 O (top panels) and C 18 O (bottom panels) in M17 SW. Only the C 17 O J = 1 → 0 line was integrated between 14 and 28 km s −1 because it is fainter and noisier than the other lines. The contour levels are 25%, 50% (thick contour), 75%, and 90% of the peak emission. All maps have been convolved to the larger beam (24″) of the C 18 O J = 1 → 0 line. The reference position (∆α = 0, ∆δ = 0) and symbols are as in Fig. 2.
grated [C II] 158 µm map from Pérez-Beaupuits et al. (2012, their Fig. 1) with the new CO and [C I] maps from Figs. 2 and 3. Therefore, in the following sections we present our new analysis and discussions based on velocity channel maps, showing the intensities of the lines integrated over a narrow (1 km s −1 ) channel width.
Fig. 4. Scatter plots and correlation coefficients r xy (from Eq. C.1) between the velocity-integrated intensity of 12 CO and 13 CO J = 1 → 0 (top left), and the 12 CO J = 2 → 1 compared to the J = 2 → 1 lines of 13 CO (top right), C 17 O (middle left), and C 18 O (middle right). The bottom panel shows the scatter plot and correlation coefficient between the [C I] 370 µm and the J = 2 → 1 transitions of 13 CO (bottom left) and C 18 O (bottom right).
averaged values. Figures A.1 and A.2 show the channel maps of the optical depths estimated for the [C I] J = 1 → 0 and J = 2 → 1 lines.
Fig. 6. Velocity channel maps (at 1 km s −1 width) of the [C I] excitation temperature (color map in K) in M17 SW, estimated from the R = I([C I] 369µm)/I([C I] 609µm) line intensity ratio, and assuming LTE conditions and optically thin emission. Only velocity channels and pixels with emission larger than 3σ in both [C I] lines are shown.
Fig. 7. Optical depths of the [C I] J = 1 → 0 (top) and J = 2 → 1 (bottom) lines in the 19-20 km s −1 velocity channel map toward M17 SW (from Figs. A.1 and A.2), estimated from the excitation temperature of [C I], and assuming LTE conditions.
Fig. 8. Velocity channel map (integrated in 1 km s −1 ) of the [C II] 158 µm emission (gray background), with overlays of the 3 P 1 → 3 P 0 fine-structure transition of [C I] at 609 µm (contours). The ionized and neutral carbon emissions are well correlated only at the intermediate velocities (16-24 km s −1 ).

This can also be seen in the line shapes of the [C II], [C I] 609 µm, C 18 O J = 2 → 1, and 12 CO J = 1 → 0 spectra shown in Fig. 11 for different offset positions along a strip line at position angle (P.A.) = 63°. The pointings of the spectra from the different tracers coincide within 2″, i.e., sufficiently close considering the smeared 24″ beam resolution. We note that most of the lower and higher velocity channels of the [C II] line are not associated with any of the other lines, while the C 18 O and [C I] lines are highly associated. The velocity channel maps of [C II] with overlays of the new maps of the [C I] 609 µm and 12 CO J = 1 → 0 lines are shown in Figs. 8 and 9, respectively. The spatial distribution of each velocity channel in the [C I] 609 µm emission follows a similar pattern to the maps presented in Pérez-Beaupuits et al. (2013), although with a narrower (14-24 km s −1 ) velocity range where strong association with the [C II] emission is observed.
Fig. 9. Velocity channel map (integrated in 1 km s −1 ) of the [C II] 158 µm emission (gray background), with overlays of the 12 CO J = 1 → 0 transition (contours). The molecular gas shows better association with the ionized carbon at more extended intermediate velocities (14-27 km s −1 ) than the neutral carbon (Fig. 8).
Fig. 10. Histograms showing the fraction of the [C II] 158 µm emitting region correlated in each 1 km s −1 channel with other gas tracers. The corresponding correlation coefficient r xy is overlaid.
Fig. 11. Spectra of several lines observed at approximate (±2″) offset positions along the strip line at P.A. 63° (∆δ = ∆α/2). All the spectra have been resampled to a 1 km s −1 resolution and convolved with the largest beam of 24″, corresponding to the C 18 O J = 1 → 0 map.

gion east of the ionization front in M17 SW (i.e., the H I and H II regions) from mid-J 12 CO line observations (Pérez-Beaupuits et al. 2010), the exact expression for the two-level system presented by Schneider et al. (2003, their Eq. (A
Fig. 12. Residual [C II] 158 µm spectrum (dashed line and gray filled histogram) at offset position (−30″,−15″) after subtracting the scaled-up (marked with (*)) [C I] 609 µm (top) and C 18 O J = 2 → 1 (bottom) lines from the original [C II] spectrum. All negative (noise) channels in the scaled-up and residual spectra are set to zero. The residual [C II] spectrum is shifted by −1 K for clarity.
Fig. 13. [C II] column density enhancement for three collision partners, H 2 (top), H I (middle), and e − (bottom), with respect to the column density obtained assuming a temperature of 250 K and a density of 10 4 cm −3 . This column density is depicted by the contour line equal to unity.
Fig. 14. Velocity channel maps at 1 km s −1 width of the residual [C II] column density (cm −2 in log 10 scale), estimated assuming LTE conditions, that is not associated with the dense and halo molecular gas traced by C 18 O and [C I], respectively.
Fig. 15. Gas mass (squares) estimated from the [C II] 158 µm emission with Eq. (3) at each velocity channel in the range 0 km s −1 to 40 km s −1 . The gas mass not associated in the spatial distribution of each velocity channel map with star-forming material traced by the C 18 O J = 2 → 1 and [C I] 609 µm lines is shown by filled circles.
Fig. 16. Fraction of the [C II] 158 µm emitting region correlated (at each 1 km s −1 channel) with the optical depth of H I (from Brogan & Troland 2001). The corresponding correlation coefficient r xy is overlaid.
Fig. 17. Residual [C II] 158 µm spectrum (dashed line and gray filled histogram) at offset position (−30″,−15″), after subtracting, from the original [C II] spectrum, the scaled-up (*) spectra of model 1 (top), 2 (middle), and 3 (bottom), as well as the optical depth of H I, τ(H I) (see text). All negative (noise) channels in the scaled-up and residual spectra are set to zero. The residual [C II] spectrum is shifted by −1 K for clarity.
τ(H I) + [C I] 609 µm + C 18 O(2-1); (2) τ(H I) + [C I] 609 µm + 12 CO(1-0); and (3) τ(H I) + 12 CO(1-0) + C 18 O(2-1). Model
Fig. 18. Fraction of the average (over the region mapped) residual [C II] emission corresponding to the three gas phases: H II (left), H I (middle), and H 2 (right), as obtained with model (2) in Table 1.
At these positions, (20 ,70 ) Article number, page 17 of 27 A&A proofs: manuscript no. 25020_printer
Fig. 20. Spectra of several lines observed at approximate (±3″) offset positions along the strip line at P.A. 63° (∆δ = ∆α/2) and at positions close to the ionizing stars. All the spectra have been resampled to a 1 km s −1 resolution and convolved with a 30″ beam to increase the S/N of the H41α map. A Gaussian was fit to the H41α line at the top two positions.
Fig. 21. Spectra of several lines observed at approximate (±3″) offset positions along the strip line at P.A. 63° (∆δ = ∆α/2). All the spectra have been resampled to a 1 km s −1 resolution and convolved with a 30″ beam to increase the S/N of the H41α map.

medium and large principal quantum numbers, and even the fainter carbon recombination lines, may help explain the nature and ambient conditions of the 32 km s −1 component found in the [C II] emission.
Fig. 22. Channel maps of the H41α line (gray background, in K km s −1 ) with overlays of the [C II] (red contours) and the τ(H I) optical depth (blue contours) (left panels), and the 12 CO J = 1 → 0 (green contours) (right panels), at the velocity ranges of the peak [C II] emission associated with (from top to bottom) the H II, H I, and H 2 gas regimes. The corresponding velocity ranges are shown in the top left of each channel map.
(3) Their studies are based on the total velocity-integrated intensities of the [C II] and other line emissions, provided by the spectral resolution of the instruments used.
Fig. A.1. Velocity channel maps at 1 km s −1 width of the optical depth τ 1→0 of the [C I] 1 → 0 line, estimated assuming LTE conditions in M17 SW.
Fig. A.2. Velocity channel maps at 1 km s −1 width of the optical depth τ 2→1 of the [C I] 2 → 1 line, estimated assuming LTE conditions in M17 SW.
Fig. A.3. Velocity channel maps at 1 km s −1 width of the column density (cm −2 ) of [C I] (in log 10 scale), estimated using the excitation temperature from Fig. 6.
Figures B.1 and B.2 show gray-scale and contour maps of the excitation conditions found to reproduce the observed [C I] line ratios and peak temperatures at the offset positions (−130″,−10″) and (−130″,−70″), respectively. The values shown correspond to the average of all the possible N/∆V, T ex , and τ found for each pair of excitation conditions (n(H 2 ) and T K ).

Fig. B.1. Excitation map (top panel) for the [C I] 2→1/1→0 line ratio observed at offset position (−130″,−10″) in the velocity channel 19-20 km s −1 . The contours and labels correspond to the column density per line width N/∆V (cm −2 km −1 s, in log 10 scale). In the middle panel the contours and labels correspond to the excitation temperature T ex (K) of the J = 1 → 0 (left) and J = 2 → 1 (right) transitions, while the bottom panel shows the corresponding optical depths for each line.
Fig. B.2. Excitation map (top panel) for the [C I] 2→1/1→0 line ratio observed at offset position (−130″,−70″) in the velocity channel 19-20 km s −1 . The contours and labels correspond to the column density per line width N/∆V (cm −2 km −1 s, in log 10 scale). In the middle panel the contours and labels correspond to the excitation temperature T ex (K) of the J = 1 → 0 (left) and J = 2 → 1 (right) transitions, while the bottom panel shows the corresponding optical depths for each line.
Fig. C.1. Example of the scatter plots between the pixel values of the velocity-integrated intensity maps of two different line tracers, and the corresponding correlation coefficient obtained using Eq. C.1.
Fig. C.2. Example of the scatter plots between the pixel values (K km s −1 ) of the velocity channel maps of the 12 CO J = 1 → 0 (X-axis) and J = 2 → 1 (Y-axis) transitions (top left), and the [C I] 370 µm (X-axis) and 13 CO J = 2 → 1 (Y-axis) lines (top right). The corresponding correlation coefficient r at each velocity channel is shown in the bottom panels.

Fig. D.1. Example of the steps in the method to estimate the spatial association between two emission lines. The top panels show the channel maps of the 1 km s −1 integrated intensity (in K km s −1 ) of the [C II] 158 µm (left) and [C I] 609 µm (right) lines. The contour lines correspond to the threshold of 10% of their global peak intensities (136.4 K km s −1 and 54.6 K km s −1 for [C II] and [C I], respectively). The middle panels are the binary images obtained after applying the intensity threshold to the original channel maps. The bottom panel shows the result of multiplying the two binary images, which corresponds to the region where the emission of both the [C II] and [C I] lines is associated in a particular velocity channel (in this case 15.5-16.5 km s −1 ).
Fig. D.2. Histograms of the fraction of the region mapped where two line tracers are associated at each 1 km s −1 width velocity channel. The corresponding correlation coefficient r xy obtained using Eq. C.1 for each channel map is shown with a continuous line.
Fig. E.1. Velocity channel maps at 1 km s −1 width of the [C II] emission (in K km s −1 ) associated with the H II (top left), H I (top right), and H 2 (bottom) gas phases, as estimated with model (2) from Table 1.
Table 1. Fraction of average [C II] emission in the three gas phases.

Model No.     H II fraction   H I fraction   H 2 fraction
Model 1 (a)       42.7%          20.9%          36.4%
Model 2 (b)       36.2%          16.8%          47.0%
Model 3 (c)       36.8%          17.4%          45.8%

Notes. (a) Combination: [C II] − {τ(H I) + [C I] 609 µm + C 18 O (2−1)}
(b) Combination: [C II] − {τ(H I) + [C I] 609 µm + 12 CO (1−0)}
(c) Combination: [C II] − {τ(H I) + 12 CO (1−0) + C 18 O (2−1)}
This publication is based on data acquired with the Atacama Pathfinder Experiment (APEX). APEX is a collaboration between the Max-Planck-Institut für Radioastronomie, the European Southern Observatory, and the Onsala Space Observatory.
J.P. Pérez-Beaupuits et al.: Atomic Gas not associated in M17 SW
http://www.strw.leidenuniv.nl/∼moldata/
There is also the CO-dark molecular gas (Wolfire et al. 2010) that we trace in part through our [C I] line. However, our [C I] sensitivity is not good enough to also measure the more diffuse, CO-dark molecular gas.
Acknowledgements. We are grateful to the MPIfR team, as well as the APEX and IRAM 30m staff, for their help and support during and after the observations. We are grateful to C. Brogan for providing the 21 cm map and the data cube of the velocity-resolved optical depth of H I estimated for M17 SW. We thank the referee for the careful reading of the manuscript and constructive comments that helped to improve our work. We are also grateful to the editor, Dr. Malcolm Walmsley, for his timely comments and suggestions that helped to improve our work even further. Molecular databases that have been helpful include the NASA/JPL spectroscopy line catalog and the University of Leiden's LAMDA database.

Appendix D: Spatial association in channel maps

We use a simple method to estimate the fraction of the region mapped where two emission lines are associated in narrow (1 km s −1 ) width channel maps. All maps are first convolved to the lowest spatial resolution (24″ FWHM beam) of the C 18 O J = 1 → 0 map in order to increase the S/N, and the size of the [C I], 12 CO, and isotope maps is limited to the region mapped in [C II] by using the [C II] data cube as a template in GILDAS/CLASS. In this way, we produce spectral cubes with the same dimensions and number of pixels.

Then we determine which regions of the maps have significant emission. This can be done naturally by using the rms (noise) level of the spectra corresponding to each pixel in the map (i.e., a 3σ level detection), or by defining a threshold for the intensities. Since our maps (especially that of the [C II] line) are not homogeneously sampled, the rms level varies among the different pixels. Hence, we prefer to use a fraction of the global peak (maximum) integrated intensity found among all the channel maps as a threshold.
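The thresholding-and-masking procedure of this appendix can be sketched as follows. The 10% global-peak threshold and the multiplication of binary images follow the text; the map dimensions and the two Gaussian test patches are invented for illustration.

```python
import numpy as np

def association_fraction(map_a, map_b, frac=0.10):
    """Fraction of the mapped area where two channel maps both exceed
    a threshold set to a fraction (10% here) of their global peak
    intensities, plus the combined binary mask itself."""
    mask_a = map_a >= frac * map_a.max()       # binary image of tracer A
    mask_b = map_b >= frac * map_b.max()       # binary image of tracer B
    mask = mask_a & mask_b                     # product of the binary images
    return mask.mean(), mask

# synthetic example: two partially overlapping Gaussian emission patches
y, x = np.mgrid[0:18, 0:31]
cii = np.exp(-((x - 12)**2 + (y - 9)**2) / 30.0)   # "[C II]"-like patch
ci = np.exp(-((x - 18)**2 + (y - 9)**2) / 30.0)    # "[C I]"-like patch
f, mask = association_fraction(cii, ci)
print(f)   # fraction of pixels where both lines are "detected"
```

Summing the product mask and dividing by the number of pixels gives the associated fraction for one velocity channel; repeating this per channel builds the histograms of Fig.-D.2-type plots.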
This provides a unique value that is used to determine whether the emission of some region (pixel) in a particular channel map is significant or not.

We create binary images for each channel map by assigning a zero to all the pixels with intensity values lower than the threshold, and a value of unity to all the pixels with intensity values larger than or equal to the threshold. We use a conservative value of 10% of the global peak emission for the threshold, which is about ten times higher than the noise level in most of the channel maps of all the lines we consider. This conservative value is used in order to avoid the association of emission levels in one image that would be considered noise in another image.

Then we multiply the binary channel map images of the two line tracers we want to compare, to see if there are regions where the two emissions are associated in that particular velocity channel. The product image contains pixels with 1's in regions where both line tracers have significant emission, and 0's otherwise. Thus, adding up all the pixels of the product image and dividing by the total number of pixels, we obtain the fraction of the region mapped where the emission from both lines is associated. An example of this procedure, applied to the [C II] 158 µm and [C I] 609 µm lines, is shown in Fig. D.1.

With this method we can estimate the fraction of the mapped region over which, and where within it, two line tracers are associated at each velocity channel. The fraction of the region mapped can be compared with the correlation coefficient described in Appendix C. Test cases are shown in Fig. D.2.

Appendix E: [C II] emission associated with the three gas phases

Subtracting the scaled-up spectra of [C I] 609 µm, 12 CO J = 1 → 0, and the velocity-resolved optical depth τ(H I) from the [C II] spectra, we obtained a residual [C II] emission that should be mostly associated with the ionized hydrogen gas (H II).
When subtracting the maximum (channel by channel) between the tracers of the H 2 gas (e.g., [C I] 609 µm and 12 CO J = 1 → 0, or [C I] 609 µm and C 18 O J = 2 → 1) from the original [C II] spectra, we obtain a second residual [C II] emission that is expected to be mostly associated with the neutral (H I) and ionized atomic gas (H II). The difference between these residual spectra and the original [C II] spectra gives the [C II] emission that is mostly associated with the H 2 gas. Subtracting the the [C II] emission that is mostly associated with the H 2 gas and the first residual spectra associated with H II from the original [C II] spectra would lead to a third residual [C II] emission that is mostly associated with the neutral atomic hydrogen gas, H I. The corresponding column densities can be estimated assuming the LTE conditions and Eq. 2. The velocity channel maps of the residual [C II] emission associated to each of the gas phases, as estimated with model (2) fromTable 1, is shown inFig. E.1.
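The thresholding-and-product procedure of Appendix D can be sketched in a few lines of pure Python; the 2x2 "channel maps", peak values, and the helper name `association_fraction` below are toy inputs for illustration, not survey data:

```python
def association_fraction(map_a, map_b, peak_a, peak_b, frac=0.10):
    """Fraction of pixels where both channel maps exceed frac * global peak.

    map_a, map_b: 2-D lists (same shape) of integrated intensities for one
    velocity channel; peak_a, peak_b: global peak intensities over all channels.
    """
    n_pix, n_both = 0, 0
    for row_a, row_b in zip(map_a, map_b):
        for ia, ib in zip(row_a, row_b):
            n_pix += 1
            # binary masks: 1 where intensity >= threshold, 0 otherwise
            mask_a = 1 if ia >= frac * peak_a else 0
            mask_b = 1 if ib >= frac * peak_b else 0
            n_both += mask_a * mask_b  # product image
    return n_both / n_pix

# toy example: two 2x2 "channel maps", each with a global peak of 10
cii = [[10.0, 0.5], [3.0, 0.2]]
ci = [[8.0, 0.1], [2.0, 4.0]]
print(association_fraction(cii, ci, 10.0, 10.0))  # 0.5: both significant in 2 of 4 pixels
```

A real analysis would run this per velocity channel over the convolved spectral cubes; the 10% threshold matches the conservative value quoted above.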
Skill Preferences: Learning to Extract and Execute Robotic Skills from Human Feedback

Xiaofei Wang, Kimin Lee, Kourosh Hakhamaneshi, Pieter Abbeel, Michael Laskin

arXiv:2108.05382

Keywords: Reinforcement Learning, Skill Extraction, Human Preferences

Abstract: A promising approach to solving challenging long-horizon tasks has been to extract behavior priors (skills) by fitting generative models to large offline datasets of demonstrations. However, such generative models inherit the biases of the underlying data and result in poor and unusable skills when trained on imperfect demonstration data. To better align skill extraction with human intent we present Skill Preferences (SkiP), an algorithm that learns a model over human preferences and uses it to extract human-aligned skills from offline data. After extracting human-preferred skills, SkiP also utilizes human feedback to solve downstream tasks with RL. We show that SkiP enables a simulated kitchen robot to solve complex multi-step manipulation tasks and substantially outperforms prior leading RL algorithms with human preferences as well as leading skill extraction algorithms without human preferences.
Introduction
Deep reinforcement learning (RL) is a framework for solving temporally extended tasks that has resulted in a number of breakthroughs in autonomous control, including mastery of the game of Go [1,2], learning to play video games [3,4,5], and learning basic robotic control [6,7]. However, today's RL systems require substantial manual human effort to engineer rewards for each task, which comes with two fundamental drawbacks. The human effort required to design rewards is impractical to scale across numerous and diverse task categories, and the engineered rewards can often be exploited by the RL agent to produce unintended and potentially unsafe control policies [8,9,10]. Moreover, it becomes increasingly difficult to design reward functions for the kinds of complex tasks with compositional structure often encountered in real-world settings. In this work, we are interested in the following research question: how can we learn robotic control policies that are aligned with human intent and capable of solving complex real-world tasks?
Human-in-the-loop RL [11,12,13] has emerged as a promising alternative to traditional RL algorithm design that better aligns RL with human intent. Rather than manually engineering a reward function and then training the RL agent, human-in-the-loop RL has humans provide feedback interactively to the agent as it trains. This paradigm shift sidesteps reward exploitation by giving the RL algorithm immediate feedback that aligns it with human intent and, if efficient in terms of the human labels required, has the potential to scale RL training across a diverse variety of tasks more reliably than reward engineering.
So far human-in-the-loop RL systems have been used to play Atari games [12], solve simulated locomotion and manipulation tasks [11,13], and better align the output of language models [14]. While these initial results have been promising, the kinds of long-horizon compositional tasks desired for real-world robotics remain out of reach for current human-in-the-loop methods. The primary reason is that current methods do not scale efficiently with respect to human labels for more challenging tasks. As task complexity increases, the number of human feedback interactions required to attain a suitable policy becomes impractical.
To address the ability of RL algorithms to scale to more complex long-horizon tasks, a number of recent works [15,16,17] have proposed data-driven extraction of behavioral priors, which we refer to as skills. In these methods, a behavioral prior is fit to an offline dataset of demonstrations and is then used to guide the RL policy to solve downstream tasks by regularizing it to stay near the behavioral distribution. Such methods have been shown to successfully solve tasks such as diverse object manipulation [16] and operating a kitchen with a robotic arm [15]. However, they still require engineered rewards for the downstream tasks and, more importantly, assume access to a clean offline dataset of expert demonstrations that are specifically relevant to the downstream tasks. In real-world scenarios, such clean datasets are highly unlikely to exist. We desire skill extraction methods that are robust to noisy datasets, collected by a range of policies, with highly multi-modal structure.

Figure 1: Our method, Skill Preferences (SkiP), consists of two phases. During the skill extraction phase, human feedback is used to learn skills. During the skill execution phase, human feedback is used to finetune the skills to solve various downstream tasks. First, skills are extracted from a noisy offline dataset with human feedback to denoise the behavioral prior. Second, skills are executed with RL in the environment with task-specific human feedback.
In this work, we introduce Skill Preferences (SkiP), an algorithm that integrates human-in-the-loop RL with data-driven skill extraction. Our main insight is that human feedback can be incorporated not only for downstream RL, as is done in prior work, but also for extracting human-aligned skills. SkiP learns a human preference function and uses it to weigh the likelihood of trajectories in the offline dataset based on their degree of alignment with human intent. By incorporating human feedback during skill extraction, SkiP is able to extract structured human-preferred skills from noisy offline data and addresses the core limitation of prior skill extraction approaches: the dependence on curated expert datasets. SkiP is efficient with respect to human labels both when extracting skills and when solving different downstream tasks. Similar to how prior work in human-in-the-loop RL suggested replacing manually engineered reward functions with human feedback, our work suggests replacing the manual effort needed to curate clean offline datasets with human feedback. We summarize our main contributions below:
1. We introduce Skill Preferences (SkiP), an algorithm that incorporates human feedback to extract skills from offline data and utilize those skills to solve downstream tasks.
2. We show that, unlike prior leading methods for data-driven skill extraction, SkiP is able to extract structured skills from noisy offline datasets.
3. We show that SkiP is able to solve complex multi-step manipulation tasks in a robotic kitchen environment substantially more efficiently than prior leading human-in-the-loop and skill extraction baselines.
Background
Reinforcement Learning: As is common with RL methods, we assume that the control process is a Markov Decision Process (MDP) with discounted returns. Such MDPs are defined by the tuple M = (S, A, T, R, ρ_0, γ) consisting of states s ∈ S, actions a ∈ A, transition dynamics T(s'|s, a), rewards R = R(s, a), an initial state distribution s_0 ∼ ρ_0(·), and a discount factor γ ∈ [0, 1). A control policy maps states to actions within the MDP and usually takes the form of a probability distribution a ∼ π(·|s). The value function V^π(s) and action-value function Q^π(s, a) describe the expected future returns from an initial state or state-action pair.
V^\pi(s) := \mathbb{E}_{\mathcal{M},\pi}\Big[ \sum_{t=0}^{\infty} \gamma^t R(s_t, a_t) \,\Big|\, s_0 = s \Big], \qquad Q^\pi(s, a) := R(s, a) + \gamma\, \mathbb{E}_{s' \sim T(\cdot|s,a)}\big[ V^\pi(s') \big],
where the first expectation \mathbb{E}_{\mathcal{M},\pi} denotes that actions are sampled according to π and future states are sampled according to the MDP dynamics. The goal in RL is to learn the optimal policy:
\pi^* \in \arg\max_\pi J(\pi, \mathcal{M}) := \mathbb{E}_{s \sim \rho_0}\big[ V^\pi(s) \big].
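As a concrete toy illustration of these definitions (not part of the paper), the value function of a made-up two-state MDP under a fixed policy can be computed by iterating the Bellman equation above; all numbers below are illustrative:

```python
# Two states (0, 1); the fixed policy is already folded into T and R.
# T[s][s2] = P(s2 | s) under the policy; R[s] = expected reward in s.
T = [[0.9, 0.1],
     [0.0, 1.0]]
R = [1.0, 0.0]
gamma = 0.9

V = [0.0, 0.0]
for _ in range(1000):  # fixed-policy value iteration (Bellman backup)
    V = [R[s] + gamma * sum(T[s][s2] * V[s2] for s2 in range(2))
         for s in range(2)]

# State 1 is absorbing with zero reward, so V[1] = 0 and
# V[0] = 1 / (1 - 0.9 * 0.9) ≈ 5.263
print([round(v, 3) for v in V])
```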
In addition to the standard MDP setting, our method also learns skills z ∈ Z, which consist of an encoder q^{(e)}(z | s_t, a_t, ..., s_{t+H-1}, a_{t+H-1}) that maps state-action sequences to a skill, and a decoder q^{(d)}(a_1, a_2, ..., a_H | s, z) that maps state-skill pairs to atomic actions.
Method
The two primary contributions of SkiP are (i) introducing human feedback during the skill extraction process to learn structured skills from noisy data and (ii) utilizing human preferences over skills for downstream RL training. Our approach is shown schematically in Fig. 1 and detailed in full in Algo. 1. Because it utilizes human feedback to learn the behavioral prior, our method, unlike prior approaches to skill extraction from offline data [17,16,15], is robust to suboptimal or noisy data.
The SkiP Algorithm: We first summarize the algorithm and then proceed with its derivation. Shown in Algo. 1, SkiP consists of two phases: (i) skill extraction and (ii) skill execution. A human teacher provides feedback during both phases. During skill extraction, a human teacher labels whether a trajectory is preferred or not (for details see Sec. 4) to train a preference classifier. A behavioral prior weighted by the learned human preference function is then fit to the offline data. During skill execution, the learned skills are rolled out by an RL agent, a Soft Actor-Critic (SAC) [18], that is trained with task-specific human preferences. As such, human feedback is used during both phases of the algorithm. We proceed to define notation and provide a derivation.
Preliminaries and Notation: Our method is composed of two phases: (i) the skill extraction phase and (ii) the skill execution phase. During the skill extraction phase, we are given an offline dataset D which consists of task-agnostic, multi-modal, and potentially noisy demonstrations. We denote trajectory sequences as τ_t = (s_t, a_t, ..., s_{t+H-1}, a_{t+H-1}), action sequences as a_t = (a_t, ..., a_{t+H-1}), and skills, which decode into action sequences, as z ∈ Z.
Learning Behavioral Priors with Human Feedback (Skill Extraction): Our main insight is to use human preferences in order to fit a weighted behavioral prior over an offline dataset of (potentially noisy) demonstrations. Our method builds on prior work for behavioral extraction from offline data via expected maximum likelihood latent variable models [17,16,15].
Specifically, prior work [17,16,15] considers a parameterized generative model p_α(a_t|s_t) over action sequences a_t = (a_t, ..., a_{t+H-1}) that represents a behavioral prior and is trained to replicate the transition statistics of the offline dataset:

p_\alpha \in \arg\max_\alpha \; \mathbb{E}_{\tau \sim \mathcal{D}} \Big[ \sum_{t=0}^{|\tau|} \log p_\alpha(\mathbf{a}_t \mid s_t) \Big]. \quad (1)
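For finite state and action spaces, the maximizer of Eq. (1) is simply the empirical action frequency per state; a toy sketch with an invented dataset (real behavioral priors are neural sequence models, not count tables):

```python
from collections import Counter, defaultdict

# offline dataset of (state, action) pairs, e.g. flattened trajectories
D = [("s0", "left"), ("s0", "left"), ("s0", "right"), ("s1", "grasp")]

counts = defaultdict(Counter)
for s, a in D:
    counts[s][a] += 1

def p_alpha(a, s):
    # normalized counts maximize the log-likelihood objective of Eq. (1)
    total = sum(counts[s].values())
    return counts[s][a] / total

print(p_alpha("left", "s0"))  # 2/3: the empirical frequency of "left" in s0
```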
In our approach, we consider an adaptive behavioral prior that is biased towards trajectories that achieve higher rewards according to the human preference function. This can be particularly useful for diverse datasets collected with suboptimal or noisy policies, or with multiple policies of varying expertise. For example, one could imagine multiple humans collecting demonstrations or multiple robots exploring their environment. Similar to Siegel et al. [19], we seek a behavioral prior that is biased towards the high-reward trajectories in the dataset while also staying close to the average statistics of the dataset. However, unlike prior work on weighted behavioral priors [19,20,21], the weight is determined through the human preference function, and we aim to maximize action-sequence likelihood as opposed to single-timestep actions.
Algorithm 1 Skill Preferences (SkiP)
==== Skill Extraction Phase ====
Train the preference classifier P_ψ by maximizing E_{(y,τ)∼D}[ y · log P_ψ(τ) + (1 − y) · log(1 − P_ψ(τ)) ]
for each iteration do
    Update p, q_{φ_2}, p_{φ_1} by optimizing L_prior (3)    {Update preference-weighted behavioral prior}
==== Skill Execution Phase ====
Initialize parameters of actor π_{θ_1}, critics Q_{θ_2} and Q̄_{θ_2}, and reward model R_η
Initialize a dataset of preferences D ← ∅ and a dataset of transitions B ← ∅
for each iteration do
    for each environment step do
        z_t ∼ π(z_t|s_t), s_{t+H} ∼ p(s_{t+H}|s_t, z_t), B ← B ∪ (s_t, z_t, R_η(s_t, z_t), s_{t+H})
    if iteration % K == 0 then
        for step t = 1...M do
            (τ^{(z)}_0, τ^{(z)}_1) ∼ B, query human for label y, D ← D ∪ (τ^{(z)}_0, τ^{(z)}_1, y)    {Get preference labels}
        for each gradient step of R_η do
            Sample (τ^{(z)}_0, τ^{(z)}_1, y) ∼ D and update R_η by minimizing L_Reward (7)

We formulate this as:

p_\alpha \in \arg\max_\alpha \; \mathbb{E}_{\tau \sim \mathcal{D}} \Big[ \sum_{t=0}^{|\tau|} \omega(\tau_t) \cdot p_\alpha(\mathbf{a}_t \mid s_t) \Big] \quad \text{such that} \quad \mathbb{E}_{\tau \sim \mathcal{D}} \big[ D_{\mathrm{KL}}(p_\alpha \,\|\, \bar{p}) \big] \le \delta, \quad (2)

where \bar{p} denotes the empirical behavioral policy and ω(τ_t) is the weighting function. The nonparametric solution to the above optimization is given by:

p_\alpha(\mathbf{a}_t \mid s_t) \propto \bar{p}(\mathbf{a}_t \mid s_t) \cdot \exp\big( \omega(\tau_t)/T \big),
where we have used ∝ to avoid specification of the normalization factor, and T represents a temperature parameter that is related to the constraint level δ. The above non-parametric policy can be projected into the space of parametric neural network policies as [20,19]:
p_\alpha \in \arg\max_\alpha \; \mathbb{E}_{\tau \sim \mathcal{D}} \Big[ \sum_{t=0}^{|\tau|} \exp\big( \omega(\tau_t)/T \big) \cdot \log p_\alpha(\mathbf{a}_t \mid s_t) \Big]. \quad (3)
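A toy numeric sketch of the exponentiated re-weighting in Eq. (3): with SkiP's choice ω(τ) = log P_ψ(τ), the weight exp(ω(τ)/T) is simply P_ψ(τ)^{1/T}. The preference scores and log-likelihoods below are made up, and the weighting is applied per trajectory for simplicity:

```python
import math

T = 0.5                  # temperature hyper-parameter
pref = [0.9, 0.1]        # P_psi(tau) for two trajectories
loglik = [-2.0, -1.0]    # log p_alpha(a_t | s_t) summed over each trajectory

# exp(omega / T) with omega = log P_psi  =>  P_psi ** (1 / T)
weights = [math.exp(math.log(p) / T) for p in pref]
objective = sum(w * ll for w, ll in zip(weights, loglik))

print([round(w, 3) for w in weights])  # [0.81, 0.01]: preferred data dominates
```

Lowering T sharpens the weighting toward human-preferred trajectories; T → ∞ recovers the unweighted objective of Eq. (1).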
For the choice of the weighting function, we use the learned preference classifier P_ψ(τ), which takes a trajectory as input and outputs the likelihood y ∈ [0, 1] of this trajectory being human-preferred. P_ψ is learned by sampling a small subset of the offline dataset and soliciting human feedback to label preferred versus non-preferred trajectories: ω(τ_t) := log P_ψ(τ_t).
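A minimal stand-in for the preference classifier P_ψ is a logistic model over a single scalar trajectory feature, trained with the binary cross-entropy objective from Algorithm 1; the feature values and labels below are invented, and a real implementation would use a neural network over full trajectories:

```python
import math

# (feature, label): 1 = human-preferred (structured), 0 = noisy
data = [(0.9, 1), (0.8, 1), (0.1, 0), (0.2, 0)]

w, b = 0.0, 0.0  # logistic-regression parameters psi
lr = 1.0
for _ in range(2000):  # full-batch gradient descent on the BCE loss
    gw = gb = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # P_psi(tau)
        gw += (p - y) * x  # d BCE / d w
        gb += (p - y)      # d BCE / d b
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

def P(x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

print(P(0.85) > 0.5, P(0.15) < 0.5)  # held-out structured vs. noisy trajectory
```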
In this process, we treat the temperature T as a hyper-parameter. This implicitly defines the constraint threshold δ and makes the problem specification and optimization more straightforward. For our practical implementation, we fit a variational autoencoder similar to [17,15], but softly weighted to maximize the likelihood of human-preferred transitions. We introduce a latent variable z with a Gaussian prior, such that the ELBO is given by:
\log p(\mathbf{a}_t \mid s_t) \ge \mathbb{E}_{\tau \sim \mathcal{D},\, z \sim q_{\phi_2}(z|\tau)} \Big[ \underbrace{\log p_{\phi_1}(\mathbf{a}_t \mid s_t, z)}_{L_{\mathrm{rec}}} + \beta\, \underbrace{\big( \log p(z) - \log q_{\phi_2}(z \mid \tau) \big)}_{L_{\mathrm{reg}}} \Big]. \quad (4)
This is the standard β-VAE loss applied to action-sequence modeling, where β is a scalar controlling the regularization strength and φ_1, φ_2 are neural network parameters that are optimized during training. Note that q_{φ_2} encodes trajectories into a latent vector and p_{φ_1} decodes latent vectors, together with the starting state, back into action sequences. Our training objective weighs this loss with the preference function. Thus, our overall skill extraction objective is to maximize:
L = \arg\max_{\phi_1, \phi_2} \; \mathbb{E}_{\tau \sim \mathcal{D},\, z \sim q_{\phi_2}(z|\tau)} \big[ P_\psi(\tau) \, ( L_{\mathrm{rec}} + L_{\mathrm{reg}} ) \big]. \quad (5)
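Combining Eqs. (4) and (5), each trajectory's ELBO terms are scaled by its preference score before averaging. A schematic sketch with made-up per-trajectory values, using the closed-form Gaussian KL for L_reg (an assumption about the parameterization, consistent with the Gaussian prior above; β placement follows Eq. (4)):

```python
import math

def neg_kl_diag_gauss(mu, sigma):
    # L_reg = E_q[log p(z) - log q(z|tau)] = -KL(q || N(0, 1)) per dimension
    return -0.5 * (mu ** 2 + sigma ** 2 - math.log(sigma ** 2) - 1.0)

beta = 0.1
# per-trajectory (reconstruction log-lik, posterior mu, posterior sigma, P_psi)
batch = [(-1.2, 0.5, 1.0, 0.9),
         (-0.8, 2.0, 0.3, 0.1)]

# preference-weighted ELBO averaged over the batch, as in Eq. (5)
objective = sum(p * (l_rec + beta * neg_kl_diag_gauss(mu, sig))
                for l_rec, mu, sig, p in batch) / len(batch)
print(round(objective, 4))
```

Trajectories the classifier deems noisy (small P_ψ) contribute little gradient, so the VAE skill space is dominated by human-preferred motions.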
Reward learning and human preferences over skills (Skill Execution): Unlike traditional RL, where hand-engineered rewards are available, we consider the preference-based RL framework [11,12,13,22]: a (human) teacher provides preferences between the agent's behaviors, and the agent uses this feedback to perform the task. In order to incorporate human preferences into deep RL, Christiano et al. [11] proposed a framework that learns a reward function R_η from preferences. In this work, we modify the preference framework to operate not over atomic state-action transitions but rather over state-skill transitions that span substantially longer time horizons.
Formally, we assume access to an offline dataset (the agent's replay buffer) B of state-action transitions and sample state-skill sequence pairs (τ^{(z)}_1, τ^{(z)}_2) for which a human provides a binary label y ∈ {0, 1}, where τ^{(z)} = (s_t, z_t, s_{t+H}, z_{t+H}, ..., s_{(t+M)H}, z_{(t+M)H}), H is the length of the action sequence a skill decodes to, and M is the total number of state-skill transitions. Note that such trajectories are H times longer than if we were to sample state-action trajectories of length M.
The reward function R_η therefore fits a Bernoulli distribution across sequence pairs. In this work, we learn a parameterized reward function R_η as in [13], utilizing a Bradley-Terry model [23] in the following manner:

P_\eta\big[ \tau^{(z)}_1 \succ \tau^{(z)}_0 \big] = \frac{\exp \sum_t R_\eta(s^1_t, z^1_t)}{\sum_{i \in \{0,1\}} \exp \sum_t R_\eta(s^i_t, z^i_t)}. \quad (6)
Here, the operator A ≻ B means that A is preferred to B. R_η can therefore be interpreted as a binary preference classifier whose labels are provided through human feedback. The parameters η of the neural network are updated by optimizing a binary cross-entropy loss:
L_{\mathrm{Reward}} = -\mathbb{E}_{(\tau_0, \tau_1, y) \sim \mathcal{D}} \Big[ y(0) \log P_\eta\big[ \tau^{(z)}_0 \succ \tau^{(z)}_1 \big] + y(1) \log P_\eta\big[ \tau^{(z)}_1 \succ \tau^{(z)}_0 \big] \Big]. \quad (7)
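Equations (6) and (7) amount to a two-way softmax over summed skill-level rewards followed by a cross-entropy loss on the human label; a toy sketch with invented reward sums:

```python
import math

def pref_prob(r_sum_1, r_sum_0):
    # Eq. (6): P[tau_1 > tau_0] = exp(sum R_1) / (exp(sum R_0) + exp(sum R_1))
    m = max(r_sum_0, r_sum_1)  # subtract the max to stabilize the softmax
    e0, e1 = math.exp(r_sum_0 - m), math.exp(r_sum_1 - m)
    return e1 / (e0 + e1)

# summed R_eta(s_t, z_t) along two state-skill segments
r0, r1 = 1.0, 3.0
y = (0.0, 1.0)  # the human prefers segment 1

p1 = pref_prob(r1, r0)
# Eq. (7): binary cross-entropy on the preference label
loss = -(y[0] * math.log(1.0 - p1) + y[1] * math.log(p1))
print(round(p1, 3), round(loss, 3))  # 0.881 0.127
```

Gradient descent on this loss pushes R_η to assign higher summed reward to the preferred segment; in SkiP the sums run over skill transitions rather than atomic actions, so each label covers an H-times-longer span of behavior.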
4 Experimental Setup

Environments: For our experiments, we use the robot kitchen environment and offline dataset from the D4RL suite [24]. This environment requires a 7-DOF (6-DOF arm and 1-DOF gripper) robotic arm to solve complex multi-step tasks in a kitchen. Due to the 7-DOF control and the compositional, long-horizon nature of the tasks, this environment cannot be solved by standard methods such as SAC or behavior cloning [15].
Offline dataset: We desire our method to work on suboptimal offline data and, unlike prior skill extraction approaches [15,16,17], do not assume that the offline dataset consists solely of expert demonstrations. We simulate a noisy offline dataset by combining 601 expert trajectories and 601 noisy trajectories generated by a random policy. The expert trajectories involve various structured kitchen interactions, such as opening the microwave and operating the stove. We solicit human feedback on 10% of the total trajectories, or equivalently 120 human labels.
Downstream tasks: We use 6 different downstream tasks, shown in Fig. 2, that vary in difficulty to evaluate our approach. The task suite consists of tasks that require one, two, or three subtasks to be completed in a row in order to achieve the overall goal. We note that even the tasks with a single subtask are challenging for RL methods that operate over atomic actions and do not leverage skills, as shown in the experimental results.
Simulated human: Similar to prior work [11,13], we obtain feedback from simulated human teachers instead of real humans. During skill extraction, the simulated human labels whether a trajectory is noisy or structured. During skill execution, the simulated human assigns positive labels to trajectory segments that have made more progress toward completing the desired task. Progress is calculated by computing ||s_{M·H} − s̄||_2 − ||s_1 − s̄||_2, where s̄ is the state in which the target task is completed.
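The simulated teacher's labeling rule follows directly from the progress expression above; the 2-D states, goal, and helper names below are toy values for illustration:

```python
import math

def progress(segment, goal):
    """||s_MH - goal||_2 - ||s_1 - goal||_2: more negative = more progress."""
    dist = lambda s: math.sqrt(sum((a - b) ** 2 for a, b in zip(s, goal)))
    return dist(segment[-1]) - dist(segment[0])

def simulated_label(seg0, seg1, goal):
    # positive label (1) for the segment that made more progress toward goal
    return 1 if progress(seg1, goal) < progress(seg0, goal) else 0

goal = (0.0, 0.0)
seg0 = [(4.0, 0.0), (3.5, 0.0)]  # small progress toward the goal
seg1 = [(4.0, 0.0), (1.0, 0.0)]  # large progress toward the goal
print(simulated_label(seg0, seg1, goal))  # 1: segment 1 is preferred
```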
Baselines: In addition to our method, we compare to Atomic Preferences, which we base on PEBBLE [13], a state-of-the-art human-preference RL method. It pretrains the SAC agent with behavior cloning over the optimal offline dataset and trains the online SAC agent with human preferences over atomic transitions instead of high-level skill transitions. We also compare to Flat Prior, which learns a single-step action prior on the atomic action space over the optimal dataset and trains an online SAC agent regularized with the action prior over the ground-truth reward. The Oracle we compare to in Fig. 4 is SPiRL, a leading skill extraction method with access to the ground truth (expert demonstrations and ground-truth reward).

Figure 2: We evaluate in the robot kitchen environment from D4RL [24], which requires a 7-DOF robotic arm to operate a kitchen. Within this environment, we consider a variety of manipulation tasks of varying difficulty. The simplest tasks involve one subtask (opening the microwave or moving the kettle), while more challenging tasks require the agent to compose multiple subtasks (microwave + kettle, kettle + burner, microwave + kettle + burner, kettle + burner + cabinet). Overall, we consider 6 evaluation tasks that require chaining one, two, or three subtasks. Starting with a noisy offline dataset, which consists of both expert and random actions, our method fits a behavioral prior to the offline data using human feedback to identify human-preferred motions, which results in a set of diverse skills that can then be finetuned to downstream tasks.
Experimental Results
For the experimental evaluation of our approach, we investigate the following questions: (a) Can SkiP solve challenging long-horizon tasks and how does our method compare to prior leading approaches? (b) How does SkiP compare to an oracle baseline that extracts skills from perfect expert demonstrations and has access to the ground-truth reward? (c) Is it necessary to provide human feedback during skill extraction or is it sufficient to fit an unweighted behavioral prior over the offline data? (d) How should we incorporate human feedback during the skill execution phase?
Main Results: We evaluate SkiP and related baselines on the 6 tasks shown in Fig. 2 and display the learning curves in Fig. 4. We observe that SkiP is the only method (except for the Oracle) that is capable of solving the majority of tasks in the robot kitchen task suite, and it outperforms the baselines on all environments. On 5 out of 6 tasks, SkiP is able to match the oracle baseline asymptotically, which means that it arrives at the optimal solution.
SkiP is also human-label efficient. During skill extraction, only 120 labels are required to train the preference classifier. During skill execution, 300-1K labels are required to solve most tasks depending on the task's complexity. We hypothesize that human label efficiency is better during the skill extraction phase because classifying structured and noisy skills from a static offline dataset is easier than classifying task-specific preferences from an evolving replay buffer. Further human label efficiency improvements pose interesting research directions for future work.

Figure 4: SkiP and baselines evaluated over six tasks in the robot kitchen environment shown in Fig. 2. SkiP outperforms both baselines across the majority of the tasks and is the only method that is capable of matching the Oracle on most tasks. We also compare SkiP to SkiP with 3x more human labels and find comparable performance between the two versions. SkiP solves most tasks given 300-1000 human labels depending on the complexity of the task.

Figure 5: SkiP with human feedback vs. SkiP without human feedback during skill extraction. Learning curves with shaded regions representing standard error across three seeds. Both algorithms learn a prior from the suboptimal dataset and were evaluated with online RL. SkiP with human feedback outperforms SkiP without human feedback on all 6 environments.

Ablation Studies: To further understand the properties of the SkiP algorithm, we investigate whether human feedback is necessary during skill extraction as well as how the human preference reward function compares to alternate approaches to human feedback during skill execution.
Is it necessary to provide human feedback during skill extraction or is it sufficient to fit an unweighted behavioral prior over the offline data? The offline dataset used throughout this paper consists of suboptimal data that is a mixture of expert and random actions. We compare fitting a human-feedback weighted behavioral prior as opposed to an unweighted behavioral prior that maximizes the likelihood of all action sequences equally. For the skill execution phase, both methods have access to the same human preference reward function. The results shown in Fig. 5 indicate that the method that extracts skills without human feedback is unable to solve any of the tasks, suggesting that human feedback is essential for skill extraction from suboptimal offline data.
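The weighted versus unweighted prior distinction can be made concrete with a small sketch. The weighting scheme shown (each trajectory weighted by the classifier's probability that it is structured) is our assumption of one plausible implementation, not the paper's exact objective.

```python
import numpy as np

def behavioral_prior_loss(traj_log_likelihoods, p_structured, weighted=True):
    # Negative log-likelihood of offline trajectories under the prior.
    # If weighted, each trajectory counts in proportion to the preference
    # classifier's probability that it is structured (human-preferred);
    # otherwise all trajectories count equally, as in the ablation.
    ll = np.asarray(traj_log_likelihoods, dtype=float)
    w = np.asarray(p_structured, dtype=float) if weighted else np.ones_like(ll)
    return -np.sum(w * ll) / np.sum(w)
```

With a structured trajectory (log-likelihood -1, classifier probability 1) and a noisy one (-3, probability 0), the weighted loss ignores the noisy trajectory, while the unweighted loss averages both and so pulls the prior toward noisy actions.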
How should we incorporate human feedback during the skill execution phase? Instead of preferences, a simpler approach to learning from human feedback is to provide binary feedback on whether a task (or subtask) has been solved and to learn a reward classifier to guide the RL agent. We implement this by providing a positive reward of 1 for a high-level transition (s_t, z, s_{t+H}) when a subtask has been completed and 0 otherwise. Using the same number of human queries for both approaches, we compare learning with preferences as opposed to learning from sparse rewards. For both approaches, we use human feedback for skill extraction. As shown in Fig. 6, RL with a reward classifier for subtask completion is able to solve some tasks but generally performs much worse than RL with human preferences.
Related Work
Human-in-the-loop Reinforcement Learning: Several works have successfully utilized feedback from real humans to train RL agents [25,11,12,26,13,27,28]. One major direction directly utilizes human feedback as a learning signal [29,27,25], but these approaches assume unlimited access to human labels, which limits their practicality for more challenging tasks. To address this limitation, a number of works have proposed learning a reward model from human feedback [26,28,30,31,32,33].
Recently, several works have successfully combined human preferences with deep RL algorithms to learn basic locomotion skills as well as to play video games from pixels using human feedback [11,12,34,13]. However, these methods are limited to short-horizon or cyclic tasks and do not scale to more challenging compositional multi-step tasks. In this work, we investigate how to scale human preferences to such challenging tasks by specifying preferences over skills.
Data-driven Extraction of Behavioral Priors: Behavioral prior or skill extraction refers to fitting a distribution over an offline dataset of demonstrations and biasing the agent's policy towards the most likely actions from that distribution. Commonly used for offline RL [21,19,20], behavioral priors learned through maximum likelihood latent variable models can also be used as skills for structured exploration in RL [16], to solve complex long-horizon tasks from sparse rewards [15,17], and to regularize offline RL policies [21,20,35]. A limitation of these skill extraction methods is that the quality of the behavioral prior is highly dependent on the demonstrations in the offline dataset. Since a behavioral prior models maximum likelihood transitions in the offline dataset, suboptimal, noisy, or irrelevant transitions can degrade downstream policy learning. In this work, we introduce human feedback into the skill extraction phase to learn a human-preferred behavioral prior, which enables skill extraction methods to be robust to suboptimal offline data.
Conclusion
We presented Skill Preferences (SkiP), an algorithm that uses human feedback for both skill extraction and skill execution, and showed that SkiP enables robotic agents to solve long-horizon compositional manipulation tasks. We hope that this work excites other researchers about the potential of learning with skills and human feedback.
Acknowledgements
We would like to thank Berkeley DeepDrive, Tencent, ONR PECASE N000141612723, and NSF NRI 2024675 for supporting this research.
A Background
Off-policy RL with Soft Actor-Critic. Soft Actor-Critic (SAC) [18] is a leading off-policy RL algorithm. Like other off-policy RL methods, such as DQN [3] or DDPG [36], SAC optimizes a Q function, but does so based on the maximum entropy framework for RL [37]. In addition to maximizing the reward, SAC also maximizes the policy entropy, which leads to improved exploration and helps prevent overfitting. As an actor-critic method, SAC optimizes both an actor, whose policy is updated to maximize a value function, and a critic, which is trained with a Bellman loss. The actor's parameters are updated to maximize the Q function and policy entropy, which is encapsulated by the following equation:
$$\mathcal{L}^{\text{SAC}}_{\text{actor}} = \mathbb{E}_{s_t \sim \mathcal{B},\, a_t \sim \pi_{\theta_1}}\big[\alpha \log \pi_{\theta_1}(a_t \mid s_t) - Q_{\theta_2}(s_t, a_t)\big].$$
Here, (s_t, a_t) are state-action pairs, B is a replay buffer, θ_1 are the actor's parameters, θ_2 are the critic's parameters, and α is a scalar that controls the entropy strength. The policy π_{θ_1} is parametrized by a multivariate Gaussian with a diagonal covariance matrix; it outputs the means and standard deviations that are then used to sample actions from the Gaussian distribution. To update the critic's parameters, SAC optimizes a soft Q function by minimizing the soft Bellman loss:
$$\mathcal{L}^{\text{SAC}}_{\text{critic}} = \mathbb{E}_{\tau_t}\Big[\big(Q_{\theta_2}(s_t, a_t) - R_t - \gamma\,(Q_{\bar{\theta}_2}(s_{t+1}, a_{t+1}) - \alpha \log \pi_{\theta_1}(a_{t+1} \mid s_{t+1}))\big)^2\Big], \tag{9}$$
where τ_t = (s_t, a_t, s_{t+1}, R_t) is a single-timestep transition, θ̄ denotes the Polyak average of the critic's parameters, and α is a temperature parameter.
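The two objectives above can be sketched numerically as follows. This is a toy rendering with scalars in place of networks; the function and argument names are ours.

```python
import numpy as np

def sac_actor_loss(log_pi, q, alpha):
    # Monte-Carlo estimate of E[alpha * log pi(a_t|s_t) - Q(s_t, a_t)].
    return np.mean(alpha * np.asarray(log_pi) - np.asarray(q))

def sac_critic_loss(q, reward, q_target_next, log_pi_next, alpha, gamma):
    # Soft Bellman loss (Eq. 9): squared error between Q(s_t, a_t) and the
    # entropy-regularized target built from the Polyak-averaged critic.
    target = np.asarray(reward) + gamma * (
        np.asarray(q_target_next) - alpha * np.asarray(log_pi_next))
    return np.mean((np.asarray(q) - target) ** 2)
```

Minimizing the actor loss trades off high Q-values against high policy entropy (low log-probabilities), which is the maximum-entropy objective described above.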
B Implementation Details
B.1 Hyperparamters
Because we built off of SPiRL [15], we used the same set of hyperparameters for skill extraction and online RL training. The reward model learned from human preferences uses the same hyperparameters as in PEBBLE [13].

C Effect of segment size

Figure 7: The plot compares SkiP with different segment sizes on the Kettle-Burner-Cab environment. Lines and shaded areas represent the mean and standard error over three seeds, respectively.
As shown in Fig. 7, and unlike in PEBBLE [13], we did not find the segment size to affect our method's performance.
Figure 3: An illustration of the skill extraction procedure within the robot kitchen environment.

Figure 6: SkiP with preferences vs. SkiP with learned sparse reward. Learning curves with shaded regions representing standard error across three seeds. Both algorithms use the same prior. SkiP with preferences outperforms SkiP with learned sparse reward on 5 out of 6 environments.
Algorithm 1 SkiP: Skill Preferences

==== Skill Extraction Phase ====
INPUT: offline dataset B̄
Initialize prior p, skill encoder q_φ2, and skill decoder p_φ1
Initialize learned preference classifier P_ψ
A human provides labels (y1, y2, ...) for 10% of the trajectories in B̄ and stores them in a new buffer D̄
for each iteration do
    Update ψ by maximizing E
    Update R_η with min L_reward (7)   {Update preferences}
    Relabel entire replay buffer B with R_η
    for each gradient step of agent do
        Sample (s, a, s', R) ∼ B; update π_θ1 by optimizing L^SAC_actor (Appendix A)   {Update agent}
        Update Q_θ2 and Q_θ̄2 by optimizing L^SAC_critic (Appendix A)
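Both the preference-classifier update and the reward update in Algorithm 1 rest on a Bradley-Terry style cross-entropy over pairwise comparisons, as in [13, 23]. A minimal sketch, with our own function name and with the summed per-segment rewards passed in as scalars:

```python
import numpy as np

def preference_loss(r_sum_0, r_sum_1, label):
    # Bradley-Terry model: P[segment 1 preferred] = sigmoid(r_sum_1 - r_sum_0),
    # where r_sum_k is the learned reward summed over segment k.
    # label = 1 if the (simulated) human preferred segment 1, else 0.
    p1 = 1.0 / (1.0 + np.exp(r_sum_0 - r_sum_1))
    return -(label * np.log(p1) + (1 - label) * np.log(1.0 - p1))
```

Two segments with equal summed reward give the maximal-uncertainty loss log 2; the loss shrinks as the reward model ranks the human-preferred segment higher.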
University of California, Berkeley. 2 Covariant. Correspondence: [email protected]. Video and Codes: https://sites.google.com/view/skill-pref.
Here, we remark that a limited number of human labels (10% of the total trajectories) is utilized in our experiments for skill extraction.
D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel, et al. A general reinforcement learning algorithm that masters chess, shogi, and go through self-play. Science, 362(6419):1140-1144, 2018.

D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, et al. Mastering the game of go without human knowledge. Nature, 550(7676):354-359, 2017.

V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

OpenAI. Openai five. https://blog.openai.com/openai-five/, 2018.

O. Vinyals, I. Babuschkin, W. M. Czarnecki, M. Mathieu, A. Dudzik, J. Chung, D. H. Choi, R. Powell, T. Ewalds, P. Georgiev, et al. Grandmaster level in starcraft ii using multi-agent reinforcement learning. Nature, 575(7782):350-354, 2019.

I. Akkaya, M. Andrychowicz, M. Chociej, M. Litwin, B. McGrew, A. Petron, A. Paino, M. Plappert, G. Powell, R. Ribas, et al. Solving rubik's cube with a robot hand. arXiv preprint arXiv:1910.07113, 2019.

D. Kalashnikov, A. Irpan, P. Pastor, J. Ibarz, A. Herzog, E. Jang, D. Quillen, E. Holly, M. Kalakrishnan, V. Vanhoucke, et al. Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation. In Conference on Robot Learning, 2018.

D. Hadfield-Menell, S. Milli, P. Abbeel, S. Russell, and A. Dragan. Inverse reward design. In Advances in Neural Information Processing Systems, 2017.

D. Amodei, C. Olah, J. Steinhardt, P. Christiano, J. Schulman, and D. Mané. Concrete problems in ai safety. arXiv preprint arXiv:1606.06565, 2016.

A. M. Turner, N. Ratzlaff, and P. Tadepalli. Avoiding side effects in complex environments. arXiv preprint arXiv:2006.06547, 2020.

P. F. Christiano, J. Leike, T. Brown, M. Martic, S. Legg, and D. Amodei. Deep reinforcement learning from human preferences. In Advances in Neural Information Processing Systems, 2017.

B. Ibarz, J. Leike, T. Pohlen, G. Irving, S. Legg, and D. Amodei. Reward learning from human preferences and demonstrations in atari. In Advances in Neural Information Processing Systems, 2018.

K. Lee, L. Smith, and P. Abbeel. Pebble: Feedback-efficient interactive reinforcement learning via relabeling experience and unsupervised pre-training. arXiv preprint arXiv:2106.05091, 2021.

D. M. Ziegler, N. Stiennon, J. Wu, T. B. Brown, A. Radford, D. Amodei, P. F. Christiano, and G. Irving. Fine-tuning language models from human preferences. CoRR, abs/1909.08593, 2019. URL http://arxiv.org/abs/1909.08593.

K. Pertsch, Y. Lee, and J. J. Lim. Accelerating reinforcement learning with learned skill priors. In Conference on Robot Learning (CoRL), 2020.

A. Singh, H. Liu, G. Zhou, A. Yu, N. Rhinehart, and S. Levine. Parrot: Data-driven behavioral priors for reinforcement learning. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=Ysuv-WOFeKR.

A. Ajay, A. Kumar, P. Agrawal, S. Levine, and O. Nachum. OPAL: Offline primitive discovery for accelerating offline reinforcement learning. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=V69LGwJ0lIN.

T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International Conference on Machine Learning, 2018.

N. Siegel, J. T. Springenberg, F. Berkenkamp, A. Abdolmaleki, M. Neunert, T. Lampe, R. Hafner, and M. A. Riedmiller. Keep doing what worked: Behavioral modelling priors for offline reinforcement learning. ArXiv, abs/2002.08396, 2020.

X. B. Peng, A. Kumar, G. Zhang, and S. Levine. Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. CoRR, abs/1910.00177, 2019. URL http://arxiv.org/abs/1910.00177.

Y. Wu, G. Tucker, and O. Nachum. Behavior regularized offline reinforcement learning. CoRR, abs/1911.11361, 2019. URL http://arxiv.org/abs/1911.11361.

J. Leike, D. Krueger, T. Everitt, M. Martic, V. Maini, and S. Legg. Scalable agent alignment via reward modeling: a research direction. arXiv preprint arXiv:1811.07871, 2018.

R. A. Bradley and M. E. Terry. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324-345, 1952.

J. Fu, A. Kumar, O. Nachum, G. Tucker, and S. Levine. D4rl: Datasets for deep data-driven reinforcement learning. arXiv preprint arXiv:2004.07219, 2020.

D. Arumugam, J. K. Lee, S. Saskin, and M. L. Littman. Deep reinforcement learning from policy-dependent human feedback. arXiv preprint arXiv:1902.04257, 2019.

W. B. Knox and P. Stone. Interactively shaping agents via human reinforcement: The tamer framework. In International Conference on Knowledge Capture, 2009.

J. MacGlashan, M. K. Ho, R. Loftin, B. Peng, D. Roberts, M. E. Taylor, and M. L. Littman. Interactive learning from policy-dependent human feedback. In International Conference on Machine Learning, 2017.

G. Warnell, N. Waytowich, V. Lawhern, and P. Stone. Deep tamer: Interactive agent shaping in high-dimensional state spaces. In Conference on Artificial Intelligence, 2018.

P. M. Pilarski, M. R. Dawson, T. Degris, F. Fahimi, J. P. Carey, and R. S. Sutton. Online human training of a myoelectric prosthesis controller via actor-critic reinforcement learning. In International Conference on Rehabilitation Robotics, 2011.

L. Pinto and A. Gupta. Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. In International Conference on Robotics and Automation, 2016.

S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International Journal of Robotics Research, 37(4-5):421-436, 2018.

J. Fu, A. Singh, D. Ghosh, L. Yang, and S. Levine. Variational inverse control with events: A general framework for data-driven reward definition. In Advances in Neural Information Processing Systems, 2018.

A. Xie, A. Singh, S. Levine, and C. Finn. Few-shot goal inference for visuomotor learning and planning. In Conference on Robot Learning, 2018.

Z. Cao, K. Wong, and C.-T. Lin. Human preference scaling with demonstrations for deep reinforcement learning. arXiv preprint arXiv:2007.12904, 2020.

A. Nair, A. Gupta, M. Dalal, and S. Levine. Awac: Accelerating online reinforcement learning with offline datasets, 2020.

T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. In International Conference on Learning Representations, 2016.

B. D. Ziebart. Modeling purposeful adaptive behavior with the principle of maximum causal entropy. 2010.
Loop corrections for approximate inference

Joris Mooij ([email protected]) and Bert Kappen ([email protected])
Department of Biophysics, Radboud University, 6525 EZ Nijmegen, The Netherlands

arXiv:cs/0612030, 5 Dec 2006

Keywords: Loop Corrections, Approximate Inference, Graphical Models, Factor Graphs
We propose a method for improving approximate inference methods that corrects for the influence of loops in the graphical model. The method is applicable to arbitrary factor graphs, provided that the size of the Markov blankets is not too large. It is an alternative implementation of an idea introduced recently by Montanari and Rizzo (2005). In its simplest form, which amounts to the assumption that no loops are present, the method reduces to the minimal Cluster Variation Method approximation (which uses maximal factors as outer clusters). On the other hand, using estimates of the effect of loops (obtained by some approximate inference algorithm) and applying the Loop Correcting (LC) method usually gives significantly better results than applying the approximate inference algorithm directly without loop corrections. Indeed, we often observe that the loop corrected error is approximately the square of the error of the approximate inference method used to estimate the effect of loops. We compare different variants of the Loop Correcting method with other approximate inference methods on a variety of graphical models, including "real world" networks, and conclude that the LC approach generally obtains the most accurate results.
Introduction
In recent years, much research has been done in the field of approximate inference on graphical models. One of the goals is to obtain accurate approximations of marginal probabilities of complex probability distributions defined over many variables, using limited computation time and memory. This research has led to a large number of approximate inference methods. Apart from sampling ("Monte Carlo") methods, the most well-known methods and algorithms are variational approximations such as Mean Field (MF), which originates in statistical physics (Parisi, 1988); Belief Propagation (BP), also known as the Sum-Product Algorithm and as Loopy Belief Propagation (Pearl, 1988;Kschischang et al., 2001), which is directly related to the Bethe approximation used in statistical physics (Bethe, 1935;Yedidia et al., 2005); the Cluster Variation Method (CVM) (Pelizzola, 2005) and other region-based approximation methods (Yedidia et al., 2005), which are related to the Kikuchi approximation (Kikuchi, 1951), a generalization of the Bethe approximation using larger clusters; Expectation Propagation (EP) (Minka, 2001), which includes TreeEP (Minka and Qi, 2004) as a special case. To calculate the results of CVM and other region based approximation methods, one can use the Generalized Belief Propagation (GBP) algorithm (Yedidia et al., 2005) or double-loop algorithms that have guaranteed convergence (Yuille, 2002;Heskes et al., 2003).
It is well-known that Belief Propagation yields exact results if the graphical model is a tree, or, more generally, if each connected component is a tree. If the graphical model does contain loops, BP can still yield surprisingly accurate results using little computation time. However, if the influence of loops is large, the approximate marginals calculated by BP can have large errors and the quality of the BP results may not be satisfactory. One way to correct for the influence of short loops is to increase the cluster size of the approximation, using CVM (GBP) with clusters that subsume as many loops as possible. However, choosing a good set of clusters is highly nontrivial (Welling et al., 2005), and in general this method will only work if the clusters do not have many intersections, or in other words, if the loops do not have many intersections. Another method that corrects for loops to a certain extent is TreeEP, which does exact inference on the base tree, a subgraph of the graphical model which has no loops, and approximates the other interactions. This corrects for the loops that consist of part of the base tree and exactly one additional factor and yields good results if the graphical model is dominated by the base tree, which is the case in very sparse models. However, loops that consist of two or more interactions that are not part of the base tree are approximated in a similar way as in BP. Hence, for denser models, the improvement of TreeEP over BP usually diminishes.
In this article we propose a method that takes into account all the loops in the graphical model in an approximate way and therefore obtains more accurate results in many cases. Our method is a variation on the theme introduced by Montanari and Rizzo (2005). The basic idea is to first estimate the cavity distributions of all variables and subsequently improve these estimates by cancelling out errors using certain consistency constraints. A cavity distribution of some variable is the probability distribution on its Markov blanket (all its neighbouring variables) of a modified graphical model, in which all factors involving that variable have been removed. The removal of the factors breaks all the loops in which that variable takes part. This allows an approximate inference algorithm to estimate the strength of these loops in terms of effective interactions or correlations between the variables of the Markov blanket. Then, the influence of the removed factors is taken into account, which yields accurate approximations to the probability distributions of the original graphical model. Even more accuracy is obtained by imposing certain consistency relations between the cavity distributions, which results in a cancellation of errors to some extent. This error cancellation is done by a message passing algorithm which can be interpreted as a generalization of BP in the pairwise case and of the minimal CVM approximation in general. Indeed, the assumption that no loops are present, or equivalently, that the cavity distributions factorize, yields the BP / minimal CVM results. On the other hand, using better estimates of the effective interactions in the cavity distributions yields accurate loop corrected results.
Although the basic idea underlying our method is very similar to that described in (Montanari and Rizzo, 2005), the alternative implementation that we propose here offers two advantages. Most importantly, it is directly applicable to arbitrary factor graphs, whereas the original method has only been formulated for the rather special case of graphical models with binary variables and pairwise factors, which excludes e.g. many interesting Bayesian networks. Furthermore, our implementation appears to be more robust and also gives improved results for relatively strong interactions, as will be shown numerically.
This article is organised as follows. First we explain the theory behind our proposed method and discuss the differences with the original method by Montanari and Rizzo (2005). Then we report extensive numerical experiments regarding the quality of the approximation and the computation time, where we compare with other approximate inference methods. Finally, we discuss the results and state conclusions.
Theory
In this work, we consider graphical models such as Markov random fields and Bayesian networks. We use the general factor graph representation since it allows for formulating approximate inference algorithms in a unified way (Kschischang et al., 2001). In the next subsection, we introduce our notation and basic definitions.
Graphical models and factor graphs
Consider N discrete random variables {x_i}_{i∈V} with V := {1, . . . , N}. Each variable x_i takes values in a discrete domain X_i. We will use the following multi-index notation: for any subset I ⊆ V, we write x_I := (x_{i_1}, x_{i_2}, . . . , x_{i_m}) if I = {i_1, i_2, . . . , i_m} and i_1 < i_2 < · · · < i_m. We consider a probability distribution over x = (x_1, . . . , x_N) that can be written as a product of factors ψ_I:
P(x) = \frac{1}{Z} \prod_{I \in F} \psi_I(x_I), \qquad Z = \sum_x \prod_{I \in F} \psi_I(x_I).    (1)
The factors (which we will also call "interactions") are indexed by (small) subsets of V, i.e. F ⊆ P(V) := {I : I ⊆ V}. Each factor is a non-negative function ψ_I : ∏_{i∈I} X_i → [0, ∞). For a Bayesian network, the factors are conditional probability tables. In case of Markov random fields, the factors are often called potentials (not to be confused with statistical physics terminology, where "potential" refers to minus the logarithm of the factor instead). Henceforth, we will refer to a triple (V, F, {ψ_I}_{I∈F}) that satisfies the description above as a discrete graphical model (or network). In general, the normalizing constant Z is not known and exact computation of Z is infeasible, due to the fact that the number of terms to be summed is exponential in N. Similarly, computing marginal distributions P(x_J) of P for subsets of variables J ⊆ V is intractable in general. In this article, we focus on the task of accurately approximating single node marginals P(x_i) = \sum_{x_{V \setminus i}} P(x).
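For intuition, exact inference on (1) by direct enumeration can be sketched in a few lines; this is of course only feasible for tiny models, and all helper names below are ours, not from the paper:

```python
import itertools

def brute_force_marginals(n_vars, domains, factors):
    """Exact Z and single-node marginals P(x_i) by enumerating all joint states."""
    Z = 0.0
    marg = [{v: 0.0 for v in domains[i]} for i in range(n_vars)]
    for x in itertools.product(*domains):
        p = 1.0
        for scope, table in factors:          # factor = (variable tuple, table)
            p *= table[tuple(x[i] for i in scope)]
        Z += p
        for i in range(n_vars):
            marg[i][x[i]] += p
    for m in marg:                            # normalize the accumulated sums
        for v in m:
            m[v] /= Z
    return Z, marg

# A three-variable chain: pairwise factors psi_{01}, psi_{12} and a local factor on x_0.
domains = [(0, 1)] * 3
factors = [((0, 1), {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 2.0}),
           ((1, 2), {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 2.0}),
           ((0,),   {(0,): 3.0, (1,): 1.0})]
Z, marg = brute_force_marginals(3, domains, factors)
```

The summation has exponentially many terms (here only 2³ = 8), which is exactly why the approximate methods discussed below are needed.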
We can represent the structure of the probability distribution (1) using a factor graph. This is a bipartite graph, consisting of variable nodes i ∈ V and factor nodes I ∈ F, with an edge between i and I if and only if i ∈ I, i.e. if x_i participates in the factor ψ_I. We will represent factor nodes visually as rectangles and variable nodes as circles. See Figure 1(a) for an example of a factor graph. We denote the neighbouring nodes of a variable node i by N_i := {I ∈ F : i ∈ I} and the neighbouring nodes of a factor node I simply by I = {i ∈ V : i ∈ I}. Further, we define for each variable i ∈ V the set ∆i := ⋃N_i consisting of all variables that appear in some factor in which variable i participates, and the set ∂i := ∆i \ {i}, the Markov blanket of i.

Figure 1: (a) Example factor graph, corresponding to the probability distribution P(x) = (1/Z) ψ_L(x_j, x_n, x_o) ψ_I(x_i, x_j) ψ_M(x_j, x_k) ψ_N(x_m) ψ_K(x_i, x_m, x_n) ψ_J(x_i, x_k, x_l) ψ_O(x_l, x_m); (b) Factor graph corresponding to the cavity network of variable i, obtained by removing variable i and the factor nodes that contain i (i.e. I, J and K). The Markov blanket of i is ∂i = {j, k, l, m, n}. The cavity distribution Z^{\i}(x_{∂i}) is the (unnormalized) marginal on x_{∂i} of the probability distribution corresponding to the cavity graph (b).
In the following, we will often abbreviate the set theoretical notation X \ Y (i.e. all elements in X that are not in Y ) by \Y if it is obvious from the context what the set X is. Further, we will use lowercase for variable indices and uppercase for factor indices. For convenience, we will define for any subset I ⊂ F the product of the corresponding factors:
\Psi_{\mathcal{I}}(x_{S_{\mathcal{I}}}) := \prod_{I \in \mathcal{I}} \psi_I(x_I).
Cavity networks and loop corrections
The notion of a cavity stems from statistical physics, where it was used originally to calculate properties of random ensembles of certain graphical models (Mézard et al., 1987). A cavity is obtained by removing one variable from the graphical model, together with all the factors in which that variable participates. In our context, we define cavity networks as follows (see also Figure 1):
Definition 2.1 Given a graphical model (V, F, {ψ I } I∈F ) and a variable i ∈ V, the cavity network of variable i is the graphical model
(V \ i,\ F \ N_i,\ \{ψ_I\}_{I ∈ F \ N_i}).
The probability distribution corresponding to the cavity network of variable i is thus proportional to:
\Psi_{\setminus N_i}(x_{\setminus i}) = \prod_{\substack{I \in F \\ i \notin I}} \psi_I(x_I).
Summing out all the variables, except for the neighbours ∂i of i, gives what we will call the cavity distribution:
Definition 2.2 Given a graphical model (V, F, {ψ_I}_{I∈F}) and a variable i ∈ V, the cavity distribution of i is

Z^{\setminus i}(x_{\partial i}) := \sum_{x_{\setminus \Delta i}} \Psi_{\setminus N_i}(x_{\setminus i}).    (2)
Thus the cavity distribution of i is proportional to the marginal of the cavity network of i on the Markov blanket ∂i. The cavity distribution describes the effective interactions (or correlations) induced by the cavity network on the neighbours ∂i of variable i. Indeed, from equations (1) and (2) and the trivial observation that Ψ F = Ψ N i Ψ \N i we conclude:
P(x_{\Delta i}) \propto Z^{\setminus i}(x_{\partial i})\, \Psi_{N_i}(x_{\Delta i}).    (3)
Thus given the cavity distribution Z^{\i}(x_{∂i}), one can calculate the marginal distribution of the original graphical model P on x_{∆i}, provided that the cardinality of X_{∆i} is not too large. In practice, exact cavity distributions are not known, and the only way to proceed is to use approximate cavity distributions. Given some approximate inference method (e.g. BP), there are two ways to calculate P(x_{∆i}): either use the method to approximate P(x_{∆i}) directly, or use the method to approximate Z^{\i}(x_{∂i}) and use relation (3) to obtain an approximation to P(x_{∆i}). The latter method generally gives more accurate results, since the complexity of the cavity network is less than that of the original network. In particular, the cavity network of variable i contains no loops involving that variable, since all factors in which i participates have been removed (e.g. the loop i − J − l − O − m − K − i in the original network, Figure 1(a), is not present in the cavity network, Figure 1(b)). Thus the latter method of calculating P(x_{∆i}) takes into account loops involving variable i, although in an approximate way. It does not, however, take into account the other loops in the original graphical model. The basic idea of the loop correction approach of Montanari and Rizzo (2005) is to use the latter method for all variables in the network, but to adjust the approximate cavity distributions in order to cancel out approximation errors before (3) is used to obtain the final approximate marginals. This approach takes into account all the loops in the original network, in an approximate way.
This basic idea can be implemented in several ways. Here we propose an implementation which we will show to have certain advantages over the original implementation proposed in (Montanari and Rizzo, 2005). In particular, it is directly applicable to arbitrary factor graphs with variables taking an arbitrary (discrete) number of values and factors that may contain zeroes and consist of an arbitrary number of variables. In the remaining subsections, we will first discuss our proposed implementation in detail. In section 2.6 we will discuss differences with the original approach.
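Before turning to the implementation, Definition 2.2 and relation (3) can be checked by brute-force enumeration on a small model. The following sketch (model and helper names are ours) verifies that the unnormalized P(x_{∆i}) factors exactly as Z^{\i}(x_{∂i}) Ψ_{N_i}(x_{∆i}):

```python
import itertools

# A small binary pairwise model: a loop 0-1-2 plus a dangling edge 2-3.
def fac(a, b):                      # simple attractive pairwise factor
    return 2.0 if a == b else 1.0

factors = [((0, 1), fac), ((0, 2), fac), ((1, 2), fac), ((2, 3), fac)]
i = 0
Ni = [f for f in factors if i in f[0]]                # factors containing i
cavity_factors = [f for f in factors if i not in f[0]]
boundary = (1, 2)                                     # Markov blanket of i

def weight(x, facs):
    """Product of the given factors at joint state x (a dict var -> value)."""
    p = 1.0
    for scope, f in facs:
        p *= f(*(x[v] for v in scope))
    return p

# Cavity distribution Z^{\i}(x_boundary): sum the cavity network over the rest (x_3).
Zcav = {}
for xb in itertools.product((0, 1), repeat=len(boundary)):
    Zcav[xb] = sum(weight({**dict(zip(boundary, xb)), 3: x3}, cavity_factors)
                   for x3 in (0, 1))

# Unnormalized joint marginal P(x_{0,1,2}) by full enumeration.
P = {}
for xd in itertools.product((0, 1), repeat=3):
    P[xd] = sum(weight({0: xd[0], 1: xd[1], 2: xd[2], 3: x3}, factors)
                for x3 in (0, 1))

# Relation (3): here both sides are unnormalized in the same way, so they agree exactly.
rhs = {xd: Zcav[(xd[1], xd[2])] * weight({0: xd[0], 1: xd[1], 2: xd[2]}, Ni)
       for xd in P}
```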
Combining approximate cavity distributions to cancel out errors
Suppose that we have obtained an initial approximation ζ^{\i}_0(x_{∂i}) of the (exact) cavity distribution Z^{\i}(x_{∂i}), for each i ∈ V. Let i ∈ V and consider the approximation error of the cavity distribution of i, i.e. the exact cavity distribution of i divided by its approximation:

\frac{Z^{\setminus i}(x_{\partial i})}{\zeta^{\setminus i}_0(x_{\partial i})}.
In general, this is an arbitrary function of the variables x ∂i . However, for our purposes, we can approximate the error as a product of factors defined on small subsets of ∂i in the following way:
\frac{Z^{\setminus i}(x_{\partial i})}{\zeta^{\setminus i}_0(x_{\partial i})} \approx \prod_{I \in N_i} \phi^{\setminus i}_I(x_{I \setminus i}).
Thus we assume that the approximation error lies near a submanifold parameterized by the error factors {φ \i I (x I\i )} I∈N i . If we were able to calculate these error factors, we could improve our initial approximation ζ \i 0 (x ∂i ) by replacing it with the product
\zeta^{\setminus i}(x_{\partial i}) := \zeta^{\setminus i}_0(x_{\partial i}) \prod_{I \in N_i} \phi^{\setminus i}_I(x_{I \setminus i}) \approx Z^{\setminus i}(x_{\partial i}).    (4)
Using (3), this would then yield an improved approximation of P (x ∆i ). It turns out that the error factors can indeed be calculated by exploiting the redundancy of the information in the initial cavity approximations {ζ \i 0 } i∈V . The fact that all ζ \i provide approximations to marginals of the same probability distribution P (x) via (3) can be used to obtain consistency constraints. The number of constraints obtained in this way is enough to solve for the unknown error factors {φ \i I (x I\i )} i∈V,I∈N i . Here we propose the following consistency constraints. Let Y ∈ F, i ∈ Y and j ∈ Y with i = j (see also Figure 2). Consider the graphical model (V, F \ Y, {ψ I } I∈F\Y ) that is obtained from the original graphical model by removing factor ψ Y . The product of all factors (except ψ Y ) obviously satisfies:
\Psi_{\setminus Y} = \Psi_{N_i \setminus Y}\, \Psi_{\setminus N_i} = \Psi_{N_j \setminus Y}\, \Psi_{\setminus N_j}.
Using (2) and summing over all x k for k ∈ Y \ i, we obtain the following equation, which holds for the exact cavity distributions Z \i and Z \j :
\sum_{x_i} \sum_{x_{\Delta i \setminus Y}} \Psi_{N_i \setminus Y}\, Z^{\setminus i} = \sum_{x_i} \sum_{x_{\Delta j \setminus Y}} \Psi_{N_j \setminus Y}\, Z^{\setminus j}.
Substituting our basic assumption (4) on both sides and pulling the factor φ^{\i}_Y(x_{Y\i}) in the l.h.s. through the summation, we obtain:

\phi^{\setminus i}_Y \sum_{x_i} \sum_{x_{\Delta i \setminus Y}} \Psi_{N_i \setminus Y}\, \zeta^{\setminus i}_0 \prod_{I \in N_i \setminus Y} \phi^{\setminus i}_I = \sum_{x_i} \sum_{x_{\Delta j \setminus Y}} \Psi_{N_j \setminus Y}\, \zeta^{\setminus j}_0 \prod_{J \in N_j} \phi^{\setminus j}_J    (5)

Figure 2: Part of the factor graph, illustrating the derivation of (6). The two grey variable nodes correspond to Y \ i = {j, k}.
This should hold for each j ∈ Y \ i. We can thus take the geometrical average of the r.h.s. over all j ∈ Y \ i. After rearranging, this yields:
\phi^{\setminus i}_Y = \frac{\prod_{j \in Y \setminus i} \Bigl( \sum_{x_i} \sum_{x_{\Delta j \setminus Y}} \Psi_{N_j \setminus Y}\, \zeta^{\setminus j}_0 \prod_{J \in N_j} \phi^{\setminus j}_J \Bigr)^{1/|Y \setminus i|}}{\sum_{x_i} \sum_{x_{\Delta i \setminus Y}} \Psi_{N_i \setminus Y}\, \zeta^{\setminus i}_0 \prod_{I \in N_i \setminus Y} \phi^{\setminus i}_I} \qquad \text{for all } i \in V,\ Y \in N_i.    (6)
Note that the numerator is an approximation of the joint marginal P \Y (x Y \i ) of the modified graphical model (V, F \ Y, {ψ I } I∈F\Y ) on the variables Y \ i.
Solving the consistency equations (6) simultaneously for the error factors {φ \i I } i∈V,I∈N i can be done using a simple fixed point iteration algorithm, e.g. Algorithm 1. The input consists of the initial approximations {ζ \i 0 } i∈V to the cavity distributions. It calculates the error factors that satisfy (6) by fixed point iteration and from the fixed point, it calculates improved approximations of the cavity distributions {ζ \i } i∈V using relation (4). 1 From the improved cavity distributions, we can calculate the loop corrected approximations to the single variable marginals of the original probability distribution (1) as follows:
b_i(x_i) \propto \sum_{x_{\partial i}} \Psi_{N_i}(x_{\Delta i})\, \zeta^{\setminus i}(x_{\partial i})    (7)
where the factor ψ Y is now included. Algorithm 1 uses a sequential update scheme, but other update schemes are possible (e.g. random sequential or parallel). In practice, the fixed sequential update scheme often converges without the need for damping.
Algorithm 1:
1: repeat
2:   for all i ∈ V do
3:     for all Y ∈ N_i do
4:       φ^{\i}_Y(x_{Y\i}) ← ∏_{j∈Y\i} ( Σ_{x_i} Σ_{x_{∆j\Y}} Ψ_{N_j\Y} ζ^{\j}_0 ∏_{J∈N_j} φ^{\j}_J )^{1/|Y\i|} / ( Σ_{x_i} Σ_{x_{∆i\Y}} Ψ_{N_i\Y} ζ^{\i}_0 ∏_{I∈N_i\Y} φ^{\i}_I )
5:     end for
6:   end for
7: until convergence
8: for all i ∈ V do
9:   ζ^{\i}(x_{∂i}) ← ζ^{\i}_0(x_{∂i}) ∏_{I∈N_i} φ^{\i}_I(x_{I\i})
10: end for
Alternatively, one can formulate Algorithm 1 in terms of the "beliefs"
Q_i(x_{\Delta i}) \propto \Psi_{N_i}(x_{\Delta i})\, \zeta^{\setminus i}_0(x_{\partial i}) \prod_{I \in N_i} \phi^{\setminus i}_I(x_{I \setminus i}) = \Psi_{N_i}(x_{\Delta i})\, \zeta^{\setminus i}(x_{\partial i}).    (8)
As one easily verifies, the following update equation
Q_i \leftarrow Q_i\, \frac{\prod_{j \in Y \setminus i} \Bigl( \sum_{x_{\Delta j \setminus (Y \setminus i)}} Q_j\, \psi_Y^{-1} \Bigr)^{1/|Y \setminus i|}}{\sum_{x_{\Delta i \setminus (Y \setminus i)}} Q_i\, \psi_Y^{-1}}
is equivalent to line 4 of Algorithm 1. Intuitively, the update improves the approximate distribution Q_i on ∆i by replacing its marginal on Y \ i (in the absence of Y) by a more accurate approximation of this marginal, namely the numerator. Written in this form, the algorithm is reminiscent of Iterative Proportional Fitting (IPF). However, contrary to IPF, the desired marginals are also updated each iteration. Note that after convergence, the large beliefs Q_i(x_{∆i}) need not be consistent, i.e. in general \sum_{x_{\Delta i \setminus J}} Q_i \neq \sum_{x_{\Delta j \setminus J}} Q_j for i, j ∈ V, J ⊆ ∆i ∩ ∆j.
A special case: factorized cavity distributions
In the previous subsection we have discussed how to improve approximations of cavity distributions. We now discuss what happens when we use the simplest possible initial approximations {ζ \i 0 } i∈V , namely constant functions, in Algorithm 1. This amounts to the assumption that no loops are present. We will show that if the factor graph does not contain short loops consisting of four nodes, fixed points of the standard BP algorithm are also fixed points of Algorithm 1. In this sense, Algorithm 1 can be considered to be a generalization of the BP algorithm. In fact, this holds even if the initial approximations factorize in a certain way, as we will show below.
If all factors involve at most two variables, one can easily arrange for the factor graph to have no loops of four nodes. See figure 1(a) for an example of a factor graph which has no loops of four nodes. The factor graph depicted in Figure 2 does have a loop of four nodes:
k − Y − j − J 2 − k.
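Such four-node loops can be detected directly: in the bipartite factor graph a loop of exactly four nodes has the form variable-factor-variable-factor, so it exists precisely when two distinct factors share at least two variables. A small sketch (function names are ours):

```python
from itertools import combinations

def has_four_node_loop(factor_scopes):
    """True iff two distinct factors share at least two variables."""
    return any(len(set(a) & set(b)) >= 2
               for a, b in combinations(factor_scopes, 2))

# The situation of Figure 2: Y = {i, j, k} and J_2 = {j, k} share j and k,
# giving the four-node loop k - Y - j - J_2 - k.
figure2_like = [("i", "j", "k"), ("j", "k")]
# A triangle of pairwise factors has only a six-node loop, no four-node loop.
triangle = [("i", "j"), ("j", "k"), ("k", "i")]
```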
Theorem 2.1 If the factor graph corresponding to (1) has no loops of exactly four nodes, and all initial approximate cavity distributions factorize in the following way:
\zeta^{\setminus i}_0(x_{\partial i}) = \prod_{I \in N_i} \xi^{\setminus i}_I(x_{I \setminus i}) \qquad \forall i \in V,    (9)
then fixed points of the BP algorithm can be mapped to fixed points of Algorithm 1. Furthermore, the corresponding variable and factor marginals obtained from (8) are identical to the BP beliefs.
Proof Note that replacing the initial cavity approximations by
\zeta^{\setminus i}_0(x_{\partial i}) \to \zeta^{\setminus i}_0(x_{\partial i}) \prod_{I \in N_i} \epsilon^{\setminus i}_I(x_{I \setminus i})    (10)
for arbitrary positive functions ǫ \i I (x I\i ) does not change the beliefs (8) corresponding to the fixed points of (6). Thus, without loss of generality, we can assume ζ \i 0 (x ∂i ) = 1 for all i ∈ V. The BP update equations are:
\mu_{j \to I}(x_j) \propto \prod_{J \in N_j \setminus I} \mu_{J \to j}(x_j) \qquad j \in V,\ I \in N_j
\mu_{I \to i}(x_i) \propto \sum_{x_{I \setminus i}} \psi_I(x_I) \prod_{j \in I \setminus i} \mu_{j \to I}(x_j) \qquad I \in F,\ i \in I    (11)
in terms of messages {µ J→j (x j )} j∈V,J∈N j and {µ j→J (x j )} j∈V,J∈N j . Assume that the messages µ are a fixed point of (11) and take the Ansatz
\phi^{\setminus i}_I(x_{I \setminus i}) = \prod_{k \in I \setminus i} \mu_{k \to I}(x_k) \quad \text{for } i \in V,\ I \in N_i.

Then, for i ∈ V, Y ∈ N_i, j ∈ Y \ i,
we can write out part of the numerator of (6) as follows:
\sum_{x_i} \sum_{x_{\Delta j \setminus Y}} \Psi_{N_j \setminus Y}\, \zeta^{\setminus j}_0 \prod_{J \in N_j} \phi^{\setminus j}_J = \sum_{x_i} \sum_{x_{\Delta j \setminus Y}} \phi^{\setminus j}_Y \prod_{J \in N_j \setminus Y} \psi_J\, \phi^{\setminus j}_J
= \sum_{x_i} \prod_{k \in Y \setminus j} \mu_{k \to Y} \prod_{J \in N_j \setminus Y} \sum_{x_{J \setminus j}} \psi_J \prod_{k \in J \setminus j} \mu_{k \to J}
= \sum_{x_i} \prod_{k \in Y \setminus j} \mu_{k \to Y}\, \mu_{j \to Y} = \sum_{x_i} \prod_{k \in Y} \mu_{k \to Y} \propto \prod_{k \in Y \setminus i} \mu_{k \to Y} = \phi^{\setminus i}_Y,
where we used the BP update equations (11) and rearranged the summations and products using the assumption that the factor graph has no loops of four nodes. Thus, the numerator of the r.h.s. of (6) is simply φ \i Y . Using a similar calculation, one can derive that the denominator of the r.h.s. of (6) is constant, and hence equation (6) is valid (up to an irrelevant constant).
For Y ∈ F, i ∈ Y , the marginal on x Y of the belief (8) can be written in a similar way:
\sum_{x_{\Delta i \setminus Y}} Q_i \propto \sum_{x_{\Delta i \setminus Y}} \Psi_{N_i} \prod_{I \in N_i} \phi^{\setminus i}_I = \sum_{x_{\Delta i \setminus Y}} \prod_{I \in N_i} \psi_I \prod_{k \in I \setminus i} \mu_{k \to I}
= \psi_Y \prod_{k \in Y \setminus i} \mu_{k \to Y} \prod_{I \in N_i \setminus Y} \sum_{x_{I \setminus i}} \psi_I \prod_{k \in I \setminus i} \mu_{k \to I}
= \psi_Y \prod_{k \in Y \setminus i} \mu_{k \to Y} \prod_{I \in N_i \setminus Y} \mu_{I \to i}
= \psi_Y \prod_{k \in Y \setminus i} \mu_{k \to Y}\, \mu_{i \to Y} = \psi_Y \prod_{k \in Y} \mu_{k \to Y},

which is proportional to the BP belief b_Y(x_Y) on x_Y. Hence, also the single variable marginal b_i defined in (7) corresponds to the BP single variable belief, since both are marginalizations of b_Y for Y ∈ N_i.
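As a concrete check of the update equations (11), a minimal BP implementation can be run on a small tree-structured factor graph, where the resulting beliefs are provably exact. This is a sketch with our own naming conventions, not the implementation used in the experiments below:

```python
import itertools

def bp_marginals(n_vars, factors, iters=20):
    """Run the BP updates (11) for binary variables; factors are (scope, table)."""
    m_f2v = {(I, v): [1.0, 1.0] for I, (scope, _) in enumerate(factors) for v in scope}
    m_v2f = {(v, I): [1.0, 1.0] for I, (scope, _) in enumerate(factors) for v in scope}
    for _ in range(iters):
        for (v, I) in m_v2f:            # variable -> factor updates
            msg = [1.0, 1.0]
            for (J, w), m in m_f2v.items():
                if w == v and J != I:
                    msg = [a * b for a, b in zip(msg, m)]
            s = sum(msg)
            m_v2f[(v, I)] = [a / s for a in msg]
        for (I, v) in m_f2v:            # factor -> variable updates
            scope, table = factors[I]
            pos = scope.index(v)
            msg = [0.0, 0.0]
            for x in itertools.product((0, 1), repeat=len(scope)):
                p = table[x]
                for j, w in enumerate(scope):
                    if w != v:
                        p *= m_v2f[(w, I)][x[j]]
                msg[x[pos]] += p
            s = sum(msg)
            m_f2v[(I, v)] = [a / s for a in msg]
    beliefs = []
    for v in range(n_vars):             # b(x_v) is the product of incoming messages
        b = [1.0, 1.0]
        for (I, w), m in m_f2v.items():
            if w == v:
                b = [a * c for a, c in zip(b, m)]
        s = sum(b)
        beliefs.append([a / s for a in b])
    return beliefs

# Chain x0 - x1 - x2 plus a local factor on x0 (a tree, so BP is exact here).
factors = [((0, 1), {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 2.0}),
           ((1, 2), {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 2.0}),
           ((0,),   {(0,): 3.0, (1,): 1.0})]
beliefs = bp_marginals(3, factors)
```

On this tree the beliefs agree with brute-force enumeration (e.g. b_0(0) = 27/36 = 0.75).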
If the factor graph does contain loops of four nodes, we find empirically that the fixed point of Algorithm 1, when using factorized initial cavity approximations as in (9), corresponds to the "minimal" CVM approximation, i.e. the CVM approximation that uses all (maximal) factors as outer clusters (Kikuchi, 1951;Pelizzola, 2005). 2 In that case, the factor beliefs found by Algorithm 1 are consistent, i.e. x ∆i\Y Q i = x ∆j\Y Q j for i, j ∈ Y , and are identical to the minimal CVM factor beliefs. Thus it appears that Algorithm 1 can be considered as a generalization of the minimal CVM approximation (which can e.g. be calculated using the GBP algorithm (Yedidia et al., 2005) or a double-loop implementation (Heskes et al., 2003)).
We have not yet been able to prove this, so currently this claim stands as a conjecture, which we have empirically verified to be true for all the graphical models used for numerical experiments in section 3. Note that in case the factor graph has no loops of length four, the minimal CVM approximation reduces to the Bethe approximation, which yields a proof for this case. The proof in the general case is expected to be more involved, since one needs to keep track of various overlapping sets and it requires a translation of the structure of (6) (where the basic sets of variables involved are of three types, namely i, Y \ i, and ∆i) and the GBP equations or Lagrange multiplier equations corresponding to the minimal CVM approximation (where the basic variable sets are those that can be written as an intersection of a finite number of factors).
2. Provided that the factor graph is connected.
Obtaining initial approximate cavity distributions
There is no principled way to obtain the initial cavity approximations ζ \i 0 (x ∂i ). In the previous subsection, we saw that factorized cavity approximations result in the minimal CVM approximation, which does not yet take into account the effect of loops in the cavity network. More sophisticated approximations that do take into account the effect of loops can significantly enhance the accuracy of the final result. In principle, there are many possibilities to obtain the initial cavity approximations. Here, we will describe one method, which uses BP on clamped cavity networks. This method captures all interactions in the cavity distribution of i in an approximate way and can lead to very accurate results. Instead of BP, any other approximate inference method that gives an approximation of the normalizing constant Z in (1) can be used, such as Mean Field, TreeEP (Minka and Qi, 2004), a double-loop version of BP (Heskes et al., 2003) which has guaranteed convergence towards a minimum of the Bethe free energy, or some variant of GBP (Yedidia et al., 2005). One could also choose the method for each cavity separately, trading accuracy versus computation time. We focus here on BP because it is a very fast and often relatively accurate algorithm.
Let i ∈ V and consider the cavity network of i. For each possible state of x ∂i , run BP on the cavity network clamped to that state x ∂i and calculate the corresponding Bethe free energy F \i Bethe (x ∂i ) (Yedidia et al., 2005). Take as initial approximate cavity distribution:
\zeta^{\setminus i}_0(x_{\partial i}) \propto e^{-F^{\setminus i}_{\mathrm{Bethe}}(x_{\partial i})}.
This procedure is exponential in the size of ∂i: it uses \prod_{j \in \partial i} |X_j| BP runs. However, many networks encountered in applications are relatively sparse and have limited cavity size and the computational cost may be acceptable.
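The clamping loop can be sketched on a tiny model. Here exact enumeration stands in for the BP run on each clamped cavity network (so ζ^{\i}_0 comes out exact); the model and helper names are ours:

```python
import itertools

def pair(a, b):
    return 2.0 if a == b else 1.0

def weight(x, facs):
    """Product of the given factors at joint state x (a dict var -> value)."""
    p = 1.0
    for scope, f in facs:
        p *= f(*(x[v] for v in scope))
    return p

# Cavity network of variable 0: only the factors not containing variable 0.
cavity_factors = [((1, 2), pair), ((2, 3), pair)]
boundary, free = (1, 2), (3,)       # Markov blanket of 0, and the rest

zeta0 = {}
for xb in itertools.product((0, 1), repeat=len(boundary)):
    total = 0.0
    for xf in itertools.product((0, 1), repeat=len(free)):
        x = dict(zip(boundary, xb))
        x.update(zip(free, xf))
        total += weight(x, cavity_factors)
    zeta0[xb] = total               # one "clamped run" per state of x_boundary
```

The number of clamped runs is ∏_{j∈∂i} |X_j| (here 2 · 2 = 4), matching the cost estimate above.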
This particular way of obtaining initial cavity distributions has the following interesting property: in case the factor graph contains only a single loop, the final beliefs (8) resulting from Algorithm 1 are exact. This can be shown using an argument similar to that given in (Montanari and Rizzo, 2005). Suppose that the graphical model contains exactly one loop and let i ∈ V. Consider first the case that i is part of the loop; removing i will break the loop and the remaining cavity network will be singly connected. The cavity distribution approximated by BP will thus be exact. Now if i is not part of the loop, removing i will divide the network into several connected components, one for each neighbour of i. This implies that the cavity distribution calculated by BP contains no higher order interactions, i.e. ζ^{\i}_0 factorizes into single-variable factors; the remaining error then consists of single-variable interactions only, which are corrected by the error factors in Algorithm 1.

If all interactions are pairwise and each variable is binary and has exactly |∂i| = d neighbours, the time complexity of the resulting "Loop Corrected BP" (LCBP) algorithm is given by N 2^d T_BP + N d 2^{d+1} N_LC, where T_BP is the average time of one BP run on a clamped cavity network and N_LC is the number of iterations needed to obtain convergence in Algorithm 1.

Differences with Montanari and Rizzo (2005)

As mentioned before, the idea of estimating the cavity distributions and imposing certain consistency relations amongst them was first presented in Montanari and Rizzo (2005). In its simplest form (i.e. the so-called first order correction), the implementation of that basic idea as proposed by Montanari and Rizzo (2005) differs from our proposed implementation in the following aspects.
First, the original method described by Montanari and Rizzo (2005) is only formulated for the rather special case of binary variables and pairwise interactions. In contrast, our method is formulated in a general way that makes it applicable to factor graphs with variables having more than two possible values and factors consisting of more than two variables. Also, factors may contain zeroes. The generality that our implementation offers is important for many practical applications. 3 In the rest of this section, we will assume that the graphical model (1) belongs to the special class of binary variables with pairwise interactions, allowing further comparison of both implementations.
An important difference is that Montanari and Rizzo (2005) suggest to deform the initial approximate cavity distributions by altering certain cumulants (also called "connected correlations"), instead of altering certain interactions. In general, for a set A of ±1-valued random variables {x i } i∈A , one can define for any subset B ⊆ A the moment
M_B := \sum_{x_A} P(x_A) \prod_{j \in B} x_j.
The moments {M_B}_{B⊆A} are a parameterization of the probability distribution P(x_A). An alternative parameterization is given in terms of the cumulants. The (joint) cumulants {C_E}_{E⊆A} are certain polynomials of the moments, defined implicitly by the following relations:

M_B = \sum_{C \in \mathrm{Part}(B)} \prod_{E \in C} C_E

where Part(B) is the set of partitions of B. 4 In particular, C_i = M_i and C_ij = M_ij − M_i M_j for all i, j ∈ A with i ≠ j. Montanari and Rizzo (2005) propose to approximate the cavity distributions by estimating the pair cumulants and assuming higher order cumulants to be zero. Then, the singleton cumulants (i.e. the single node marginals) are altered, keeping higher order cumulants fixed, in such a way as to impose consistency of the single node marginals, in the absence of interactions shared by two neighbouring cavities. We refer the reader to the appendix for a more detailed description of the implementation in terms of cumulants suggested by Montanari and Rizzo (2005). A minor difference lies in the method to obtain initial approximations to the cavity distributions. Montanari and Rizzo (2005) propose to use BP in combination with linear response theory to obtain the initial pairwise cumulants. This difference is not very important, since one could also use BP on clamped cavity networks instead, which turns out to give almost identical results.
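These moment and cumulant definitions can be illustrated numerically for a pair of ±1 variables (function names are ours):

```python
from math import prod

def moment(P, B):
    """M_B = sum_x P(x) * prod_{j in B} x_j, with P a dict state-tuple -> prob."""
    return sum(p * prod(x[j] for j in B) for x, p in P.items())

def pair_cumulant(P, i, j):
    """C_ij = M_ij - M_i * M_j (the connected pair correlation)."""
    return moment(P, (i, j)) - moment(P, (i,)) * moment(P, (j,))

# A correlated pair with zero means: P(x1 = x2) = 0.75.
P = {(+1, +1): 0.375, (-1, -1): 0.375, (+1, -1): 0.125, (-1, +1): 0.125}
```

Here M_1 = M_2 = 0 while C_12 = M_12 = 0.5, so the pair cumulant captures exactly the correlation that a factorized cavity approximation would discard.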
As we will show in section 3, our method of altering interactions appears to be more robust and still works in regimes with strong interactions, whereas the cumulant implementation suffers from convergence problems for strong interactions.
An advantage of the cumulant based scheme is that it allows for a linearized version (by expanding up to first order in terms of the pairwise cumulants, see appendix) which is quadratic in the size of the cavity. This means that this linearized, cumulant based version is currently the only one that can be applied to networks with large Markov blankets (cavities), i.e. where the maximum number of states max i∈V |X ∆i | is large (provided that all variables are binary and interactions are pairwise).
Numerical experiments
We have performed various numerical experiments to compare the quality of the results and the computation time of the following approximate inference methods:
MF Mean Field, with a random sequential update scheme and no damping.
BP Belief Propagation. We have used the recently proposed update scheme of Elidan et al. (2006), which converges also for difficult problems without the need for damping.
TreeEP TreeEP (Minka and Qi, 2004), without damping. We generalized the method of choosing the base tree described in Minka and Qi (2004) to multiple variable factors as follows: when estimating the mutual information between x i and x j , we take the product of the marginals on {i, j} of all the factors that involve x i and/or x j . Other generalizations of TreeEP to higher order factors are possible (e.g. by clustering variables), but it is not clear how to do this in general in an optimal way.
LCBP ("Loop Corrected Belief Propagation") Algorithm 1, where the approximate cavities are initialized according to the description in section 2.5.
LCBP-Cum
The original cumulant based loop correction scheme by Montanari and Rizzo (2005), using response propagation (also known as linear response; see (Welling and Teh, 2004)) to approximate the initial pairwise cavity cumulants. The full update equations (18) are used and higher order cumulants are assumed to vanish.
LCBP-Cum-Lin Similar to LCBP-Cum, but instead of the full update equations (18), the linearized update equations (19) are used.

CVM-Min A double-loop implementation (Heskes et al., 2003) of the minimal CVM approximation, which uses (maximal) factors as outer clusters.
CVM-∆ A double-loop implementation of CVM using the sets {∆i} i∈V as outer clusters. These are the same sets of variables as used by LCBP (c.f. (8)) and therefore it is interesting to compare both algorithms.
CVM-Loopk A double-loop implementation of CVM, using as outer clusters all (maximal) factors together with all loops in the factor graph that consist of up to k different variables (for k = 3, 4, 5, 6, 8).
We have used a double-loop implementation of CVM instead of GBP because the former is guaranteed to converge to a local minimum of the Kikuchi free energy (Heskes et al., 2003) without damping, whereas the latter often only converges with strong damping. The difficulty with damping is that the optimal damping constant is not known a priori, which necessitates multiple trial runs with different damping constants, until a suitable one is found. Using too much damping slows down convergence, whereas a certain amount of damping is required to obtain convergence in the first place. Therefore, in general we expect that GBP is not much faster than a double-loop implementation because of the computational cost of finding the optimal damping constant.
To be able to assess the errors of the various approximate methods, we have only considered problems for which exact inference (using a standard JunctionTree method) was still feasible.
For each approximate inference method, we report the maximum ℓ ∞ error of the approximate single node marginals b i , calculated as follows:
\mathrm{Error} := \max_{i \in V}\, \max_{x_i \in X_i} \bigl| b_i(x_i) - P(x_i) \bigr|    (12)
where P (x i ) is the exact marginal calculated using the JunctionTree method.
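The error measure (12) is straightforward to compute; a sketch with made-up numbers for illustration:

```python
def max_error(approx_marginals, exact_marginals):
    """Maximum l-infinity distance between approximate and exact marginals."""
    return max(max(abs(b - p) for b, p in zip(bi, pi))
               for bi, pi in zip(approx_marginals, exact_marginals))

approx = [[0.74, 0.26], [0.60, 0.40]]   # b_i(x_i) for two binary variables
exact  = [[0.75, 0.25], [0.58, 0.42]]   # P(x_i), e.g. from a junction tree
err = max_error(approx, exact)
```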
The computation time was measured as CPU time in seconds on a 2.4 GHz AMD Opteron 64bits processor with 4 GB memory. The timings should be seen as indicative because we have not spent equal amounts of effort optimizing each method. 5 We consider an iterative method to be "converged" after T timesteps if for each variable i ∈ V, the ℓ ∞ distance between the approximate probability distributions of that variable at timestep T and T + 1 is less than ǫ = 10 −9 .
We have studied four different model classes: (i) random graphs of uniform degree with pairwise interactions and binary variables; (ii) random factor graphs with binary variables and factor nodes of uniform degree k = 3; (iii) the ALARM network, which has variables taking on more than two possible values and factors consisting of more than two variables; (iv) PROMEDAS networks, which have binary variables but factors consisting of more than two variables.
Random regular graphs with binary variables
We have compared various approximate inference methods on random graphs, consisting of N binary (±1-valued) variables, having only pairwise interactions, where each variable has the same degree |∂i| = d. In this case, the probability distribution (1) can be written in the following way:
P(x) = \frac{1}{Z} \exp\Bigl( \sum_{i \in V} \theta_i x_i + \frac{1}{2} \sum_{i \in V} \sum_{j \in \partial i} J_{ij} x_i x_j \Bigr),
where the parameters {θ i } i∈V are called the local fields and the parameters {J ij = J ji } i∈V,j∈∂i are called the couplings. The graph structure and the parameters θ and J were drawn randomly for each instance. The local fields {θ i } were drawn independently from a N (0, βΘ) distribution (i.e. a normal distribution with mean 0 and standard deviation βΘ). For the couplings {J ij }, we distinguished two different cases: mixed ("spin-glass") and attractive ("ferromagnetic") couplings. The couplings were drawn independently from the following distributions:
J_{ij} \sim \mathcal{N}\Bigl(0,\ \beta \tanh^{-1} \frac{1}{\sqrt{d-1}}\Bigr) \qquad \text{(mixed couplings)}
J_{ij} = |J'_{ij}|, \quad J'_{ij} \sim \mathcal{N}\Bigl(0,\ \beta \tanh^{-1} \frac{1}{d-1}\Bigr) \qquad \text{(attractive couplings)}
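Sampling one such problem instance can be sketched as follows: a random d-regular graph via a configuration-model pairing with rejection, Gaussian local fields θ_i ~ N(0, βΘ), and mixed couplings with the d-dependent scaling above. All names here are ours, not the generator used for the reported experiments:

```python
import math
import random

def random_regular_edges(n, d, rng):
    """Edge list of a simple random d-regular graph on n vertices (n*d even)."""
    while True:                          # retry until the pairing is simple
        stubs = [v for v in range(n) for _ in range(d)]
        rng.shuffle(stubs)
        edges = set()
        ok = True
        for a, b in zip(stubs[::2], stubs[1::2]):
            e = (min(a, b), max(a, b))
            if a == b or e in edges:     # self-loop or double edge: reject
                ok = False
                break
            edges.add(e)
        if ok:
            return sorted(edges)

rng = random.Random(0)
n, d, beta, Theta = 100, 3, 0.5, 2.0
edges = random_regular_edges(n, d, rng)
theta = [rng.gauss(0.0, beta * Theta) for _ in range(n)]
sigma_J = beta * math.atanh(1.0 / math.sqrt(d - 1))   # mixed-coupling scale
J = {e: rng.gauss(0.0, sigma_J) for e in edges}
```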
The constant β (called "inverse temperature" in statistical physics) controls the overall interaction strength and thereby the difficulty of the inference problem, larger β corresponding usually to more difficult problems. The constant Θ controls the relative strength of the local fields, where larger Θ result in easier inference problems. The particular d-dependent scaling of the couplings is used in order to obtain roughly d-independent behaviour. In case of mixed couplings, for Θ = 0 and for β ≈ 1 a phase transition occurs in the limit of N → ∞, going from an easy "paramagnetic" phase for β < 1 to a complicated "spin-glass" phase for β > 1. In the case of attractive couplings and Θ = 0, a phase transition also occurs at β = 1, now going from the easy "paramagnetic" phase for β < 1 to a "ferromagnetic" phase for β > 1. In this section we study regular random graphs of low degree d = 3, consisting of N = 100 variables, with mixed couplings and relatively strong local fields of strength Θ = 2. We considered various overall interaction strengths β between 0.01 and 10. For each value of β, we used 16 random instances. On each instance, we ran various approximate inference algorithms. Figures 3 and 4 show selected results. 7 Both figures consist of two parts; the first row in both figures shows averages (in the logarithmic domain) of errors and computation time as a function of β for various methods.
In addition, Figure 3 shows the fraction of instances on which each method converged; for Figure 4, all methods converged for all values of β. The averages of errors and computation time were calculated from the converged instances only. The other rows in the figures contain scatter plots that compare errors of various methods one-to-one. The solid red lines in the scatter plots indicate equality; the dotted red lines indicate that the error of the method on the vertical axis is the square of the error on the horizontal axis. Saturation of errors around 10 −9 is an artefact due to the convergence criterion. The CVM methods are often seen to saturate around 10 −8 , which indicates that single iterations are less effective than for other methods.
We conclude from both figures that BP is the fastest but also the least accurate method and that LCBP is the most accurate method and that it converges for all β. Furthermore, the error of LCBP is approximately the square of the BP error.
6. More precisely, in case of zero local fields (Θ = 0), the PA-SG phase transition occurs at (d − 1)⟨tanh²(βJ_ij)⟩ = 1, where ⟨·⟩ is the average over all J_ij, and the PA-FE phase transition occurs at (d − 1)⟨tanh(βJ_ij)⟩ = 1 (Mooij and Kappen, 2005). What happens for Θ > 0 is not known, to the best of our knowledge.

7. We apologize to readers for the use of colours; we saw no viable alternative for creating clear plots.

Figure 3 shows further that TreeEP is able to obtain a significant improvement over BP using little computation time. For small values of β, LCBP-Cum and LCBP-Cum-Lin both converge and yield high quality results and the error introduced by the linearization is relatively small. However, for larger values of β, both methods get more and more convergence problems, although for the few cases where they do converge, they still yield accurate results. At β ≈ 10, both methods have completely stopped converging. The error introduced by the linearization increases for larger values of β. The computation times of LCBP-Cum, LCBP-Cum-Lin and LCBP do not differ substantially in the regime where all methods converge. The difference in quality between LCBP and LCBP-Cum is mainly due to the fact that LCBP does take into account triple interactions in the cavity (however, extending LCBP-Cum in order to take into account triple interactions is easy for this case of low d).
The break-down of the cumulant based LCBP methods for high β is probably due to the choice of cumulants for parameterizing cavity distributions, which seem to be less robust than interactions. Indeed, consider two random variables x 1 and x 2 with fixed pair interaction exp(Jx 1 x 2 ). By altering the singleton interactions exp(θ 1 x 1 ) and exp(θ 2 x 2 ), one can obtain any desired marginals of x 1 and x 2 . However, a fixed pair cumulant C 12 = x 1 x 2 − x 1 x 2 imposes a constraint on the range of possible expectation values x 1 and x 2 (hence on the single node marginals of x 1 and x 2 ); the freedom of choice in these marginals becomes less as the pair cumulant becomes stronger. We believe that something similar happens for LCBP-Cum: for strong interactions, the approximate pair cumulants in the cavity are strong, and even tiny errors can lead to inconsistencies. 8
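This constraint can be made concrete for a pair of ±1-valued variables: a joint distribution with means m₁, m₂ and pair cumulant C₁₂ exists only if the four probabilities p(x₁, x₂) = (1 + m₁x₁ + m₂x₂ + (C₁₂ + m₁m₂)x₁x₂)/4 are all nonnegative. A small sketch showing how the feasible region of single-node means shrinks as the cumulant grows:

```python
import numpy as np

def feasible(m1, m2, C12):
    """Does a distribution on {-1,+1}^2 with means m1, m2 and pair cumulant C12 exist?"""
    chi = C12 + m1 * m2  # raw correlation <x1 x2>
    return all((1 + m1 * x1 + m2 * x2 + chi * x1 * x2) >= 0
               for x1 in (-1, 1) for x2 in (-1, 1))

grid = np.linspace(-0.99, 0.99, 99)
# fraction of (m1, m2) pairs compatible with a given pair cumulant
frac = {C: np.mean([[feasible(a, b, C) for a in grid] for b in grid])
        for C in (0.0, 0.5, 0.9)}
print(frac)
```

For C₁₂ = 0 every pair of means is feasible, but the feasible fraction strictly decreases as C₁₂ grows, illustrating why strong approximate pair cumulants leave little room for error in the single-node marginals.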
The results of the CVM approach to loop correcting are shown in Figure 4. The CVM-Loop methods, with clusters reflecting the short loops present in the factor graph, do improve on BP. The use of larger clusters that subsume longer loops improves the results, but computation time quickly increases. CVM-Loop3 does not obtain any improvement over BP, simply because there are (almost) no loops of 3 variables present. The most accurate CVM method, CVM-Loop8, needs more computation time than LCBP, whereas the quality of its results is not as good. Surprisingly, although CVM-∆ uses larger clusters than BP, its quality is similar to that of BP while its computation time is enormous; one would expect CVM-∆ to improve on BP precisely because of the larger clusters. In any case, we conclude that although LCBP and CVM-∆ use identical clusters, the nature of both approximations is very different.
We have also done experiments for weak local fields (Θ = 0.2). The behaviour is similar to that of strong local fields, apart from the following differences. First, the influence of the phase transition is more pronounced; many methods have severe convergence problems around β = 1. Further, the negative effect of linearization on the error (LCBP-Cum-Lin compared to LCBP-Cum) is smaller.
8. Indeed, for strong interactions, the update equations (18) often yield values for the M \i j outside of the valid interval [−1, 1]. In this case, we project these values back into the valid interval in the hope that the method will converge to a valid result, which it sometimes does. This phenomenon also indicates the lack of robustness of a cumulant parameterization in the regime of strong interactions.
Fixed β and varying relative local field strength Θ
In addition, we have done experiments for fixed β = 1.0 for various values of the relative local field strength Θ between 0.01 and 10. The results are shown in Figures 5 and 6.
Computation time is seen to decrease with increasing local field strength Θ. The errors on the other hand first increase slowly, and then suddenly decrease rapidly. Again, LCBP-Cum and LCBP-Cum-Lin are the only methods that have convergence problems. The ranking in terms of accuracy of various methods does not depend on the local field strength, nor does the ranking in terms of computation time.
Larger degree (d = 6)
To study the influence of the degree d = |∂i|, we have done additional experiments for d = 6. We had to reduce the number of variables to N = 50, because exact inference was infeasible for larger values of N due to quickly increasing treewidth. The results are shown in Figure 7.
As in the previous experiments, BP is the fastest and least accurate method, whereas LCBP yields the most accurate results, even for high β.
The differences with the case of low degree (d = 3) are the following. The relative improvement of TreeEP over BP has decreased. This could have been expected, because in denser networks, the effect of taking out a tree becomes less. Further, the relative improvement of CVM-Loop4 over BP has increased, probably because there are more short loops present. On the other hand, computation time of CVM-Loop4 has also increased and it is the slowest of all methods. We decided to abort the calculations for CVM-Loop6 and CVM-Loop8, because computation time was prohibitive due to the enormous amount of short loops present. We conclude that the CVM-Loop approach to loop correcting is not very efficient. Surprisingly, the results of LCBP-Cum-Lin are now very similar in quality to the results of LCBP-Cum, except for a few isolated cases (presumably on the edge of the convergence region). LCBP now clearly needs more computation time than LCBP-Cum and LCBP-Cum-Lin, but also obtains significantly better results due to the fact that it takes into account higher order cavity interactions.
Influence of the coupling type
To study the influence of coupling type, we have done additional experiments for (N = 50, d = 6) random regular graphs with attractive couplings and strong local fields (Θ = 2). The results are shown in Figure 8.
By comparing Figures 7 and 8, it becomes clear that the influence of the coupling type is rather small. Differences might be more pronounced in case of weak local fields (for which we have not done additional experiments).
Scaling with N
We have investigated how computation time scales with the number of variables N , for fixed β = 0.1, Θ = 2 and d = 6 for mixed couplings. We used a machine with more memory (16 GB) to be able to do exact inference without swapping also for N = 60. The results can be found in Figure 9. For larger values of N , the computation time for exact inference would increase exponentially with N . The error of all methods is approximately constant. BP should scale approximately linearly in N . LCBP variants are expected to scale quadratically in N (since d is fixed), which indeed appears to be the case. The computation time of the exact JunctionTree method quickly increases due to increasing treewidth; for N = 60 it is already ten times larger than the computation time of the slowest approximate inference method. The computation time of CVM-Loop3 and CVM-Loop4 seems to be approximately constant, probably because the large number of overlaps of short loops for small values of N causes difficulties.
We conclude that for large N , exact inference is infeasible, whereas LCBP still yields very accurate results using moderate computation time.
Scaling with d
It is also interesting to see how various methods scale with d, the variable degree, which is directly related to the cavity size. We have done experiments for random graphs of size N = 24 with fixed β = 0.1 and Θ = 2 for mixed couplings for different values of d between 3 and 23. The results can be found in Figure 10. We aborted the calculations of the slower methods (LCBP, LCBP-Cum, CVM-Loop3) at d = 15.
Due to the particular dependence of the interaction strength on d, the errors of most methods depend only slightly on d. TreeEP is an exception: for larger d, the relative improvement of TreeEP over BP diminishes, and the TreeEP error approaches the BP error. CVM-Loop3 gives better quality, but needs relatively much computation time and becomes very slow for large d due to the large increase in the number of loops of 3 variables. LCBP is the most accurate method, but becomes very slow for large d. LCBP-Cum is less accurate and becomes slower than LCBP for large d, because of the additional overhead of the combinatorics needed to perform the update equations. The accuracy of LCBP-Cum-Lin is indistinguishable from that of LCBP-Cum, although it needs significantly less computation time.
Alternative methods to obtain initial approximate cavity distributions
Until now we have used BP to estimate initial cavity approximations. We now show that other approximate inference methods can be used as well and that a similar relative improvement in accuracy is obtained. Figure 11 shows the results of Algorithm 1 for cavity approximations initialized using the method described in Section 2.5 with MF and TreeEP instead of BP. For reference, also the BP results are plotted. In all cases, the loop corrected error is approximately the square of the error of the uncorrected approximate inference method. Because BP is very fast yet relatively accurate, we focus on LCBP in this article.
Multi-variable factors
We now go beyond pairwise interactions and study a class of random factor graphs with binary variables and uniform factor degree |I| = k (for all I ∈ F) with k > 2. The number of variables is N and the number of factors is M . The factor graphs are constructed by starting from an empty graphical model (V, ∅, ∅) and adding M random factors, where each factor is obtained in the following way: a subset I = {I_1 , . . . , I_k } ⊆ V of k different variables is drawn; a vector of 2^k independent random numbers {J_I(x_I)}_{x_I ∈ X_I} is drawn from a N (0, β) distribution; the factor ψ_I(x_I) := exp(J_I(x_I)) is added to the graphical model. We only use those constructed factor graphs that are connected. 9 The parameter β again controls the interaction strength.
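The construction above can be sketched as follows (a minimal version: it interprets N(0, β) as a Gaussian with standard deviation β and omits the connectedness check mentioned in footnote 9):

```python
import itertools
import random

def random_factor_graph(N, M, k, beta, seed=0):
    """Draw M random k-variable factors over N binary (+/-1) variables."""
    rng = random.Random(seed)
    factors = []
    for _ in range(M):
        I = tuple(sorted(rng.sample(range(N), k)))      # k distinct variables
        # one independent Gaussian J_I(x_I) per joint state x_I
        J = {x_I: rng.gauss(0.0, beta)
             for x_I in itertools.product((-1, 1), repeat=k)}
        # the factor itself would be psi_I(x_I) = exp(J_I(x_I))
        factors.append((I, J))
    return factors

fg = random_factor_graph(N=50, M=50, k=3, beta=0.1)
```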
We have done experiments for (N = 50, M = 50, k = 3) for various values of β between 0.01 and 2. For each value of β, we have used 16 random instances. For higher values of β, computation times increased and convergence became problematic for some methods, which can probably be explained as the effects of a phase transition. The results are shown in Figure 12. Looking at the error and the computation time in Figure 12, the following ranking can be made, where accuracy and computation time both increase: BP, TreeEP, CVM-Min, CVM-Loop3, LCBP. CVM-Loop4 uses more computation time than LCBP but gives worse results. LCBP-Cum and LCBP-Cum-Lin are not available due to the fact that the factors involve more than two variables. The improvement of TreeEP over BP is rather small.
ALARM network
The ALARM network 10 is a well-known Bayesian network consisting of 37 variables (some of which can take on more than two possible values) and 37 factors (many of which involve more than two variables). In addition to the usual approximate inference methods, we have compared with GBP-Min, a GBP implementation of the minimal CVM approximation that uses maximal factors as outer clusters. The results are reported in Table 1.
The accuracy of GBP-Min (and CVM-Min) is almost identical to that of BP for this graphical model; GBP-Min converges without damping and is faster than CVM-Min. TreeEP on the other hand significantly improves the BP result in roughly the same time as GBP-Min needs. Simply enlarging the cluster size (CVM-∆) slightly deteriorates the quality of the results and also causes an enormous increase of computation time. The quality of the CVM-Loop results is roughly comparable to that of TreeEP. Surprisingly, increasing the loop depth beyond 4 deteriorates the quality of the results and results in an explosion of computation time. We conclude that the CVM-Loop method is not a very good approach to correcting loops in this case. LCBP uses considerable computation time, but yields errors that are approximately 10^4 times smaller than BP errors. The cumulant based LCBP methods are not available, due to the presence of factors involving more than two variables and variables that can take more than two values.
PROMEDAS networks
In this subsection, we study the performance of LCBP on another "real-world" example, the PROMEDAS medical diagnostic network (Wiegerinck et al., 1999). The diagnostic model in PROMEDAS is based on a Bayesian network. The global architecture of this network is similar to QMR-DT (Shwe et al., 1991). It consists of a diagnosis-layer that is connected to a layer with findings 11 . Diagnoses (diseases) are modeled as a priori independent binary variables causing a set of symptoms (findings), which constitute the bottom layer. The PROMEDAS network currently consists of approximately 2000 diagnoses and 1000 findings. The interaction between diagnoses and findings is modeled with a noisy-OR structure. The conditional probability of the finding given the parents is modeled by m + 1 numbers, m of which represent the probabilities that the finding is caused by one of the diseases and one that the finding is not caused by any of the parents.
The noisy-OR conditional probability tables with m parents can be naively stored in a table of size 2^m. This is problematic for the PROMEDAS networks since findings that are affected by more than 30 diseases are not uncommon. We use an efficient implementation of noisy-OR relations as proposed by Takikawa and D'Ambrosio (1999) to reduce the size of these tables. The trick is to introduce dummy variables s and

11. In addition, there is a layer of variables, such as age and gender, that may affect the prior probabilities of the diagnoses. Since these variables are always clamped for each patient case, they merely change the prior disease probabilities and are irrelevant for our current considerations.
The factors on the right hand side involve at most 3 variables instead of the initial 4 (left). Repeated application of this formula reduces all factors to triple interactions or smaller. When a patient case is presented to PROMEDAS, a subset of the findings will be clamped and the rest will be unclamped. If our goal is to compute the marginal probabilities of the diagnostic variables only, the unclamped findings and the diagnoses that are not related to any of the clamped findings can be summed out of the network as a preprocessing step. The clamped findings cause an effective interaction between their parents. However, the noisy-OR structure is such that when the finding is clamped to a negative value, the effective interaction factorizes over its parents. Thus, findings can be clamped to negative values without additional computation cost (Jaakkola and Jordan, 1999).
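The factorization of negatively clamped findings can be checked for the standard noisy-OR parameterization P(f = 0 | x) = (1 − λ₀) Π_i (1 − λ_i)^{x_i}: its logarithm is linear in the parent states x_i ∈ {0, 1}, so clamping f = 0 induces only unary factors on the parents, whereas clamping f = 1 does not. The leak and causal probabilities below are hypothetical values:

```python
import itertools
import math

lam0 = 0.05                 # leak probability (hypothetical)
lam = [0.3, 0.5, 0.7]       # causal probabilities for 3 parent diseases (hypothetical)

def p_f0(x):
    """P(finding absent | parent states x_i in {0,1}) under noisy-OR."""
    return (1 - lam0) * math.prod((1 - l) ** xi for l, xi in zip(lam, x))

def log_linear(p, n=3, tol=1e-12):
    """Is log p(x) of the form const + sum_i a_i x_i (i.e. no parent-parent coupling)?"""
    base = math.log(p((0,) * n))
    unary = [math.log(p(tuple(int(j == i) for j in range(n)))) - base
             for i in range(n)]
    return all(abs(math.log(p(x)) - (base + sum(u * xi for u, xi in zip(unary, x)))) < tol
               for x in itertools.product((0, 1), repeat=n))

factorizes_neg = log_linear(p_f0)                     # f clamped to 0: factorizes
factorizes_pos = log_linear(lambda x: 1 - p_f0(x))    # f clamped to 1: does not
```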
The complexity of the problem now depends on the set of findings that is given as input. The more findings are clamped to a positive value, the larger the remaining network of disease variables and the more complex the inference task. Especially in cases where findings share more than one common possible diagnosis, and consequently loops occur, the model can become complex.
We use the PROMEDAS model to generate virtual patient data by first clamping one of the disease variables to be positive and then clamping each finding to its positive value with probability equal to the conditional distribution of the finding, given the positive disease. The union of all positive findings thus obtained constitute one patient case. For each patient case, the corresponding truncated graphical model is generated. The number of disease nodes in this truncated graph is typically quite large.
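The virtual-patient generation described above can be sketched as follows; the function and the toy conditional probability table are hypothetical stand-ins for the actual PROMEDAS tables:

```python
import random

def sample_patient(n_diseases, n_findings, p_find_given_disease, seed=0):
    """Generate one virtual patient case (sketch).

    p_find_given_disease[f][d] = P(finding f positive | disease d positive).
    """
    rng = random.Random(seed)
    d = rng.randrange(n_diseases)                       # clamp one disease positive
    positive_findings = {f for f in range(n_findings)
                         if rng.random() < p_find_given_disease[f][d]}
    return d, positive_findings

# toy conditional probabilities (hypothetical)
table = [[0.2 + 0.6 * ((f + d) % 2) for d in range(4)] for f in range(6)]
disease, findings = sample_patient(4, 6, table, seed=1)
```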
The results can be found in Figures 13 and 14. Surprisingly, neither TreeEP nor any of the CVM methods gives substantial improvements over BP. TreeEP even gives worse results compared to BP. The CVM-Min and CVM-Loop3 results appear to be almost identical to the BP results. CVM-Loop4 manages to improve over BP in a few cases. Increased loop depth (k = 5, 6) results in worse quality in many cases and also in an enormous increase in computation time.
LCBP, on the other hand, is the only method that gives a significant improvement over BP, in each case. Considering all patient cases, LCBP reduces the BP error by more than one order of magnitude in half of the cases for which BP was not already exact. The improvement obtained by LCBP has its price: the computation time of LCBP is rather large compared to that of BP, as shown in Figure 14. The deviation from the quadratic scaling t ∝ N 2 is due to the fact that the size of the Markov blankets varies over instances and instances with large N often also have larger Markov blankets. The cumulant based LCBP methods are not available, due to the presence of factors involving more than two variables and variables that can take more than two values.
Discussion and conclusion
We have proposed a method to improve the quality of an approximate inference method (e.g. BP) by correcting for the influence of loops in the factor graph. We found empirically that if one applies this Loop Correcting method, assuming that no loops are present (by taking factorized initial approximate cavity distributions), the method reduces to the minimal CVM approximation. We have proved this for the case of factor graphs that do not have short loops of exactly four nodes. If, on the other hand, the loop correction method is applied in combination with BP estimates of the effective cavity interactions, we have seen that the loop corrected error is approximately the square of the uncorrected BP error. Similar observations have been made for loop corrected MF and TreeEP. For practical purposes, we suggest to apply loop corrections to BP ("LCBP"), because the loop correction approach requires many runs of the approximate inference method and BP is well suited for this job because of its speed. We have compared the performance of LCBP with other approximate inference methods that (partially) correct for the presence of loops. We have shown that LCBP is the most accurate method and that it even works for relatively strong interactions.
On sparse factor graphs, TreeEP obtains improvements over BP by correcting for loops that consist of part of the base tree and one additional interaction, using little computation time. However, for denser graphs, the difference between the quality of TreeEP and BP marginals diminishes. LCBP almost always obtained more accurate results. However, LCBP also needs more computation time than TreeEP.
The CVM-Loop approximation, which uses small loops as outer clusters, can also provide accurate results if the number of short loops is not too large and the number of intersections of clusters is limited. However, the computation time becomes prohibitive in many cases. In order to obtain the same accuracy as LCBP, the CVM-Loop approach usually needs significantly more computation time. This behaviour is also seen on "real world" instances such as the ALARM network and PROMEDAS test cases. There may exist other cluster choices that give better results for the CVM approximation, but no general method for obtaining "good" cluster choices seems to be known (see also (Welling et al., 2005) for a discussion of what constitutes a "good" CVM cluster choice).
We have also compared the performance of LCBP with the original implementation proposed by Montanari and Rizzo (2005). This implementation works with cumulants instead of interactions and we believe that this is the reason that it has more difficulties in the regime of strong interactions. Although the differences were rather small in some cases, LCBP obtained better results than LCBP-Cum using approximately similar amounts of computation time. The linearized version LCBP-Cum-Lin, which is applicable to factor graphs with large Markov blankets, performed surprisingly well, often obtaining similar accuracy as LCBP-Cum. For random graphs with high degree d (i.e. large Markov blankets), it turned out to be the most accurate of the applicable approximate inference methods. It is rather fortunate that the negative effect of the linearization error on the accuracy of the result becomes smaller as the degree increases, since it is precisely for high degree where one needs the linearization because of performance issues.
In the experiments reported here, the standard JunctionTree method was almost always faster than LCBP. The reason is that we have intentionally selected experiments for which exact inference is still feasible, in order to be able to compare the quality of various approximate inference methods. However, as implied by Figure 9, there is no reason to expect that LCBP will suddenly give inaccurate results when exact inference is no longer feasible. Thus we suggest that LCBP may be used to obtain accurate marginal estimates in cases where exact inference is impossible because of high treewidth. As illustrated in Figure 9, the computation time of LCBP scales very different from that of the JunctionTree method: whereas the latter is exponential in treewidth, LCBP is exponential in the size of the Markov blankets.
The fact that computation time of LCBP (in its current form) scales exponentially with the size of the Markov blankets can be a severe limitation in practice. Many real world Bayesian networks have large Markov blankets, prohibiting application of LCBP. The linear cumulant based implementation LCBP-Cum-Lin proposed by Montanari and Rizzo (2005) does not suffer from this problem, as it is quadratic in the size of the Markov blankets. Unfortunately, this particular implementation can in its current form only be applied to graphical models that consist of binary variables and factors that involve at most two variables (which excludes any interesting Bayesian network, for example). Furthermore, problems may arise if some factors contain zeroes. For general application of loop correcting methods, it will be of paramount importance to derive an implementation that combines the generality of LCBP with the speed of LCBP-Cum-Lin. At this point, it is not obvious whether it would be better to use cumulants or interactions as the parameterization of the cavity distribution. This topic will be left for future research. The work presented here provides some intuition that may be helpful for constructing a general and fast loop correcting method that is applicable to arbitrary factor graphs that can have large Markov blankets.
Another important direction for future research would be to find an extension of the loop correcting framework that also gives a loop corrected approximation of the normalization constant Z in (1). Additionally, (and possibly related to that), it would be desirable to find an approximate "free energy", a function of the beliefs, whose stationary points coincide with the fixed points of Algorithm 1. This can be done for many approximate inference methods (MF, BP, CVM, EP) so it is natural to expect that the Loop Correction algorithm can also be seen as a minimization procedure of a certain approximate free energy. Despite some efforts, we have not yet been able to find such a free energy.
Recently, other loop correcting approaches (to the Bethe approximation) have been proposed in the statistical physics community (Parisi and Slanina, 2005;Chertkov and Chernyak, 2006). In particular, Chertkov and Chernyak (2006) have derived a series expansion of the exact normalizing constant Z in terms of the BP solution. The first term of the series is precisely the Bethe free energy evaluated at the BP fixed point. The number of terms in the series is finite, but can be very large, even larger than the number of total states of the graphical model. Each term is associated with a "generalized loop", which is a subgraph of the factor graph for which each node has at least connectivity two. By truncating the series, it is possible to obtain approximate solutions that improve on BP by taking into account a subset of all generalized loops (Gómez et al., 2006). Summarizing, this approach to loop corrections takes a subset of loops into account in an exact way, whereas the loop correcting approach presented in this article takes all loops into account in an approximate way. More experiments should be done to compare both approaches.
Concluding, we have proposed a method to correct approximate inference methods for the influence of loops in the factor graph. We have shown that it can obtain very accurate results, also on real world graphical models, outperforming existing approximate inference methods in terms of quality, robustness or applicability. We have shown that it can be applied to problems for which exact inference is infeasible. The rather large computation time required is an issue which deserves further consideration; it may be possible to use additional approximations on top of the loop correcting framework that trade quality for computation time.
Appendix: Original approach proposed by Montanari and Rizzo (2005)

For completeness, we describe the implementation based on cumulants as originally proposed by Montanari and Rizzo (2005). The approach can be applied in recursive fashion; here we will only discuss the first recursion level. Consider a graphical model which has only binary (±1-valued) variables and factors that involve at most two variables. The corresponding probability distribution can be parameterized in terms of the local fields {θ_i}_{i∈V} and the couplings {J_ij = J_ji}_{i∈V, j∈∂i}:
$$P(x) = \frac{1}{Z}\exp\Bigg(\sum_{i\in V}\theta_i x_i + \frac{1}{2}\sum_{i\in V}\sum_{j\in\partial i} J_{ij}\, x_i x_j\Bigg).$$
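Since ½ Σ_{i∈V} Σ_{j∈∂i} J_ij x_i x_j = Σ_{i<j} J_ij x_i x_j by symmetry of J, this parameterization can be evaluated exactly on small models by enumeration. A brute-force reference sketch, useful for checking the cavity formulas of this appendix:

```python
import itertools
import math

def exact_magnetizations(theta, J):
    """Exact <x_i> by enumeration. theta: {i: field}, J: {(i, j): coupling} with i < j."""
    V = sorted(theta)
    Z = 0.0
    m = dict.fromkeys(V, 0.0)
    for x in itertools.product((-1, 1), repeat=len(V)):
        s = dict(zip(V, x))
        energy = (sum(theta[i] * s[i] for i in V)
                  + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))
        w = math.exp(energy)
        Z += w
        for i in V:
            m[i] += w * s[i]
    return {i: mi / Z for i, mi in m.items()}

# sanity check: with no couplings, <x_i> = tanh(theta_i)
m = exact_magnetizations({0: 0.3, 1: -0.7}, {})
```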
Let i ∈ V and consider the corresponding cavity network of i. For A ⊆ ∂i, the cavity moment M \i A is defined as the following expectation value under the cavity distribution:
$$M^{\setminus i}_A := \sum_{x_{\partial i}} Z^{\setminus i}(x_{\partial i}) \prod_{j\in A} x_j,$$
where we will not explicitly distinguish between approximate and exact quantities, following the physicists' tradition. 12 The cavity cumulants (also called "connected correlations") C \i A are related to the moments in the following way:
$$M^{\setminus i}_A = \sum_{B\in\mathrm{Part}(A)} \prod_{E\in B} C^{\setminus i}_E,$$
where Part(A) is the set of partitions of A. We introduce some notation: we define for A ⊆ ∂i:
$$t_{iA} := \prod_{k\in A} \tanh J_{ik}.$$
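The moment-cumulant relation above can be checked by enumerating set partitions; a small sketch (for |A| = 2 it reduces to the familiar M₁₂ = C₁C₂ + C₁₂):

```python
import math

def partitions(elems):
    """Yield all set partitions of a list of elements."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in partitions(rest):
        yield [[first]] + part                 # `first` in its own block
        for k in range(len(part)):             # or joined to an existing block
            yield part[:k] + [[first] + part[k]] + part[k + 1:]

def moment_from_cumulants(A, C):
    """M_A = sum over partitions B of A of the product of cumulants of the blocks."""
    return sum(math.prod(C[frozenset(block)] for block in part)
               for part in partitions(list(A)))

C = {frozenset({1}): 0.2, frozenset({2}): 0.3, frozenset({1, 2}): 0.5}
M12 = moment_from_cumulants({1, 2}, C)   # 0.2*0.3 + 0.5 = 0.56
```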
Further, for a set X, we denote the even subsets of X as P + (X) := {Y ⊆ X : |Y | is even} and the odd subsets of X as P − (X) := {Y ⊆ X : |Y | is odd}. Using standard algebraic manipulations, one can show that for j ∈ ∂i, the expectation value of x j in the absence of the interaction ψ ij = exp(J ij x i x j ) can be expressed in terms of cavity moments of i as follows:
$$\frac{\sum_{A\in\mathcal{P}_+(\partial i\setminus j)} t_{iA}\, M^{\setminus i}_{A\cup j} \;+\; \tanh\theta_i \sum_{A\in\mathcal{P}_-(\partial i\setminus j)} t_{iA}\, M^{\setminus i}_{A\cup j}}{\sum_{A\in\mathcal{P}_+(\partial i\setminus j)} t_{iA}\, M^{\setminus i}_{A} \;+\; \tanh\theta_i \sum_{A\in\mathcal{P}_-(\partial i\setminus j)} t_{iA}\, M^{\setminus i}_{A}} \qquad (14)$$
On the other hand, the same expectation value can also be expressed in terms of cavity moments of j as follows:

$$\frac{\tanh\theta_j \sum_{B\in\mathcal{P}_+(\partial j\setminus i)} t_{jB}\, M^{\setminus j}_{B} \;+\; \sum_{B\in\mathcal{P}_-(\partial j\setminus i)} t_{jB}\, M^{\setminus j}_{B}}{\sum_{B\in\mathcal{P}_+(\partial j\setminus i)} t_{jB}\, M^{\setminus j}_{B} \;+\; \tanh\theta_j \sum_{B\in\mathcal{P}_-(\partial j\setminus i)} t_{jB}\, M^{\setminus j}_{B}} \qquad (15)$$

The consistency equations are now given by equating (14) to (15) for all i ∈ V, j ∈ ∂i. The expectation value of x_i (in the presence of all interactions) can be similarly expressed in terms of cavity moments of i:

12. In (Montanari and Rizzo, 2005), the notation C̃^{\setminus i}_A is used for the cavity moment M^{\setminus i}_A.
$$M_i := \sum_{x_i=\pm 1} P(x_i)\, x_i = \frac{\tanh\theta_i \sum_{A\in\mathcal{P}_+(\partial i)} t_{iA}\, M^{\setminus i}_{A} \;+\; \sum_{A\in\mathcal{P}_-(\partial i)} t_{iA}\, M^{\setminus i}_{A}}{\sum_{A\in\mathcal{P}_+(\partial i)} t_{iA}\, M^{\setminus i}_{A} \;+\; \tanh\theta_i \sum_{A\in\mathcal{P}_-(\partial i)} t_{iA}\, M^{\setminus i}_{A}} \qquad (16)$$
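Equation (16) can be verified numerically: take a single variable i with a few neighbors, an arbitrary positive cavity distribution over the neighbor spins, and compare the formula with the brute-force expectation of x_i. The couplings and field below are hypothetical values:

```python
import itertools
import math
import random

rng = random.Random(0)
d = 3
J = [0.5, -0.3, 0.8]          # couplings J_ik to the d neighbors (hypothetical)
theta = 0.2                   # local field theta_i (hypothetical)
states = list(itertools.product((-1, 1), repeat=d))
Zcav = {s: rng.random() + 0.1 for s in states}   # arbitrary positive cavity weights
norm = sum(Zcav.values())

def cavity_moment(A):
    return sum(Zcav[s] * math.prod(s[k] for k in A) for s in states) / norm

# E = sum over even subsets A of t_iA * M_A, O = same over odd subsets
tanhJ = [math.tanh(Jk) for Jk in J]
E = O = 0.0
for r in range(d + 1):
    for A in itertools.combinations(range(d), r):
        term = math.prod(tanhJ[k] for k in A) * cavity_moment(A)
        if r % 2 == 0:
            E += term
        else:
            O += term
th = math.tanh(theta)
M_i = (th * E + O) / (E + th * O)          # equation (16)

# brute-force expectation of x_i under the full distribution
num = den = 0.0
for xi in (-1, 1):
    for s in states:
        w = Zcav[s] * math.exp(theta * xi + xi * sum(Jk * sk for Jk, sk in zip(J, s)))
        num += w * xi
        den += w
```

The two values agree to machine precision, since expanding exp(J_ik x_i x_k) ∝ 1 + x_i x_k tanh J_ik and averaging over the cavity yields exactly the even/odd subset sums of (16).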
Neglecting higher order cumulants
Montanari and Rizzo proceed by neglecting cavity cumulants C \i A with |A| > 2. Denote by Part 2 (A) the set of all partitions of A into subsets which have cardinality 2 at most. Thus, neglecting higher order cavity cumulants amounts to the following approximation:
$$M^{\setminus i}_A \approx \sum_{B\in\mathrm{Part}_2(A)} \prod_{E\in B} C^{\setminus i}_E. \qquad (17)$$
By some algebraic manipulations, one can express the consistency equations (14) = (15) in this approximation; the resulting fixed point equations are referred to as (18).
One can use (17) to write (18) in terms of the singleton cumulants {M \i j } i∈V,j∈∂i and the pair cumulants {C \i jk } i∈V,j∈∂i,k∈∂i\j . Given (estimates of) the pair cumulants, the consistency equations (18) are thus fixed point equations in the singleton cumulants.
The procedure is now:
• Estimate the pair cumulants {C \i jk } i∈V,j∈∂i,k∈∂i\j using BP in combination with linear response (called "response propagation" in Montanari and Rizzo (2005)).
• Calculate the fixed point {M \i j } i∈V,j∈∂i of (18) using the estimated pair cumulants.
• Use (16) in combination with (17) to calculate the final expectation values {M j } j∈V using the estimated pair cumulants and the fixed point of (18).
Figure 1: (a) Original factor graph, corresponding to the probability distribution P.
ζ^{\setminus i}_0 is exact modulo single-variable interactions. Because the final beliefs (8) are invariant under perturbation of the ζ^{\setminus i}_0 by single-variable interactions, the final beliefs calculated by Algorithm 1 are exact.
6.1.1 N = 100, d = 3, mixed couplings, strong local fields (Θ = 2)
Figure 3: Results for (N = 100, d = 3) regular random graphs with mixed couplings and strong local fields Θ = 2. First row, from left to right: error, computation time and fraction of converged instances, as a function of β for various methods, averaged over 16 randomly generated instances (where results are only included if the method has converged). For the same instances, scatter plots of errors are shown in the next rows for various pairs of methods. The solid red lines correspond with y = x, the dotted red lines with y = x². Only points have been plotted for which both approximate inference methods converged.
Figure 4: Additional results for (N = 100, d = 3) regular random graphs with mixed couplings and strong local fields Θ = 2, for the same instances as in Figure 3. All methods converged on all instances.
Figure 5: Results for (N = 100, d = 3) regular random graphs with mixed couplings and β = 1.0. First row, from left to right: error, computation time and fraction of converged instances, as a function of relative local field strength Θ for various methods, averaged over 16 randomly generated instances (where results are only included if the method has converged). For the same instances, scatter plots of errors are shown in the next rows for various pairs of methods.
Figure 6: Additional results for (N = 100, d = 3) regular random graphs with mixed couplings and β = 1.0, for the same instances as in Figure 5. All methods converged on all instances.

Figure 7: Results for (N = 50, d = 6) regular random graphs with mixed couplings and strong local fields Θ = 2.
Figure 8: Results for (N = 50, d = 6) regular random graphs with attractive couplings and strong local fields Θ = 2.
Figure 9: Error (left) and computation time (right) as a function of N (the number of variables), for random graphs with uniform degree d = 6, mixed couplings, β = 0.1 and Θ = 2. Points are averages over 16 randomly generated instances. Each method converged on all instances.
Figure 10: Error (left) and computation time (right) as a function of variable degree d for regular random graphs of N = 24 variables with mixed couplings for β = 0.1 and Θ = 2. Points are averages over 16 randomly generated instances. Each method converged on all instances.
Figure 11: Results for different methods of obtaining initial estimates of cavity distributions, for (N = 100, d = 3) regular random graphs with mixed couplings and strong local fields Θ = 2.
Figure 12: Results for (N = 50, M = 50, k = 3) random factor graphs.
Figure 13: Scatter plots of errors for PROMEDAS instances.
Figure 14: Computation time (in seconds) of LCBP for PROMEDAS instances vs. N, the number of variables in the preprocessed graphical model. The solid line corresponds to t ∝ N².
Algorithm 1 Loop Correcting algorithm
Input: initial approximate cavity distributions {ζ^{\setminus i}_0}_{i∈V}
Output: improved approximate cavity distributions {ζ^{\setminus i}}_{i∈V}
1: repeat
2:
5Our C++ implementation of various approximate inference algorithms is free/open source software and can be downloaded from http://www.mbfys.ru.nl/ ∼ jorism/libDAI
Table 1: Results for the ALARM network

Method        Time (s)   Error
BP                0.00   2.026 · 10^-01
TreeEP            0.21   3.931 · 10^-02
GBP-Min           0.18   2.031 · 10^-01
CVM-Min           1.13   2.031 · 10^-01
CVM-∆           280.67   2.233 · 10^-01
CVM-Loop3         1.19   4.547 · 10^-02
CVM-Loop4       154.97   3.515 · 10^-02
CVM-Loop5      1802.83   5.316 · 10^-02
CVM-Loop6     84912.70   5.752 · 10^-02
LCBP             23.67   3.412 · 10^-05
3. The method by Montanari and Rizzo (2005) could probably be generalized in a way that stays closer to the original one than our proposal, but it is not so obvious how to do this.

4. For a set X, a partition of X is a nonempty set Y such that each Z ∈ Y is a nonempty subset of X and ⋃ Y = X.
9. The reason that we require the factor graph to be connected is that not all our approximate inference method implementations currently support factor graphs that consist of more than one connected component.
10. The ALARM network can be downloaded from http://compbio.cs.huji.ac.il/Repository/Datasets/alarm/alarm.dsc
Acknowledgments

The research reported here is part of the Interactive Collaborative Information Systems (ICIS) project, supported by the Dutch Ministry of Economic Affairs, grant BSIK03024. We thank Bastian Wemmenhove for stimulating discussions and for providing the PROMEDAS test cases.

Linearized version

The update equations can be linearized by expanding up to first order in the pair cumulants C^{\i}_{jk}. This yields the following linearized consistency equation (Montanari and Rizzo, 2005):

The final magnetizations (16) are, up to first order in the pair cumulants:
References

H. Bethe. Statistical theory of superlattices. Proc. R. Soc. A, 150:552-575, 1935.

M. Chertkov and V. Y. Chernyak. Loop series for discrete statistical models on graphs. Journal of Statistical Mechanics: Theory and Experiment, 2006(06):P06009, 2006. URL http://stacks.iop.org/1742-5468/2006/P06009.

G. Elidan, I. McGraw, and D. Koller. Residual belief propagation: Informed scheduling for asynchronous message passing. In Proceedings of the Twenty-second Conference on Uncertainty in AI (UAI), Boston, Massachussetts, July 2006.

V. Gómez, J. Mooij, and H. Kappen. In preparation, 2006.

T. Heskes, C. A. Albers, and H. J. Kappen. Approximate inference and constrained optimization. In Proc. of the 19th Annual Conf. on Uncertainty in Artificial Intelligence (UAI-03), pages 313-320, San Francisco, CA, 2003. Morgan Kaufmann Publishers.

T. Jaakkola and M. I. Jordan. Variational probabilistic inference and the QMR-DT network. Journal of Artificial Intelligence Research, 10:291-322, 1999. URL http://www.jair.org/papers/paper583.html.

R. Kikuchi. A theory of cooperative phenomena. Phys. Rev., 81:988-1003, 1951.

F. R. Kschischang, B. J. Frey, and H.-A. Loeliger. Factor graphs and the sum-product algorithm. IEEE Trans. Inform. Theory, 47(2):498-519, February 2001.

M. Mézard, G. Parisi, and M. A. Virasoro. Spin glass theory and beyond. World Scientific, Singapore, 1987.

T. Minka. Expectation Propagation for approximate Bayesian inference. In Proc. of the 17th Annual Conf. on Uncertainty in Artificial Intelligence (UAI-01), pages 362-369, San Francisco, CA, 2001. Morgan Kaufmann Publishers.

T. Minka and Y. Qi. Tree-structured approximations by Expectation Propagation. In S. Thrun, L. Saul, and B. Schölkopf, editors, Advances in Neural Information Processing Systems 16, Cambridge, MA, 2004. MIT Press.

A. Montanari and T. Rizzo. How to compute loop corrections to the Bethe approximation. Journal of Statistical Mechanics: Theory and Experiment, 2005(10):P10011, 2005. URL http://stacks.iop.org/1742-5468/2005/P10011.

J. M. Mooij and H. J. Kappen. On the properties of the Bethe approximation and loopy belief propagation on binary networks. Journal of Statistical Mechanics: Theory and Experiment, 2005(11):P11012, 2005. URL http://stacks.iop.org/1742-5468/2005/P11012.

G. Parisi. Statistical Field Theory. Addison-Wesley, Redwood City, CA, 1988.

G. Parisi and F. Slanina. Loop expansion around the Bethe-Peierls approximation for lattice models. arXiv.org preprint, cond-mat/0512529, 2005.

J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Francisco, CA, 1988.

A. Pelizzola. Cluster variation method in statistical physics and probabilistic graphical models. J. Phys. A: Math. Gen., 38:R309-R339, August 2005.

M. A. Shwe, B. Middleton, D. E. Heckerman, M. Henrion, E. J. Horvitz, H. P. Lehmann, and G. F. Cooper. Probabilistic diagnosis using a reformulation of the INTERNIST-1/QMR knowledge base. I. The probabilistic model and inference algorithms. Methods of Information in Medicine, 30(4):241-255, October 1991.

M. Takikawa and B. D'Ambrosio. Multiplicative factorization of noisy-max. In Proceedings of the 15th Annual Conference on Uncertainty in Artificial Intelligence (UAI-99), pages 622-63, San Francisco, CA, 1999. Morgan Kaufmann.

M. Welling and Y. W. Teh. Linear response for approximate inference. In S. Thrun, L. Saul, and B. Schölkopf, editors, Advances in Neural Information Processing Systems 16, Cambridge, MA, 2004. MIT Press.

M. Welling, T. Minka, and Y. W. Teh. Structured Region Graphs: Morphing EP into GBP. In Proceedings of the 21st Annual Conference on Uncertainty in Artificial Intelligence (UAI-05), page 609, Arlington, Virginia, 2005. AUAI Press.

W. Wiegerinck, H. J. Kappen, E. W. M. T. ter Braak, W. J. P. P. ter Burg, M. J. Nijman, Y. L. O, and J. P. Neijt. Approximate inference for medical diagnosis. Pattern Recognition Letters, 20:1231-1239, 1999.

J. S. Yedidia, W. T. Freeman, and Y. Weiss. Constructing free-energy approximations and Generalized Belief Propagation algorithms. IEEE Transactions on Information Theory, 51(7):2282-2312, July 2005.

A. L. Yuille. CCCP algorithms to minimize the Bethe and Kikuchi free energies: Convergent alternatives to belief propagation. Neural Computation, 14(7):1691-1722, 2002.
A Simplicial Complex Model for Dynamic Epistemic Logic to study Distributed Task Computability

Éric Goubault, LIX, École Polytechnique, Palaiseau, France
Jérémy Ledent, LIX, École Polytechnique, Palaiseau, France
Sergio Rajsbaum, Instituto de Matemáticas, UNAM, Ciudad Universitaria, 04510 Mexico, Mexico

9th Symposium on Games, Automata, Logics and Formal Verification (GandALF'18), EPTCS 277, 2018.
DOI: 10.4204/EPTCS.277.6. arXiv: 1809.03095.

Abstract. The usual epistemic S5_n model for a multi-agent system is based on a Kripke frame, which is a graph whose edges are labeled with agents that do not distinguish between two states. We propose to uncover the higher dimensional information implicit in this structure, by considering a dual, simplicial complex model. We use dynamic epistemic logic (DEL) to study how an epistemic simplicial complex model changes after a set of agents communicate with each other. We concentrate on an action model that represents the so-called immediate snapshot communication patterns of asynchronous agents, because it is central to distributed computability (but our setting works for other communication patterns). There are topological invariants preserved from the initial epistemic complex to the one after the action model is applied, which determine the knowledge that the agents gain after communication. Finally, we describe how a distributed task specification can be modeled as a DEL action model, and show that the topological invariants determine whether the task is solvable. We thus provide a bridge between DEL and the topological theory of distributed computability, which studies task solvability in a shared memory or message passing architecture.
© É. Goubault, J. Ledent & S. Rajsbaum. This work is licensed under the Creative Commons Attribution License.
Introduction
The usual epistemic logic model for a multi-agent system is based on a Kripke frame, which is a graph whose edges are labeled with agents that do not distinguish between two states. A Kripke S5 n model represents the knowledge of the agents about a given situation. Our first goal is to expose the topological information implicit in a Kripke model, replacing it by its dual, a simplicial complex model. We prove that these simplicial models are very closely related to the usual Kripke models: there is an equivalence of categories between the two structures. Thus, simplicial models retain the nice properties of Kripke models, such as soundness and completeness w.r.t. (a slightly modified version of) the logic S5 n .
To explain the interest of this duality, we extend it to a dynamic setting. We found that in this context, a very natural setting is dynamic epistemic logic (DEL) [4,10] with action models [3]. We extend the duality to this setting by defining a simplicial version of action models and a corresponding product update operator. Thus, the product update of an initial simplicial model I and an action model A yields a simplicial model I [A]. The possible patterns of communication permitted by the action model determine the topological invariants of I that are preserved in I[A].
We apply our framework to study fault-tolerant distributed computability, because its intimate relation to topology is well understood [16]. Also, DEL has applications to numerous research areas, but to the best of our knowledge it has not been used to study fault-tolerant distributed computing systems. We define a particular action model of interest, the immediate snapshot action model, which is well known in distributed computing because it fully preserves the topology of the initial complex. This model corresponds to wait-free asynchronous processes operating on a shared memory, which means that the processes run at an arbitrary speed, independent from the others, and are not allowed to wait for events to happen in other processes.
Another goal is to show how DEL can be used to specify a distributed task. A task is the equivalent of a function in distributed computability [2]. Agents start with an input value, and after communicating with the others, produce an output value. The task defines the possible inputs to the agents, and for each set of inputs, it specifies the set of outputs that the agents may produce. An important example is the consensus task, where all the agents must agree on one of their input values. We use DEL in a novel way,
to represent the task itself. A Kripke model I represents the possible initial states of the system. The task is specified by an action model T, which describes the output values that the agents should be able to produce, as well as preconditions specifying which inputs are allowed to produce which outputs. The product update of the input model I with T yields an epistemic model I[T] representing the knowledge that the agents should acquire to solve the task. Once the task is specified, given an action model A that represents some distributed protocol, the product update of I with A yields a Kripke model I[A] that models how agents perceive the world after the protocol has been executed. The protocol A solves the task if there exists a morphism δ that makes the following diagram commute (Definition 2). This intuitively happens when there is sufficient knowledge in I[A] to solve the task.

[Diagram: a commuting triangle δ : I[A] → I[T], with projections π_I : I[A] → I and π_I : I[T] → I.]
Beyond the applications that we provide in this paper, our main goal is to construct a general framework that connects epistemic logic and distributed computability. In one direction, uncovering the higherdimensional topological structure hidden in Kripke models allows us to transport methods that have been used successfully in the algebraic topological approach to fault-tolerant distributed computability [16] to the realm of DEL. In particular, the knowledge gained by applying an action model is intimately related to how well it preserves the topology of the initial model. The benefit in the other direction is in providing a formal epistemic logic semantics to distributed task computability. This allows one to understand better the abstract topological arguments in terms of how much knowledge is necessary to solve a task.
We concentrate on the specific setting of asynchronous wait-free shared read/write memory. However, there are known equivalences between task solvability in our model and other shared memory and message passing models, and this model can be used as a basis to study task solvability in other more complex models, e.g. where the number of processes that can crash is bounded or even where Byzantine failures are possible [16]. Nevertheless, this is far from telling the whole story. In the Conclusions section we discuss many interesting avenues that remain to be explored. And additional technical details appear in the companion Technical Reports [13,14].
Related work. Work on knowledge and distributed systems is of course one of the inspirations of the present work [26], especially where connectivity [7,8] is used. But the authors know of no previous work using DEL [4,10] to study such systems, and neither on directly connecting the combinatorial topological setting of [16] with Kripke models. In [20], the author proposes a variant of (non dynamic) epistemic logic for a restricted form of wait-free task specification that cannot account for important tasks such as consensus. Similar to [24], we show that even though a problem may not explicitly mention the agents' knowledge, it can in fact be restated as knowledge gain requirements. Nevertheless, we exploit the "runs and systems" framework in an orthogonal way, and the knowledge requirements we obtain are about inputs; common knowledge in the case of consensus, but other forms of nested knowledge for other tasks. In contrast, the knowledge of precondition principle of [24] implies that common knowledge is a necessary condition for performing simultaneous actions. Our formulation of carrier maps as products has been partially observed in [15]. There are other (categorical) connections between Kripke frames and geometry [25].
DEL is often thought of as inherently being capable of modeling only agents that are synchronous, but as discussed in [9], this is not the case. More recently, [21] proposes a variant of public announcement logic for asynchronous systems that introduces two different modal operators for sending and receiving messages. As we show here, DEL can naturally model the knowledge in an asynchronous distributed system, at least as far as it is concerned with task solvability. Further work is needed to study more in depth the knowledge that is represented in this way.
A simplicial model for epistemic logic
We describe here the new kind of model for epistemic logic, based on chromatic simplicial complexes. The link with DEL and distributed computing will be developed in the next sections.
Syntax. Let AP be a countable set of propositional variables and A a finite set of agents. The language L K is generated by the following BNF grammar:
ϕ ::= p | ¬ϕ | (ϕ ∧ ϕ) | K a ϕ p ∈ AP, a ∈ A
In the following, we work with n + 1 agents, and write A = {a 0 , . . . , a n }.
Simplicial complexes and Kripke frames. Given a set V, a simplicial complex C is a family of non-empty finite subsets of V such that for all X ∈ C, Y ⊆ X implies Y ∈ C. We say Y is a face of X. Elements of V (identified with singletons) are called vertices. Elements of C are simplexes, and those which are maximal w.r.t. inclusion are facets. The set of vertices of C is written V(C), and the set of facets F(C). The dimension of a simplex X ∈ C is |X| − 1. A simplicial complex C is pure if all its facets have the same dimension, n; in this case, we say C is of dimension n. Given the set A of agents (which we represent as colors), a chromatic simplicial complex ⟨C, χ⟩ consists of a simplicial complex C and a coloring map χ : V(C) → A, such that for all X ∈ C, all the vertices of X have distinct colors. Let C and D be two simplicial complexes. A simplicial map f : C → D maps the vertices of C to vertices of D, such that if X is a simplex of C, f(X) is a simplex of D. A chromatic simplicial map between two chromatic simplicial complexes is a simplicial map that preserves colors. Let S_A be the category of pure chromatic simplicial complexes on A, with chromatic simplicial maps as morphisms.
A Kripke frame M = ⟨S, ∼⟩ over a set A of agents consists of a set of states S and a family of equivalence relations on S, written ∼_a for every a ∈ A. Two states u, v ∈ S such that u ∼_a v are said to be indistinguishable by a. A Kripke frame is proper if any two states can be distinguished by at least one agent. Let M = ⟨S, ∼⟩ and N = ⟨T, ∼′⟩ be two Kripke frames. A morphism from M to N is a function f from S to T such that for all u, v ∈ S, for all a ∈ A, u ∼_a v implies f(u) ∼′_a f(v). We write K_A for the category of proper Kripke frames, with morphisms of Kripke frames as arrows.
The following theorem states that we can canonically associate a Kripke frame with a pure chromatic simplicial complex, and vice versa. In fact, this correspondence extends to morphisms, and thus we have an equivalence of categories, meaning that the two structures contain the same information.
Theorem 1. S A and K A are equivalent categories.
Proof. We construct functors F : S A → K A and G : K A → S A as follows.
Let C be a pure chromatic simplicial complex on the set of agents A. Its associated Kripke frame is F(C) = S, ∼ , where S is the set of facets of C, and the equivalence relation ∼ a , for each a ∈ A, is generated by the relations X ∼ a Y (for X and Y facets of C) if a ∈ χ(X ∩Y ).
For a morphism f : C → D in S A , we define F( f ) : F(C) → F(D) that takes a facet X of C to its image f (X), which is a facet of D since f is a chromatic map. Assume X and Y are facets of C such that
X ∼_a Y in F(C), that is, a ∈ χ(X ∩ Y). So there is a vertex v ∈ V(C) such that v ∈ X ∩ Y and χ(v) = a. Then f(v) ∈ f(X) ∩ f(Y) and χ(f(v)) = a, so a ∈ χ(f(X) ∩ f(Y)). Therefore, f(X) ∼_a f(Y), and F(f) is a morphism of Kripke frames.

Conversely, let M = ⟨S, ∼⟩ be a Kripke frame. The simplicial complex G(M) has one n-simplex {v_0^s, …, v_n^s} for each state s ∈ S, with χ(v_i^s) = a_i, where the vertices v_i^s and v_i^{s'} are identified whenever s ∼_{a_i} s'; we write [v_i^s] for the resulting vertex. Let f : M → N be a morphism in K_A. We define G(f) : G(M) → G(N) that maps a vertex [v_i^s] of G(M) to the vertex [v_i^{f(s)}] of G(N)
. This map is well-defined (i.e., the image of a vertex does not depend on the chosen representative) because f is a morphism of Kripke frames, and thus it preserves the indistinguishability relations. It is easily checked that this is moreover a simplicial map.
Consider now a Kripke frame M = ⟨S, ∼⟩ in K_A with agent set A. FG(M) is the Kripke frame N = ⟨T, ∼⟩ such that T is the set of facets of G(M). But we have seen above that the facets of G(M) are of the form {[v_0^s], …, [v_n^s]} (where s ∈ S); therefore, T is in bijection with S. Finally, in FG(M), X ∼_a Y if and only if a ∈ χ(X ∩ Y), which happens exactly when the corresponding states of M are indistinguishable by a; hence FG(M) is isomorphic to M.

Consider now a pure chromatic simplicial complex C ∈ S_A. It is easily seen that GF(C) is isomorphic, as a pure chromatic simplicial complex, to C; hence S_A and K_A are equivalent categories.

Example 1. The picture below shows a Kripke frame (left) and its associated chromatic simplicial complex (right). The three agents, named b, g, w, are represented as colors black, grey and white on the vertices of the simplicial complex. The three worlds of the Kripke frame correspond to the three triangles (i.e., 2-dimensional simplexes) of the simplicial complex. The two worlds indistinguishable by agent b are glued along their black vertex; the two worlds indistinguishable by agents g and w are glued along the grey-and-white edge.
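The functors F and G in this proof can be made concrete in a few lines of Python. This is our own illustration, not code from the paper: facets are frozensets of (agent, vertex) pairs, F turns facets into Kripke states related when they share a colored vertex, and G glues one n-simplex per state through the equivalence classes. The vertex names and the exact worlds of Example 1 used below are assumptions.

```python
from itertools import combinations

def F(facets):
    """States are the facets; X ~_a Y is generated by X and Y sharing
    an a-colored vertex."""
    states = list(facets)
    indist = {}
    for (i, X), (j, Y) in combinations(enumerate(states), 2):
        for (a, _) in X & Y:
            indist.setdefault(a, set()).add((i, j))
    return states, indist

def G(states, classes):
    """classes[a][s] is the ~_a-equivalence class of state s; every
    state s yields one facet, and gluing happens via shared classes."""
    return {frozenset((a, classes[a][s]) for a in classes) for s in states}

# An assumed reading of Example 1: worlds 0, 1, 2 with 0 ~_b 1 and 1 ~_{g,w} 2.
classes = {'b': {0: 'b01', 1: 'b01', 2: 'b2'},
           'g': {0: 'g0', 1: 'g12', 2: 'g12'},
           'w': {0: 'w0', 1: 'w12', 2: 'w12'}}
C = G([0, 1, 2], classes)
assert len(C) == 3                # three triangles
assert len(set().union(*C)) == 6  # six vertices after gluing
_, indist = F(C)
assert {a: len(p) for a, p in indist.items()} == {'b': 1, 'g': 1, 'w': 1}
```

The round trip recovers the frame: three states, with exactly one indistinguishable pair per agent, as in the example.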
We now decorate our simplicial complexes with atomic propositions in order to get a notion of simplicial model.
Simplicial models and Kripke models. For technical reasons, we restrict to models where all the atomic propositions are saying something about some local value held by one particular agent. All the examples that we are interested in will fit in that framework. Let V be some countable set of values, and AP = {p a,x | a ∈ A, x ∈ V} be the set of atomic propositions. Intuitively, p a,x is true if agent a holds the value x. We write AP a for the atomic propositions concerning agent a.
A simplicial model M = ⟨C, χ, ℓ⟩ consists of a pure chromatic simplicial complex ⟨C, χ⟩ of dimension n, and a labeling ℓ : V(C) → P(AP) that associates with each vertex v ∈ V(C) a set of atomic propositions concerning agent χ(v), i.e., such that ℓ(v) ⊆ AP_{χ(v)}. Given a facet X = {v_0, …, v_n} ∈ C, we write ℓ(X) = ⋃_{i=0}^{n} ℓ(v_i).
A morphism of simplicial models f : M → M′ is a chromatic simplicial map that preserves the labeling: ℓ′(f(v)) = ℓ(v) (and the coloring χ). We denote by SM_{A,AP} the category of simplicial models over the set of agents A and atomic propositions AP. A Kripke model M = ⟨S, ∼, L⟩ consists of a Kripke frame ⟨S, ∼⟩ equipped with a labeling L : S → P(AP); it is local if s ∼_a s′ implies L(s) ∩ AP_a = L(s′) ∩ AP_a, i.e., every agent knows its own values. We can now extend the two maps F and G of Theorem 1 to an equivalence between simplicial models and Kripke models.

Theorem 2. The category SM_{A,AP} of simplicial models is equivalent to the category of local proper Kripke models.

Proof. Given a simplicial model M = ⟨C, χ, ℓ⟩, F(M) is the Kripke model whose underlying frame is F(⟨C, χ⟩), with labeling L(X) = ⋃_{v∈X} ℓ(v) for every facet X. This Kripke model is local since X ∼_a Y means that X and Y share an a-colored vertex v, so L(X) ∩ AP_a = L(Y) ∩ AP_a = ℓ(v).
Conversely, given a Kripke model M = ⟨S, ∼, L⟩, the underlying simplicial complex of G(M) is obtained by gluing together n-simplexes of the form {v_0^s, …, v_n^s}, with s ∈ S. We label the vertex v_i^s (colored by a_i) by ℓ(v_i^s) = L(s) ∩ AP_{a_i}. This is well defined because two vertices v_i^s and v_i^{s'} are identified whenever s ∼_{a_i} s', so L(s) ∩ AP_{a_i} = L(s') ∩ AP_{a_i} since M is local.
The action of F and G on morphisms is the same as in Theorem 1. It is easy to check that the additional properties of morphisms between models are verified. Checking that FG(M) ≅ M and GF(M) ≅ M also works the same as in the previous theorem.
Example 2. The figure below shows the so-called binary input complex and its associated Kripke model, for 2 and 3 agents. Each agent gets a binary value 0 or 1, but doesn't know which value has been received by the other agents. So, every possible combination of 0's and 1's is a possible world.
In the Kripke model, the agents are called b, g, w, and the labeling L of the possible worlds is represented as a sequence of values, e.g., 101, representing the values chosen by the agents b, g, w (in that order). In the 3-agents case, the labels of the dotted edges have been omitted to avoid overloading the picture, as well as all the edges labeled by only one agent.
In the simplicial model, agents are represented as colors (black, grey, and white). The labeling is represented as a single value in a vertex, e.g., "1" in a grey vertex means that agent g has chosen value 1. The possible worlds correspond to edges in the 2-agents case, and triangles in the 3-agents case. It is well known in the context of distributed computing [16] that the binary input simplicial complex for n + 1 agents is an n-dimensional sphere.
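The sphere claim can be checked by a direct Euler-characteristic computation. The following Python sketch is our own encoding (a vertex is an (agent, value) pair, a world is the facet of its three vertices) and counts the simplexes of the 3-agent binary input complex:

```python
from itertools import product, combinations

agents = ('b', 'g', 'w')
# One facet (triangle) per assignment of binary values to the agents;
# a vertex (agent, value) is shared by all worlds where that agent
# holds that value.
facets = [frozenset(zip(agents, vals)) for vals in product([0, 1], repeat=3)]

# All simplexes of the complex: non-empty subsets of the facets.
simplexes = {frozenset(s) for X in facets
             for k in (1, 2, 3) for s in combinations(X, k)}
V = sum(1 for s in simplexes if len(s) == 1)
E = sum(1 for s in simplexes if len(s) == 2)
T = sum(1 for s in simplexes if len(s) == 3)
assert (V, E, T) == (6, 12, 8)   # the octahedron
assert V - E + T == 2            # Euler characteristic of the 2-sphere
```

The complex is the octahedron, whose Euler characteristic V − E + T = 2 is that of the 2-sphere, consistent with the statement above.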
Example 3. Consider the following situation. There are three agents black, grey and white, and a deck of four cards, {0, 1, 2, 3}. One card is given to each agent, and the last card is kept hidden. Each agent knows its own card, but not the other agents' cards. The simplicial model corresponding to that situation is depicted below on the left. The color of vertices indicate the corresponding agent, and the labeling is its card. In the planar drawing, vertices that appear several times with the same color and value should be identified. The arrows A and B indicate how the edges should be glued together. What we obtain is a triangulated torus.
If the deck of cards is {0, 1, 2}, we get the figure on the right, where the two white vertices (with card 0) should be identified, as well as the two black ones.
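The torus shape of the 4-card model can be verified with the same counting method, again using our own (agent, card) encoding of vertices, which is an illustration rather than the paper's notation:

```python
from itertools import permutations, combinations

agents = ('black', 'grey', 'white')
# One world per deal of 3 distinct cards out of {0, 1, 2, 3}; each
# agent sees only its own card, so a vertex is an (agent, card) pair.
facets = [frozenset(zip(agents, deal)) for deal in permutations(range(4), 3)]

simplexes = {frozenset(s) for X in facets
             for k in (1, 2, 3) for s in combinations(X, k)}
V = sum(1 for s in simplexes if len(s) == 1)
E = sum(1 for s in simplexes if len(s) == 2)
T = sum(1 for s in simplexes if len(s) == 3)
assert (V, E, T) == (12, 36, 24)
assert V - E + T == 0   # Euler characteristic of the torus
```

The 24 deals give 24 triangles, 36 edges and 12 vertices, so the Euler characteristic is 0, as expected for the triangulated torus described above.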
Thus, Theorem 2 says that simplicial models are closely related to Kripke models. Keeping that translation in mind, we can reformulate the usual semantics of formulas in Kripke models in terms of simplicial models.

Definition 1. We define the truth of a formula ϕ in some epistemic state (M, X), with M = ⟨C, χ, ℓ⟩ a simplicial model, X ∈ F(C) a facet of C, and ϕ ∈ L_K(A, AP). The satisfaction relation, determining when a formula is true in an epistemic state, is defined as:

M, X ⊨ p        iff  p ∈ ℓ(X)
M, X ⊨ ¬ϕ       iff  M, X ⊭ ϕ
M, X ⊨ ϕ ∧ ψ    iff  M, X ⊨ ϕ and M, X ⊨ ψ
M, X ⊨ K_a ϕ    iff  for all Y ∈ F(C), a ∈ χ(X ∩ Y) implies M, Y ⊨ ϕ
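The clause for K_a can be implemented directly on facets. Below is a small sketch on the two-agent binary input model; the encoding and the representation of formulas as Python predicates on facets are our own assumptions, not the paper's:

```python
from itertools import product

agents = ('a', 'b')
# Two-agent binary input model: a world is an edge {(a, x), (b, y)}.
facets = [frozenset(zip(agents, vals)) for vals in product([0, 1], repeat=2)]

def knows(facets, X, agent, phi):
    """M, X |= K_agent phi: phi must hold in every facet Y sharing the
    agent-colored vertex with X (the worlds agent cannot distinguish)."""
    v = next(u for u in X if u[0] == agent)
    return all(phi(Y) for Y in facets if v in Y)

X = frozenset({('a', 1), ('b', 1)})
holds_a1 = lambda Y: ('a', 1) in Y   # atomic proposition p_{a,1}
holds_b1 = lambda Y: ('b', 1) in Y   # atomic proposition p_{b,1}
assert knows(facets, X, 'a', holds_a1)       # a knows its own value...
assert not knows(facets, X, 'a', holds_b1)   # ...but not b's value
```

In the world where both agents chose 1, agent a knows its own value but not b's, since a's vertex is also part of the facet where b chose 0.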
We can show that this definition of truth agrees with the usual one (which we write ⊨_K to avoid confusion) on the corresponding Kripke model.

Proposition 1. Given a simplicial model M, a facet X of M, and a formula ϕ ∈ L_K(A, AP), we have M, X ⊨ ϕ if and only if F(M), X ⊨_K ϕ.

Proof. This is straightforward by induction on the formula ϕ.
It is well-known that the axiom system S5 n is sound and complete with respect to the class of Kripke models [10]. Since we restrict here to local Kripke models, we need to add the following axiom (or axiom schema, if V is infinite), saying that every agent knows which values it holds:
Loc = ⋀_{a∈A, x∈V} (K_a(p_{a,x}) ∨ K_a(¬p_{a,x}))
Corollary 1. The axiom system S5 n + Loc is sound and complete w.r.t. the class of simplicial models.
Proof. Adapting the proof of [10] for S5 n , it can be shown that S5 n + Loc is sound and complete w.r.t. the class of local proper Kripke models. Then, we transpose it to simplicial models using Proposition 1.
Indeed, suppose a formula ϕ is true for every local proper Kripke model and any state. Then given a simplicial model and facet (M, X), since by assumption F(M), X |= K ϕ, we also have M, X |= ϕ by Proposition 1. So ϕ is true in every simplicial model. Similarly, the converse also holds.
The following theorem shows that morphisms of simplicial models cannot "gain knowledge about the world". This will be useful in Section 4 when we formulate the solvability of a task as the existence of some morphism.

Theorem 3. Let M and M′ be simplicial models, f : M → M′ a morphism, and X a facet of M. Let ϕ be a formula that does not contain negations, except possibly in front of atomic propositions. Then M′, f(X) ⊨ ϕ implies M, X ⊨ ϕ.

Proof. We proceed by induction on ϕ. The case of atomic propositions (possibly negated) follows from ℓ′(f(X)) = ℓ(X), and conjunction is immediate. Suppose now that M′, f(X) ⊨ K_a ϕ. In order to show M, X ⊨ K_a ϕ, assume that a ∈ χ(X ∩ Y) for some facet Y, and let us prove M, Y ⊨ ϕ.
Let v be the a-colored vertex in X ∩ Y. Then f(v) ∈ f(X) ∩ f(Y) and χ(f(v)) = a. So a ∈ χ(f(X) ∩ f(Y)), and thus M′, f(Y) ⊨ ϕ. By induction hypothesis, we obtain M, Y ⊨ ϕ. Finally, suppose that M′, f(X) ⊨ C_B ϕ. We want to show that M, X ⊨ C_B ϕ, i.e., for every Y reachable from X following a sequence of simplexes sharing a B-colored vertex, M, Y ⊨ ϕ. By the same reasoning as in the K_a case, f(Y) is B-reachable from f(X), so M′, f(Y) ⊨ ϕ, and thus M, Y ⊨ ϕ.
The restriction on ϕ forbids formulas saying something about what an agent does not know. Indeed, one can "gain" the knowledge that some agent does not know something; but this is not relevant information for solving the tasks that we have in mind. In fact, Theorem 3 is still true if the formula ϕ contains other knowledge operators such as group and common knowledge. For B a subgroup of agents, group knowledge is defined as E B ϕ = b∈B K b ϕ and common knowledge for group B is, semantically, the least solution of the equation C B ϕ = ϕ ∧ E B (C B ϕ).
DEL via simplicial complexes
We describe here our adaptation of Dynamic Epistemic Logic (DEL) to simplicial models, and an action model that is fundamental in distributed computing.
DEL basic notions
DEL is the study of modal logics of model change [4,10]. A modal logic studied in DEL is obtained by using action models [3], which are relational structures that can be used to describe a variety of informational actions. An action can be thought of as an announcement made by the environment, which is not necessarily public, in the sense that not all agents receive these announcements. An action model describes all the possible actions that might happen, as well as how they affect the different agents. We first recall the usual notion of action model; then describe a dual version, appropriate to represent epistemic change in simplicial models.
Dynamic Epistemic Logic. An action model is a structure ⟨T, ∼, pre⟩, where T is a domain of action points, such that for each a ∈ A, ∼ a is an equivalence relation on T , and pre : T → L K is a function that assigns a precondition pre(t) to each t ∈ T . For an initial Kripke model M, the effect of action model A is the product update Kripke model M[A], recalled below.

A simplicial complex version of DEL. To work in the category of simplicial models, we consider a simplicial version of action models. First, let us define cartesian products. Given two pure chromatic simplicial complexes C and T of dimension n, the cartesian product C × T is the following pure chromatic simplicial complex of dimension n. Its vertices are of the form (u, v) with u ∈ V(C) and v ∈ V(T ) such that χ(u) = χ(v); the color of (u, v) is χ((u, v)) = χ(u) = χ(v). Its simplexes are of the form
X ×Y = {(u 0 , v 0 ), . . . , (u k , v k )} where X = {u 0 , . . . , u k } ∈ C, Y = {v 0 , . . . , v k } ∈ T and χ(u i ) = χ(v i ).
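To make the construction concrete, here is a small Python sketch (our own illustration, with hypothetical names) that forms the facets of the cartesian product of two pure chromatic complexes by pairing vertices of equal color:

```python
def product_facets(facets_C, facets_T, color):
    """Facets X x Y of the cartesian product: pair each vertex of X with
    the unique vertex of Y carrying the same color."""
    out = []
    for X in facets_C:
        for Y in facets_T:
            by_color = {color(v): v for v in Y}
            out.append(frozenset((u, by_color[color(u)]) for u in X))
    return out

color = lambda v: v[0]                   # vertices are (agent, value) pairs
facets_C = [frozenset({('a', 0), ('b', 0)}), frozenset({('a', 1), ('b', 1)})]
facets_T = [frozenset({('a', 'x'), ('b', 'y')})]
prod = product_facets(facets_C, facets_T, color)
assert len(prod) == 2 and all(len(F) == 2 for F in prod)
```

The product update model M[A] defined in the text then keeps only those facets X × Y for which pre(Y ) holds in X.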
A simplicial action model ⟨T, χ, pre⟩ consists of a pure chromatic simplicial complex ⟨T, χ⟩, where the facets F(T ) represent communicative actions, and pre assigns to each facet X ∈ F(T ) a precondition formula pre(X) in L K . Let M = ⟨C, χ, ℓ⟩ be a simplicial model, and A = ⟨T, χ, pre⟩ be a simplicial action model; the product update simplicial model M[A] is defined below. Recall from Theorem 2 the two functors F and G that define an equivalence of categories between simplicial models and Kripke models. We have a similar correspondence between action models and simplicial action models, which we still write F and G. On the underlying Kripke frame and simplicial complex they are the same as before; and the precondition of an action point is just copied to the corresponding facet. The simplicial version of the product update model agrees with the usual one on Kripke models:
Proposition 3. Consider a simplicial model M and simplicial action model A, and their corresponding Kripke model F(M) and action model F(A). Then, the Kripke models F(M[A]) and F(M)[F(A)] are isomorphic. The same is true for G, starting with a Kripke model M and action model A.
Proof. The main observation is that both constructions of product update model rely on a notion of cartesian product (in the category of pure chromatic simplicial complexes for M[A], and in the category of Kripke frames for F(M)[F(A)]). These are both cartesian products in the categorical sense, and therefore they are preserved by the functor F because it is part of an equivalence of categories.
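For comparison, the Kripke-side product update can be sketched in a few lines of Python (our own illustration; `sat` is an assumed oracle for precondition truth):

```python
def product_update(states, rel, sat, actions, arel, pre, agents):
    """Kripke product update M[A]: worlds are pairs (s, t) with pre(t) true
    at s; (s,t) ~a (s',t') iff s ~a s' and t ~a t'."""
    worlds = [(s, t) for s in states for t in actions if sat(s, pre[t])]
    new_rel = {a: {(w, w2) for w in worlds for w2 in worlds
                   if (w[0], w2[0]) in rel[a] and (w[1], w2[1]) in arel[a]}
               for a in agents}
    return worlds, new_rel

# Public announcement of p, with p true only at state 0.
states, actions = [0, 1], ['ann']
rel = {'a': {(s, s2) for s in states for s2 in states}}   # a knows nothing
arel = {'a': {('ann', 'ann')}}
pre, sat = {'ann': 'p'}, lambda s, f: f == 'p' and s == 0
worlds, _ = product_update(states, rel, sat, actions, arel, pre, ['a'])
assert worlds == [(0, 'ann')]            # only the p-state survives
```

The simplicial construction computes the same thing facet-wise, which is the content of Proposition 3.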
A basic action model for distributed computing
We describe here the immediate snapshot action model IS for one communication exchange among asynchronous agents. As an action model it is new: to the best of our knowledge it has not been studied from the DEL perspective. Immediate snapshot operations are important in distributed computing, and many variants of computational models based on them have been considered, including multi-round communication exchanges, see e.g. [1,16]; but for the point we want to make about using DEL, the main issues can be studied with this very simple action model.
The situation we have in mind is the following. The n + 1 agents correspond to n + 1 concurrent processes. Initially, each process has some input value, and they communicate (only once) through a shared memory array in order to try to learn each other's input value. They use the following protocol: each process has a dedicated memory cell in the array, to which it writes its input value. Then, it reads one by one all the cells of the array, to see which other input values have been written. Based on this information, each process decides an output value. The processes are asynchronous, meaning that an execution consists of an arbitrary interleaving of the write and read operations of all the processes (one write per process, and n + 1 reads per process).
We could describe the action model corresponding to this situation, and present all of our results using it, but to illustrate more easily the basic ideas, we define instead an action model IS corresponding to a subset of all the executions in the previous situation. And we do so without loss of generality, because from the task computability perspective, they are known to be equivalent [1].
The interleavings we consider can be represented by a sequence of concurrency classes, c 1 , c 2 , . . . , c m . For each concurrency class c i , all the agents in c i execute their write operations simultaneously, then all of them execute their read operations simultaneously, then we move on to the next concurrency class c i+1 . Thus, all the agents in c i see each other's values, as well as the values of the agents from the previous concurrency classes.
Let us define formally the simplicial action model corresponding to such immediate snapshot schedules. A sequential partition of agents A is a sequence c = c 1 , c 2 , . . . , c m , of non-empty, disjoint subsets of A, whose union is equal to A. Each c i is called a concurrency class. Notice that 1 ≤ m ≤ |A|, and when m = 1 all agents take an immediate snapshot concurrently, while if m = |A|, all take immediate snapshots sequentially. The agents in a concurrency class c j learn the input values of all the agents in earlier concurrency classes c i for i ≤ j, and which agent wrote which value. In particular, agents in c m learn the inputs of all agents (and there is always at least one such agent), and if m = 1, then all agents learn all the values. Define Aview a (c) ('A' stands for "agent" view) to be the set of agents whose inputs are seen by a in c: if a ∈ c j , Aview a (c) = i≤ j c i . Notice that two executions of the immediate snapshot are indistinguishable by a when the corresponding sequential partitions yield the same Aview for a, and additionally, the agents in Aview have the same inputs.
Consider for instance the simplicial model of Example 2 where three agents A = {b, g, w} each have a binary input value 0 or 1. Let M = ⟨C, χ, ℓ⟩ be the corresponding simplicial model, and denote a facet X ∈ F(C) by a binary sequence b 0 b 1 b 2 , corresponding to the three values of b, g, w, in that order.
In the immediate snapshot simplicial action model IS = ⟨T, χ, pre⟩ for three agents A = {b, g, w}, each action in T is associated with a sequential partition of A. Furthermore, there is one copy of each sequential partition c for each facet X ∈ F(C) of model M. Thus, an action of T is given by the data ⟨c, b 0 , b 1 , b 2 ⟩, which we write c b 0 b 1 b 2 . The precondition of the action c b 0 b 1 b 2 is true precisely in the facet b 0 b 1 b 2 of M (it is a conjunction of atomic propositions), and the corresponding facet of T is
{ ⟨b, view b (c b 0 b 1 b 2 )⟩, ⟨g, view g (c b 0 b 1 b 2 )⟩, ⟨w, view w (c b 0 b 1 b 2 )⟩ }
The figure above illustrates (part of) the action model IS. It consists of the subdivisions of two triangles; the green copy above has one triangle for each sequential partition (and has four sequential partitions depicted). Similarly, the yellow subdivided triangle below repeats again all sequential partitions, but for a different precondition. The precondition for all facets in the subdivided triangle above is 000, while for the facets of the subdivided triangle below it is 100. The subdivision on top has four facets identified, X,Y, Z,W , each one corresponding to one of the four types of sequential partitions of A = {b, w, g}, along with the corresponding Aviews shown in bubbles. The colors black, grey, white of the vertices correspond respectively to agents b, g, w. Notice that, for example, neither b nor w distinguish between actions Y and Z, and indeed, their views are equal in Y and Z: the view of b consists of itself and the view of w consists of the three inputs. The numbers on the subdivided triangles indicate the views. In the corners, an agent does not learn the input of any other agent. In the boundary, two agents learn each other's inputs, and in the center, all three learn each other's inputs. Finally, let us look at what happens on the boundary shared by both subdivisions. For example, the two facets in the middle of the figure correspond to the sequential partition {gw}{b}; neither w nor g have seen b, so they cannot tell whether the input of b is 0 or 1.
An action model is uniform if its set of actions (facets) can be partitioned into k copies of a complex C, called components, such that all actions in C i have the same precondition, which is true in exactly one facet X i of the simplicial model M. The action model IS is indeed uniform, and its components are isomorphic to a simplicial complex C, called the standard chromatic subdivision, which has been thoroughly studied. It is clear from the figure that C is a subdivision, but for an arbitrary number of agents the proof is not simple [16,23]. It has been shown to have several other topological properties, such as being collapsible [5]. But in fact, for many applications such as consensus and set agreement, it is sufficient to observe the following (Lemma 1; see ch.9 of [16]); for a detailed proof see [1]. To complete the example, notice that the effect of applying the action model IS to the model M of Example 2, which consists of a triangulated sphere, is to subdivide each of the triangles in the sphere. Remarkably, the topology of the initial simplicial complex is preserved.
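The sequential partitions can be enumerated mechanically; for three agents there are 13 of them, matching the 13 facets of one component of IS, i.e. of the standard chromatic subdivision of a triangle. The following is our own sketch, not code from the paper:

```python
from itertools import combinations

def sequential_partitions(agents):
    """All ordered partitions of `agents` into non-empty concurrency classes."""
    agents = frozenset(agents)
    if not agents:
        yield []
        return
    for k in range(1, len(agents) + 1):
        for first in combinations(sorted(agents), k):
            for rest in sequential_partitions(agents - set(first)):
                yield [set(first)] + rest

parts = list(sequential_partitions({'b', 'g', 'w'}))
assert len(parts) == 13                  # facets per component of IS
assert [{'b', 'g', 'w'}] in parts        # the fully concurrent execution
```

The count 13 is the ordered Bell (Fubini) number for three elements; the figure's subdivided triangles show exactly these facets.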
In the IS model, each agent executes a single immediate snapshot. Iterating this model gives rise to the iterated immediate snapshot model IS r [1,27], where each agent executes r consecutive immediate snapshots. Each component is a chromatic subdivision, where every triangle is subdivided r times.
A DEL semantics for distributed task computability

4.1 Tasks
Consider the situation where a set of agents A starts in an initial global state, defined by values given to each agent. The values are local, in the sense that each agent knows its own initial value, but not necessarily the values given to other agents. The agents communicate to each other their initial values, via the immediate snapshot action model IS of Section 3.2. Then, based on the information each agent has after communication, the agent produces an output value. A task specifies the output values that the agents may decide, when starting in a given input state. Tasks have been studied since early on in distributed computability [6]. Here we provide, for the first time, a DEL semantics for tasks. Consider a simplicial model I = ⟨I, χ, ℓ⟩ called the initial simplicial model. Each facet of I, with its labeling ℓ, represents a possible initial configuration. Let us fix I to be the binary inputs model of Example 2, to illustrate the ideas, and because it appears frequently in distributed computing.
A task for I is a simplicial action model T = ⟨T, χ, pre⟩ for agents A, where each facet is of the form X = { ⟨b, d b ⟩, ⟨g, d g ⟩, ⟨w, d w ⟩ }, where the values d b , d g , d w are taken from an arbitrary domain of output values. Each such X has a precondition that is true in one or more facets of I, interpreted as "if the input configuration is a facet in which pre(X) holds, and every agent a ∈ A decides the value d a , then this is a valid execution".
The most important task in distributed computing is binary consensus, where the agents must agree on a value 0 or 1, such that at least one agent had the agreed value as input. Thus, T has only two facets, X 0 where all decisions are 0 and X 1 , where all decisions are 1. pre(X 0 ) is true in all facets of I, except for the one where all agents start with input 1. Similarly, pre(X 1 ) is true in all facets of I, except for the one where all agents start with input 0. The following generalization of consensus has been well studied in distributed computability [16]. In the k-set agreement task, agents start with inputs from a set of at least k + 1 values, and have to decide on at most k different inputs.
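The task conditions are easy to state as a predicate on input and decision vectors; the following Python check (ours, with hypothetical names) covers both consensus (k = 1) and k-set agreement:

```python
def valid_k_set_agreement(inputs, decisions, k):
    """Valid iff every decided value was some agent's input and at most
    k distinct values are decided; k = 1 is consensus."""
    decided = set(decisions.values())
    return decided <= set(inputs.values()) and len(decided) <= k

inputs = {'b': 0, 'g': 1, 'w': 2}
assert valid_k_set_agreement(inputs, {'b': 0, 'g': 0, 'w': 1}, 2)
assert not valid_k_set_agreement(inputs, {'b': 0, 'g': 1, 'w': 2}, 2)
assert not valid_k_set_agreement(inputs, {'b': 3, 'g': 3, 'w': 3}, 1)
```

The facets of the task action model T are exactly the decision vectors passing this check for some input facet, with that input facet as precondition.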
Semantics of task solvability
Given the simplicial input model I and a communication model A such as IS, we get the simplicial protocol model I[A], which represents the knowledge gained by the agents after executing A. To solve a task T , each agent, based on its own knowledge, should produce an output value, such that the vector of output values corresponds to a facet of T , respecting the preconditions of the task.
The following gives a formal epistemic logic semantics to task solvability. Recall that a morphism δ of simplicial models is a chromatic simplicial map that preserves the labeling: ℓ( f (v)) = ℓ(v). Also recall that the product update model I[A] is a sub-complex of the cartesian product I × A, whose vertices are of the form (i, ac) with i a vertex of I and ac a vertex of A. We write π I for the first projection on I, which is a morphism of simplicial models.
Definition 2. A task T is solvable in A if there exists a morphism δ : I[A] → I[T ]
such that π I • δ = π I , i.e., the diagram of simplicial complexes below commutes.
[Diagram: δ : I[A] → I[T ] together with the projections π I : I[A] → I and π I : I[T ] → I forms a commuting triangle.]
The justification for this definition is the following. A facet X in I[A] corresponds to a pair (i, ac), where i ∈ F(I) represents input value assignments to all agents, and ac ∈ F(A) represents an action, codifying the communication exchanges that took place. The morphism δ takes X to a facet δ (X) = (i, dec) of I[T ], where dec ∈ F(T ) is the set of decision values that the agents choose in the situation X. Moreover, pre(dec) holds in i, meaning that dec corresponds to valid decision values for input i. The commutativity of the diagram expresses the fact that both X and δ (X) correspond to the same input assignment i. Now consider a single vertex v ∈ X with χ(v) = a ∈ A. Then, agent a decides its value solely according to its knowledge in I[A]: if another facet X′ also contains v, then δ (v) ∈ δ (X) ∩ δ (X′), meaning that a has to decide the same value in both situations.
The diagram above has two illuminating interpretations. First, by Theorem 3, we know that the knowledge about the world of each agent can only decrease (or stay constant) along the δ arrow. Recall that a simplicial map is the discrete equivalent of a continuous map, and hence task solvability is of a topological nature. This leads us to the connection with distributed computability described in this extended abstract; further details are in [13,14].
Applications
Here we describe how to use our DEL setting to analyze solvability in the immediate-snapshot model of three well-studied distributed computing tasks: consensus, set agreement, and approximate agreement. Their solvability is already well-understood; our aim here is to understand the epistemic logic content of the known topological arguments that are used to prove unsolvability.
Consensus. Let I = ⟨I, χ, ℓ⟩ be the initial simplicial model for binary input values, and T = ⟨T, χ, pre⟩ be the action model for binary consensus. Thus, T has only two facets: X 0 , where all decisions are 0, and X 1 , where all decisions are 1. The underlying complex of I[T ] consists of two disjoint simplicial complexes, I 0 × X 0 and I 1 × X 1 , where I 0 consists of all input facets with at least one 0, and I 1 consists of all input facets with at least one 1. Notice that, in fact, each of the two complexes I i × X i , for i ∈ {0, 1}, is isomorphic to I i , since X i consists of just one facet.
To show that binary consensus cannot be solved by the immediate snapshot protocol, we must prove that the map δ : I[A] → I[T ] of Definition 2 does not exist. The usual proof of impossibility uses a topological obstruction to the existence of δ . Here, instead, we exhibit a logical obstruction.

Theorem 4. The binary consensus task is not solvable by IS.
Proof. We first state some required knowledge at I[T ] to solve the task. Let ϕ i be a formula denoting that at least one agent has input i. We claim that, for i ∈ {0, 1}, at any facet Y of I i × X i , there is common knowledge that at least one input is i,
I[T ],Y |= C A ϕ i .
(The claim holds because every facet reachable from Y remains inside the component I i × X i , where ϕ i is true everywhere.) Now, consider the simplicial model I[IS], for the immediate snapshot action model. By Lemma 1 the underlying complex of I[IS] is strongly connected, and hence there is a path from the facet where all inputs are 0 to the facet where all inputs are 1. We claim that at any facet X of I[IS], neither I[IS], X |= C A ϕ 0 nor I[IS], X |= C A ϕ 1 holds: common knowledge of ϕ i at X would require ϕ i to hold at every reachable facet, including the one where all inputs are 1 − i.
Finally, we know that morphisms of simplicial models cannot "gain knowledge about the world" from Theorem 3, and hence, there cannot be a morphism δ from I[IS] to I[T ], by the two previous claims.
Two observations. First, notice that the proof argument holds for any other communication model in place of IS, as long as the resulting protocol complex is connected. This is the case for any number of communication rounds by wait-free asynchronous agents [19], and even if only one agent may crash in a message-passing system [11]; these are the classic consensus impossibility results. Secondly, the usual topological argument for impossibility is the following: because simplicial maps preserve connectivity, δ cannot send a connected simplicial complex onto a disconnected simplicial complex. Notice how in both the logical and the topological proofs, the main ingredient is a connectedness argument.
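For two agents and a single immediate snapshot, the connectedness argument can even be checked by brute force: enumerate every decision function on the (agent, view) vertices of the protocol complex and verify that none satisfies consensus on all facets. This is our own illustration, not code from the paper:

```python
from itertools import product

def facets():
    """One-round protocol facets for agents a, b with binary inputs: for
    each input pair, three sequential partitions {a}{b}, {b}{a}, {a,b}."""
    fs = []
    for x, y in product([0, 1], repeat=2):
        full = ('both', x, y)            # the view of an agent who saw both
        for va, vb in [(('own', x), full), (full, ('own', y)), (full, full)]:
            fs.append(((('a', va), ('b', vb)), (x, y)))
    return fs

vertices = sorted({v for facet, _ in facets() for v in facet})
solutions = []
for bits in product([0, 1], repeat=len(vertices)):   # 2^12 decision maps
    dec = dict(zip(vertices, bits))
    # consensus: both agents of a facet agree, on one of the facet's inputs
    if all(dec[u] == dec[v] and dec[u] in inp for (u, v), inp in facets()):
        solutions.append(dec)
assert len(vertices) == 12
assert solutions == []        # no decision map solves one-round consensus
```

The search fails exactly because the 12 facets form a cycle: agreement forces the decision to be constant along it, yet the all-0 and all-1 input facets demand different constants.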
Set agreement. Let I = ⟨I, χ, ℓ⟩ be the initial simplicial model for A = {b, w, g} and three possible input values, {0, 1, 2}. Let T = ⟨T, χ, pre⟩ be the action model for 2-set agreement, requiring that each agent decides on one of the input values, and at most 2 different values are decided. Thus, T has facets X d 0 ,d 1 ,d 2 for each vector d 0 , d 1 , d 2 such that d i ∈ {0, 1, 2} and |{d 0 , d 1 , d 2 }| ≤ 2, and pre(X d 0 ,d 1 ,d 2 ) = ϕ d 0 ∧ ϕ d 1 ∧ ϕ d 2 , where ϕ i is as above.
Theorem 5. The 2-set agreement task is not solvable by IS.
Proof (Sketch). The usual topological argument [16] roughly goes as follows. We can visualize the complex I[T ] as having the structure of a triangle with a hole in the middle. The three "corners" of the triangle, indexed by i ∈ {0, 1, 2}, consist of just one facet of the form I i × X i , where I i is the input facet with only one input, i, and in X i all decisions are i. The three "edges" of the triangle are of the form I i j × X i j , where I i j consists of all facets with inputs in {i, j}, and X i j the facets whose decisions are in {i, j}. Notice that I i j × X i j contains I i × X i and I j × X j . This triangle must have a hole in the middle: otherwise, by Sperner's lemma (see e.g. [22,16]), there would be a facet with three distinct decision values. Thus, I[T ] is not 2-connected. But since I[IS] is 2-connected, and simplicial maps preserve 2-connectivity, there cannot exist a suitable δ . Finding a logical obstruction to the existence of δ is an open question. This would amount to finding a formula ϕ which is true in I[T ] but false in I[IS], and applying Theorem 3; doing so, we would understand better what knowledge is necessary to solve set agreement that is not available in I[IS].

Approximate agreement. We discuss now the approximate agreement task, where agents have to decide values which are 1/N apart from each other. Its solvability depends on the number r of immediate snapshot communication rounds that the agents perform. We did not describe the action model IS r in detail, so we only briefly mention that the task is solvable if and only if the number of rounds is large enough with respect to N. Very roughly, there is a center facet X c in the product update of the task, where I[T ], X c |= E k φ c , where k is roughly N/2, for a formula φ c representing the input values in X c . On the other hand, there is no facet X in I[IS r ] where this knowledge exists, unless r is large enough. A detailed proof is in the technical report [13].
Conclusions
We have made a first step towards defining a version of fault-tolerant multi-agent DEL using simplicial complexes as models, providing a different perspective from the classical knowledge approach based on Kripke frames. Also, we have defined problem specifications based on DEL using simplicial complexes, instead of formula-based specifications. We have thus established a bridge between the theory of distributed computability and epistemic logic. We illustrated the setting with a simple one-round communication action model IS, which corresponds to a well-studied situation in distributed computing, but many other models can be treated similarly.
Many interesting questions are left for future work. We have developed our theory here on pure simplicial complexes, where all the facets are of the same dimension. Extending it to complexes with lower-dimensional facets would allow us to model detectable failures. In two preliminary reports we give additional details and explore some of these issues further [13,14]. It is known to be undecidable whether a task is solvable in the immediate snapshot model, even for three processes [12,17]. Hence the connection we establish with DEL implies that it is undecidable whether certain knowledge has been gained in multi-round immediate snapshot action models; further work is needed to study this issue. Future work is also needed to study bisimulations and their relation to the simulations studied in task computability [18]. It would be of interest to study other distributed computing settings, especially those which have stronger communication objects available, and which are known to yield complexes that might not preserve the topology of the input complex.
is a morphism of Kripke frames. Conversely, consider a Kripke frame M = ⟨S, ∼⟩ on the set of agents A = {a 0 , . . . , a n }. Intuitively, what we want to do is take one n-simplex {v s 0 , . . . , v s n } for each s ∈ S, and glue them together according to the indistinguishability relation. Formally, let V = {v s i | s ∈ S, 0 ≤ i ≤ n}, and equip it with the equivalence relation R defined by v s i R v s′ i if and only if s ∼ a i s′. Then define G(M) whose vertices are the equivalence classes [v s i ] ∈ V /R, and whose simplexes are of the form {[v s 0 ], . . . , [v s n ]} for s ∈ S, as well as their sub-simplexes. The coloring map is given by χ([v s i ]) = a i . It is a well-defined chromatic simplicial complex since all elements of an equivalence class of R have the same color. The facets are exactly the {[v s 0 ], . . . , [v s n ]} for s ∈ S: since the Kripke frame M is proper, we cannot equate two facets together. Now let
where χ is the coloring, in G(M), of X and Y , which are facets in G(M). But facets in G(M) are in direct bijection with the worlds of M, i.e. X = {[v s 0 ], . . . , [v s n ]} and Y = {[v t 0 ], . . . , [v t n ]} where s,t ∈ M. Note that χ([v s i ]) = a i and χ([v t i ]) = a i , so a ∈ χ(X ∩ Y ) means that a = a i for some i and v s i R v t i . This can only be the case, by definition of G(M), if s ∼ a i t. This proves that FG(M) and M are isomorphic Kripke frames.
A Kripke model M = ⟨S, ∼, L⟩ consists of a Kripke frame ⟨S, ∼⟩ and a function L : S → P(AP). Intuitively, L(s) is the set of atomic propositions that are true in the state s. A Kripke model is proper if the underlying Kripke frame is proper. A Kripke model is local if for every agent a ∈ A, s ∼ a s′ implies L(s) ∩ AP a = L(s′) ∩ AP a , i.e., an agent always knows its own values. Let M = ⟨S, ∼, L⟩ and M′ = ⟨S′, ∼′, L′⟩ be two Kripke models on the same set AP. A morphism of Kripke models f : M → M′ is a morphism of the underlying Kripke frames such that L′( f (s)) = L(s) for every state s in S. We write KM A,AP for the category of local proper Kripke models.
Theorem 2. SM A,AP and KM A,AP are equivalent categories.

Proof. We describe the functors F : SM → KM and G : KM → SM. On the underlying Kripke frame and simplicial complex, they act the same as in the proof of Theorem 1. Given a simplicial model M = ⟨C, χ, ℓ⟩, we associate the Kripke model F(M) = ⟨F(C), ∼, L⟩ where the labeling L of a facet X ∈ F(C) is given by L(X) = ⋃ v∈X ℓ(v).
Proposition 1. Given a simplicial model M and a facet X, M, X |= ϕ iff F(M), X |= K ϕ. Conversely, given a local proper Kripke model N and state s, N, s |= K ϕ iff G(N), G(s) |= ϕ, where G(s) is the facet {v s 0 , . . . , v s n } of G(N).
Theorem 3 (knowledge gain). Consider simplicial models M = ⟨C, χ, ℓ⟩ and M′ = ⟨C′, χ′, ℓ′⟩, and a morphism f : M → M′. Let X ∈ F(C) be a facet of M, a an agent, and ϕ a formula which does not contain negations except, possibly, in front of atomic propositions. Then, M′, f (X) |= ϕ implies M, X |= ϕ.
Proof. We proceed by induction on ϕ. First, for p an atomic proposition, since morphisms preserve the valuation ℓ, we have M′, f (Y ) |= p iff M,Y |= p. Thus the theorem is true for (possibly negated) atomic propositions. The case of the conjunction follows trivially from the induction hypothesis.
(t) to each t ∈ T . For an initial Kripke model M, the effect of action model A is a Kripke model M[A]. Let M = ⟨S, ∼, L⟩ be a Kripke model and A = ⟨T, ∼, pre⟩ be an action model. The product update model is M[A] = ⟨S[A], ∼[A], L[A]⟩, where each world of S[A] is a pair (s,t) with s ∈ S, t ∈ T such that pre(t) holds in s. Then, (s,t) ∼[A] a (s′,t′) whenever it holds that s ∼ a s′ and t ∼ a t′. The valuation L[A] at a pair (s,t) is just as it was at s, i.e., L[A]((s,t)) = L(s). Proposition 2. Let M be a local proper Kripke model and A = ⟨T, ∼, pre⟩ a proper action model; then M[A] is proper and local. Proof. M[A] is proper: let (s,t) and (s′,t′) be two distinct states of M[A]. Then either s ≠ s′ or t ≠ t′, and in both cases, since M and A are proper, at least one agent can distinguish between the two. Now, M[A] is local: suppose (s,t) ∼[A] a (s′,t′). Then in particular s ∼ a s′, and since M is local, L(s) ∩ AP a = L(s′) ∩ AP a . The same goes for L[A] since it just copies L.
model. The product update simplicial model M[A] = ⟨C[A], χ[A], ℓ[A]⟩ is a simplicial model whose underlying simplicial complex is a sub-complex of the cartesian product C × T , induced by all the facets of the form X × Y such that pre(Y ) holds in X, i.e., M, X |= pre(Y ). The valuation ℓ[A] : V(C[A]) → P(AP) at a pair (u, v) is just as it was at u: ℓ[A]((u, v)) = ℓ(u).
is a conjunction of atomic propositions). Consider an action c b 0 b 1 b 2 , where c = c 1 , . . . , c m . Then c b 0 b 1 b 2 is interpreted as follows. If an agent a is in c j , then a learns the values (of facet b 0 b 1 b 2 ) of all agents in c i for i ≤ j, and only those values. We write view a (c b 0 b 1 b 2 ) for the vector of values that a learned, namely, the vector obtained from b 0 b 1 b 2 by replacing the value b i by ∅ for agents not in Aview a (c). Formally, the chromatic simplicial complex ⟨T, χ⟩ consists of all facets of the form:
Lemma 1. Each component of IS is a pseudomanifold with boundary. If M is a pseudomanifold with boundary, then so is M[IS].
So agents should improve knowledge through communication, by going from I to I[A]. The task is solvable if and only if there is enough knowledge in I[A] to match the knowledge required by I[T ]. Secondly, the possibility of solving a task depends on the existence of a certain simplicial map from the complex of I[A] to the complex of I[T ].
Acknowledgments This work has been partially supported by UNAM-PAPIIT IN109917 and France-Mexico ECOS 207560(M12-M01).
References

[1] H. Attiya & S. Rajsbaum (2002): The Combinatorial Structure of Wait-Free Solvable Tasks. SIAM J. Comput. 31(4), pp. 1286-1313, doi:10.1137/S0097539797330689.
[2] H. Attiya & J. Welch (2004): Distributed Computing: Fundamentals, Simulations, and Advanced Topics, 2nd edition. Wiley, doi:10.1002/0471478210.
[3] A. Baltag, L. S. Moss & S. Solecki (1998): The logic of common knowledge, public announcements, and private suspicions. In: TARK VII, pp. 43-56, doi:10.1007/978-3-319-20451-2_38.
[4] A. Baltag & B. Renne (2016): Dynamic Epistemic Logic. In: The Stanford Encyclopedia of Philosophy, Metaphysics Research Lab, Stanford University. See https://plato.stanford.edu/archives/win2016/entries/dynamic-epistemic/.
[5] F. Benavides & S. Rajsbaum (2018): Collapsibility of read/write models using discrete Morse theory. Journal of Applied and Computational Topology, pp. 1-32, doi:10.1007/s41468-018-0011-7.
[6] O. Biran, S. Moran & S. Zaks (1990): A Combinatorial Characterization of the Distributed 1-Solvable Tasks. J. Algorithms 11(3), pp. 420-440, doi:10.1016/0196-6774(90)90020-F.
[7] A. Castañeda, Y. A. Gonczarowski & Y. Moses (2016): Unbeatable Set Consensus via Topological and Combinatorial Reasoning. In: PODC, ACM, pp. 107-116, doi:10.1145/2933057.2933120.
[8] A. Castañeda, Y. A. Gonczarowski & Y. Moses (2014): Unbeatable Consensus. In: DISC, LNCS 8784, Springer, pp. 91-106, doi:10.1007/978-3-662-45174-8_7.
[9] C. Dégremont, B. Löwe & A. Witzel (2011): The Synchronicity of Dynamic Epistemic Logic. In: TARK XIII, ACM, pp. 145-152, doi:10.1145/2000378.2000395.
[10] H. van Ditmarsch, W. van der Hoek & B. Kooi (2007): Dynamic Epistemic Logic. Springer, doi:10.1007/978-1-4020-5839-4.
[11] M. J. Fischer, N. A. Lynch & M. S. Paterson (1985): Impossibility of Distributed Consensus with One Faulty Process. Journal of the ACM 32(2), pp. 374-382, doi:10.1145/3149.214121.
[12] E. Gafni & E. Koutsoupias (1999): Three-Processor Tasks Are Undecidable. SIAM J. Comput. 28(3), pp. 970-983, doi:10.1137/S0097539796305766.
[13] E. Goubault & S. Rajsbaum (2017): A simplicial complex model of dynamic epistemic logic for fault-tolerant distributed computing. Technical Report, arXiv:1703.11005.
[14] E. Goubault & S. Rajsbaum (2017): Models of fault-tolerant distributed computation via dynamic epistemic logic. Technical Report, arXiv:1704.07883.
[15] J. Havlicek (2000): Computable obstructions to wait-free computability. Distributed Computing 13(2), pp. 59-83, doi:10.1007/s004460050068.
[16] M. Herlihy, D. Kozlov & S. Rajsbaum (2013): Distributed Computing Through Combinatorial Topology. Elsevier-Morgan Kaufmann, doi:10.1016/C2011-0-07032-1.
[17] M. Herlihy & S. Rajsbaum (1997): The Decidability of Distributed Decision Tasks. In: STOC, ACM, pp. 589-598, doi:10.1145/258533.258652.
[18] M. Herlihy & S. Rajsbaum (2012): Simulations and reductions for colorless tasks. In: PODC, ACM, pp. 253-260, doi:10.1145/2332432.2332483.
[19] M. P. Herlihy (1988): Impossibility and Universality Results for Wait-free Synchronization. In: PODC, ACM, pp. 276-290, doi:10.1145/62546.62593.
[20] Y. Hirai (2010): An Intuitionistic Epistemic Logic for Sequential Consistency on Shared Memory. In: LPAR, Springer Berlin Heidelberg, pp. 272-289.
[21] S. Knight, B. Maubert & F. Schwarzentruber (2017): Reasoning about knowledge and messages in asynchronous multi-agent systems. Mathematical Structures in Computer Science, pp. 1-42, doi:10.1017/S0960129517000214.
[22] D. Kozlov (2007): Combinatorial Algebraic Topology. Springer, doi:10.1007/978-3-540-71962-5.
[23] D. N. Kozlov (2012): Chromatic subdivision of a simplicial complex. Homology Homotopy Appl. 14(2), pp. 197-209, doi:10.4310/HHA.2012.v14.n2.a12.
Relating Knowledge and Coordinated Action: The Knowledge of Preconditions Principle. Yoram Moses, 10.4204/EPTCS.215.17TARK, EPTCSYoram Moses (2015): Relating Knowledge and Coordinated Action: The Knowledge of Preconditions Prin- ciple. In: TARK, EPTCS, pp. 231-245, doi:10.4204/EPTCS.215.17.
Interpreted systems and Kripke models for multiagent systems from a categorical perspective. T Porter, 10.1016/j.tcs.2004.04.005Theoretical Computer Science. 3231T. Porter (2004): Interpreted systems and Kripke models for multiagent systems from a categorical perspec- tive. Theoretical Computer Science 323(1), pp. 235 -266, doi:10.1016/j.tcs.2004.04.005.
Y , Moses R Fagin, J Halpern, & M Vardi, Reasoning About Knowledge. MIT PressY. Moses R. Fagin, J. Halpern & M. Vardi (1995): Reasoning About Knowledge. MIT Press.
S Rajsbaum, 10.1007/978-3-642-12200-2_36Iterated Shared Memory Models. In: LATIN. Springer6034S. Rajsbaum (2010): Iterated Shared Memory Models. In: LATIN, LNCS 6034, Springer, pp. 407-416, doi:10.1007/978-3-642-12200-2˙36.
Non-parametric Quantile Regression via the K-NN Fused Lasso

Steven Siwei Ye ([email protected]) and Oscar Hernan Madrid Padilla
Department of Statistics, University of California, Los Angeles, CA 90095, USA

arXiv:2012.01758

Keywords: quantile regression; non-parametric; fused lasso; K-nearest neighbors; bounded variation

Abstract. Quantile regression is a statistical method for estimating conditional quantiles of a response variable. In addition, for mean estimation, it is well known that quantile regression is more robust to outliers than l_2-based methods. By using the fused lasso penalty over a K-nearest neighbors graph, we propose an adaptive quantile estimator in a non-parametric setup. We show that the estimator attains an optimal rate of n^{-1/d}, up to a logarithmic factor, under mild assumptions on the data generation mechanism of the d-dimensional data. We develop algorithms to compute the estimator and discuss methodology for model selection. Numerical experiments on simulated and real data demonstrate clear advantages of the proposed estimator over state-of-the-art methods. All codes that implement the algorithms and the datasets used in the experiments are publicly available on the author's Github page (https://github.com/stevenysw/qt_knnfl).
Introduction
Assume that we have n observations, (x_1, y_1), ..., (x_n, y_n), of the pair of random variables (X, Y). The response variable Y is real-valued, and X is a multivariate covariate or predictor variable taking values in a metric space X with metric d_X. A standard goal of non-parametric regression is to infer, in some way, the underlying relationship between Y and X. The generative model behind this can be expressed as
    y_i = f_0(x_i) + ε_i,  for i = 1, ..., n,    (1)
where f 0 is an unknown function that we want to estimate. While usual regression considers estimation of the conditional mean of the response variable, quantile regression estimates the conditional median (or other desired quantiles) of the response variable. Specifically, given a quantile level τ ∈ (0, 1), we can rewrite (1) as
    y_i = θ*_i + ε_i,  for i = 1, ..., n,    (2)
where

    θ*_i = F^{-1}_{y_i|x_i}(τ).    (3)

Here, θ* is the vector of τ-quantiles of y, F_{y_i|x_i} represents the cumulative distribution function of y_i given x_i, and P(ε_i ≤ 0 | x_i) = τ. The goal of quantile regression is to estimate θ* as accurately as possible, and it usually involves an optimization problem of the form

    θ̂ ∈ arg min_{θ∈ζ⊂R^n} L(θ),    (4)

where the loss function L(θ) is defined by

    L(θ) = Σ_{i=1}^n ρ_τ(y_i − θ_i),    (5)
with ρ τ (t) = (τ − 1{t ≤ 0})t, the asymmetric absolute deviation function (Koenker and Bassett Jr, 1978).
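For concreteness, the pinball loss above can be evaluated directly. The following is a minimal NumPy sketch; the function name `pinball_loss` is ours, not from the paper's released code:

```python
import numpy as np

def pinball_loss(y, theta, tau):
    """Asymmetric absolute deviation rho_tau(t) = (tau - 1{t <= 0}) t,
    summed over the residuals t_i = y_i - theta_i."""
    t = np.asarray(y, dtype=float) - np.asarray(theta, dtype=float)
    return np.sum((tau - (t <= 0)) * t)

# For tau = 0.5 the loss is half the total absolute error:
y = np.array([1.0, -2.0, 3.0])
theta = np.zeros(3)
print(pinball_loss(y, theta, 0.5))  # 0.5 * (1 + 2 + 3) = 3.0
```

For τ ≠ 0.5 the loss weights over- and under-estimation asymmetrically, which is what makes the minimizer a τ-quantile rather than a median.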
In this paper, we apply total variation denoising to non-parametric quantile regression in a multivariate setup and combine it with the K-NN procedure. We leverage insights gained from recent results on quantile trend filtering (Padilla and Chatterjee, 2020) and the K-NN fused lasso (Padilla et al., 2020a). Our proposed estimator, the quantile K-NN fused lasso, can properly adapt to piecewise linear or piecewise polynomial structure in the true vector.
It takes only two steps to compute our proposed quantile K-NN fused lasso estimator. We first construct a K-nearest-neighbors graph corresponding to the given observations. The second step solves a penalized optimization problem with a lasso-type penalty along the K-NN graph:

    θ̂ ∈ arg min_{θ∈ζ⊂R^n} L(θ) + λ ‖∇_G θ‖_1,    (6)
where λ > 0 is a tuning parameter. The notation ∇ G in the penalty term represents the oriented incidence matrix of the K-NN graph, and we will provide the detailed definition in Section 2.
We study the rate of convergence of the estimator defined in (6). Towards that end, we define the loss function Δ²_n : R^n → R by

    Δ²_n(δ) := (1/n) Σ_{i=1}^n min{|δ_i|, δ_i²}.
The function Δ²_n(·) appeared in Padilla and Chatterjee (2020) and is similar to a Huber loss function (see page 471 in Wainwright, 2018), which offers a compromise between the least-squares cost and the l_1-norm cost function and is thus less sensitive to outliers in the data. We show that under mild conditions, (6) attains a convergence rate of n^{−1/d} for d-dimensional data in terms of Δ²_n, ignoring the logarithmic factor. The rate is nearly minimax, and it matches the mean squared error rates from Padilla et al. (2020a). However, unlike Padilla et al. (2020a), our result holds under general errors, allowing for heavy-tailed distributions. Another notable point in our theoretical analysis, different from previous quantile regression work, is that we only require the signal to belong to a bounded variation class, and hence our result holds under very general models.
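The Huber-like behavior of Δ²_n is easy to see numerically: errors below 1 in magnitude contribute quadratically, while larger errors contribute only linearly. A small sketch (the name `delta2_n` is hypothetical):

```python
import numpy as np

def delta2_n(delta):
    """Delta_n^2(delta) = (1/n) * sum_i min(|delta_i|, delta_i^2):
    quadratic for |delta_i| <= 1, linear for |delta_i| > 1."""
    delta = np.asarray(delta, dtype=float)
    return np.mean(np.minimum(np.abs(delta), delta ** 2))

print(delta2_n([0.5, -2.0]))  # mean(0.25, 2.0) = 1.125
```

Because large errors enter linearly rather than quadratically, a few outlying coordinates cannot dominate the loss.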
Previous Work
Quantile regression, since its introduction by Koenker and Bassett Jr (1978), has become a commonly used class of methods in many applications thanks to its flexibility and robustness for modelling conditional distributions. The study of quantile regression in non-parametric setups dates back to Utreras (1981), Cox (1983) and Eubank (1988), whose works were mainly developed for median regression on one-dimensional data. Later, Koenker et al. (1994) proposed a more general estimator for any desired quantile τ, the quantile smoothing spline, and He et al. (1998) provided a bivariate version of the estimator. Quantile smoothing spline problems have an l_1 penalty structure, which led to a more general study of l_1-norm regularized quantile regression by Li and Zhu (2008). Other methods for non-parametric quantile regression have also been proposed in the literature. Yu and Jones (1998), Cai and Xu (2008), and Spokoiny et al. (2013) explored local polynomial quantile regression. Belloni et al. (2019) studied non-parametric series quantile regression, and Meinshausen (2006) introduced quantile random forests. Quantile regression with rectified linear unit (ReLU) neural networks is another cutting-edge approach that draws on deep learning; its theory is studied in Padilla et al. (2020b).
Our approach exploits the local adaptivity of total variation denoising. Rudin et al. (1992) first proposed total variation denoising for image processing, and Tibshirani and Saunders (2005) studied the fused lasso thoroughly. Later, Kim et al. (2009) extended the discussion to so-called trend filtering on one-dimensional data, and Wang et al. (2016) generalized trend filtering to the setting of estimation on graphs. These problems can be formulated as optimization problems of the form

    θ̂ ∈ arg min_{θ∈R^n} (1/2) Σ_{i=1}^n (θ_i − y_i)² + λ ‖Dθ‖_1,    (7)
where λ > 0 is a regularization parameter to be chosen carefully and D is a matrix. For instance, trend filtering of order k (the fused lasso corresponds to k = 0) uses a matrix D that captures the (k+1)th order total variation of a given signal; see Tibshirani (2014). A substantial amount of literature has focused on theoretical guarantees for these estimators. Mammen and van de Geer (1997) and Tibshirani (2014) showed that trend filtering attains nearly minimax rates in mean squared error (MSE) for estimating functions of bounded variation. Hütter and Rigollet (2016) showed a sharp oracle rate for total variation denoising along grid graphs. More recently, Padilla et al. (2020a) incorporated the fused lasso into the K-NN procedure and proved that the K-NN fused lasso also achieves a nearly minimax convergence rate. In the quantile regression literature, Belloni and Chernozhukov (2011), Kato (2011), and Fan et al. (2014) studied quantile model selection via l_1 regularization, but most of these works required strict linear assumptions. Padilla and Chatterjee (2020) proved theoretical properties of the quantile trend filtering estimator in one dimension. They showed that under minimal assumptions on the data generation mechanism, the quantile trend filtering estimator attains a minimax rate for one-dimensional piecewise polynomial regression. On the computational side, quantile regression differs from l_2-based methods because it requires a non-trivial reformulation of the optimization problem due to the non-differentiability of the loss (5). The most well-known algorithm for computing a quantile estimator is due to Koenker (2005) and uses an interior point (IP) approach. Pietrosanu et al. (2017) studied high-dimensional quantile regression problems and obtained estimators for variable selection by applying the alternating direction method of multipliers (ADMM; Boyd et al., 2011), majorize-minimize (MM; Hunter and Lange, 2000), and coordinate descent (CD; Wu and Lange, 2008) algorithms. For computing trend filtering estimates, Hochbaum and Lu (2017) developed a fast algorithm for the quantile fused lasso requiring O(n log n) operations. Recently, Brantley et al. (2020) proposed an ADMM-based algorithm for computing kth order quantile trend filtering estimators for one-dimensional data.
Outline of the Paper
In Section 2, we provide the definition of the quantile K-nearest-neighbors fused lasso estimator and the constrained version of the problem. Section 3 introduces algorithms to compute the proposed estimators numerically, based on linear programming, the alternating direction method of multipliers (ADMM), and majorize-minimize (MM), and discusses how to select an appropriate penalty parameter in practice. Section 4 presents two theoretical developments regarding the constrained and penalized estimators. The theorems demonstrate that, under general assumptions, both estimators converge at a rate of n^{−1/d}, up to a logarithmic factor, for estimating d-dimensional data under the loss function Δ²_n defined above. Section 5 lists the results of numerical experiments on multiple simulated datasets and two real datasets, the California housing data and the Chicago crime data. The experiments show that the proposed estimator outperforms state-of-the-art methods on both simulated and real datasets. Moreover, the comparison of accuracy and computational time among the algorithms introduced in Section 3 provides some guidance on choosing a suitable algorithm for a specific problem. The proofs of the theorems are provided in the Appendix.
Quantile K-NN Fused Lasso
The first step in constructing the quantile K-NN fused lasso estimator is to build a K-NN graph G. Specifically, given the observations, G has vertex set V = {1, ..., n}, and its edge set E_K contains the pair (i, j), for i ∈ V, j ∈ V, and i ≠ j, if and only if x_i is among the K nearest neighbors of x_j with respect to the metric d_X, or vice versa. After constructing the K-NN graph, we can formalize the quantile K-NN fused lasso optimization problem as

    θ̂ = arg min_{θ∈R^n} Σ_{i=1}^n ρ_τ(y_i − θ_i) + λ ‖∇_G θ‖_1,    (8)
where λ > 0 is a tuning parameter, and ∇ G is an oriented incidence matrix of the K-NN graph G. Thus, we define ∇ G as follows: each row of the matrix corresponds to one edge in G; for instance, if the p-th edge in G connects the i-th and j-th observations, then
    (∇_G)_{p,q} = 1 if q = i;  −1 if q = j;  and 0 otherwise.
In this way, the p-th element in ∇ G θ, (∇ G θ) p = θ i − θ j . Notice that we choose the ordering of the nodes and edges in ∇ G arbitrarily without loss of generality.
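As an illustration, the oriented incidence matrix of the K-NN graph can be built by brute force. The sketch below uses the "or" rule described above, deduplicates edges, and runs in O(n²) time; the helper name `knn_incidence` is ours:

```python
import numpy as np

def knn_incidence(X, K):
    """Oriented incidence matrix of the K-NN graph: row p, for edge (i, j),
    has +1 in column i and -1 in column j. An edge is included if i is among
    the K nearest neighbors of j, or vice versa (deduplicated)."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    # pairwise squared distances, with the diagonal excluded
    D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(D, np.inf)
    nbrs = np.argsort(D, axis=1)[:, :K]          # K nearest neighbors of each point
    edges = set()
    for i in range(n):
        for j in nbrs[i]:
            edges.add((min(i, j), max(i, j)))    # "or" rule, undirected dedup
    grad = np.zeros((len(edges), n))
    for p, (i, j) in enumerate(sorted(edges)):
        grad[p, i], grad[p, j] = 1.0, -1.0
    return grad

X = np.array([[0.0], [1.0], [2.0], [10.0]])
grad = knn_incidence(X, K=1)
print(grad.shape)  # (3, 4): three edges over four points
```

Each row of the returned matrix sums to zero, so (∇_G θ)_p is exactly the difference θ_i − θ_j along the p-th edge.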
Once θ̂ in (8) has been computed, we can predict the value of the response corresponding to a new observation x ∈ X \ {x_1, ..., x_n} by the average of the estimated responses of the K nearest neighbors of x in {x_1, ..., x_n}. Mathematically, we write

    ŷ = (1/K) Σ_{i=1}^n θ̂_i · 1{x_i ∈ N_K(x)},    (9)
where N_K(x) is the set of K nearest neighbors of x in the training data. A similar prediction rule was used in Padilla et al. (2020a). Related to the penalized estimator θ̂ defined in (8) is the constrained estimator θ̂_C, whose corresponding optimization problem can be written as
    θ̂_C = arg min_{θ∈R^n} Σ_{i=1}^n ρ_τ(y_i − θ_i)   subject to   ‖∇_G θ‖_1 ≤ C,    (10)
for some positive constant C.
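The K-NN prediction rule in (9) can be sketched as follows; `predict_knn` is an illustrative helper, not part of the paper's released code:

```python
import numpy as np

def predict_knn(x_new, X_train, theta_hat, K):
    """Predict at a new point by averaging the fitted values theta_hat
    over the K nearest training covariates, as in equation (9)."""
    d = np.linalg.norm(np.asarray(X_train, float) - np.asarray(x_new, float), axis=1)
    idx = np.argsort(d)[:K]
    return np.mean(np.asarray(theta_hat, float)[idx])

X_train = np.array([[0.0], [1.0], [2.0], [3.0]])
theta_hat = np.array([0.0, 1.0, 2.0, 3.0])
print(predict_knn(np.array([1.4]), X_train, theta_hat, K=2))  # average of fits at x = 1, 2 -> 1.5
```

The same rule applies to either estimator (penalized or constrained), since it only consumes the fitted vector at the training points.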
Comparison with the K-NN fused lasso estimator
Before proceeding to study the properties of our proposed method, we provide some comparisons with its precursor, the K-NN fused lasso estimator from Padilla et al. (2020a). The latter is defined as

    θ̂ = arg min_{θ∈R^n} Σ_{i=1}^n (y_i − θ_i)² + λ ‖∇_G θ‖_1,    (11)
where λ > 0 is a tuning parameter. In contrasting (11) with (8), we first highlight that from a practical perspective the latter has some important advantages. Firstly, (8) can be used to construct prediction intervals, whereas (11) can only provide point estimates at the different locations. We illustrate the construction of prediction intervals with a real data example in Section 5.2. Secondly, the quantile K-NN fused lasso is by construction expected to be more robust to heavy tails and outliers than its counterpart, the K-NN fused lasso. We verify this in our experiments section.
On the computational side, the algorithms for solving the optimization problem in (11) cannot be used for finding the estimator in (8). Hence, novel algorithms are needed to efficiently compute our proposed estimator. This is the subject of the next section.
Finally, despite the similarity in the definition of the estimators in (11) and (8), the theory from Padilla et al. (2020a) does not directly translate to analyze our estimator defined in (8). Hence, one of the contributions of this paper is to show that the quantile K-NN fused lasso inherits local adaptivity properties of the K-NN fused lasso in general settings that allow for heavy-tailed error distributions.
Algorithms and Model Selection
To compute the quantile K-NN fused lasso estimator, the first step is to construct the K-NN graph from the data. The computational complexity of constructing the K-NN graph is O(n²), although it is possible to accelerate the procedure to O(n^t) for some t ∈ (1, 2) using divide-and-conquer methods (Bentley, 1980; Chen et al., 2009).
The second step of the computation is to solve a penalized optimization problem as in (8). Here, we introduce three approaches to solve the problem numerically. Before presenting our algorithms, we stress that both problems (8) and (10) are linear programs, and therefore any linear programming software can be used to obtain an optimal solution. Notably, we can take advantage of the sparsity of the penalty matrix for faster computation. However, a shortcoming of linear programming is that it can become very time-consuming for large problems, especially when n is greater than 5000.
Alternating Direction Method of Multipliers (ADMM)

The alternating direction method of multipliers (ADMM; Boyd et al., 2011) is a powerful tool for solving constrained optimization problems.
We first reformulate the optimization problem (8) as
    minimize_{θ∈R^n, z∈R^n}  Σ_{i=1}^n ρ_τ(y_i − θ_i) + λ ‖∇_G z‖_1   subject to   z = θ,    (12)

and the augmented Lagrangian can then be written as

    L_R(θ, z, u) = Σ_{i=1}^n ρ_τ(y_i − θ_i) + λ ‖∇_G z‖_1 + (R/2) ‖θ − z + u‖²,

where R is the penalty parameter that controls the step size of the updates. Thus we can solve (12) by iteratively updating the primal and dual variables:

    θ ← arg min_{θ∈R^n}  Σ_{i=1}^n ρ_τ(y_i − θ_i) + (R/2) ‖θ − z + u‖²,    (13)

    z ← arg min_{z∈R^n}  (1/2) ‖θ + u − z‖² + (λ/R) ‖∇_G z‖_1.    (14)

The primal problem (13) can be solved coordinate-wise in closed form as

    θ_i = z_i − u_i + τ/R,          if y_i − z_i + u_i > τ/R,
    θ_i = z_i − u_i + (τ − 1)/R,    if y_i − z_i + u_i < (τ − 1)/R,
    θ_i = y_i,                      otherwise;
See Appendix A for the steps to derive this solution. The dual problem (14) is a generalized lasso problem that can be solved with the parametric max-flow algorithm from Chambolle and Darbon (2009). The entire procedure is presented in Algorithm 1. In practice, we can simply choose the penalty parameter R to be 1/2. We stop the procedure when it reaches the maximum number of iterations or when the primal residual ‖θ^(k) − θ^(k−1)‖_2 falls within a tolerance κ, which we set to 10^{-2} in our computations. In practice, the ADMM algorithm converges very quickly, within only tens of iterations, and hence we find it to be faster than linear programming. Another advantage of ADMM is that the algorithm is not sensitive to the initialization. In Appendix B, we present a simulation study demonstrating the fast convergence of ADMM under different initializations.
Algorithm 1: Alternating Direction Method of Multipliers for quantile K-NN fused lasso
Input: number of nearest neighbors K, quantile τ, penalty parameter λ, maximum iterations N_iter, tolerance κ
Data: X ∈ R^{n×d}, y ∈ R^n
Output: θ̂ ∈ R^n
1. Compute the K-NN graph incidence matrix ∇_G from X.
2. Initialize θ^(0) = y, z^(0) = y, u^(0) = 0.
3. For k = 1, 2, ..., until ‖θ^(k) − θ^(k−1)‖_2 ≤ κ or the procedure reaches N_iter:
   (a) For i = 1, ..., n, update θ^(k)_i ← arg min ρ_τ(y_i − θ_i) + (R/2)(θ_i − z^(k−1)_i + u^(k−1)_i)².
   (b) Update z^(k) ← arg min (1/2)‖θ^(k) + u^(k−1) − z‖² + (λ/R)‖∇_G z‖_1.
   (c) Update u^(k) ← u^(k−1) + θ^(k) − z^(k).
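The coordinate-wise closed-form θ-update in step 3(a) can be sketched as follows; the helper name `theta_update` is ours:

```python
import numpy as np

def theta_update(y, z, u, tau, R):
    """Closed-form coordinate-wise solution of the ADMM theta-subproblem
    argmin_theta  rho_tau(y - theta) + (R/2) * ||theta - z + u||^2."""
    y = np.asarray(y, dtype=float)
    v = np.asarray(z, dtype=float) - np.asarray(u, dtype=float)
    r = y - v                                  # r_i = y_i - z_i + u_i
    hi, lo = tau / R, (tau - 1.0) / R          # upper and lower thresholds
    return np.where(r > hi, v + hi, np.where(r < lo, v + lo, y))

# With tau = 0.5 and R = 0.5, the thresholds are +1 and -1 around v = z - u:
print(theta_update(np.array([3.0, -3.0, 0.2]), np.zeros(3), np.zeros(3), 0.5, 0.5))
# [ 1.  -1.   0.2]
```

This is a soft-thresholding-type operation: residuals beyond the thresholds are clipped toward v, and small residuals leave θ_i pinned at y_i.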
Majorize-Minimize (MM)
We now exploit the majorize-minimize (MM) approach from Hunter and Lange (2000) for estimating the conditional median. The main advantage of the MM algorithm versus ADMM is that it is conceptually simple and it is a descent algorithm.
Recall that for τ = 0.5, quantile regression becomes least absolute deviation, or L_1-regression, and the loss function L(θ) of our problem becomes

    L(θ) = Σ_{i=1}^n |y_i − θ_i| + λ ‖∇_G θ‖_1.    (15)
Next, notice that L(θ) can be majorized at θ^(k) by Q(θ | θ^(k)), given as

    Q(θ | θ^(k)) = Σ_{i=1}^n (y_i − θ_i)² / |y_i − θ^(k)_i| + λ Σ_{(i,j)∈E_K} (θ_i − θ_j)² / |θ^(k)_i − θ^(k)_j| + const.,    (16)
since it holds that

    Q(θ^(k) | θ^(k)) = L(θ^(k)),   and   Q(θ | θ^(k)) ≥ L(θ) for all θ.    (17)
To avoid divisions by zero, we add a small perturbation ε > 0 to each denominator. The iterative algorithm then minimizes Q(θ | θ^(k)) at each iteration. The stopping criterion for the MM algorithm is the same as for ADMM. Because this optimization problem has a closed-form solution, we can compute it directly by solving a linear system (see Step 3(c) in Algorithm 2). We find the MM algorithm to be faster than linear programming and ADMM for large problems, while producing solutions as stable as the others; see the experiments and discussion in Section 5.1. A major drawback of this fast algorithm is that, at the moment, it can only handle median regression. We leave the extension of the MM-based algorithm to general quantiles for future work.
Algorithm 2: Majorize-Minimize for quantile K-NN fused lasso, τ = 0.5
Input: number of nearest neighbors K, penalty parameter λ, maximum iterations N_iter, tolerance κ
Data: X ∈ R^{n×d}, y ∈ R^n
Output: θ̂ ∈ R^n
1. Compute the K-NN graph incidence matrix ∇_G from X.
2. Initialize θ^(0)_i = median(y) for i = 1, ..., n.
3. For k = 1, 2, ..., until ‖θ^(k) − θ^(k−1)‖_2 ≤ κ or the procedure reaches N_iter:
   (a) Compute the weight matrix W ∈ R^{n×n}: W = diag(1 / [|y − θ^(k−1)| + ε]).
   (b) Compute the edge weight matrix W̃ with W̃_{pp} = 1 / [|θ^(k−1)_i − θ^(k−1)_j| + ε] for the p-th edge (i, j) ∈ E_K.
   (c) Update θ^(k) ← [W + λ ∇_G^T W̃ ∇_G]^{−1} W y.
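One MM iteration (steps 3(a)-(c)) amounts to solving a weighted linear system. Below is a small NumPy sketch with hypothetical names (`mm_step`), using a small perturbation `eps` in the denominators as described above; it is an illustration of the update, not the paper's implementation:

```python
import numpy as np

def mm_step(y, theta, grad, lam, eps=1e-6):
    """One majorize-minimize update for the median (tau = 0.5) problem:
    minimize the quadratic majorizer Q(. | theta) by solving
    [W + lam * grad^T Wtilde grad] theta_new = W y."""
    y = np.asarray(y, dtype=float)
    W = np.diag(1.0 / (np.abs(y - theta) + eps))       # fidelity weights
    Wt = np.diag(1.0 / (np.abs(grad @ theta) + eps))   # edge weights
    A = W + lam * grad.T @ Wt @ grad
    return np.linalg.solve(A, W @ y)

# Chain graph on 4 points; with lam = 0 the update returns y exactly.
grad = np.array([[1.0, -1.0, 0.0, 0.0],
                 [0.0, 1.0, -1.0, 0.0],
                 [0.0, 0.0, 1.0, -1.0]])
y = np.array([1.0, 2.0, 6.0, 7.0])
theta = np.full(4, np.median(y))
for _ in range(20):
    theta = mm_step(y, theta, grad, lam=0.5)
print(np.round(theta, 3))
```

Each step is an iteratively reweighted least squares solve; by the MM property (17), the objective (15) is non-increasing along the iterates, up to an O(ε) slack from the perturbation.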
Model Selection
The choice of the tuning parameter λ in (8) is an important practical issue in estimation because it controls the degree of smoothness in the estimator. The value of λ can be chosen through K-fold cross-validation. Alternatively, we can select the regularization parameter based on the Bayesian Information Criterion (BIC; Schwarz, 1978). The BIC for quantile regression (Yu and Moyeed, 2001) can be computed as

    BIC(τ) = (2/σ) Σ_{i=1}^n ρ_τ(y_i − θ̂_i) + ν log n,
where ν denotes the degrees of freedom of the estimator and σ > 0 can be chosen empirically as σ = (1 − |1 − 2τ|)/2.
It is also possible to use the Schwarz Information Criterion (SIC; Koenker et al., 1994), given by

    SIC(τ) = log( (1/n) Σ_{i=1}^n ρ_τ(y_i − θ̂_i) ) + (ν/(2n)) log n.
However, BIC is more stable than SIC in practice, because when λ is small SIC may become ill-conditioned as the term inside the logarithm approaches 0. Tibshirani and Taylor (2012) demonstrate that a lasso-type problem has degrees of freedom ν equal to the expected nullity of the penalty matrix after removing the rows indexed by the boundary set of a dual solution at y. In our case, we define ν as the number of connected components of the graph G after removing all edges j for which |(∇_G θ̂)_j| is above a threshold γ, typically very small (e.g., 10^{-2}).
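The BIC computation, with ν obtained by counting connected components after removing the edges whose fitted differences exceed γ, can be sketched as follows; the names are illustrative, and a simple union-find is used to count components:

```python
import numpy as np

def quantile_bic(y, theta_hat, grad, tau, gamma=1e-2):
    """BIC(tau) = (2/sigma) * sum_i rho_tau(y_i - theta_hat_i) + nu * log(n),
    where nu counts the connected components of the K-NN graph after
    removing every edge j with |(grad @ theta_hat)_j| > gamma."""
    n = len(y)
    t = np.asarray(y, float) - np.asarray(theta_hat, float)
    loss = np.sum((tau - (t <= 0)) * t)
    sigma = (1.0 - abs(1.0 - 2.0 * tau)) / 2.0
    # union-find over the "fused" edges (fitted difference within gamma)
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    diffs = grad @ np.asarray(theta_hat, float)
    for p in range(grad.shape[0]):
        if abs(diffs[p]) <= gamma:
            i = int(np.where(grad[p] == 1.0)[0][0])
            j = int(np.where(grad[p] == -1.0)[0][0])
            parent[find(i)] = find(j)
    nu = len({find(i) for i in range(n)})
    return 2.0 / sigma * loss + nu * np.log(n)

# Chain graph, constant fit: one connected component (nu = 1).
grad = np.array([[1.0, -1.0, 0.0], [0.0, 1.0, -1.0]])
y = np.array([0.0, 1.0, 2.0])
print(quantile_bic(y, np.ones(3), grad, tau=0.5))  # 4 + log(3)
```

In practice one would evaluate this criterion over a grid of λ values and pick the minimizer.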
Theoretical Analysis
Before arriving at our main results, we introduce some notation. For a set A contained in a metric space (A, d_A), we write B_ε(A) = {a′ : there exists a ∈ A with d_A(a, a′) ≤ ε}. The Euclidean norm of a vector x ∈ R^d is denoted by ‖x‖_2 = (x_1² + ... + x_d²)^{1/2}, the l_1 norm by ‖x‖_1 = |x_1| + ... + |x_d|, and the infinity norm by ‖x‖_∞ = max_i |x_i|.
In the covariate space X, we consider the Borel sigma algebra B(X) induced by the metric d_X, and we let µ be a measure on B(X). We assume that the covariates in the model (1) satisfy x_i ~ p, independently for i = 1, ..., n. In other words, p is the probability density function associated with the distribution of the x_i, with respect to the measure space (X, B(X), µ). Let {a_n} and {b_n} ⊂ R be two sequences. We write a_n = O_P(b_n) if for every ε > 0 there exists C > 0 such that P(a_n ≥ C b_n) < ε for all n. We also write poly(x) for a polynomial function of x, although the particular polynomial may vary from instance to instance below. Throughout, we assume the dimension of X to be greater than 1, since the study of the one-dimensional model, quantile trend filtering, can be found in Padilla and Chatterjee (2020).
We first make some necessary assumptions for analyzing the theoretical guarantees of both the constrained and penalized estimators.
Assumption 1 (Bounded Variation) We write θ*_i = F^{-1}_{y_i|x_i}(τ) for i = 1, ..., n and require that V* := ‖∇_G θ*‖_1 / [n^{1−1/d} poly(log n)] satisfies V* = O_P(1). Here F_{y_i|x_i} is the cumulative distribution function of y_i given x_i for i = 1, ..., n.
The first assumption simply requires that θ * , the vector of τ -quantiles of y, has bounded total variation along the K-NN graph. The scaling n 1−1/d poly(log n) comes from Padilla et al. (2020a), and we will discuss the details after we present Assumptions 3-5 below.
Assumption 2 There exists a constant L > 0 such that for δ ∈ R^n satisfying ‖δ‖_∞ ≤ L we have that

    min_{i=1,...,n} f_{y_i|x_i}(θ*_i + δ_i) ≥ f̲   a.s.,

for some f̲ > 0, where f_{y_i|x_i} is the conditional probability density of y_i given x_i.
The assumption that the conditional density of the response variable is bounded from below in a neighborhood is standard in quantile regression analysis. Related conditions appear as Condition D.1 in Belloni and Chernozhukov (2011) and Condition 2 in He and Shi (1994).
The next three assumptions are inherited from the study of K-NN graphs in Von Luxburg et al. (2014) and Padilla et al. (2020a).
Assumption 3 The density of the covariates, p, satisfies 0 < p min < p(x) < p max , for all x ∈ X , where p min , p max > 0 are fixed constants.
We only require the density of the covariates to be bounded above and below by positive constants. In Györfi et al. (2006) and Meinshausen (2006), p is assumed to be the probability density function of the uniform distribution in [0, 1]^d.
Assumption 4
The base measure µ in the metric space X , in which X is defined, satisfies
    c_{1,d} r^d ≤ µ{B_r(x)} ≤ c_{2,d} r^d,  for all x ∈ X and all 0 < r < r_0,

where r_0, c_{1,d}, c_{2,d} are positive constants, and d ∈ N\{0, 1} is the intrinsic dimension of X.
Although X is not necessarily a Euclidean space, we require in this condition that balls in X have volume, with respect to some measure µ on the Borel sigma algebra, B(X ), that behaves similarly to the Lebesgue measure of balls in R d .
Assumption 5 There exists a homeomorphism h : X → [0, 1]^d such that

    L_min d_X(x, x′) ≤ ‖h(x) − h(x′)‖_2 ≤ L_max d_X(x, x′),

for all x, x′ ∈ X and for some positive constants L_min, L_max, where d ∈ N\{0, 1} is the intrinsic dimension of X.
The existence of a continuous bijection between X and [0, 1] d ensures that the space has no holes and is topologically equivalent to [0, 1] d . Furthermore, Assumption 5 requires d > 1. If d = 1 then one can simply order the covariates and then run the one dimensional quantile fused lasso studied in Padilla and Chatterjee (2020).
On another note, we point out that Padilla et al. (2020a) showed, exploiting ideas from Von Luxburg et al. (2014), that under Assumptions 3-5 we have ‖∇_G θ*‖_1 ≲ n^{1−1/d}, up to a polynomial factor in log n, for a K-NN graph G, under the extra condition that the function f_0 ∘ h^{−1} is piecewise Lipschitz. Hence, under such conditions Assumption 1 holds. For completeness, the definition of the class of piecewise Lipschitz functions is provided in Appendix E. Notably, Padilla et al. (2020a) also present an alternative condition on f_0 ∘ h^{−1}, other than piecewise Lipschitzness, under which the results hold; see Assumption 5 in that paper. Now, we are ready to present our first theoretical result on the quantile K-NN fused lasso.
Theorem 6 Under Assumptions 1-5, setting C = V n^{1−1/d} in (10) for a tuning parameter V with V ≍ 1 and V ≥ V*, we have

    Δ²_n(θ* − θ̂_C) = O_P( n^{−1/d} poly(log n) ),
for a choice of K satisfying K = poly(log n).
The first theorem shows that quantile K-NN fused lasso attains the optimal rate of n −1/d under the loss ∆ 2 n (·) defined in Section 1 for estimating signals in a constrained set.
Theorem 7 Under Assumptions 1-5, there exists a choice of λ for (8) satisfying

    λ = Θ(log n)            if d = 2,
    λ = Θ((log n)^{1/2})    if d > 2,

such that

    Δ²_n(θ* − θ̂) = O_P( n^{−1/d} poly(log n) ),
for a choice of K satisfying K = poly(log n).
The second theorem states that, under a certain choice of the tuning parameter, the penalized estimator achieves the convergence rate n^{−1/d}, similar to the constrained estimator, ignoring the logarithmic factor. Notice that there is a one-to-one correspondence between (8) and (10), as they are equivalent optimization problems. Let λ > 0 be as in the statement of Theorem 7. Then there exists a C that depends on y and λ such that (8) and (10) have identical solutions. However, the fact that such a C depends on y means one cannot conclude that, for a deterministic C such as that in Theorem 6, the constrained estimator attains the rate n^{−1/d} whenever the unconstrained estimator does. Nevertheless, since we have both Theorems 6 and 7, both estimators attain the same rate. We emphasize that if some or all of Assumptions 1, 3-5 do not hold, then we are not able to characterize the behavior of the K-NN graph, and in that case the conclusions of Theorems 6-7 would not necessarily hold.
We conclude with a remark regarding the minimax optimality of Theorems 6-7.
Remark 8 Let f*(t) = F^{-1}_{Y|X=t}(0.5) be the median function, and let C be the class of piecewise Lipschitz functions; see Definition 17 in Appendix E. It was proven in Proposition 2 of Castro et al. (2005) that, under the assumption that ε_i ~ N(0, σ²) independently for some fixed σ, and that x_i ~ U([0, 1]^d) independently for i = 1, ..., n, it holds that

    inf_{f̂ estimator} sup_{f*∈C, ‖f*‖_∞ ≤ 1} E[ ∫_{[0,1]^d} (f*(t) − f̂(t))² dt ] ≥ c n^{−1/d},    (18)

for some constant c > 0. However, under the constraint ‖f*‖_∞ ≤ 1, the left-hand side of (18) equals, up to a constant,

    inf_{f̂ estimator} sup_{f*∈C, ‖f*‖_∞ ≤ 1} E[ ∫_{[0,1]^d} min{ |f*(t) − f̂(t)|, (f*(t) − f̂(t))² } dt ].    (19)

Furthermore, a discrete version of E[ ∫_{[0,1]^d} min{ |f*(t) − f̂(t)|, (f*(t) − f̂(t))² } dt ] is the quantity Δ²_n(θ̂ − θ*) when θ̂_i = f̂(x_i) and θ*_i = f*(x_i) for i = 1, ..., n. Therefore, the rates in Theorems 6-7 are nearly minimax in the sense that they match, up to log factors, the lower bound n^{−1/d} on the quantity (19), without requiring sub-Gaussian errors and under more general conditions on the covariates than uniform draws. In contrast, under sub-Gaussian errors, the K-NN fused lasso estimator from Padilla et al. (2020a) attains, up to log factors, the rate n^{−1/d} in terms of mean squared error.
Experiments
In this section, we examine the performance of quantile K-NN fused lasso (QKNN) on various simulated and real datasets. The two benchmark estimators we compare against are K-NN fused lasso (KNN; Padilla et al., 2020a) and quantile random forest (QRF; Meinshausen, 2006). The performance of an estimator is measured by its mean squared error, defined by

MSE(θ̂) := (1/n) Σ_{i=1}^n (θ̂_i − θ*_i)²,

where θ* is the vector of τ-quantiles of the true signal. For quantile K-NN fused lasso, we use the ADMM algorithm and select the tuning parameter λ by the BIC criterion described in Section 3.3; for K-NN fused lasso, we use the algorithm from Chambolle and Darbon (2009), with the corresponding penalty parameter chosen to minimize the average mean squared error over 500 Monte Carlo replicates. For quantile random forest, we directly use the R package "quantregForest" with the default choice of tree structure and tuning parameters.
Throughout, for both K-NN fused lasso and quantile K-NN fused lasso, we set K = 5, which provides sufficient neighborhood information while keeping computation efficient.
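For concreteness, the two evaluation ingredients just described, the check (pinball) loss used by the quantile estimators and the MSE criterion, can be sketched in a few lines of Python. The function names below are ours, not from any package:

```python
import numpy as np

def check_loss(r, tau):
    """Quantile (pinball) loss rho_tau applied elementwise to residuals r = y - theta."""
    r = np.asarray(r, dtype=float)
    return np.where(r >= 0, tau * r, (tau - 1.0) * r)

def mse(theta_hat, theta_star):
    """Mean squared error (1/n) * sum_i (theta_hat_i - theta_star_i)^2."""
    diff = np.asarray(theta_hat, dtype=float) - np.asarray(theta_star, dtype=float)
    return float(np.mean(diff ** 2))
```

For τ = 0.5 the check loss reduces to half the absolute error, which is why the median case is robust to heavy tails.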
Simulation Study
We generate 500 data sets from the models under each scenario described below, with sample sizes between 10² and 10⁴, and then report the mean squared errors of the three estimators with respect to different quantiles. For each scenario the data are generated as

y_i = θ*_i + ε_i, with θ*_i = f_0(x_i), i = 1, ..., n,

where θ*_i comes from an underlying function f_0, and the errors {ε_i}_{i=1}^n are independent with ε_i ∼ F_i for some distributions F_i, selected from Gaussian, Cauchy, and t-distributions.
Scenario 1
We generate x_i uniformly from [0,1]², and define f_0 : [0,1]² → {0,1} by

f_0(x_i) = 1 if (5/4)x_{i1} + (3/4)x_{i2} > 1, and 0 otherwise.
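A minimal sketch of this data-generating process, assuming the N(0,1) error setting reported for this scenario (the function name and seed handling are ours):

```python
import numpy as np

def scenario1(n, seed=0):
    """Scenario 1: x_i uniform on [0,1]^2, step median signal, N(0,1) errors (assumed)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(size=(n, 2))
    # f_0(x) = 1 on the half-plane (5/4)x_1 + (3/4)x_2 > 1, and 0 otherwise
    theta_star = ((5.0 / 4.0) * x[:, 0] + (3.0 / 4.0) * x[:, 1] > 1.0).astype(float)
    y = theta_star + rng.standard_normal(n)
    return x, theta_star, y
```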
Scenario 2
In this case, we generate X ∈ ℝ² according to the probability density function

p(x) = (1/5) 1_{[0,1]²\[0.4,0.6]²}(x) + (16/25) 1_{[0.45,0.55]²}(x) + (4/25) 1_{[0.4,0.6]²\[0.45,0.55]²}(x).

The function f_0 : [0,1]² → ℝ is defined as f_0(x) = 1{||x − (1/2)(1,1)ᵀ||₂² ≤ 2/1000}(x).

Scenario 3

Again, the x_i are uniform on [0,1]². The smooth function f_0 : [0,1]² → ℝ is defined as f_0(x_i) = 0.4x²_{i1} + 0.6x²_{i2}.
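Because the constants of the Scenario 2 density above may be transcribed only up to normalization, a safe way to simulate from that design is rejection sampling, which needs the density only up to a constant. A sketch (function names are ours; the envelope 16/25 is the largest transcribed value):

```python
import numpy as np

def indicator_sq(x, lo, hi):
    """Indicator of the square [lo, hi]^2, evaluated rowwise on an (n x 2) array."""
    return np.all((x >= lo) & (x <= hi), axis=1).astype(float)

def density_scenario2(x):
    """(Possibly unnormalized) Scenario 2 density, constants as transcribed."""
    in_mid = indicator_sq(x, 0.4, 0.6)
    in_core = indicator_sq(x, 0.45, 0.55)
    return (1 / 5) * (1.0 - in_mid) + (16 / 25) * in_core + (4 / 25) * (in_mid - in_core)

def rejection_sample(n, density, bound, seed=0):
    """Draw n points on [0,1]^2 from `density` (known up to a constant) by rejection."""
    rng = np.random.default_rng(seed)
    out = []
    while len(out) < n:
        x = rng.uniform(size=(n, 2))      # proposals from the uniform envelope
        u = rng.uniform(size=n)
        out.extend(x[u * bound <= density(x)])
    return np.array(out[:n])
```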
Scenario 4
The function f_0 : [0,1]^d → ℝ is defined as f_0(x) = 1 if ||x − (1/4)1_d||₂ < ||x − (3/4)1_d||₂, and −1 otherwise.

The scenarios above have been chosen to illustrate the local adaptivity of our proposed approach to discontinuities of the quantile function. Scenario 1 consists of a piecewise constant median function. Scenario 2 is borrowed from Padilla et al. (2020a) and also has a piecewise constant median. However, the covariates in Scenario 2 are not uniformly drawn and are in fact highly concentrated in a small region of the domain. Scenario 3 is a smooth function, and Scenario 4 is taken from Padilla et al. (2020a). In the latter we also have piecewise constant quantiles, but the boundaries of the different pieces are not axis aligned, and the errors are heteroscedastic. Figure 1 displays the true function and the estimates from both quantile K-NN fused lasso and quantile random forest under Scenarios 1, 2 and 3. Clearly, quantile K-NN fused lasso provides more reasonable estimates in all cases. Quantile random forest estimates are noisier and the performance is even poorer under Cauchy errors.
The results presented in Table 1 indicate that, overall, quantile K-NN fused lasso outperforms the competitors in most scenarios. As expected, for estimating functions with Gaussian errors as in Scenario 1, regular K-NN fused lasso is the best method. For Scenarios 2 and 3, when estimating the conditional median of piecewise continuous or continuous functions with heavy-tailed errors (such as Cauchy and t-distributions), quantile K-NN fused lasso achieves the smallest mean squared errors among the three methods.
We also compare the performance of linear programming with the two algorithms discussed in Section 3, using simulated data from Scenario 3. We obtain almost identical estimates from the three algorithms under the same choice of λ. Regarding computational time, we record the average time consumed over 100 simulations for each algorithm. Figure 2 demonstrates that majorize-minimize (MM) is the most efficient of the three algorithms, and that linear programming (LP) can be very expensive in operational time for large problem sizes.
Real Data
California Housing Data
In this section, we conduct an experiment predicting house values in California based on median income and average occupancy, similar to the experiment in Petersen et al. (2016). The data set, consisting of 20,640 measurements, was originally used in Pace and Barry (1997) and is publicly available from the Carnegie Mellon StatLib data repository (lib.stat.cmu.edu). We perform 100 random train-test splits of the data, with training sizes 1000, 5000, and 10000. For each split, the data not in the training set are treated as test data. For median estimation, we compare the average mean squared prediction errors on the test sets (after taking the log of housing price) from quantile K-NN fused lasso and quantile random forest; in addition, we construct 90% and 95% prediction intervals from both methods and report the proportion of true observations in the test set that fall in the intervals. Both evaluations are averaged over 100 repetitions for each method. The tuning parameter λ for quantile K-NN fused lasso is chosen by the BIC criterion for each training set, and the parameters for quantile random forest are left at their defaults.
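The interval-coverage evaluation just described can be sketched as follows (a minimal sketch; here q_lo and q_hi stand for fitted lower- and upper-quantile estimates at the test covariates, e.g. the 0.05 and 0.95 quantiles for a nominal 90% interval):

```python
import numpy as np

def interval_coverage(y_test, q_lo, q_hi):
    """Fraction of test responses falling inside the pointwise interval [q_lo, q_hi]."""
    y_test = np.asarray(y_test, dtype=float)
    q_lo = np.asarray(q_lo, dtype=float)
    q_hi = np.asarray(q_hi, dtype=float)
    return float(np.mean((y_test >= q_lo) & (y_test <= q_hi)))
```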
From the results in Table 2, quantile K-NN fused lasso performs better than quantile random forest in all cases. This agrees with the piecewise continuous nature of housing prices, which favors our proposed method over the competitor. When we visualize the predicted values on a test set from an experiment with training size 10,000 in Figure 3, we also observe a piecewise continuous structure in the quantile K-NN fused lasso estimates, while the estimates from quantile random forest contain more noise, especially for lower and higher quantiles. Finally, it is clear from Table 2 that both quantile K-NN fused lasso and quantile random forest have coverage probabilities noticeably lower than the nominal level. Since the results in Table 2 correspond to a real data set, training on a subset and predicting on a different subset of the data, we are not aware of how to improve the coverage for either of the competing methods.

Figure 3: Comparison between predictions from two methods for California housing data, with respect to τ = 0.1, 0.5, 0.9. The plot at the top presents the true testing data value, and the color scale is the same among all plots.
Chicago Crime Data
We apply quantile K-NN fused lasso and quantile random forest to a dataset of publicly available crime report counts in Chicago, Illinois in 2015. We preprocess the dataset in the same way as Tansey et al. (2018): merging all observations into a fine-grained 100 × 100 grid based on latitude and longitude, taking the log of the total counts in each cell, and omitting all cells with zero counts. The resulting preprocessed data contain a total of 3756 data points on the grid. Similar to the experiment on the California housing data, we perform train-test splits with training sizes 500, 1000, 1500, and 2000, and model the median counts as a function of position on the X and Y axes of the grid. We then predict the values on the test set and report the average squared errors over 100 test sets. The parameters for both methods are selected in the same way as in the previous experiment. From the results in Table 3, we see that quantile K-NN fused lasso still outperforms quantile random forest in MSE when the training size is at least 1500. We notice that our method suffers for small sample sizes, presumably because the raw data are not smooth enough. Yet, Figure 4 shows that quantile K-NN fused lasso captures local patterns more successfully than quantile random forest in most regions.
Conclusion
In this paper we have proposed and studied a method for estimating quantile functions that are piecewise constant. Such classes of functions arise naturally in real-life data. Accurately estimating signals with this structure is an important but challenging research topic in many practical contexts. However, there is little existing literature on robust estimation of piecewise constant or piecewise Lipschitz signals in multivariate settings. Motivated by this, we have proposed the quantile K-NN fused lasso estimator.
Our numerical experiments on different real-life and simulated examples provide empirical evidence of the superior robustness and local adaptivity of our estimator over other state-of-the-art methods. We have presented two algorithms to efficiently compute the solutions of the corresponding optimization problem behind our method. Our theoretical analysis extends previous work on the K-NN fused lasso (Padilla et al., 2020a) and quantile trend filtering (Padilla and Chatterjee, 2020). Specifically, we have shown that the quantile K-NN fused lasso estimator achieves an optimal convergence rate of n^{−1/d} for estimating a d-dimensional piecewise Lipschitz quantile function. This result holds under mild assumptions on the data, and thus our estimator can be applied to more general models than those with sub-Gaussian errors.
Appendix A. Closed-form Solution to the Primal in ADMM Algorithm
In Section 3.1, we introduced an ADMM algorithm to compute the quantile K-NN estimates. The algorithm requires solving the primal problem (13):

θ ← argmin_{θ∈ℝⁿ} Σ_{i=1}^n ρ_τ(y_i − θ_i) + (R/2)||θ − z + u||².
We can solve the problem coordinate-wise: for i = 1, ..., n, we find the minimizer

θ_i ← argmin_{θ_i∈ℝ} ρ_τ(y_i − θ_i) + (R/2)(θ_i − z_i + u_i)².
By definition,

ρ_τ(y_i − θ_i) = τ(y_i − θ_i) if y_i − θ_i > 0; (τ − 1)(y_i − θ_i) if y_i − θ_i < 0; 0 if y_i − θ_i = 0.
We discuss the three cases separately.
(1) When y_i − θ_i > 0,

θ_i ← argmin τ(y_i − θ_i) + (R/2)(θ_i − z_i + u_i)².

Taking the derivative and setting it to 0 gives θ_i = z_i − u_i + τ/R. The condition y_i − θ_i > 0 then becomes y_i − z_i + u_i > τ/R.

(2) When y_i − θ_i < 0,

θ_i ← argmin (τ − 1)(y_i − θ_i) + (R/2)(θ_i − z_i + u_i)².

Taking the derivative and setting it to 0 gives θ_i = z_i − u_i + (τ − 1)/R. The condition y_i − θ_i < 0 then becomes y_i − z_i + u_i < (τ − 1)/R.

(3) When y_i − θ_i = 0, we simply get θ_i = y_i.
To summarize, the closed-form solution to the primal (13) is

θ_i = z_i − u_i + τ/R if y_i − z_i + u_i > τ/R; z_i − u_i + (τ − 1)/R if y_i − z_i + u_i < (τ − 1)/R; y_i otherwise;

for i = 1, ..., n.
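The closed-form update above can be checked numerically against brute-force minimization of the one-dimensional objective. The sketch below (function names are ours; w plays the role of z_i − u_i) implements both:

```python
import numpy as np

def prox_check(y, w, tau, R):
    """Minimize rho_tau(y - theta) + (R/2) * (theta - w)^2 over theta (closed form)."""
    if y - w > tau / R:
        return w + tau / R
    if y - w < (tau - 1.0) / R:
        return w + (tau - 1.0) / R
    return y

def prox_check_brute(y, w, tau, R):
    """Brute-force check of the same minimization on a fine grid."""
    grid = np.linspace(min(y, w) - 2.0, max(y, w) + 2.0, 200001)
    r = y - grid
    obj = np.where(r >= 0, tau * r, (tau - 1.0) * r) + (R / 2.0) * (grid - w) ** 2
    return float(grid[np.argmin(obj)])
```

The three branches of `prox_check` correspond exactly to cases (1)-(3) above.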
Appendix B. Sensitivity of Initialization in the ADMM Algorithm
In this section, we examine how sensitive the convergence of Algorithm 1 is to different initializations. Recall that Algorithm 1 requires initial values θ^(0), z^(0) and u^(0). A practical choice is θ^(0) = z^(0) = y, u^(0) = 0, where y is the input data, as defined in Algorithm 1. Here, we try three other initializations to assess the sensitivity to the initial values:
(1) The setup in Algorithm 1: θ^(0) = z^(0) = y, u^(0) = 0.

(2) We add perturbations to the first setup: θ^(0) = y + v₁, z^(0) = y + v₂, u^(0) = v₃, where v₁, v₂, v₃ ∼ind N(0, I_n).

(3) Random initialization: θ^(0) = w₁, z^(0) = w₂, u^(0) = w₃, where w₁, w₂, w₃ ∼ind N(0, I_n).

(4) Random initialization with a large multiplicative factor: θ^(0) = 50s₁, z^(0) = 50s₂, u^(0) = 50s₃, where s₁, s₂, s₃ ∼ind N(0, I_n).
We use the data generation mechanism of Scenario 2 from the main text, with errors independently drawn from a t-distribution with 3 degrees of freedom. The regularization parameter λ is fixed at 0.5. We replicate the simulation 500 times with sample size n = 10000. Note that the input data and perturbations are generated independently in every replication. For each initialization setup, we record the average value of the objective function Σ_{i=1}^n ρ_τ(y_i − θ_i) + λ||∇_G θ||₁ after each iteration, and the number of iterations needed to converge when we set κ = 0.01 in Step 3 of Algorithm 1.
From Figure 5, we observe that the value of the objective function converges very quickly to the optimal minimum within only tens of iterations, regardless of the random initialization used. Looking at the numbers of iterations shown in Table 4, the first three initializations have roughly the same convergence speed. If there is a large deviation between the initial value and the true signal, as in the last setup, the ADMM algorithm takes slightly longer to converge. Overall, the ADMM algorithm is insensitive to random initialization, and fast convergence is observed in practice.
Initialization    Average number of iterations to converge
(1)               11
(2)               11.13
(3)               11
(4)               26.65
Appendix C. General Lemmas
The following lemmas hold by conditioning on {x i } n i=1 .
Definition 9 The function ∆² is defined as

∆²(δ) := Σ_{i=1}^n min{|δ_i|, δ_i²},

where δ ∈ ℝⁿ. We also write ∆(δ) = {∆²(δ)}^{1/2}.
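The loss ∆² is straightforward to compute; a minimal sketch (the function name is ours):

```python
import numpy as np

def delta_sq(delta):
    """Huberized loss Delta^2(delta) = sum_i min(|delta_i|, delta_i^2)."""
    delta = np.asarray(delta, dtype=float)
    return float(np.sum(np.minimum(np.abs(delta), delta ** 2)))
```

Note that ∆² behaves quadratically for small coordinates and linearly for large ones, which is why it tolerates heavy-tailed errors.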
Definition 10 For a set S ⊂ ℝⁿ, the sub-Gaussian width of S is defined as

SGW(S) = E[sup_{v∈S} Σ_{i=1}^n s_i v_i],

where s₁, ..., s_n are independent 1-sub-Gaussian random variables.

The notion of sub-Gaussian width appears less often in the literature than the similar notion of Gaussian width,

GW(S) = E[sup_{v∈S} Σ_{i=1}^n z_i v_i],

where z₁, ..., z_n are independent standard normal random variables. In fact, the sub-Gaussian width shares many properties with the Gaussian width, since one can upper bound the sub-Gaussian width by a constant times the Gaussian width using generic chaining; see Chapter 4 in Talagrand (2005) and also Banerjee et al. (2014) for precise statements.
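The Gaussian width is easy to estimate by Monte Carlo for sets where the supremum has a closed form. For example, for the ℓ₁ ball of radius r, sup_{||v||₁ ≤ r} ⟨z, v⟩ = r ||z||_∞, giving the following sketch (function name is ours):

```python
import numpy as np

def gaussian_width_l1_ball(n, radius=1.0, reps=2000, seed=0):
    """Monte Carlo estimate of the Gaussian width of the l1 ball of given radius.

    Uses sup_{||v||_1 <= r} <z, v> = r * ||z||_inf, so the width is r * E||z||_inf.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((reps, n))
    return radius * float(np.mean(np.max(np.abs(z), axis=1)))
```

For large n the estimate should track the known asymptotic order sqrt(2 log n).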
Definition 11 We define the empirical loss function

M̂(θ) = Σ_{i=1}^n M̂_i(θ_i), where M̂_i(θ_i) = ρ_τ(y_i − θ_i) − ρ_τ(y_i − θ*_i).

Setting M_i(θ_i) = E[ρ_τ(y_i − θ_i) − ρ_τ(y_i − θ*_i)], the population version of M̂ becomes

M(θ) = Σ_{i=1}^n M_i(θ_i).

Now, we consider the M-estimator

θ̂ = argmin_{θ∈ℝⁿ} M̂(θ) subject to θ ∈ S,   (20)

and θ* ∈ argmin_{θ∈ℝⁿ} M(θ). Throughout, we assume that θ* ∈ S ⊂ ℝⁿ.
Lemma 12 With the notation from before,

M(θ̂) ≤ sup_{v∈S} {M(v) − M̂(v)}.
Proof. See the proof of Lemma 10 in Padilla and Chatterjee (2020).
Lemma 13 Suppose that Assumption 2 holds. Then there exists a constant c_τ > 0 such that for all δ ∈ ℝⁿ, we have M(θ* + δ) ≥ c_τ ∆²(δ).
Proof. See the proof of Lemma 14 in Padilla and Chatterjee (2020).
Corollary 14 Under Assumption 2, if θ* ∈ S, we have that

E[∆²(θ̂ − θ*)] ≤ E[sup_{v∈S} {M(v) − M̂(v)}].   (21)
Next, we proceed to bound the right hand side of (21).
Lemma 15 (Symmetrization) Under Assumption 2, we have that

E[sup_{v∈S} {M(v) − M̂(v)}] ≤ 2E[sup_{v∈S} Σ_{i=1}^n ξ_i M̂_i(v_i)],
where ξ 1 , ..., ξ n are independent Rademacher variables independent of y 1 , ..., y n .
Proof. See the proof of Lemma 11 in Padilla and Chatterjee (2020).
Lemma 16 (Contraction principle) Under Assumption 2, we have that

E[sup_{v∈S} Σ_{i=1}^n s_i M̂_i(v_i)] ≤ 2 SGW(S − θ*) = 2 SGW(S),
where s 1 , ..., s n are independent 1-subgaussian random variables independent of y 1 , ..., y n .
Proof. Recall that M̂_i(v_i) = ρ_τ(y_i − v_i) − ρ_τ(y_i − θ*_i). Clearly, these are 1-Lipschitz continuous functions. Therefore,

E[sup_{v∈S} Σ_{i=1}^n s_i M̂_i(v_i)] = E[E(sup_{v∈S} Σ_{i=1}^n s_i M̂_i(v_i) | y)]
≤ E[E(sup_{v∈S} Σ_{i=1}^n s_i v_i | y)]
= E[sup_{v∈S} Σ_{i=1}^n s_i (v_i − θ*_i)] + E[Σ_{i=1}^n s_i θ*_i]
= E[sup_{v∈S} Σ_{i=1}^n s_i (v_i − θ*_i)],
where the inequality follows from the Gaussian version of Talagrand's contraction principle; see Corollary 3.17 in Ledoux and Talagrand (2013) and 7.2.13 in Vershynin (2018).
Appendix D. K-NN Embeddings
This section follows Sections D and E in Padilla et al. (2020a), and the construction originally appeared in the flow-based proof of Theorem 4 in Von Luxburg et al. (2014). The main idea is to embed a mesh in the K-NN graph corresponding to the given observations X = (x₁, ..., x_n) under Assumptions 3-5, so that we can bound the total variation along the grid graph from below. In this way, we can further derive an upper bound on the loss of the optimization problems (8) and (10) with respect to the loss function defined in Definition 9.
First, we need to construct a valid grid discussed in Definition 17 of Von Luxburg et al. (2014) with a minor modification. With high probability, a valid grid graph G satisfies the following: (i) the grid width is not too small: each cell of the grid contains at least one of the design points; (ii) the grid width is not too large: points in the same or neighboring cells of the grid are always connected in the K-NN graph.
Given N ∈ ℕ, a d-dimensional grid graph G_lat ⊂ [0,1]^d has vertex set V_lat and edge set E_lat. The grid graph has equal side lengths, and the total number of nodes is |V_lat| = N^d. Without loss of generality, we assume that the nodes of the grid correspond to the points

P_lat(N) = {(i₁/N − 1/(2N), ..., i_d/N − 1/(2N)) : i₁, ..., i_d ∈ {1, ..., N}}.   (22)

Moreover, for z, z' ∈ P_lat(N), (z, z') ∈ E_lat(N) if and only if ||z − z'||₂ = 1/N. Now, we define I(N) = h^{−1}(P_lat(N)) as the mesh in the covariate space X corresponding to the grid graph G_lat(N) ⊂ [0,1]^d through the homeomorphism h from Assumption 5. In general, I(N) acts as a quantization of the domain X; see Alamgir et al. (2014) for more details. We denote the elements of I(N) by u₁, ..., u_{N^d}, and define a collection of cells {C(x)} ⊂ X for x ∈ I(N) as

C(x) = h^{−1}({z ∈ [0,1]^d : h(x) = argmin_{x'∈P_lat(N)} ||z − x'||_∞}).   (23)
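The lattice (22) can be constructed directly; a small sketch (the function name is ours):

```python
import numpy as np
from itertools import product

def lattice_points(N, d):
    """Nodes of the d-dimensional grid: coordinates i/N - 1/(2N) for i = 1, ..., N."""
    coords = [(i / N) - 1.0 / (2 * N) for i in range(1, N + 1)]
    return np.array(list(product(coords, repeat=d)))
```

Adjacent nodes differ by exactly 1/N in a single coordinate, matching the edge rule ||z − z'||₂ = 1/N.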
In order to analyze the behavior of the proposed estimator θ̂ through the grid embedding, we construct two vectors, θ^I ∈ ℝⁿ and θ^{,I} ∈ ℝ^{N^d}, for any signal θ ∈ ℝⁿ. The first vector, θ^I ∈ ℝⁿ, incorporates information about the samples X = (x₁, ..., x_n) and the cells {C(x)}. The idea is to force covariates x_i falling in the same cell to take the same signal value after mapping with the homeomorphism. Formally, we define

(θ^I)_i = θ_j where j = argmin_{l=1,...,n} ||h(P_I(x_i)) − h(x_l)||_∞,   (24)

where P_I(x) is the point in I(N) such that x ∈ C(P_I(x)); if multiple points satisfy the condition, we arbitrarily select one. The second vector, θ^{,I} ∈ ℝ^{N^d}, records coordinates corresponding to the different nodes of the mesh (the centers of the cells) and is associated with θ^I. We first induce index sets corresponding to the elements of I(N) by setting

I_j = {i = 1, ..., n : P_I(x_i) = u_j}, for j = 1, ..., N^d.

If I_j ≠ ∅, then there exists i_j ∈ I_j such that (θ^I)_i = θ_{i_j} for all i ∈ I_j; if I_j is empty, we set θ_{i_j} = 0. We can thus define

θ^{,I} = (θ_{i₁}, ..., θ_{i_{N^d}})ᵀ.
Note that in the proof of Lemma 11 in Padilla et al. (2020a), the authors showed that, under Assumptions 3-5, with probability approaching 1,

max_{x∈I(N)} |C(x)| ≤ poly(log n),   (25)

where |C(x)| denotes the number of design points falling in the cell C(x). We will use this inequality in our proofs later, but we will not make the polynomial function of log n explicit.
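The cell-occupancy quantity max_x |C(x)| can be examined empirically for uniform designs, using axis-aligned cells of side 1/N as a stand-in for the cells (23) when h is the identity (an assumption of this sketch; the function name is ours):

```python
import numpy as np

def max_cell_occupancy(x, N):
    """Assign points in [0,1]^d to the N^d grid cells and return the max count per cell."""
    x = np.asarray(x, dtype=float)
    idx = np.clip((x * N).astype(int), 0, N - 1)           # per-coordinate cell index
    flat = np.ravel_multi_index(idx.T, (N,) * x.shape[1])  # single id per cell
    return int(np.max(np.bincount(flat)))
```

With N ≍ n^{1/d} and n uniform draws, the average occupancy is O(1) and the maximum grows only polylogarithmically, consistent with (25).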
Appendix E. Definition of Piecewise Lipschitz
To make sure Assumption 1 is valid, we require a piecewise Lipschitz condition on the regression function f₀. In this section, we provide the detailed definition of the class of piecewise Lipschitz functions, following Definition 1 in Padilla et al. (2020a). All notation is the same as in the main text, with the extra notation ∂A for the boundary of a set A.
Definition 17 Let Ω_ε := [0,1]^d \ B_ε(∂[0,1]^d). We say that a bounded function g : [0,1]^d → ℝ is piecewise Lipschitz if there exists a set S ⊂ (0,1)^d with the following properties:

• The set S has Lebesgue measure zero.

• For some constants C_S, ε₀ > 0, we have that μ(h^{−1}{B_ε(S) ∩ ([0,1]^d \ Ω_ε)}) ≤ C_S ε for all 0 < ε < ε₀.

• There exists a positive constant L₀ such that if z and z' belong to the same connected component of Ω_ε \ B_ε(S), then |g(z) − g(z')| ≤ L₀ ||z − z'||₂.

For a matrix D, we denote its kernel by Ker(D) and its Moore-Penrose inverse by D†. We also write Π for the projection matrix onto Ker(D), and Ker(D)^⊥ for the orthogonal complement of the kernel space of D.
Our goal is to upper bound the expectation of M(·) − M̂(·) over the constrained set S defined by

S = {θ ∈ ℝⁿ : ||∇_G θ||₁ ≤ V n^{1−1/d} poly(log n)},   (26)

where V = ||∇_G θ*||₁ / [n^{1−1/d} poly(log n)] and V ≥ V*. Through the K-NN embedding, we can instead bound the expected loss over the embedded set defined by

S̃ = {θ ∈ ℝⁿ : ||Dθ^I||₁ ≤ V n^{1−1/d} poly(log n)},   (27)

where θ^I follows the definition in Appendix D.
F.2 Auxiliary lemmas for Proof of Theorem 6
Lemma 18 For v ∈ ℝⁿ, if ||∇_G v||₁ ≤ Ṽ and ∆²(v − θ*) ≤ t², then ||Dv^I||₁ ≤ Ṽ and ∆²(v^I − θ^{*,I}) ≤ c₁ t² for some constant c₁.
Proof. Lemma 4 in Padilla et al. (2020a) gives the inequality

||Dv^I||₁ ≤ ||∇_G v||₁, ∀v ∈ ℝⁿ.

Hence, the first claim follows. Next, we observe that for any vector u ∈ ℝⁿ,

∆²(u^{,I}) = Σ_{j=1}^{N^d} min{|u_{i_j}|, u²_{i_j}} ≤ Σ_{i=1}^n min{|u_i|, u_i²} = ∆²(u),

which yields the second claim for some positive constant c₁.
Lemma 18 gives us the inclusion

{v ∈ S : ∆²(v − θ*) ≤ t²} ⊆ {v ∈ S̃ : ∆²(v^I − θ^{*,I}) ≤ c₁ t²}.   (28)
Lemma 19 Under Assumptions 1-5, we have that

E[sup_{v∈S: ∆²(v^I−θ^{*,I})≤t²} Σ_{i=1}^n ξ_i (v_i − θ*_i)] ≤ poly(log n) E[sup_{v∈S: ∆²(v^I−θ^{*,I})≤t²} Σ_{j=1}^{N^d} ξ̃_j (v^I_j − θ^{*,I}_j)] + 2V n^{1−1/d} poly(log n),

where ξ̃ ∈ ℝ^{N^d} is a 1-sub-Gaussian vector whose coordinates are independent.
Proof. We notice that

ξᵀ(v − θ*) = ξᵀ(v − v^I) + ξᵀ(v^I − θ*^I) + ξᵀ(θ*^I − θ*) ≤ 2||ξ||_∞ (||∇_G v||₁ + ||∇_G θ*||₁) + ξᵀ(v^I − θ*^I),

where the inequality holds by Lemma 4 in Padilla et al. (2020a). Moreover,

ξᵀ(v^I − θ*^I) = Σ_{j=1}^{N^d} Σ_{l∈I_j} ξ_l (v_{i_j} − θ*_{i_j}) = (max_{u∈I} |C(u)|)^{1/2} ξ̃ᵀ(v^I − θ^{*,I}), where ξ̃_j = (max_{u∈I} |C(u)|)^{−1/2} Σ_{l∈I_j} ξ_l.

Clearly, ξ̃₁, ..., ξ̃_{N^d} are independent and also 1-sub-Gaussian, like the original Rademacher random variables ξ₁, ..., ξ_n. Then, by (25), we have

ξᵀ(v^I − θ*^I) ≤ poly(log n) ξ̃ᵀ(v^I − θ^{*,I}).

Hence, the desired inequality holds.
Lemma 20 Let δ ∈ ℝ^{N^d} with ∆²(δ) ≤ t². Then ||Πδ||_∞ ≤ t²/N^d + t/√(N^d).

Proof. Notice that Ker(D) = span(1_{N^d}), so Πδ = (δᵀv)·v, where v = (1/√(N^d)) 1_{N^d}. Hence

||Πδ||_∞ = ||(δᵀv)·v||_∞ ≤ |δᵀv| · ||v||_∞ = |δᵀv| / √(N^d).   (29)

Now,

|δᵀv| ≤ Σ_{i=1}^{N^d} |δ_i||v_i| = Σ_{i} |δ_i||v_i| 1{|δ_i| > 1} + Σ_{i} |δ_i||v_i| 1{|δ_i| ≤ 1}
≤ ||v||_∞ Σ_{i} |δ_i| 1{|δ_i| > 1} + ||v||₂ (Σ_{i} δ_i² 1{|δ_i| ≤ 1})^{1/2}
≤ t²/√(N^d) + t,   (30)

where the first inequality follows from the triangle inequality, and the second from Hölder's and Cauchy-Schwarz inequalities. The claim follows by combining (29) with (30).
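Since Ker(D) = span(1), the projection Π is simply averaging, and the bound of Lemma 20 can be verified numerically (a sketch; function names are ours, and m stands for N^d):

```python
import numpy as np

def proj_onto_ones(delta):
    """Projection of delta onto span(1), the kernel of the grid difference matrix D."""
    delta = np.asarray(delta, dtype=float)
    return np.full_like(delta, delta.mean())

def lemma20_bound_holds(delta, t):
    """Check ||Pi delta||_inf <= t^2/m + t/sqrt(m) whenever Delta^2(delta) <= t^2."""
    m = delta.size
    lhs = np.max(np.abs(proj_onto_ones(delta)))
    return lhs <= t ** 2 / m + t / np.sqrt(m) + 1e-12
```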
Lemma 21 Under Assumptions 1-5, for N ≍ n^{1/d}, we have that

SGW({δ : δ ∈ S̃, ∆²(δ) ≤ c₁t²}) ≤ C_d (c₁t²/√n + √c₁ t) + V n^{1−1/d} poly(log n).

Proof. Recall that D†D = I − Π is the projection onto Ker(D)^⊥, which yields

ξ̃ᵀδ = ξ̃ᵀΠδ + ξ̃ᵀD†Dδ,

and we denote the expected suprema of the two terms by T₁ and T₂, respectively.   (31)
We first bound T₁. Notice that Π is idempotent, i.e., Π² = Π, thus

ξ̃ᵀΠδ = ξ̃ᵀΠ(Πδ) ≤ ||Πᵀξ̃||₁ ||Πδ||_∞ = (|Σ_{j=1}^{N^d} ξ̃_j| / N^{d/2}) · N^{d/2} ||Πδ||_∞,   (32)

where the inequality follows from Hölder's inequality and v = (1/√(N^d)) 1_{N^d}. Then, by Lemma 20,

T₁ ≤ E[|Σ_{j=1}^{N^d} ξ̃_j| / N^{d/2}] (c₁t²/√(N^d) + √c₁ t) ≤ C_d (c₁t²/√n + √c₁ t)   (33)

for some positive constant C_d.
Next, we bound T₂. Notice that

T₂ ≤ E[sup_{δ∈S̃: ∆²(δ)≤c₁t²} ||ξ̃ᵀD†||_∞ ||Dδ||₁] ≤ V n^{1−1/d} poly(log n) E||ξ̃ᵀD†||_∞,   (34)

so we only need to bound E||ξ̃ᵀD†||_∞. Following Section 3 in Hütter and Rigollet (2016), we write D† = [s₁, ..., s_m], where max_{j=1,...,m} ||s_j||₂ is bounded above by (log n)^{1/2} for d = 2 and by a constant for d > 2. Then,

E||ξ̃ᵀD†||_∞ = E[max_{j=1,...,m} |s_jᵀξ̃|] ≤ (max_{j=1,...,m} ||s_j||₂) E[max_{j=1,...,m} |s̃_jᵀξ̃|],   (35)

where s̃_j = s_j / max_{j=1,...,m} ||s_j||₂. Moreover, each s̃_jᵀξ̃ is sub-Gaussian with parameter at most 1, so the expected maximum over the m such variables is at most a constant times (log n)^{1/2}. Combining (34) with (35), we obtain

T₂ ≤ C_d V n^{1−1/d} poly(log n)

for some positive constant C_d. The conclusion follows.
Theorem 22 Suppose that

2 SGW({δ ∈ S̃ : ∆²(δ^I) ≤ c₁η²}) ≤ κ(η)

for a function κ : ℝ → ℝ. Then for all η > 0 we have that

P(∆²(δ̂) > η²) ≤ poly(log n) κ(η)/(c_τ η²) + V n^{1−1/d} poly(log n)/(c_τ η²),

where c_τ is the constant from Lemma 13. Furthermore, if {r_n} is a sequence such that

lim_{t→∞} sup_n [poly(log n) κ(t r_n n^{1/2})/(t² r_n² n) + V n^{1−1/d} poly(log n)/(t² r_n² n)] = 0,

then (1/n) ∆²(θ̂ − θ*) = O_P(r_n²).
Proof. Let δ̂ = θ̂ − θ* and suppose that ∆²(δ̂) > η². Let q² = ∆²(δ̂), and define g : [0,1] → ℝ by g(t) = ∆²(tδ̂). Clearly, g is continuous with g(0) = 0 and g(1) = q² > η², so there exists t_δ̂ such that g(t_δ̂) = η². Hence, letting δ̃ = t_δ̂ δ̂, we observe that M̂(θ* + δ̃) ≤ 0 by the basic inequality M̂(θ̂) ≤ M̂(θ*) = 0 and the convexity of M̂, that θ* + δ̃ ∈ S by convexity of S, and that ∆²(δ̃) = η² by construction. This implies, along with Lemma 13, that

sup_{v∈S: ∆²(v−θ*)≤η²} {M(v) − M̂(v)} ≥ M(θ* + δ̃) − M̂(θ* + δ̃) ≥ M(θ* + δ̃) ≥ c_τ η².

Therefore, combining the results of Lemmas 15-19, we have

P(∆²(δ̂) > η²) ≤ P(sup_{v∈S: ∆²(v−θ*)≤η²} {M(v) − M̂(v)} ≥ c_τ η²)
≤ (1/(c_τ η²)) E[sup_{v∈S: ∆²(v−θ*)≤η²} {M(v) − M̂(v)}]
≤ (1/(c_τ η²)) E[sup_{v∈S: ∆²(v−θ*)≤c₁η²} {M(v) − M̂(v)}]
≤ (2 poly(log n)/(c_τ η²)) SGW({δ ∈ S̃ : ∆²(δ^I) ≤ c₁η²}) + V n^{1−1/d} poly(log n)/(c_τ η²)
≤ poly(log n) κ(η)/(c_τ η²) + V n^{1−1/d} poly(log n)/(c_τ η²),

where the second inequality follows from Markov's inequality. This completes the proof.
F.3 Proof of Theorem 6

Proof. The claim follows immediately from Lemmas 19 and 21 and Theorem 22 by setting r_n ≍ n^{−1/(2d)} poly(log n).
Appendix G. Theorem 7 G.1 Auxiliary lemmas for Proof of Theorem 7
Throughout we assume that Assumptions 1-5 hold, and all notation is the same as in the proof of Theorem 6.
Lemma 23 Let ε ∈ (0,1). Then there exists a choice of λ with λ = Θ(log n) for d = 2 and λ = Θ((log n)^{1/2}) for d > 2, such that for a constant C₀ > 0 we have, with probability at least 1 − ε/4,

κ(θ̂ − θ*) ∈ A, with A := {δ : ||∇_G δ||₁ ≤ C₀ [||∇_G θ*||₁ + (R₁/R₂)(c₁∆²(δ)/n^{1/2} + √c₁ ∆(δ))]}

for all κ ∈ [0,1], where R₁ = C_d (log n)^{1/2}, R₂ = C_d (log n)^{1/2} (log(c_k n))^{1/2} for d = 2 and C_d (log(c_k n))^{1/2} for d > 2, and C₀, C_d are positive constants.
Proof. Fix κ ∈ [0,1], and let δ̃ = κ(θ̂ − θ*). Then by the optimality of θ̂ and the convexity of (8), we have that

Σ_{i=1}^n ρ_τ(y_i − θ̃_i) + λ||∇_G θ̃||₁ ≤ Σ_{i=1}^n ρ_τ(y_i − θ*_i) + λ||∇_G θ*||₁,

where θ̃ = θ* + δ̃. Then, as in the proof of Lemma 3 from Belloni and Chernozhukov (2011),

0 ≤ λ (||∇_G θ*||₁ − ||∇_G θ̃||₁) + (θ̃ − θ*)ᵀ a*,   (37)

where a*_i = τ − 1{y_i ≤ θ*_i} for i = 1, ..., n.
Next, we bound the second term on the right hand side of (37). From Lemma 19, we know it is sufficient to bound ãᵀ(θ̃^I − θ^{*,I}), which we decompose as

ãᵀ(θ̃^I − θ^{*,I}) = ãᵀΠ(θ̃^I − θ^{*,I}) + ãᵀD†D(θ̃^I − θ^{*,I}) =: A₁ + A₂.
From Lemmas 18, 20 and (32), we obtain

A₁ ≤ ||ã||_∞ (∆²(θ̃^I − θ^{*,I})/n^{1/2} + ∆(θ̃^I − θ^{*,I})) ≤ ||ã||_∞ (c₁∆²(θ̃ − θ*)/n^{1/2} + √c₁ ∆(θ̃ − θ*)).   (39)
To bound A₂, we use the result of Lemma 18 to obtain

A₂ ≤ ||(D†)ᵀã||_∞ (||Dθ̃^I||₁ + ||Dθ^{*,I}||₁) ≤ ||(D†)ᵀã||_∞ (||∇_G θ̃||₁ + ||∇_G θ*||₁).   (40)
Since a* has Bernoulli-type coordinates with parameter τ, it is sub-Gaussian with parameter 1/4, and hence ã is also sub-Gaussian. As in the proof of Theorem 2 from Hütter and Rigollet (2016), the following two inequalities hold simultaneously on an event of probability at least 1 − ε/2:

||ã||_∞ ≤ R₁ := C_d (log n)^{1/2},
||(D†)ᵀã||_∞ ≤ R₂ := C_d (log n)^{1/2} (log(c_k n))^{1/2} for d = 2, and C_d (log(c_k n))^{1/2} for d > 2,

for some constant c_k.
Then, with probability at least 1 − ε/2, choosing λ = 2R₂ we obtain

κ(θ̂ − θ*) ∈ A := {δ : ||∇_G δ||₁ ≤ C₀ [||∇_G θ*||₁ + (R₁/R₂)(c₁∆²(δ)/n^{1/2} + √c₁ ∆(δ))]},

for some positive constant C₀.
G.2 Proof of Theorem 7
Proof. Let ε ∈ (0,1). By Lemma 23 we can suppose that the following event holds:

Ω = {κ(θ̂ − θ*) ∈ A, ∀κ ∈ [0,1]}.
Next, define H(η) = {δ ∈ A : ∆(δ) ≤ η}. Hence, if δ ∈ H(η) and Ω holds, then

||∇_G δ||₁ ≤ C₀ [||∇_G θ*||₁ + (R₁/R₂)(c₁∆²(δ)/n^{1/2} + √c₁ ∆(δ))],   (43)

where the inequality follows from the definition of H(η) and Lemma 23.
We now define

L(η) = {δ : ||∇_G δ||₁ ≤ C₀ [||∇_G θ*||₁ + (R₁/R₂)(c₁∆²(δ)/n^{1/2} + √c₁ ∆(δ))], ∆(δ) ≤ η},
L̃(η) = {δ : ||Dδ^I||₁ ≤ C₀ [||∇_G θ*||₁ + (R₁/R₂)(c₁∆²(δ)/n^{1/2} + √c₁ ∆(δ))], ∆(δ^I) ≤ √c₁ η}.

Then

P({∆²(δ̂) > η²} ∩ Ω) ≤ (1/(c_τ η²)) E[sup_{δ∈L(η)} {M(θ* + δ) − M̂(θ* + δ)}] + (λ/(c_τ η²)) sup_{δ∈L(η)} ||∇_G δ||₁
≤ (2 poly(log n)/(c_τ η²)) E[sup_{δ∈L̃(η)} Σ_{i=1}^{N^d} ξ̃_i δ^I_i] + 2C₀ V* n^{1−1/d} poly(log n)/(c_τ η²) + (2C₀ poly(log n)/(c_τ η²)) (R₁/R₂)(c₁η²/n^{1/2} + √c₁ η) + (λ/(c_τ η²)) sup_{δ∈L(η)} ||∇_G δ||₁,   (44)

where the second inequality follows by the same argument as in Lemmas 15, 16, and 19.
By Lemma 21, we have

P({∆²(δ̂) > η²} ∩ Ω) ≤ (2 poly(log n)/(c_τ η²)) [C_d (c₁η²/n^{1/2} + √c₁ η)(log n)^{1/2} + C_d C₀ {V* n^{1−1/d} poly(log n) + (R₁/R₂)(c₁η²/n^{1/2} + √c₁ η)}] + (λ/(c_τ η²)) C₀ {V* n^{1−1/d} poly(log n) + (R₁/R₂)(c₁η²/n^{1/2} + √c₁ η)}.

Hence, given our choice of λ, by choosing

η = c_γ n^{(1/2)(1−1/d)} poly(log n)

for some c_γ > 1, we conclude that P({∆²(δ̂) > η²} ∩ Ω) ≤ ε, provided that c_γ is large enough.
©2021 Steven Siwei Ye and Oscar Hernan Madrid Padilla. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. arXiv:2012.01758v4 [stat.ME] 17 Aug 2021
f₀ takes the value −1 otherwise; the density p is uniform on [0,1]^d. The errors are scaled by (xᵀβ), where β = (1/d, ..., 1/d)ᵀ. Here we simulate with d = 5.
Figure 1: Comparison among observations and estimates from Scenario 1 with Gaussian errors (first row), Scenario 1 with Cauchy errors (second row), Scenario 2 (third row), and Scenario 3 (fourth row). Left column: the function f₀ evaluated at the observed x_i for i = 1, ..., n, with n = 10000. The horizontal and vertical axes of each panel correspond to the coordinates of x_i. Middle column: the corresponding estimate of f₀ obtained via quantile K-NN fused lasso. Right column: the estimate of f₀ obtained via quantile random forest.
Figure 2: A log-scaled plot of time per simulation for the LP, ADMM and MM algorithms against problem size n (15 values from 10² up to 10⁴). For each algorithm, the time to compute the estimate for one simulated data set is averaged over 100 Monte Carlo simulations.
Figure 4: Left: one test data set from the Chicago crime data, under training size 1500. Middle: the estimate obtained via quantile K-NN fused lasso. Right: the estimate obtained via quantile random forest.
Figure 5: Average value of the objective function (over 500 replications) after each iteration step for different initializations.
Table 1: Mean squared error (1/n) Σ_{i=1}^{n} (θ̂_i − θ*_i)², averaging over 500 Monte Carlo simulations for the different methods, sample sizes, errors, and quantiles considered. The numbers in parentheses indicate the standard Monte Carlo errors over the replications.

| n     | Scenario | Error       | τ   | QKNN            | QRF                | KNN              |
|-------|----------|-------------|-----|-----------------|--------------------|------------------|
| 100   | 1        | N(0,1)      | 0.5 | 0.2364 (0.0628) | 0.2242 (0.0467)    | 0.1507 (0.0365)  |
| 1000  | 1        | N(0,1)      | 0.5 | 0.1225 (0.0117) | 0.1569 (0.0116)    | 0.0872 (0.0239)  |
| 5000  | 1        | N(0,1)      | 0.5 | 0.0983 (0.0024) | 0.1348 (0.0047)    | 0.0540 (0.0022)  |
| 10000 | 1        | N(0,1)      | 0.5 | 0.0370 (0.0019) | 0.1279 (0.0028)    | 0.0292 (0.0010)  |
| 100   | 1        | Cauchy(0,1) | 0.5 | 0.1776 (0.0746) | 3059.63 (54248)    | 35.4988 (124.19) |
| 1000  | 1        | Cauchy(0,1) | 0.5 | 0.1440 (0.0817) | 26640.10 (179298)  | 2816.1 (7691.6)  |
| 5000  | 1        | Cauchy(0,1) | 0.5 | 0.1326 (0.0740) | 34038.86 (196566)  | 4628.8 (7989.0)  |
| 10000 | 1        | Cauchy(0,1) | 0.5 | 0.0962 (0.0642) | 57748.95 (200327)  | 4799.0 (7249.6)  |
| 100   | 2        | t_3         | 0.5 | 0.1838 (0.0467) | 0.5591 (0.3569)    | 0.2374 (0.0595)  |
| 1000  | 2        | t_3         | 0.5 | 0.0780 (0.0135) | 0.3935 (0.0875)    | 0.1428 (0.0536)  |
| 5000  | 2        | t_3         | 0.5 | 0.0364 (0.0050) | 0.3479 (0.0477)    | 0.0622 (0.0165)  |
| 10000 | 2        | t_3         | 0.5 | 0.0265 (0.0024) | 0.3311 (0.0457)    | 0.0542 (0.0097)  |
| 100   | 3        | t_2         | 0.5 | 0.0470 (0.0253) | 1.6723 (4.8880)    | 0.1545 (0.4188)  |
| 1000  | 3        | t_2         | 0.5 | 0.0174 (0.0050) | 1.5363 (2.9381)    | 0.0510 (0.0230)  |
| 5000  | 3        | t_2         | 0.5 | 0.0075 (0.0015) | 1.4207 (2.1827)    | 0.0414 (0.0106)  |
| 10000 | 3        | t_2         | 0.5 | 0.0059 (0.0019) | 1.2910 (1.1924)    | 0.0409 (0.0102)  |
| 100   | 4        | t_3         | 0.9 | 0.7413 (0.2180) | 0.6948 (0.5340)    | *                |
| 1000  | 4        | t_3         | 0.9 | 0.3568 (0.1192) | 0.5374 (0.2656)    | *                |
| 5000  | 4        | t_3         | 0.9 | 0.2889 (0.0683) | 0.4420 (0.0422)    | *                |
| 10000 | 4        | t_3         | 0.9 | 0.2409 (0.0173) | 0.4344 (0.0548)    | *                |
| 100   | 4        | t_3         | 0.1 | 1.1563 (0.3923) | 0.8877 (0.6858)    | *                |
| 1000  | 4        | t_3         | 0.1 | 0.4160 (0.1542) | 0.6224 (0.2667)    | *                |
| 5000  | 4        | t_3         | 0.1 | 0.3074 (0.0503) | 0.4897 (0.0644)    | *                |
| 10000 | 4        | t_3         | 0.1 | 0.2597 (0.0301) | 0.4536 (0.0494)    | *                |
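The entries of Table 1 pair a mean with a Monte Carlo standard error; a minimal sketch of that computation (array and function names are ours, not the paper's):

```python
import numpy as np

def mse_table_entry(theta_hat, theta_star):
    """Per-replication MSE (1/n) * sum_i (theta_hat_i - theta_star_i)^2,
    returned as (mean over replications, standard Monte Carlo error),
    i.e. the 'value (error)' pairs reported in Table 1.

    theta_hat: (n_reps, n) array of estimates; theta_star: (n,) ground truth.
    """
    per_rep = np.mean((np.asarray(theta_hat, dtype=float)
                       - np.asarray(theta_star, dtype=float)) ** 2, axis=1)
    return per_rep.mean(), per_rep.std()
```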
Table 2: Average test set prediction error (across the 100 test sets) on California housing data. For the median, we report the mean squared errors; for other quantiles (τ ∈ {0.9, 0.95}), we report the averaged proportion of the true data located in the predicted confidence interval. Standard Monte Carlo errors are recorded in parentheses. The number of nearest neighbours, K, is chosen as 5 for the quantile K-NN fused lasso.
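The quantile metrics above rest on the check (pinball) loss of quantile regression (Koenker and Bassett Jr, 1978), which is minimised in expectation by the τ-quantile. A minimal numpy sketch (the function name is ours):

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Check (pinball) loss rho_tau(u) = u * (tau - 1{u < 0}) with u = y - q,
    averaged over observations."""
    u = np.asarray(y, dtype=float) - np.asarray(q, dtype=float)
    return np.mean(u * (tau - (u < 0)))
```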
Table 3: Average test set prediction errors (across the 100 test sets) with standard errors on Chicago Crime data.
Table 4: Number of iterations to converge under different initializations, averaging over 500 Monte Carlo simulations.
Roughly speaking, a bounded function g is piecewise Lipschitz if there exists a small set S that partitions [0, 1]^d in such a way that g is Lipschitz within each connected component of the partition. Theorem 2.2.1 in Ziemer (2012) implies that if g is piecewise Lipschitz, then g has bounded variation on any open set within a connected component.

Appendix F. Theorem 6
F.1 Notations
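The bounded-variation notion invoked in the remark above can be stated explicitly; the following is the standard definition (our notation, not necessarily the paper's exact statement):

```latex
% Total variation of g on an open set \Omega \subseteq [0,1]^d:
\mathrm{TV}(g,\Omega) \;=\; \sup\Big\{ \int_{\Omega} g \,\operatorname{div}\phi \,dx
  \;:\; \phi \in C_c^{1}(\Omega;\mathbb{R}^d),\ \|\phi\|_{\infty} \le 1 \Big\},
% and g has bounded variation on \Omega if \mathrm{TV}(g,\Omega) < \infty.
% Piecewise Lipschitz (informally): there is a closed set S of measure zero
% such that [0,1]^d \setminus S has finitely many connected components, on
% each of which g is Lipschitz.
```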
K. Yu and R. A. Moyeed. Bayesian quantile regression. Statistics & Probability Letters, 54(4):437-447, 2001.

W. P. Ziemer. Weakly Differentiable Functions: Sobolev Spaces and Functions of Bounded Variation, volume 120. Springer Science & Business Media, 2012.
happen with probability at least 1 − /2, with A as in Lemma 23. Then,

P(Δ²(δ) > η²) ≤ P({Δ²(δ) > η²} ∩ Ω) + /2.

Now suppose that the event {Δ²(δ) > η²} ∩ Ω holds. As in the proof of Theorem 22, there exists δ̃ = t_δ̃ δ with t_δ̃ ∈ [0, 1] such that δ̃ ∈ A and Δ²(δ̃) = η². Hence, by the basic inequality, where the second inequality follows from Lemma 13. Therefore, where the second inequality follows from Markov's inequality, and the last from the triangle inequality.
M. Alamgir, G. Lugosi, and U. von Luxburg. Density-preserving quantization with application to graph downsampling. In Proceedings of the 27th Conference on Learning Theory, volume 35, pages 543-559, 2014.

A. Banerjee, S. Chen, F. Fazayeli, and V. Sivakumar. Estimation with norm regularization. In Advances in Neural Information Processing Systems 27, pages 1556-1564, 2014.

A. Belloni and V. Chernozhukov. l1-penalized quantile regression in high-dimensional sparse models. The Annals of Statistics, 39(1):82-130, 2011.

A. Belloni, V. Chernozhukov, D. Chetverikov, and I. Fernández-Val. Conditional quantile processes based on series or many regressors. Journal of Econometrics, 213(1):4-29, 2019.

J. L. Bentley. Multidimensional divide-and-conquer. Communications of the ACM, 23(4):214-229, 1980.

S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1-122, 2011.

H. L. Brantley, J. Guinness, and E. C. Chi. Baseline drift estimation for air quality data using quantile trend filtering. The Annals of Applied Statistics, 14(2):585-604, 2020.

Z. Cai and X. Xu. Nonparametric quantile estimations for dynamic smooth coefficient models. Journal of the American Statistical Association, 103(484):1595-1608, 2008.

R. Castro, R. Willett, and R. Nowak. Faster rates in regression via active learning. Technical report, University of Wisconsin, Madison, 2005.

A. Chambolle and J. Darbon. On total variation minimization and surface evolution using parametric maximum flows. International Journal of Computer Vision, 84(3):288-307, 2009.

J. Chen, H.-r. Fang, and Y. Saad. Fast approximate kNN graph construction for high dimensional data via recursive Lanczos bisection. Journal of Machine Learning Research, 10:1989-2012, 2009.

D. D. Cox. Asymptotics for M-type smoothing splines. The Annals of Statistics, 11(2):530-551, 1983.

R. L. Eubank. Spline Smoothing and Nonparametric Regression, volume 90. M. Dekker, New York, 1988.

J. Fan, Y. Fan, and E. Barut. Adaptive robust variable selection. The Annals of Statistics, 42(1):324-351, 2014.

L. Györfi, M. Kohler, A. Krzyzak, and H. Walk. A Distribution-Free Theory of Nonparametric Regression. Springer Science & Business Media, 2006.

X. He and P. Shi. Convergence rate of B-spline estimators of nonparametric conditional quantile functions. Journal of Nonparametric Statistics, 3(3-4):299-308, 1994.

X. He, P. Ng, and S. Portnoy. Bivariate quantile smoothing splines. Journal of the Royal Statistical Society, Series B (Methodological), 60(3):537-550, 1998.

D. S. Hochbaum and C. Lu. A faster algorithm for solving a generalization of isotonic median regression and a class of fused lasso problems. SIAM Journal on Optimization, 27(4):2563-2596, 2017.

D. R. Hunter and K. Lange. Quantile regression via an MM algorithm. Journal of Computational and Graphical Statistics, 9(1):60-77, 2000.

J.-C. Hütter and P. Rigollet. Optimal rates for total variation denoising. In Proceedings of the 29th Annual Conference on Learning Theory, volume 49, pages 1115-1146, 2016.

K. Kato. Group lasso for high dimensional sparse quantile regression models, 2011.

S.-J. Kim, K. Koh, S. Boyd, and D. Gorinevsky. l1 trend filtering. SIAM Review, 51(2):339-360, 2009.

R. Koenker. Quantile Regression. Cambridge University Press, 2005.

R. Koenker and G. Bassett Jr. Regression quantiles. Econometrica, 46(1):33-50, 1978.

R. Koenker, P. Ng, and S. Portnoy. Quantile smoothing splines. Biometrika, 81(4):673-680, 1994.

M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and Processes. Springer Science & Business Media, 2013.

Y. Li and J. Zhu. l1-norm quantile regression. Journal of Computational and Graphical Statistics, 17(1):163-185, 2008.

E. Mammen and S. van de Geer. Locally adaptive regression splines. The Annals of Statistics, 25(1):387-413, 1997.

N. Meinshausen. Quantile regression forests. Journal of Machine Learning Research, 7:983-999, 2006.

K. Pace and R. Barry. Sparse spatial autoregressions. Statistics & Probability Letters, 33(3):291-297, 1997.

O. H. M. Padilla and S. Chatterjee. Risk bounds for quantile trend filtering, 2020.

O. H. M. Padilla, J. Sharpnack, Y. Chen, and D. M. Witten. Adaptive nonparametric regression with the k-nearest neighbour fused lasso. Biometrika, 107(2):293-310, 2020a.

O. H. M. Padilla, W. Tansey, and Y. Chen. Quantile regression with ReLU networks: estimators and minimax rates, 2020b.

A. Petersen, N. Simon, and D. Witten. Convex regression with interpretable sharp partitions. Journal of Machine Learning Research, 17(94):1-31, 2016.

M. Pietrosanu, J. Gao, L. Kong, B. Jiang, and D. Niu. Advanced algorithms for penalized quantile and composite quantile regression. Computational Statistics, pages 1-14, 2017.

L. I. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1-4):259-268, 1992.

G. Schwarz. Estimating the dimension of a model. The Annals of Statistics, 6(2):461-464, 1978.

V. Spokoiny, W. Wang, and W. K. Härdle. Local quantile regression. Journal of Statistical Planning and Inference, 143(7):1109-1129, 2013.

M. Talagrand. The Generic Chaining. Springer, 2005.

W. Tansey, J. Thomason, and J. Scott. Maximum-variance total variation denoising for interpretable spatial smoothing. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, 2018.

R. Tibshirani and M. Saunders. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society, Series B (Methodological), 67(1):91-108, 2005.

R. J. Tibshirani. Adaptive piecewise polynomial estimation via trend filtering. The Annals of Statistics, 42(1):285-323, 2014.

R. J. Tibshirani and J. Taylor. Degrees of freedom in lasso problems. The Annals of Statistics, 40(2):1198-1232, 2012.

F. I. Utreras. On computing robust splines and applications. SIAM Journal on Scientific and Statistical Computing, 2(2):153-163, 1981.

R. Vershynin. High-Dimensional Probability: An Introduction with Applications in Data Science. Cambridge University Press, 2018.

U. von Luxburg, A. Radl, and M. Hein. Hitting and commute times in large random neighborhood graphs. Journal of Machine Learning Research, 15(52):1751-1798, 2014.

M. J. Wainwright. High-Dimensional Statistics: A Non-Asymptotic Viewpoint, volume 48. Cambridge University Press, 2018.

Y.-X. Wang, J. Sharpnack, A. Smola, and R. J. Tibshirani. Trend filtering on graphs. Journal of Machine Learning Research, 17(105):1-41, 2016.

T. T. Wu and K. Lange. Coordinate descent algorithms for lasso penalized regression. The Annals of Applied Statistics, 2(1):224-244, 2008.

K. Yu and M. C. Jones. Local linear quantile regression. Journal of the American Statistical Association, 93(441):228-237, 1998.
|
[
"https://github.com/stevenysw/qt_knnfl"
] |
[
"GG Carinae: Discovery of orbital phase dependent 1.583-day periodicities in the B[e] supergiant binary",
"GG Carinae: Discovery of orbital phase dependent 1.583-day periodicities in the B[e] supergiant binary"
] |
[
"Augustus Porter 1★ \nDepartment of Physics\nUniversity of Oxford\nDenys Wilkinson Building, Oxford, United Kingdom\n",
"Katherine Blundell \nDepartment of Physics\nUniversity of Oxford\nDenys Wilkinson Building, Oxford, United Kingdom\n",
"Philipp Podsiadlowski \nDepartment of Physics\nUniversity of Oxford\nDenys Wilkinson Building, Oxford, United Kingdom\n",
"Steven Lee \nAnglo-Australian Telescope\nCoonabarabran, NSW 2357, Australia\n\nResearch School of Astronomy and Astrophysics\nAustralian National University\nCanberra, ACT 2611\n"
] |
[
"Department of Physics\nUniversity of Oxford\nDenys Wilkinson Building, Oxford, United Kingdom",
"Department of Physics\nUniversity of Oxford\nDenys Wilkinson Building, Oxford, United Kingdom",
"Department of Physics\nUniversity of Oxford\nDenys Wilkinson Building, Oxford, United Kingdom",
"Anglo-Australian Telescope\nCoonabarabran, NSW 2357, Australia",
"Research School of Astronomy and Astrophysics\nAustralian National University\nCanberra, ACT 2611"
] |
[
"MNRAS"
] |
GG Carinae is a binary whose primary component is a B[e] supergiant. Using photometric data from TESS, ASAS, OMC, and ASAS-SN, and spectroscopic data from the Global Jet Watch to study visible He I, Fe II and Si II emission lines, we investigate the short-period variations which are exhibited in GG Car. We find a hitherto neglected periodicity of 1.583156 ± 0.0002 days that is present in both its photometry and the radial velocities of its emission lines, alongside variability at the well-established ∼31-day orbital period. We find that the amplitudes of the shorter-period variations in both photometry and some of the emission lines are modulated by the orbital phase of the binary, such that the short-period variations have largest amplitudes when the binary is at periastron. There are no significant changes in the phases of the shortperiod variations over the orbital period. We investigate potential causes of the 1.583-day variability, and find that the observed period agrees well with the expected period of the = 2 f-mode of the primary given its mass and radius. We propose that the primary is periodically pulled out of hydrostatic equilibrium by the quadrupolar tidal forces when the components are near periastron in the binary's eccentric orbit ( = 0.5) and the primary almost fills its Roche lobe. This causes an oscillation at the = 2 f-mode frequency which is damped as the distance between the components increases.
|
10.1093/mnras/stab817
|
[
"https://arxiv.org/pdf/2103.09725v2.pdf"
] | 232,258,099 |
2103.09725
|
5e2d0868bf0f0f1b199f5dadceb2bacc83e55730
|
GG Carinae: Discovery of orbital phase dependent 1.583-day periodicities in the B[e] supergiant binary
2020
Augustus Porter 1★
Department of Physics
University of Oxford
Denys Wilkinson Building, Oxford, United Kingdom
Katherine Blundell
Department of Physics
University of Oxford
Denys Wilkinson Building, Oxford, United Kingdom
Philipp Podsiadlowski
Department of Physics
University of Oxford
Denys Wilkinson Building, Oxford, United Kingdom
Steven Lee
Anglo-Australian Telescope
Coonabarabran, NSW 2357, Australia
Research School of Astronomy and Astrophysics
Australian National University
Canberra, ACT 2611
MNRAS
000, 2020. Accepted 2021 March 17. Received 2021 March 17; in original form 2020 November 23. Preprint 19 March 2021, compiled using MNRAS LaTeX style file v3.0. Key words: stars: binaries - stars: emission-line, Be - stars: supergiants - stars: individual: GG Car
INTRODUCTION
B[e] supergiants (B[e]SGs) are a class of rare stars which are not predicted by any stellar evolution models. They are characterized by hybrid spectra of hot stars with infrared excess, strong emission in Hydrogen Balmer and Helium lines, strong permitted and forbidden emission lines from a number of elements, wide absorption lines in the ultraviolet (UV) spectrum, and significant infrared excesses. These features point towards a complex circumstellar environment (Zickgraf et al. 1985, 1986; Kraus 2019). Currently there are only ∼33 confirmed B[e]SGs discovered, and ∼25 further candidates (Kraus et al. 2014; Levato et al. 2014; Kraus 2009; Kraus 2017; Kraus 2019). Their formation channels and the origin of the B[e] phenomenon are unclear, with some studies ascribing the phenomena to binarity (Podsiadlowski et al. 2006; Miroshnichenko 2007; Wang et al. 2012) and others to non-radial pulsations. The opaque circumstellar envelopes of B[e]SGs generally preclude direct observation of photospheric absorption lines and therefore the determination of the stars' surface conditions (e.g. Kraus 2009). The circumstellar envelopes must be formed by enhanced mass-loss or ejection, although the exact mechanism remains unknown. B[e]SGs are expected to be rapid rotators (Zickgraf et al. 1986); however, direct observations of the rotation speeds of B[e]SGs are inconclusive.

★ E-mail: [email protected]
GG Carinae (GG Car, also known as HD 94878 and CPD-59 2855) is an enigmatic Galactic B[e]SG binary which has been studied for over a century due to its peculiar spectroscopic and photometric properties (Pickering & Fleming 1896; Kruytbosch 1930; Greenstein 1938). Lamers et al. (1998) classified GG Car as a B[e]SG, building on the work of McGregor et al. (1988) and Lopes et al. (1992), noting their observation of the B[e] phenomenon in the object; its high luminosity; indications of mass-loss through P Cygni line profiles; and its hosting of a hybrid spectrum of narrow emission lines and broad absorption features. Porter et al. (2021), hereafter Paper A, using the measured parallax of GG Car in Data Release 2 from the Gaia mission (Prusti et al. 2016; Brown et al. 2018), refined the luminosity of the primary and used this to constrain the primary mass and radius. Table 1 lists the primary's stellar parameters.

Table 1. Gaia DR2 distance, d, and stellar parameters of the primary in GG Car, where M_pr is the mass of the primary, T_eff is the effective temperature of the primary, L_pr is the luminosity of the primary, and R_pr is the radius of the primary. All values taken from Paper A (Porter et al. 2021) except T_eff, which is taken from Marchiano et al. (2012).

Table 2. Orbital solution of GG Car from Paper A (Porter et al. 2021). P is the orbital period, K is the amplitude of the radial velocity, ω is the argument of periastron, e the orbital eccentricity, and T_peri the time of periastron. M_sec is the inferred mass of the secondary, and a is the resulting orbital separation.

Studies of the CO in GG Car's circumbinary disk suggest that the primary has evolved off the main sequence, but is in an early pre-red supergiant phase of its post-main-sequence lifetime (Kraus 2009; Kraus et al. 2013; Oksala et al. 2013). However, this determination depends on the assumed rotation velocity of the primary, which is unknown.
Paper A investigates the variability of GG Car over its orbital period in photometry and Global Jet Watch (GJW) spectroscopy. We found that the photometric variations are continuous over the orbital period with one maximum and one minimum, and also that the He I, Fe II and Si II emission lines in GG Car's visible spectrum originate in the wind of the B[e]SG primary. We then determined an accurate orbital solution of the binary in GG Car, and found the orbit is significantly eccentric (e = 0.50). Paper A shows that the system is brightest in the V band at periastron, and that the photometric variations of the system at the orbital period may be described by enhanced mass transfer at periastron, with the secondary accreting the wind of the primary. The full orbital solution is given in Table 2. Orbital phases in this study are calculated as
Orbital phase = ((t − T_peri) / P) mod 1,   (1)

where t is the time in JD, T_peri is the time of periastron passage, and P is the orbital period.
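In code form, the phase computation above might look like the following minimal sketch (function and variable names are ours):

```python
import numpy as np

def orbital_phase(t_jd, t_peri, period=31.01):
    """Equation (1): fractional part of (t - T_peri) / P, in [0, 1)."""
    return np.mod((np.asarray(t_jd, dtype=float) - t_peri) / period, 1.0)
```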
Early time-series photometry studies noticed that GG Car displays significant intra-night variability, separate from its variability over its ∼31-day orbital period (Kruytbosch 1930; Greenstein 1938). Gosset et al. (1984), through Fourier analysis, found an indication of a periodicity at ∼1.6 days in the system's photometry. This led them to state that one of the GG Car components is a variable, but no further analysis was undertaken and this periodicity has been neglected since that publication. Krtičková & Krtička (2018) were unable to determine a clear UV lightcurve of GG Car over the orbital period, presumably due to variability in the system; they conclude that the binary component that is brightest in the UV is the variable, but do not find a period.
In this study, we investigate this short-period variability of GG Car in detail, in both photometry and spectroscopy. The structure of this paper is as follows: Section 2 introduces the -band and TESS photometry, and the Global Jet Watch spectroscopy of GG Car; Section 3 studies the variability of the system's photometry and emission lines' radial velocities; Section 3.3 investigates the relationship between the amplitude of the short-period variability in the system and the orbital phase of the binary; Section 4 presents our discussions; and Section 5 presents our conclusions.
OBSERVATIONS
V-band photometric observations
V-band photometric data of GG Car are available from the All Sky Automated Survey (ASAS; Pojmański & Maciejewski 2002; Pojmański 2004), the Optical Monitoring Camera (OMC) aboard the INTEGRAL satellite (Mas-Hesse et al. 2003), and the All Sky Automated Survey for Supernovae (ASAS-SN; Shappee et al. 2014; Kochanek et al. 2017). Each of these surveys uses standard Johnson V filters, centred at 550 nm and with a full width at half maximum of 88 nm. Further details of the V-band observations used in this study for each survey are given in Paper A.
TESS photometry
The Transiting Exoplanet Survey Satellite (TESS; Ricker et al. 2014) is a mission geared towards discovering new exoplanet candidates; however, its high-cadence and high-precision photometry of the majority of the sky means that it has proved a valuable resource for stellar astrophysics. The satellite is in a highly elliptical 13.7-day orbit around Earth, and observes the sky in 26 partially overlapping "Sectors", each Sector being observed for roughly one month. The passband filter has an effective wavelength of 7 500 Å and a width of 4 000 Å; this wide bandpass is roughly centred on the Cousins I band and also encompasses neighbouring bands. The filter therefore transmits to longer wavelengths than the V-band surveys described in Section 2.1. TESS is able to create exquisite light curves for objects whose Johnson V magnitude lies between 3 and 12 mag. 400 000 pre-selected sources have had reduced photometric data at two-minute cadence released, of which GG Car is unfortunately not a member. However, unreduced full-frame image (FFI) data with a sampling rate of 30 minutes are available for any source which lies within one of TESS's Sectors. GG Car is located in TESS Sectors 10 and 11, which were observed from 2019-03-26 to 2019-05-21, covering almost two full orbital cycles of the binary, and its mean V-band magnitude of ∼8.6 mag places it ideally within the observing limits of TESS. We reduced the TESS FFI data using the eleanor framework (Feinstein et al. 2019).
The pixel scale of TESS is 21 arcseconds per pixel, with a point-spread function of a similar scale. This presents a problem for GG Car, since it is separated by only 49 arcseconds from its nearest neighbour, V413 Car. eleanor is able to minimise the impact that this may have by choosing optimal apertures and PSF modelling. We reduce the FFI data to 15×15 pixel "postage stamps", and model the PSFs of both GG Car and V413 Car as Moffat profiles. We block the brightest 20% of pixels away from the target, which effectively masks background stars, aiding the background subtraction. The data are converted to TESS magnitudes using the mean magnitude of 7.696 taken from the TESS Input Catalogue. The uncertainties of the individual data points of the TESS data are very low, of order 10^-4 mag.
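The background-masking step described above can be sketched as follows; this is an illustration of the idea (mask the brightest pixels so stars do not bias the sky estimate), not the eleanor internals, and the function name is ours:

```python
import numpy as np

def masked_background(stamp, mask_fraction=0.2):
    """Estimate the sky background of a postage stamp by discarding the
    brightest `mask_fraction` of pixels (which removes stars) and taking
    the median of the remainder."""
    flat = np.sort(np.asarray(stamp, dtype=float).ravel())
    keep = flat[: int(np.ceil(flat.size * (1.0 - mask_fraction)))]
    return np.median(keep)
```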
To ensure that the features seen in the TESS light curve of GG Car are real and not instrumental artefacts, we extracted the data for three similarly bright stars which were observed nearby on the same CCD as GG Car (V413 Car, AG Car, and HD 94961) using similar methods. The light curves of these other stars do not display the same features as those observed in GG Car.
Global Jet Watch spectroscopy
The Global Jet Watch (GJW) has been collecting mid-resolution (R ∼ 4 000) optical spectroscopic data on a variety of objects, including GG Car, which it has been observing since early 2015. GJW is an array of five telescopes separated in longitude which take optical spectra from ∼5 800 - 8 400 Å. Our observations of GG Car have exposure times of either 1000 or 3000 seconds; owing to its dominant brightness, H-alpha is saturated in both the 1000 and 3000 s exposures. The spectra are barycentric corrected using heliocentric velocities calculated with the barycorrpy package (Kanodia & Wright 2018). In this study, all spectra are normalised by the local continuum. The GJW spectra studied are further described in Paper A.

THE SHORT-PERIOD VARIABILITY OF GG CAR

Photometric variability

Figure 1 displays the TESS lightcurve of GG Car. The TESS data cover nearly two orbital cycles of the binary of GG Car with high precision and cadence. Both the longer-term variation at the orbital period and the shorter-period variations along the lightcurve are apparent. It is also clear that the amplitude of the short-period variation changes over the epoch of observation, with the amplitudes being correlated with the brightness of the system. Figure 2 displays the Fourier power spectrum of the ASAS, ASAS-SN, and OMC V-band photometry of GG Car in the top panel, and the power spectrum of the TESS photometry in the bottom panel. These power spectra, along with all subsequent power spectra in this study, were calculated using the CLEAN algorithm of Roberts et al. (1987), which deconvolves the Fourier transform of the data from the Fourier transform of the observational aperture function, thereby overcoming the artefacts that inevitably arise from transforming irregularly sampled data.
The photometric periodograms are dominated by the ∼31-day orbital period of the binary; however, there are other periods present in the power spectrum. Most notable is the significant peak at a higher frequency of ∼0.632 days⁻¹, corresponding to a ∼1.583-day period. This is the same higher-frequency period noted in Gosset et al. (1984). The V-band data were all taken from ground-based surveys, with the exception of OMC, so there is a peak complex in this power spectrum around 1 day⁻¹ due to aliasing effects arising from the Earth's rotation period. There is no such complex in the TESS periodogram, since TESS is a space-based mission.
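As an illustration of periodogram peak extraction on irregularly sampled data, the following numpy-only sketch computes a classical ("dirty") Schuster periodogram for a mock 1.583-day signal and reads off the peak with a Gaussian (log-parabola) fit. It is not the CLEAN implementation of Roberts et al. (1987), which additionally deconvolves the spectral window, and all names and numbers are ours:

```python
import numpy as np

def classical_periodogram(t, y, freqs):
    """'Dirty' Schuster periodogram P(f) = |sum_j y_j e^{-2 pi i f t_j}|^2 / N
    of irregularly sampled data; CLEAN would further deconvolve this against
    the window function of the sampling."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    return np.abs(np.exp(-2j * np.pi * np.outer(freqs, t)) @ y) ** 2 / len(t)

def gaussian_peak(freqs, power):
    """Centroid and standard deviation of a peak from a quadratic fit to
    log-power (exact for a Gaussian profile)."""
    fmax = freqs[np.argmax(power)]
    c2, c1, _ = np.polyfit(freqs - fmax, np.log(power), 2)
    return fmax - c1 / (2.0 * c2), np.sqrt(-1.0 / (2.0 * c2))

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 60.0, 400))        # days, irregular sampling
y = 0.05 * np.sin(2 * np.pi * t / 1.583156)     # mock 1.583-d signal, mag
freqs = np.linspace(0.55, 0.71, 800)            # cycles per day
power = classical_periodogram(t, y, freqs)
i = np.argmax(power)
f0, df = gaussian_peak(freqs[i - 5:i + 6], power[i - 5:i + 6])
```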
We calculate the frequency and estimate the uncertainty of the peaks in the power spectra by fitting Gaussians to them, taking the centroids and the standard deviations of the peaks as the frequencies and their uncertainties, respectively. Table 3 presents the periodicities found from the photometric data. The ∼31-day orbital period derived from the photometry is consistent with the spectroscopic period presented in Paper A within their respective uncertainties. For the rest of this paper we therefore calculate orbital phases using the spectroscopic period and ephemeris of Paper A, which is more precise at 31.01 ± 0.01 days, to match the orbital phase of the binary. The value of the short period used in this study is the 1.583156 ± 0.0002 day determination from the -band photometry, as it is more precise than the TESS determination, and we use this period for both photometry and spectroscopy. We therefore calculate the phases of the short period by
$$\text{phase} = \frac{t - t_{\rm peri}}{1.583156}\,, \tag{2}$$
where $t$ is the JD of observation. We calculate the phases relative to $t_{\rm peri}$, though the choice of this reference time is arbitrary for the short period and without physical significance. Figure 3 displays the -band and TESS photometry folded according to Equation 2, once the variations at the orbital period are subtracted; black points show the average values in each phase bin. The short period is very clear in both the -band and the TESS data, and the averages agree in phase, indicating an accurate and persistent period, as the two datasets span nearly 19 years of observation. There is, however, considerable scatter in the folded data. One cause of this scatter is, as discussed in Paper A, that the photometric variations in each 31-day orbital cycle are not identical, but vary in shape and depth. This can also be seen in the TESS photometric data in Figure 1: the middle minimum is deeper than the minima at the start and end of the observational period. Another cause of the scatter is the variable amplitude of the 1.583-day variations, which is very clear in the TESS data. This amplitude modulation is further discussed in Section 3.3.
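Folding by Equation 2 can be sketched as below. The reference epoch `T_PERI` is a placeholder value: the paper uses the periastron epoch from Paper A, which is not reproduced in this section.

```python
import numpy as np

P_SHORT = 1.583156        # short period in days
T_PERI = 2450000.0        # placeholder reference JD (the paper uses Paper A's t_peri)

def fold(jd, period=P_SHORT, t0=T_PERI):
    """Phase in [0, 1) following Equation 2: phase = (t - t_peri) / P, mod 1."""
    return ((np.asarray(jd) - t0) / period) % 1.0

# Half a cycle and one full cycle after the reference epoch
jd = np.array([2450000.0, 2450000.79158, 2450001.583156])
phases = fold(jd)         # approximately [0.0, 0.5, 0.0 (or 1-epsilon)]
```

Binning these phases and averaging within each bin reproduces the black points of Figure 3.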
Spectroscopic variability
We now turn to spectroscopic variability, as observed by the Global Jet Watch. The variability in this section concerns the radial velocities (RVs) of emission lines in the visible spectrum of GG Car. We use the same spectra and Gaussian fitting methods to extract the RVs of the emission lines as Paper A, and we refer the reader to that study for details of the methodology. Paper A, among other studies (e.g. Blundell et al. 2007; Grant et al. 2020), has shown that Gaussian fitting is a robust method to extract emission line centres. The 1.583-day periodicity is detected in the RVs of the He I emission lines, and in some Si II and Fe II emission lines. Figure 4 displays the Fourier power spectra for the RVs of the emission of the three visible He I emission lines. The bottom panel displays the geometric mean of the three power spectra, which is used to detect common periodicities across the line species. Taking the geometric mean suppresses spurious peaks present in only individual periodograms, and promotes common periodicities which exist across all line species. This allows us to determine whether real periodicities exist in noisy periodograms. As with the photometric power spectra in Figure 2, the He I variability is dominated by the orbital period, and each of the lines also has a peak at the short-period 1.583-day frequency. There are no common significant peaks at other periods, as shown in the geometric mean of the power spectra.
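The geometric-mean trick for promoting common peaks can be sketched with synthetic periodograms. The Lorentzian peak shapes, frequencies, and seed below are illustrative only, not the paper's data.

```python
import numpy as np

def geometric_mean_power(spectra):
    """Combine per-line power spectra (on the same frequency grid) so that only
    periodicities common to all lines survive; single-line spurious peaks are
    suppressed because a near-zero power in any one spectrum drags the mean down."""
    spectra = np.asarray(spectra)
    return np.exp(np.mean(np.log(spectra + 1e-30), axis=0))

freqs = np.linspace(0.01, 1.0, 1000)
common = 1.0 / (1.0 + 1e4 * (freqs - 0.632) ** 2)    # peak shared by all lines
rng = np.random.default_rng(1)
spectra = []
for _ in range(3):
    # Each synthetic "line" gets the common peak plus its own spurious peak
    spurious = 1.0 / (1.0 + 1e4 * (freqs - rng.uniform(0.1, 0.9)) ** 2)
    spectra.append(common + 0.8 * spurious + 0.01)
gm = geometric_mean_power(spectra)
# The strongest surviving peak sits at the shared 0.632 d^-1 frequency
f_best = freqs[np.argmax(gm)]
```

An arithmetic mean would leave each spurious peak at a third of its height; the geometric mean suppresses them far more strongly.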
Periodograms were similarly calculated for the RVs extracted from Fe II and Si II lines. The majority of these lines are also dominated by variations at the orbital period, and a minority show indications of the 1.583-day variations. The metal lines which show an indication of RV variability at the short-period frequency are the Fe II 6317.4, 6456.4, 7712.4 and Si II 6347.1, 6371.4 lines, and their periodograms are displayed in Figure 5. Even though some of the individual periodograms in Figure 5 do not have a significant peak at the short period compared to the noise of the periodogram, a significant peak survives at this frequency in the geometric mean of the power spectra. Lines studied in Paper A which show no indication of variability at 1.583 days in their power spectra are H-alpha, Fe II 5991.3711, 6383.7302, 6432.676, 6491.663, 7513.1762, and Si II 5957.56, 5978.93. Their periodograms are not included in this paper as they are uninformative, and these lines without the short-period variability are not investigated further in this study. Figure 6 displays the RV variations of the emission lines which display variability at the 1.583-day period. The data are binned into 30 bins by phase, and the weighted mean and standard error on the weighted mean in each bin are given by the black error bars. The RV variations have different amplitudes and profiles for each line species, similar to what was found in Paper A with the RV variations at the orbital period. It is unclear why certain lines display the short-period variations whilst others do not, even lines which arise from the same ionisation species. For the He I lines, comparing their RV curves to the photometric variations, the phase of maximal blueshift of the 1.583-day variations roughly corresponds with the phase of minimal brightness at this period.
Similar to the findings in Paper A at the orbital period, the amplitude of the short-period RV variations varies according to line species, with He I having the largest variations and Fe II the smallest. The method of determining the amplitudes of the RV variations for the available lines is described in Appendix A.
Orbital-phase dependence of short-period variability
Here we focus on how the amplitudes of the short-period variability are modulated by the phase of the binary orbit. Figure 1 shows that, in the TESS observations of GG Car, both the orbital and 1.583-day photometric variations are not uniform, but vary in both amplitude and profile. Figure 8 shows the TESS photometric data once they have had the orbital-period variations subtracted, leaving only the short-period variability, and then folded by orbital phase using the orbital solution given in Table 2. The largest variations in both observed orbital cycles occur at around phase 0.12, and are shown zoomed-in in the bottom panel. The variations are smallest around phase 0.6, though there is a data gap between phases 1.45 and 1.55. Figure 9 clearly shows how the amplitude of the short 1.583-day variations is modulated throughout the orbital period of GG Car in the photometric data. For the TESS data, a sinusoid is fitted to each 1.583-day slice within the dataset, with the amplitude and phase kept as free parameters. The amplitudes of the 1.583-day variations are then folded over the 31-day orbital period. It is clear that, in the TESS observational interval, the amplitude of the short-period variation is strongly tied to orbital phase, with the largest variations occurring around phase ∼0.1. This is shortly after the binary is at periastron, as per the ephemeris of Paper A, and when the system is brightest (both of which occur at phase 0).
For the -band photometry, which lacks the fine time-sampling of the TESS data, different techniques are needed to study the amplitude modulations of the short-period variations. To do this, the ASAS, ASAS-SN, and OMC data had the mean 31-day orbital variations subtracted, then a sliding window of orbital phase width 0.15 was used to calculate the short-period amplitudes in each orbital phase interval. The data in each window were folded by the 1.583-day period and then fitted by a sinusoid, with amplitude, phase, and offset as free parameters. Figure B1 in the Appendix demonstrates how the amplitude modulation of the -band photometry along the binary orbit was calculated in each orbital phase window. Figure 9 shows that the amplitude of the short-period variations in the -band data is also modulated across the orbital phase of the binary. The amplitude-phase signal closely matches that shown by the TESS data in phase and shape, demonstrating that this orbital dependence of the 1.583-day period is long-lived and persistent across all data, and is not a curiosity of the TESS observing epoch. It is also noteworthy that the sharp rise in the TESS amplitude is matched at the phase where the amplitude is largest in the -band photometry, suggesting that the large spike in amplitude seen in the TESS data around phase 0.1 persists throughout the observations of GG Car. We find that the phases of the 1.583-day variations show no significant variability across the orbital period; this can be seen in the sinusoids fitted to the -band data in Figure B1, which are all in phase.
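The sliding-window procedure can be sketched as follows. The window width, bin centres, and the synthetic amplitude profile (peaking just after periastron) are illustrative choices, not the paper's exact ones.

```python
import numpy as np

def sinusoid_amplitude(phase, y):
    """Least-squares fit of y = A*sin(2*pi*phase) + B*cos(2*pi*phase) + C,
    returning the amplitude sqrt(A^2 + B^2)."""
    M = np.column_stack([np.sin(2 * np.pi * phase),
                         np.cos(2 * np.pi * phase),
                         np.ones_like(phase)])
    coef, *_ = np.linalg.lstsq(M, y, rcond=None)
    return np.hypot(coef[0], coef[1])

def amplitude_vs_orbital_phase(t, y, p_orb=31.01, p_short=1.583156,
                               width=0.15, n_windows=20):
    """Short-period amplitude in sliding windows of orbital phase."""
    phi_orb = (t / p_orb) % 1.0
    phi_short = (t / p_short) % 1.0
    centres = np.linspace(0, 1, n_windows, endpoint=False)
    amps = []
    for c in centres:
        # circular (wrap-around) distance in orbital phase
        d = np.minimum(np.abs(phi_orb - c), 1 - np.abs(phi_orb - c))
        sel = d < width / 2
        amps.append(sinusoid_amplitude(phi_short[sel], y[sel]))
    return centres, np.array(amps)

# Synthetic signal whose 1.583-d amplitude peaks just after orbital phase 0
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 310.1, 6000))   # ten orbital cycles, days
amp_true = 0.02 + 0.05 * np.exp(-(((t / 31.01) % 1.0) - 0.1) ** 2 / 0.02)
y = amp_true * np.sin(2 * np.pi * t / 1.583156) + 0.005 * rng.normal(size=t.size)
centres, amps = amplitude_vs_orbital_phase(t, y)
```

Folding the recovered amplitudes over orbital phase then gives a curve analogous to Figure 9.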
In Figure 8, it is clear that the variability in the TESS data is not uniform, but it does follow the general trend of having larger amplitudes shortly after periastron. The -band data, since they take the average short-period amplitude of many orbital cycles simultaneously with a sliding window in orbital phase, give us the average amplitude change over a long period of time. Therefore, we cannot conclude that this relationship holds for each orbital cycle, but rather that, in aggregate over all orbital cycles, there is a general trend for the amplitude of the photometric variability at the 1.583-day period to be largest shortly after periastron.
We now turn to see whether this amplitude modulation exists in the spectroscopic data. Figure 10 displays the amplitude modulation along the binary orbit of the 1.583-day RV variations of the He I emission, as observed by the GJW. The amplitude modulations for the spectroscopy were calculated similarly to those of the -band photometry.

Figure 10. The amplitude of the short-period RV variations for the He I lines against orbital phase. The data were calculated in the same manner as the -band data in Figure 9. The bottom panel shows the mean of the short-period amplitude in each phase bin, with the error bar corresponding to the standard deviation in that bin.

It is clear, for all three He I lines, that
the amplitudes of the spectroscopic RV variations are modulated in a similar manner to the photometric variations. He I 7065, however, is offset from the other He I lines and the photometry, with its peak amplitude occurring around phase 0.9. Figures 11 and 12 show the same for the short-period-presenting Si II and Fe II lines respectively. The Si II 6347 line shows a clear signal, with the amplitude of the short-period variations modulating across the orbital phase in the same manner as the photometry and the He I lines, whereas the signal of amplitude modulation in Si II 6371 is less clear. The Fe II lines show some indication of amplitude modulation over the binary orbit; however, their signals are less clear owing to the smaller amplitudes of these lines. The degree of amplitude modulation is largest for the He I lines, followed by the Si II lines, and then the Fe II lines.
DISCUSSION
Origin of the 1.583-day variability
Stellar variability is variously explained by rotation, multiplicity, or pulsations. As evidenced by the TESS data, the amplitude of the photometric variability at 1.583 days can reach up to 0.07 mag. This implies a peak-to-trough brightness change of ∼14%; since the secondary of GG Car is predicted to contribute < 3% of the flux of the system, assuming it is a main-sequence star of 7.2 M⊙ (Paper A), the flux from the secondary alone cannot be the origin of this variability. Here, we investigate whether rotation of the primary, a hidden third body, or pulsations in the primary can cause the 1.583-day variability observed in GG Car.
As shown in Zickgraf et al. (1996), the critical rotation velocity at the equator of a star, above which it cannot rotate without breaking up, can be estimated by

$$v_{\rm crit} = \sqrt{\frac{GM}{R}\left(1 - \Gamma_{\rm rad}\right)}\,, \tag{3}$$
where $G$ is the gravitational constant, $M$ is the mass of the star, $R$ is the radius of the star, and $\Gamma_{\rm rad}$ is the correction to the effective gravity due to radiation pressure by electron scattering. $\Gamma_{\rm rad}$ is given by

$$\Gamma_{\rm rad} = \frac{\kappa L}{4 \pi c G M}\,, \tag{4}$$
where $L$ is the stellar luminosity, $c$ is the speed of light, and $\kappa$ is the electron scattering mass absorption coefficient. For $\kappa$, we adopt a value of 0.308 cm 2 g −1 taken from Lamers (1986); it should be noted that this value of $\kappa$ is calculated for the composition of the circumstellar environment of P Cygni. Entering the stellar parameters of GG Car given in Table 1 gives $v_{\rm crit}$ = 370 ± 70 km s −1 . Should the 1.583-day period be interpreted as the rotation period of the primary, its 27 R⊙ radius would imply an equatorial velocity of $2\pi R/P \approx 860$ km s −1 , far in excess of $v_{\rm crit}$; rotation of the primary therefore cannot be the origin of the variability. Might there be a hidden close companion of the B[e]SG primary, or circumstellar material which orbits the primary every 1.583 days, causing the short-period variability of the system? Kepler's third law states
$$P^2 = \frac{4 \pi^2 a^3}{G M}\,, \tag{5}$$
where $P$ is the orbital period, $a$ is the semi-major axis of the orbit, and $M$ is the total mass of this inner system. Entering $P$ = 1.58315 ± 0.0002 days and the primary's mass of $M$ = 24 ± 4 M⊙ gives $a$ = 16.5 ± 0.9 R⊙. Given that GG Car's primary has a radius of 27 R⊙, this implies that a hidden companion would be completely engulfed in the primary star. Whilst it is theorised that B[e]SGs may be post-merger objects (Podsiadlowski et al. 2006), the period of the variability would not be as stable as it is observed to be if GG Car were a recently-merged object, given that an indication of a ∼1.6-day period was first reported by Gosset et al. (1984) and the periodicity continues to the present day.
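The numbers in Equations 3–5 can be checked directly. Note that the primary's luminosity is not quoted in this section; the ∼2×10⁵ L⊙ value below is an assumed representative figure (chosen to be consistent with the quoted $v_{\rm crit}$), so this is a consistency sketch rather than the paper's Table 1 computation.

```python
import numpy as np

G = 6.674e-11                          # m^3 kg^-1 s^-2
M_SUN, R_SUN, L_SUN = 1.989e30, 6.957e8, 3.828e26
c = 2.998e8                            # m/s

M = 24 * M_SUN                         # primary mass
R = 27 * R_SUN                         # primary radius
L = 2.0e5 * L_SUN                      # ASSUMED luminosity, not from the paper
kappa = 0.308 * 0.1                    # 0.308 cm^2 g^-1 -> m^2 kg^-1 (Lamers 1986)

# Equations 4 and 3: radiation-pressure correction and critical velocity
gamma_rad = kappa * L / (4 * np.pi * c * G * M)
v_crit = np.sqrt(G * M / R * (1 - gamma_rad)) / 1e3        # km/s, ~370

# A 1.583-day rotation period would require an equatorial velocity above v_crit
P_short = 1.583156 * 86400                                  # s
v_rot = 2 * np.pi * R / P_short / 1e3                       # km/s, ~860

# Equation 5: semi-major axis of a hypothetical 1.583-day inner orbit
a = (G * M * P_short**2 / (4 * np.pi**2)) ** (1 / 3)
a_rsun = a / R_SUN                                          # ~16.5, inside 27 R_sun
```

Both exclusions fall out immediately: the implied rotation velocity exceeds break-up, and the implied inner orbit lies inside the primary.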
This leaves pulsations as the likely cause of the short-period variability in GG Car. Pulsations have been observed in B[e] stars at a similar timescale to the one we observe in GG Car, though they are rare. Krtičková & Krtička (2018) detect a pulsation period of 1.194 ± 0.06 days in the unclassified B[e] star HD 50138 (V743 Mon). Pulsations with periods at this timescale are often observed in blue supergiants (e.g. Haucke et al. 2018). Saio et al. (2013) report that radial pulsations and a spectrum of non-radial pulsations may be excited in evolved blue supergiants (BSGs) which have already undertaken the blue loop in their post-main-sequence evolution, having been in a prior red supergiant (RSG) state. Conversely, they find that most of these pulsations are suppressed in BSGs which have not yet undergone the blue loop. Since we only observe one significant frequency, other than the orbital frequency, in the periodograms of the photometry and spectroscopy of GG Car, this likely indicates that the primary of GG Car is in a pre-RSG state, according to the conclusion of Saio et al. (2013). This supports the findings of Kraus (2009) and Kraus et al. (2013), which concluded that GG Car is in a pre-RSG state based on 13 CO abundances.
Given that the variability is coupled to the orbit of the binary, this may indicate that the tidal potential is exciting pulsations in the B[e]SG primary (this amplitude modulation is discussed in further detail in Section 4.2). As the tidal potential is quadrupolar, the most likely oscillation mode we are observing is an $\ell = 2$ mode, where $\ell$ is the degree of the mode and indicates the number of surface nodes. Gough (1993) shows that f-modes of stars, which act as surface gravity waves with $n = 0$ (where $n$ is the number of radial nodes of the oscillation mode), have angular frequencies which may be determined as

$$\omega^2 = \frac{g \tilde{L}}{R}\left(1 - \epsilon(\tilde{L})\right)\,, \tag{6}$$
where = 2 / is the angular frequency of the mode, = / 2 is the surface gravity of the star, is the stellar radius, = √︁ ( + 1), and ( ) is a term to correct for the sphericity of the star. ( ) is calculated
( ) = 2 −1 + 3 ∫ 0 ( / − 1) exp (2 / ) d ∫ 0 exp (2 / ) d ,(7)
where $\rho = \rho(r)$ is the density of the star and $\bar{\rho}$ is its mean density. Entering the stellar parameters of GG Car into Equation 6, and utilising simple distributions of $\rho(r)$, yields values of the $\ell = 2$ f-mode frequency which are consistent with our observations. Assuming a constant density gives $\Pi = 2.4^{+1.2}_{-1.0}$ days. While a constant density is highly unrealistic, we may use simple prescriptions which give higher densities in the stellar centre, such as $\rho(r) \propto 1 - k(r/R)^n$, where $k$ and $n$ are constants, that also yield consistent periods. Computing a grid of allowed periods for the $\ell = 2$ mode using this simple prescription of $\rho(r)$ returns periods which are consistent with the observed periodicity for all values of $k$ and $n$ where $0 \le k \le 1$ and $n > 0$. While detailed modelling is beyond the scope of this paper, this shows that the observed pulsation frequency is consistent with, and likely to be, the $\ell = 2$ f-mode. It must also be noted that, per Equation 6, higher values of $\ell$ up to ∼8 may also yield periods which are consistent with the observed variability; however, the $\ell = 2$ mode would be expected to be excited more strongly than the higher modes by the tidal potential. The mode observed may not be radial, since tidal modulation is only allowed for pulsation modes with $\ell \neq 0$ (Polfliet & Smeyers 1990).
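As a rough order-of-magnitude check of Equation 6 (ignoring the sphericity correction $\epsilon$, which the paper evaluates for model density profiles), the $\ell = 2$ f-mode period for the primary's mass and radius comes out at ∼2 days, the same order as the observed 1.583-day period:

```python
import numpy as np

G = 6.674e-11
M_SUN, R_SUN = 1.989e30, 6.957e8

M, R = 24 * M_SUN, 27 * R_SUN
g = G * M / R**2                     # surface gravity
ell = 2
L_tilde = np.sqrt(ell * (ell + 1))

# Leading-order f-mode frequency, omega^2 = g * L_tilde / R (Equation 6 with
# the sphericity correction epsilon set to zero -- a rough sketch only)
omega = np.sqrt(g * L_tilde / R)
period_days = 2 * np.pi / omega / 86400
```

The sphericity and density-profile corrections shift this by tens of per cent, which is why the paper quotes a range rather than a single value.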
The RV variability that we detect in GG Car's emission lines would then arise from pulsations in the primary affecting the structure of the wind at its 1.583-day periodicity. Pulsations have been theorised and shown to affect the wind and mass-loss properties of blue supergiant stars (Aerts et al. 2010; Kraus et al. 2015; Yadav & Glatzel 2016; Haucke et al. 2018). Structures in stellar winds caused by pulsations have been proposed to explain variability in certain X-ray binaries (Finley et al. 1992; Koenigsberger et al. 2006).
1.583-day amplitude modulation
In Section 3.3, we have shown that the amplitude of the 1.583-day variations is modulated by the orbital phase of the binary, most clearly in the photometry. The amplitudes are largest when the binary is at periastron in its eccentric ($e = 0.5 \pm 0.03$) orbit. This linking between the 31-day orbital period and the 1.583-day short period is unusual, since the ratio between the two periods is 31.01/1.583 = 19.589, i.e. they are non-commensurate. GG Car's lightcurve bears a resemblance to the "heartbeat stars", which have resonant, tidally driven stellar oscillations with variable amplitudes over the orbital period (see e.g. Fuller 2017). Most heartbeat stars yet discovered are lower-mass A and F stars, but the phenomenon has also been observed in massive O and B stars (Pablo et al. 2017; Jayasinghe et al. 2019). These heartbeat stars are eccentric binaries which have orbital periods that are exact integer multiples of the star's g-mode pulsation periods, which lead to coherent and resonant pulsations due to the tidal excitation of the oscillation modes (see also Kumar et al. 1995; De Cat et al. 2000; Willems & Aerts 2002). However, the short-period variability we observe in GG Car is clearly non-resonant with the orbital period, as the bottom panel of Figure 8 and the non-integer relation between the short period and the orbital period clearly show. Therefore, the periodicity and amplitude modulation we report cannot arise due to resonance. However, there are indications that tidal effects can affect non-resonant free oscillations, and here we explore that possibility.
Paper A showed that, at periastron, the radius of the primary of GG Car extends to 85 ± 28 % of its Roche lobe radius, whereas at apastron it extends to only 28 ± 9 %; therefore, the tidal perturbation at periastron will be significant and a dynamical tide will be raised. Paper A also presents evidence that the primary's mass-loss is focused around periastron. In GG Car, the timescale of the varying gravitational potential will be short given that the orbit is significantly eccentric, with the timescale of periastron passage $t_{\rm peri} \sim \sqrt{a^3 (1-e)^3 / (G M_{\rm tot})} \sim 1.7$ days, where $M_{\rm tot}$ is the combined mass of the primary and the secondary. A similar determination of the timescale of intense gravitational interaction is the half-width-at-half-maximum (HWHM) of the tidal force, i.e. the time taken for the tidal force of the secondary on the primary to increase from its mid-point value to its peak value at periastron. The HWHM of the tidal force is ∼1.9 days (since $F_{\rm tidal} \propto d^{-3}$, where $d$ is the instantaneous separation of the binary components). Therefore, around periastron, the timescale of the change of the tidal force is of the same order as the observed periodicity and the dynamical timescale of the primary; the star will be unable to adjust to the changing tidal force in a quasi-static way. Conversely, for the tidal force to go from its minimum at apastron to the mid-point value takes 13.6 days, i.e. an order of magnitude longer than the pulsation period and the dynamical timescale.
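The quoted periastron-passage timescale can be reproduced directly, taking the secondary mass of 7.2 M⊙ from Paper A and deriving the semi-major axis from Kepler's third law:

```python
import numpy as np

G = 6.674e-11
M_SUN = 1.989e30

P_orb = 31.01 * 86400                 # orbital period in seconds
M_tot = (24 + 7.2) * M_SUN            # primary + secondary mass
e = 0.5                               # orbital eccentricity

# Semi-major axis from Kepler's third law
a = (G * M_tot * P_orb**2 / (4 * np.pi**2)) ** (1 / 3)

# Timescale of periastron passage, t ~ sqrt(a^3 (1-e)^3 / (G M_tot))
t_peri_days = np.sqrt(a**3 * (1 - e)**3 / (G * M_tot)) / 86400   # ~1.7 d
```

This is within a factor of order unity of both the 1.583-day pulsation period and the primary's dynamical timescale, which is the crux of the argument above.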
It therefore follows that the varying proximity of the two binary components will affect the conditions of the primary, and the rapid change of the tidal forces and enhanced mass loss at periastron will draw the primary out of hydrostatic equilibrium on a timescale of the same order as its dynamical timescale. The primary, attempting to regain equilibrium, oscillates at the observed period of 1.583 days, which we have shown in Section 4.1 is likely to be the $\ell = 2$ f-mode. As the binary components separate after periastron passage, the tidal force becomes increasingly less important, and the star will continue to oscillate at the observed period and ultimately try to regain hydrostatic equilibrium. The timescale for the damping of the oscillations depends on the dominant source of viscosity but, in the case of the large tidally induced distortion observed in GG Car, could be as fast as the dynamical timescale of the primary's envelope ($t_{\rm dyn} \sim 1$ day). Figure 13 displays how both the short-period variation amplitude and the mean brightness in the -band compare with the instantaneous separation of the binary components along the orbital period: the mean brightness is almost perfectly anti-correlated with the separation of the components. On the other hand, the short-period amplitude of the -band data increases more rapidly than it decays, and peaks somewhere between orbital phases 0.0 and 0.14, corresponding to ∼0-4 days after periastron. This delay can also be clearly seen in the folded TESS data in Figure 8.
There are examples in the literature which support this hypothesis of tidally modulated free oscillations. Moreno et al. (2011) calculated that increased stellar activity can be expected on stellar surfaces around periastron in eccentric binaries due to the raising of dynamical tides and the associated changes in the timescale of dissipation of the tidal energy, and this can lead to oscillations. The interaction of free oscillations with tidal interaction was theoretically studied by Polfliet & Smeyers (1990), who show that a tidally distorted star may display free non-radial oscillations with periods of the order of its dynamical timescale. They find that the free oscillations' amplitudes are modulated at a frequency which is an integer multiple of the orbital frequency. Tidal modulation of pulsation amplitudes in this manner has been reported in the β Cephei variables β Cep (Fitch 1969), CC And (Fitch 1967), σ Scorpii (Fitch 1967; Goossens et al. 1984; Chapellier & Valtier 1992), α Vir (Dukes 1974), and 16 Lacertae (Fitch 1969; Chapellier et al. 1995); in these objects, the pulsation periods and the orbital periods are non-resonant, but the pulsational amplitudes undergo an integer number of cycles per orbital period. Chapellier et al. (1995) report that the amplitude of an $\ell = 1$ pulsation mode of the system 16 Lacertae undergoes exactly one cycle over the orbital period, where the pulsation period and the orbital period are non-commensurate, similar to what we observe in GG Car.
It is worth noting that the amplitude modulation that we are reporting in GG Car also bears resemblance to binaries which have recently been discovered to have pulsations tidally trapped on one hemisphere of the variable component. Handler et al. (2020) discovered a tidally trapped pulsation mode in the binary star HD 74423, in the form of amplitude modulation of the observed pulsations as a function of orbital phase. Similarly to what we observe in GG Car, the pulsation frequency and orbital frequency in HD 74423 are non-commensurate. Kurtz et al. (2020) find a similar result in CO Cam, finding that four modes are trapped by the tidal potential of the companion. Fuller et al. (2020) present evidence of a similar process occurring in TIC 63328020. The authors explain the amplitude modulation in these systems as being due to the pulsation axis of the variable component being aligned with the line of apsides of the binary, which causes the pulsations to have a larger amplitude on one hemisphere of the star, either the hemisphere facing towards or away from the companion. A larger, or smaller, photometric variability amplitude is then observed at times of conjunction depending on which hemisphere is facing the observer. According to the orbital geometry of GG Car ($e = 0.50$, $\omega = 339.87°$), superior conjunction occurs at phase 0.14 and inferior conjunction occurs at 0.93. Superior conjunction, therefore, does occur at a remarkably similar phase to the oscillation amplitude's maximum, most clearly shown in Figure 8. However, in the scenario of tidally trapped pulsations, the phase of inferior conjunction would then be expected to have the lowest oscillation amplitude. This is clearly not the case, as the oscillation amplitudes are still very large around phase 0.93.
Therefore it is unlikely that the amplitude modulation we observe in GG Car is due to tidal trapping of the pulsation mode, though geometrical effects may be accentuating the observed amplitude of the non-radial pulsation mode at superior conjunction.
Further TESS-quality observations of GG Car observing more orbital periods would be needed to fully confirm such an argument of orbital-phase modulated free oscillations. Alternatively, phase-resolved studies of the spectral energy distribution (SED) of the system could allow the varying contribution of the primary to the SED to be measured. Should the primary's contribution vary with the orbit and the 1.583-day period, this would lock down the variability as being due to pulsations of the primary, and therefore the amplitude modulation would be due to proximity effects of the primary to the secondary.
CONCLUSIONS
We have shown that the B[e]SG binary GG Car is significantly variable in both photometry and spectroscopy at 1.583156 ± 0.0002 days, and we have studied this variability in detail for the first time. This period is much shorter than the well-known 31-day orbital period of the binary. We have shown that the short-period variability cannot be caused by the rotation of the B[e]SG primary, the presence of a hidden third body, or intrinsic variability of the secondary's flux. We find that 1.583 days is consistent with the period of the lower-order f-modes ($\ell \lesssim 8$) of GG Car's primary given its mass and radius, and we ascribe the variability as most likely being due to the $\ell = 2$ f-mode, since it couples to the quadrupolar tidal potential. We therefore argue that pulsations of this mode are the most likely cause of its variability.
In spectroscopy, we found that the short period manifests itself in the RVs of the He I, Si II and Fe II emission lines; however, not all of GG Car's emission lines display the periodicity. We have found that the amplitudes of the spectroscopic RV variations at the 1.583-day period are correlated with the upper energy levels of the transitions causing the line emission, implying that the variations are related to the temperature of the line forming regions.
We have shown that the amplitudes of the short-period variations are dependent on the orbital phase of the binary, most notably for the -band and TESS photometry, with the largest variations occurring around or just after periastron, where the system is also at its brightest. This is striking as the ratio between the orbital period and the shorter period is 19.589. This non-integer ratio of the two periods means the shorter period cannot be a tidally resonant excited oscillation mode in one of the stars.
Paper A shows that the primary's radius extends to ∼85% of its Roche radius at periastron, compared to only ∼28% at apastron. We have shown, in Section 4.2, that the timescale of the change of the tidal forces on the primary at periastron is of the same order as its dynamical timescale. Therefore, we suggest that the primary is being pulled out of hydrostatic equilibrium by the secondary every orbit, due to the strong tidal effects at periastron, faster than the primary can regain equilibrium. This loss of equilibrium causes pulsations in the $\ell = 2$ f-mode, which can couple to the quadrupolar tidal potential and which is consistent with the 1.583-day period observed, with a larger amplitude when the stars are close in proximity and the primary is being pulled further from equilibrium. These oscillations are damped on a timescale which may be as fast as the dynamical timescale, as the separation between the binary components increases and the primary returns towards equilibrium.
APPENDIX A: AMPLITUDES OF SPECTROSCOPIC RADIAL VELOCITY VARIATIONS
To determine the RV variability of the emission lines, Keplerian orbital RV solutions are fitted to the RV data for each line separately at both the orbital and 1.583-day periods simultaneously, fitting for the amplitude $K$, the eccentricity $e$, the argument of periapsis $\omega$, and the phase of periastron $\phi_0$ for each period. We also fit for jitter, $s$, modelled as a correction to the RV uncertainties, and we fit the systemic velocity $v_0$ separately for each line. The fits at the orbital period are discussed in Paper A, and are used to determine the orbital solution of the binary. We show in Section 4 that the 1.583-day period cannot be an orbital period of a hidden inner binary; however, modelling the RV variations with a Keplerian solution is useful for fitting an amplitude to a repeating signal of an arbitrary shape in noisy data. We can then mutually compare the amplitudes found between the line species.
We fit the RV variations for each emission line separately by maximising the log-likelihood function, using the Markov Chain Monte Carlo algorithm emcee (Foreman-Mackey et al. 2012). The log-likelihood for a set of parameters, $\theta$, given $N$ RV data points, $v_i$, with uncertainties $\sigma_i$, is given by
$$\ln \mathcal{L}(\theta \mid v, \sigma) = -\frac{1}{2} \sum_{i=0}^{N} \left[ \frac{\left(v_i - v_{\rm kep}(\theta_{\rm orb}) - v_{\rm kep}(\theta_{1.583}) - v_0\right)^2}{\sigma_i^2 + s^2} + \ln\left(2\pi\left(\sigma_i^2 + s^2\right)\right) \right] \,, \tag{A1}$$
where $\theta_{\rm orb}$ and $\theta_{1.583}$ are the orbital parameters for the long and the short period respectively, and $v_{\rm kep}$ is the Keplerian velocity calculated for a set of orbital parameters. The fitted parameters, $\theta$, encode $\theta_{\rm orb}$, $\theta_{1.583}$, $v_0$, and $s$. The uncertainties, $\sigma_i$, are taken from the least-squares fitting algorithm of the Gaussian fitting routines. Table A1 lists the fitted amplitudes for all emission lines in this study at the 1.583-day period. The fitted parameters at the orbital period are listed in Paper A, appendix B.
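Equation A1 is a standard Gaussian log-likelihood with a jitter term added in quadrature to the per-point uncertainties. A minimal sketch (with a generic `model` term standing in for the sum of the two Keplerian components) is:

```python
import numpy as np

def log_likelihood(v, sigma, model, v0, jitter):
    """Gaussian log-likelihood with jitter in quadrature, as in Equation A1
    (constant 2*pi terms included)."""
    var = sigma**2 + jitter**2
    resid = v - model - v0
    return -0.5 * np.sum(resid**2 / var + np.log(2 * np.pi * var))

# Toy check: data drawn from the model should be far more likely under the
# correct systemic velocity than under a wrong 10 km/s offset.
rng = np.random.default_rng(3)
t = np.linspace(0, 10, 50)
model = 5.0 * np.sin(2 * np.pi * t / 1.583156)   # stand-in Keplerian-like signal
sigma = np.full_like(t, 1.0)
v = model + rng.normal(0.0, 1.0, t.size)
good = log_likelihood(v, sigma, model, v0=0.0, jitter=0.0)
bad = log_likelihood(v, sigma, model, v0=10.0, jitter=0.0)
```

In the actual fit, this function would be passed to emcee's sampler as the log-probability, with priors added.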
Although we are not implying that the 1.583-day period is in any way due to an orbital effect, fitting Keplerian RV solutions is a convenient method to fit an arbitrarily shaped, periodic signal in this context, which allows for effective extraction of mutually comparable amplitudes. We ignore all parameters encoded in θ_1.583 other than the amplitude, as they are fitted for convenience only and have no physical significance.

APPENDIX B: CALCULATION OF PHASE-AMPLITUDE FIGURES

Figure B1 demonstrates how the short-period amplitude versus orbital phase for the V-band photometric data, shown in Figure 9, was calculated. The V-band data had the photometric variations at the orbital period subtracted, and were then binned by a sliding window in orbital phase, of phase width 0.25. The data within each window were folded by the 1.583-day short period, and a sinusoid was fitted and its amplitude extracted. Each panel of Figure B1 shows the photometric data in blue and the sinusoid fit in black for each phase window. The amplitudes of the 1.583-day variations vary significantly with orbital phase; the phase of the sinusoidal variations does not change significantly given the uncertainties of the fitted parameters.

Figure B1. Demonstration of how the short-period amplitude versus orbital phase for the V-band photometric data, shown in Figure 9, was calculated. Each panel shows the V-band data, with the 31-day orbital variations subtracted, in an orbital phase window of width 0.15, folded by the 1.583-day period. A black line shows the sinusoid fitted to the data in that phase window. The legend in each panel gives the start phase, the end phase, and the amplitude of the fitted sinusoid. The phase of the windows increases first down the columns in the figure, and then along the rows.

This paper has been typeset from a TeX/LaTeX file prepared by the author.
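The sliding-window amplitude extraction described above reduces, in each phase window, to a linear least-squares fit of a sinusoid at the fixed 1.583-day period. A minimal, pure-Python sketch of that per-window step (function and variable names are our own, not from the paper's pipeline):

```python
import math

def sinusoid_amplitude(t, y, period):
    """Least-squares fit y ~ a*sin(w t) + b*cos(w t) + c at a fixed period,
    returning the amplitude sqrt(a^2 + b^2) of the fitted sinusoid."""
    w = 2.0 * math.pi / period
    basis = [[math.sin(w * ti), math.cos(w * ti), 1.0] for ti in t]
    # Normal equations A^T A x = A^T y for the 3 linear coefficients.
    ata = [[sum(r[i] * r[j] for r in basis) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * yi for r, yi in zip(basis, y)) for i in range(3)]
    m = [row[:] + [b] for row, b in zip(ata, aty)]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    a, b, _ = x
    return math.hypot(a, b)
```

Sliding the window over orbital phase and collecting the returned amplitudes reproduces the kind of amplitude-versus-phase curve shown in Figure 9.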
Figure 1. TESS lightcurve of GG Car.
Figure 2. Fourier power spectra of the V-band photometry of GG Car (top panel) and the TESS photometric data (bottom panel). The frequencies corresponding to the orbital period and the short period are denoted by small arrows and presented in Table 3.
Figure 3. Top panel: V-band photometry of GG Car, folded by the short 1.583-day period using Equation 2. The data have had variations at the 31-day orbital period subtracted before folding. Black points indicate the average value in 30 bins by phase. Bottom panel: same as the top panel, except for the TESS photometric data, and the black points denote the average in 35 bins by phase.
Figure 6 displays the RV variations of the emission lines.
Figure 4. Periodograms of the radial velocities of the emission components of the He I lines. The bottom panel displays the geometric mean of the three periodograms. Arrows indicate the frequencies of the orbital period and the new 1.583-day short period. The periodograms are normalised by the peak power, which for these lines is at the orbital period in all cases.
Figure 5. Same as Figure 4, except for the Si II and Fe II emission lines which display the 1.583-day variation.

Figure 7 plots the amplitude, K, of the RV data against the energy of the upper atomic state of the transition, E_u. A clear correlation of K and E_u is evident, even with the small number of lines.
Figure 6. Radial velocity variations of the emission lines at the 1.583-day short period. Phases are calculated using Equation 2. The variations at the 31-day orbital period have been subtracted from the data of each line, to remove scatter. The data are split into 30 phase bins for each line species, and black error bars indicate the weighted mean and standard error of the weighted mean for the radial velocities in each bin. The error bars of the data are both the uncertainties from the Gaussian fitting routine and the jitter, added in quadrature.
Figure 7. Amplitude, K, of the RV variations of the emission lines at the 1.583-day period against E_u, the energy of the initial excited state leading to the lines. The correlation coefficient, r, weighted by the inverse of the square of the error, is quoted in the legend.
Figure 8. TESS photometric data of GG Car with the orbital period variations subtracted, leaving the short-period variations, then folded over the orbital period using Equation 1. Sector 10 data are in blue, and sector 11 in orange. The bottom panel zooms in on the largest variations around phase 0.137, where a dashed vertical line is drawn.

Figure 9. Amplitudes of the photometric 1.583-day period as a function of the orbital phase of the binary for both TESS and V-band data. Orbital phases are calculated using Equation 1.
Figure 11. Same as Figure 10, except for the Si II 6347 and 6371 lines.
Figure 12. Same as Figure 10, except for the Fe II lines which display the short-period variability.

If the 1.583-day period were the rotation period of the B[e]SG primary, the surface rotation velocity would be v_rot = 860 ± 260 km s⁻¹ at the equator, far exceeding the critical rotation velocity of the star. Clearly the 1.583-day period cannot be the stellar rotation period of the B[e]SG primary component of GG Car.
Figure 13. Top: amplitude of the 1.583-day variations in V-band photometry against orbital phase (black points, left-hand axis), with the instantaneous separation of the binary components over-plotted (red dashed line, right-hand axis). Phases of periastron are denoted by vertical dashed lines. Bottom: same as top, except with the mean V-band magnitude in the phase bin replacing the short-period amplitudes.
Table 2. Orbital parameters of the B[e]SG primary in GG Car found by Paper A.

Period:   31.01 +0.01/−0.01 days
K:        48.57 +2.04/−1.87 km s⁻¹
ω:        339.87 +3.10/−3.06 °
e:        0.50 +0.03/−0.03
T_peri:   JD 2452069.36 ± 1.30
M_sec:    7.2 +3.0/−1.3
a:        0.61 ± 0.03 AU
Table 3. The periodicities found in the photometric data of GG Car.

Data source         Periodicities
V-band photometry   31.028 ± 0.07 days; 1.583156 ± 0.0002 days
TESS photometry     30.2 ± 6.3 days; 1.588 ± 0.025 days
The unusual behaviour of GG Car's short-period variability reported in this paper has not been reported in other B[e]SGs as of yet. Further TESS-quality observations and phase-resolved SED observations of GG Car would be required to pin down the cause of its short-period variability and amplitude modulation, and similar phenomena in other B[e]SGs in binaries should be searched for.

Table A1. Amplitudes, K, of the 1.583-day RV variations for each emission line which displays variations at this period.

Line              K (km s⁻¹)
Fe II 6317.3871   3.26 +0.88/−0.86
Fe II 6456.3796   3.49 +0.97/−0.7
Fe II 7711.4386   4.06 +0.75/−0.78
He I 5875.5987    20.8 +2/−1.8
He I 6678.1517    20.1 +2.2/−1.9
He I 7065.17714   14.9 +7.3/−5
Si II 6347.11     5.51 +1/−0.9
Si II 6371.37     5.57 +1.8/−1.3
MNRAS 000, 1-13 (2020)
ACKNOWLEDGEMENTS

We thank John Papaloizou for his useful discussions. AJDP thanks the Science & Technology Facilities Council (STFC) for their support in the form of a DPhil scholarship. Part of this work was based on data from the OMC Archive at CAB (INTA-CSIC), pre-processed by ISDC. A great many organisations and individuals have contributed to the success of the Global Jet Watch observatories and these are listed on www.GlobalJetWatch.net, but we particularly thank the University of Oxford and the Australian Astronomical Observatory. This research has made use of NASA's Astrophysics Data System. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester.

DATA AVAILABILITY

ASAS V-band photometric data are available from http://www.astrouw.edu.pl/cgi-asas/asas_cgi_get_data?105559-6023.5,asas3. ASAS-SN V-band photometric data are available from https://asas-sn.osu.edu/. OMC V-band photometric data are available from https://sdc.cab.inta-csic.es/omc/secure/form_busqueda.jsp. TESS FFI data were accessed and reduced via the eleanor framework (Feinstein et al. 2019); the python3.x reduction code used to access the data presented in this article will be shared on reasonable request to the corresponding author. The fits to spectroscopic Global Jet Watch data underlying this article will be shared on reasonable request to the corresponding author.
REFERENCES

Aerts C., et al., 2010, A&A, 513, L11
Blundell K. M., Bowler M. G., Schmidtobreick L., 2007, A&A, 474, 903
Brown A. G. A., et al., 2018, A&A, 616, A1
Chapellier E., Valtier J. C., 1992, A&A, 257, 587
Chapellier E., Le Contel J. M., Le Contel D., Sareyan J. P., Valtier J. C., 1995, A&A, 304, 406
De Cat P., Telting J., Aerts C., Mathias P., 2000, A&A, 359, 539
Dukes R. J., 1974, ApJ, 192, 81
Feinstein A. D., et al., 2019, PASP, 131, 094502
Finley J. P., Belloni T., Cassinelli J. P., 1992, A&A, 263, L25
Fitch W. S., 1967, ApJ, 148, 481
Fitch W. S., 1969, ApJ, 158, 269
Foreman-Mackey D., Hogg D. W., Lang D., Goodman J., 2012, PASP, 125, 306
Fuller J., 2017, MNRAS, 472, 1538
Fuller J., Kurtz D. W., Handler G., Rappaport S., 2020, MNRAS, 498, 5730
Goossens M., Lampens P., de Maerschalck D., Schrooten M., 1984, A&A, 140, 223
Gosset E., Surdej J., Swings J.-P., 1984, A&AS, 55, 411
Gough D. O., 1993, in Zahn J.-P., Zinn-Justin J., eds, Astrophysical fluid dynamics, Les Houches, Session XLVII. Elsevier, Amsterdam, pp 339-560
Grant D., Blundell K., Matthews J., 2020, MNRAS, 494, 17
Greenstein N. K., 1938, Harvard College Observatory Bulletin, 908, 25
Handler G., et al., 2020, Nature Astronomy, 4, 684
Haucke M., Cidale L. S., Venero R. O. J., Curé M., Kraus M., Kanaan S., Arcos C., 2018, A&A, 614, A91
Jayasinghe T., Stanek K. Z., Kochanek C. S., Thompson T. A., Shappee B. J., Fausnaugh M., 2019, MNRAS, 489, 4705
Kanodia S., Wright J., 2018, Research Notes of the AAS, 2, 4
Kochanek C. S., et al., 2017, PASP, 129, 104502
Koenigsberger G., Georgiev L., Moreno E., Richer M. G., Toledano O., Canalizo G., Arrieta A., 2006, A&A, 458, 513
Kraus M., 2009, A&A, 494, 253
Kraus M., 2016, Boletín de la Asociación Argentina de Astronomía, 58, 70
Kraus M., 2017, in Miroshnichenko A., Zharikov S., Korčáková D., Wolf M., eds, Astronomical Society of the Pacific Conference Series Vol. 508, The B[e] Phenomenon: Forty Years of Studies. p. 219 (arXiv:1610.05000)
Kraus M., 2019, Galaxies, 7, 83
Kraus M., Oksala M. E., Nickeler D. H., Muratore M. F., Borges Fernandes M., Aret A., Cidale L. S., de Wit W. J., 2013, A&A, 549, A28
Kraus M., Cidale L. S., Arias M. L., Oksala M. E., Borges Fernandes M., 2014, ApJ, 780, L10
Kraus M., et al., 2015, A&A, 581, A75
Kraus M., et al., 2016, A&A, 593, A112
Krtičková I., Krtička J., 2018, MNRAS, 477, 236
Kruytbosch, 1930, Bull. Astron. Inst. Netherlands, 6, 11
Kumar P., Ao C. O., Quataert E. J., 1995, ApJ, 449, 294
Kurtz D. W., et al., 2020, MNRAS, 494, 5118
Lamers H., 1986, A&A, 159, 90
Lamers H. J., Zickgraf F. J., De Winter D., Houziaux L., Zorec J., 1998, A&A, 340, 117
Levato H., Miroshnichenko A. S., Saffe C., 2014, A&A, 568, A28
Lopes D., Damineli A., De Freitas Pachecho J., 1992, A&A, 261, 482
Marchiano P., Brandi E., Muratore M. F., Quiroga C., Ferrer O. E., García L. G., 2012, A&A, 540, A91
Mas-Hesse J. M., et al., 2003, A&A, 411, L261
McGregor P. J., Hyland A. R., Hillier D. J., 1988, ApJ, 324, 1071
Miroshnichenko A. S., 2007, ApJ, 667, 497
Moreno E., Koenigsberger G., Harrington D. M., 2011, A&A, 528, A48
Oksala M. E., Kraus M., Cidale L. S., Muratore M. F., Borges Fernandes M., 2013, A&A, 558, A17
Pablo H., et al., 2017, MNRAS, 467, 2494
Pickering E. C., Fleming W. P., 1896, ApJ, 4, 142
Podsiadlowski P., Morris T. S., Ivanova N., 2006, in Kraus M., Miroshnichenko A. S., eds, Astronomical Society of the Pacific Conference Series Vol. 355, Stars with the B[e] Phenomenon. p. 259
Pojmański G., 2004, Astron. Nachr., 325, 553
Pojmański G., Maciejewski G., 2002, Acta Astron., 52, 397
Polfliet R., Smeyers P., 1990, A&A, 237, 110
Porter A., Grant D., Blundell K., Lee S., 2021, MNRAS, 501, 5554
Prusti T., et al., 2016, A&A, 595, A1
Ricker G. R., et al., 2014, J. Astron. Telesc. Instrum. Syst., 1, 014003
Roberts D. H., Lehar J., Dreher J. W., 1987, Astron. J., 93, 968
Saio H., Georgy C., Meynet G., 2013, MNRAS, 433, 1246
Shappee B. J., et al., 2014, ApJ, 788, 48
Wang Y., et al., 2012, A&A, 545, L10
Willems B., Aerts C., 2002, A&A, 384, 441
Yadav A. P., Glatzel W., 2016, MNRAS, 457, 4330
Yadav A. P., Glatzel W., 2017, MNRAS, 471, 3245
Zickgraf F.-J., Wolf B., Stahl O., Leitherer C., Klare G., 1985, A&A, 143, 421
Zickgraf F.-J., Wolf B., Stahl O., Leitherer C., Appenzeller I., 1986, A&A, 163, 119
Zickgraf F. J., Humphreys R. M., Lamers H. J., Smolinski J., Wolf B., Stahl O., 1996, A&A, 315, 510
Light scattering as a Poisson process and first passage probability

Claude Zeller ([email protected])
Claude Zeller Consulting LLC, Tillamook, Oregon 97134

Robert Cordery
Fairfield University, 1073 North Benson Rd., Fairfield, CT 06824

21 June 2019

arXiv:1906.11131
DOI: 10.1088/1742-5468/ab811f

Keywords: Random walk, Kubelka-Munk equations, first passage, Poisson process, Catalan numbers, Motzkin numbers

Abstract. The Kubelka-Munk equations describe one-dimensional transport with scattering and absorption. The reflectance for a semi-infinite slab is the Laplace transform of the distribution of the photon path length λ. It is determined by the first-passage probability of an alternating random walk after n_p peaks. The first-passage probability as a function of the number of peaks is a path-length-distribution-free combinatoric expression involving Catalan numbers. The conditional probability P(λ|n_p) is a Poisson process. We present a novel demonstration that the probability of first passage of a random walk is step-length-distribution-free. These results are verified with two iterative calculations, one using the properties of Volterra's composition products and the other via an exponential distribution. A third verification is based on fluctuation theory of sums of random variables. Particle trajectories with scattering and absorption on the real half-line are mapped into a random walk on the integer number line in a lattice model, therefore connecting to path combinatorics. Including a separate forward-scattering Poisson process results in a combinatoric expression related to counting Motzkin paths.
Introduction
The simplest solution of the radiative transfer equation is for a one-dimensional flux traveling perpendicular to a plane-parallel layer of absorbing and scattering medium with isotropic radiation intensity over the forward and backward hemispheres.
The solutions of radiation transport equations, obtained during the previous century, have been applied to a broad variety of practical situations from neutron diffusion, optical tomography, spreading of infra-red and visible light in the atmosphere to prints on paper.
One-dimensional radiative transfer can be solved by the well-known two-flux approximation proposed independently by Schuster [14] and Schwarzschild [15] in astronomy, Darwin [4] and Hamilton [6] in crystallography, and Kubelka and Munk [10]. in graphic-art and print quality. More recently it had been used by Youngquist, Carr and Davies [20] in optical coherence tomography and Haney and van Wijk [7] in geology.
Simon and Trachsler [16], in their paper "A random walk approach for light scattering in material", gave an explicit expression for the reflectance. They show first that the scattering problem can be treated as a Markov chain involving Narayana polynomials. But this Markov chain does not provide a solution of a first-passage time problem that fits the reflectance calculated with the Kubelka-Munk equations. Then, using the compositional optical reflectance and transmittance properties for multilayer specimens, they are able to determine a generating function and find a solution as elementary variants of Chebyshev polynomials. This is another way to interpret the hyperbolic functions of the classical solution of the Kubelka-Munk equations.
Wuttke [19], in his paper "The zigzag walk with scattering and absorption on the real half line and in a lattice model", expands the Darwin-Hamilton equations (identical to the Kubelka-Munk equations) into a recursion and finds that Catalan numbers give the recurrence probability as a function of scattering order. He assumes that the random character of the zigzag walk comes from the exponentially distributed step length between scattering events. As we demonstrate, it is not necessary to specify the distribution, since his result is independent of the distribution of step length.
One of our goals is to describe the Kubelka-Munk solution directly in terms of the statistics of the number of peaks n p and the path length λ of rays. We explore the solution using a mathematically equivalent one-dimensional random walk model with random step lengths between reflection events described by a Poisson process with a rate per unit length S. We demonstrate further that the recurrence probability given by Catalan numbers is independent of the step length distribution. We provide another demonstration of the independence of step length distribution using our formulation of the fluctuation theory introduced by Andersen [2].
Additionally, we include an independent Poisson process for forward scattering with a rate S f . Usually forward scattering is not included in one-dimensional scattering because it does not change the ray propagation. We include it here as a step towards analysis of three-dimensional problems. Therefore we obtain a generalization of the Wuttke zigzag walk.
Traditional solution of the Kubelka-Munk model
Let us consider a homogeneous layer with thickness d characterized by its absorption coefficient χ and its scattering coefficient S. In this layer, the incident irradiance I propagates in the positive direction and the reflected irradiance J propagates in the negative direction. Both I and J are functions of the depth x in the layer. Depth 0 corresponds to the layer's boundary receiving the incident irradiance I₀. Depth d indicates the other boundary. We consider, at an arbitrary depth x, a sub-layer with infinitesimal thickness dx. The effect of the material in a thin element dx on I and J is to:
• decrease I by I(S + χ)dx (absorption and scattering)
• decrease J by J(S + χ)dx (absorption and scattering)
• increase I by JSdx (scattered light from J reinforces I)
• increase J by ISdx (scattered light from I reinforces J).
On these assumptions we obtain the system of equations:
\begin{pmatrix} dI \\ dJ \end{pmatrix} = \begin{pmatrix} -(\chi + S) & S \\ -S & (\chi + S) \end{pmatrix} \begin{pmatrix} I \\ J \end{pmatrix} dx, \quad (1)
with solution
\begin{pmatrix} I(x) \\ J(x) \end{pmatrix} = \begin{pmatrix} 1 - \beta & 1 + \beta \\ 1 + \beta & 1 - \beta \end{pmatrix} \begin{pmatrix} A e^{\kappa x} \\ B e^{-\kappa x} \end{pmatrix}, \quad (2)
where β = \sqrt{\chi/(\chi + 2S)} and κ = \sqrt{\chi(\chi + 2S)}. The coefficients A and B are determined by the boundary conditions at the two surfaces. After some elementary calculations, the reflectance R and transmittance T of a slab of thickness d are given by:
R = \frac{J(0)}{I_0} = \frac{\left(1 - \beta^2\right)\left(e^{\kappa d} - e^{-\kappa d}\right)}{(1 + \beta)^2 e^{\kappa d} - (1 - \beta)^2 e^{-\kappa d}}, \qquad R_0 = \frac{Sd}{1 + Sd} \ \text{for } \chi = 0,

T = \frac{I(d)}{I_0} = \frac{4\beta}{(1 + \beta)^2 e^{\kappa d} - (1 - \beta)^2 e^{-\kappa d}}, \qquad T_0 = \frac{1}{1 + Sd} \ \text{for } \chi = 0. \quad (3)
The reflectance of a very thick layer (d → ∞) is:
R_\infty(S, \chi) = \frac{S + \chi}{S} - \sqrt{\left(\frac{S + \chi}{S}\right)^2 - 1}. \quad (4)
The Kubelka-Munk reflectance plays a major role in elucidating the connection to combinatorics and random walks.
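Equations (3) and (4) translate directly into code. The following sketch (our own, with arbitrary test values) also makes the quoted limits easy to check numerically:

```python
import math

def km_slab(S, chi, d):
    """Kubelka-Munk reflectance and transmittance of a slab of thickness d
    with scattering coefficient S and absorption coefficient chi (Eq. 3)."""
    beta = math.sqrt(chi / (chi + 2.0 * S))
    kappa = math.sqrt(chi * (chi + 2.0 * S))
    denom = (1 + beta) ** 2 * math.exp(kappa * d) - (1 - beta) ** 2 * math.exp(-kappa * d)
    R = (1 - beta ** 2) * (math.exp(kappa * d) - math.exp(-kappa * d)) / denom
    T = 4.0 * beta / denom
    return R, T

def km_r_infinity(S, chi):
    """Reflectance of a semi-infinite layer (Eq. 4)."""
    a = (S + chi) / S
    return a - math.sqrt(a * a - 1.0)
```

For vanishing absorption the slab results approach R₀ = Sd/(1 + Sd) and T₀ = 1/(1 + Sd), and for large d the slab reflectance approaches R∞.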
Distribution of path lengths from the reflectance
The fluxes I and J can be interpreted as ensembles of photons moving in the positive and negative directions. The photons are absorbed at a rate χ and scattered in a Poisson process at a rate S. Each photon path is weighted according to the Beer-Lambert law by the absorption factor e −χλ . The reflectance is therefore the Laplace transform of the path length distribution. Conversely the inverse Laplace transform of R ∞ leads to the path distribution.
We calculate the inverse Laplace transform of the reflectance \mathcal{L}^{-1}_\chi(R_\infty(S, \chi)) by first expanding the reflectance in the scattering order

R_\infty(S, \chi) = \sum_{n_p=1}^{\infty} \frac{C_{n_p-1}}{2^{2n_p-1}} \left(\frac{S}{S + \chi}\right)^{2n_p-1} = \frac{1}{2}\,\frac{S}{S + \chi}\; C\!\left(\left(\frac{1}{2}\,\frac{S}{S + \chi}\right)^{2}\right), \quad (5)
where C(x) is the generating function

C(x) = \sum_{n=0}^{\infty} C_n x^n = \frac{1 - \sqrt{1 - 4x}}{2x} \quad (6)

of the Catalan numbers C_n = (2n)!\,(n!\,(n+1)!)^{-1}. We identify, term-by-term, the path-length distribution of a random walk model with the inverse Laplace transform of R_\infty(S, \chi)
\mathcal{L}^{-1}_\chi\!\left[\left(\frac{S}{S + \chi}\right)^{2n_p-1}\right] = \frac{S^{2n_p-1}\,\lambda^{2n_p-2}\,e^{-S\lambda}}{(2n_p - 2)!}. \quad (7)

The distribution of λ and n_p derived from R_\infty(S, \chi) is
P(\lambda, n_p) = \frac{1}{\lambda}\, C_{n_p-1} \left(\frac{S\lambda}{2}\right)^{2n_p-1} \frac{e^{-S\lambda}}{(2n_p - 2)!} = \frac{1}{\lambda} \left(\frac{S\lambda}{2}\right)^{2n_p-1} \frac{e^{-S\lambda}}{n_p!\,(n_p - 1)!}. \quad (8)
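The scattering-order expansion (5) can be checked numerically against the closed form (4); a small sketch (the term count and test values are arbitrary choices):

```python
import math

def catalan(n):
    """Catalan number C_n = (2n)! / (n! (n+1)!)."""
    return math.factorial(2 * n) // (math.factorial(n) * math.factorial(n + 1))

def r_inf_series(S, chi, n_terms=200):
    """Scattering-order expansion of the semi-infinite reflectance, Eq. (5)."""
    x = S / (S + chi)
    return sum(catalan(n - 1) * (x / 2.0) ** (2 * n - 1) for n in range(1, n_terms + 1))

def r_inf_closed(S, chi):
    """Closed form of the semi-infinite reflectance, Eq. (4)."""
    a = (S + chi) / S
    return a - math.sqrt(a * a - 1.0)
```

The series converges geometrically in (S/(S + χ))², so a few hundred terms suffice away from χ = 0.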
We agree with Wuttke [19] that while scattering occurs at random, its effect is deterministic when we model the Kubelka-Munk equation with a random walk. Scattering always reverses the direction of motion; therefore, trajectories form a zigzag walk rather than a drunkard's walk. The random character of the zigzag walk does not necessarily come from an exponentially distributed step length between scattering events: we will demonstrate later in the paper that the first-passage statistics are independent of the distribution of the step length between scattering events.
With no loss of generality we can include n_f forward scattering events at a rate S_f, thus providing a persistent random walk model compatible with Kubelka-Munk. Therefore the joint probability of the three random variables λ, n_p and n_f is given by

P(\lambda, n_p, n_f) = \frac{C_{n_p-1}}{2^{2n_p-1}}\, \frac{S^{2n_p-1}\, S_f^{n_f}\, \lambda^{2n_p-2+n_f}}{(2n_p - 2)!\; n_f!}\, e^{-(S + S_f)\lambda}, \quad (9)
and the joint probability for n_p and n_f is then given by

P(n_p, n_f) = \int_0^{\infty} P(\lambda, n_p, n_f)\, d\lambda = \frac{1}{2^{2n_p-1}}\, \frac{[n_f + 2(n_p - 1)]!}{n_f!\; n_p!\; (n_p - 1)!}\, \frac{S_f^{n_f}\, S^{2n_p-1}}{(S_f + S)^{n_f + 2n_p - 1}}. \quad (10)
The combinatorial factor is related to the "Triangular array of Motzkin polynomial coefficients" T (n, k), the number of Motzkin paths of length n with k up steps T (n f + 2n p − 2, n p − 1), where according to the OEIS A055151 [17],
T(n, k) = \frac{n!}{k!\,(k + 1)!\,(n - 2k)!}. \quad (11)
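Equation (11) is easy to tabulate, and summing each row recovers the Motzkin numbers 1, 1, 2, 4, 9, 21, 51, ...; a quick sketch with our own function names:

```python
import math

def motzkin_T(n, k):
    """Number of Motzkin paths of length n with k up steps (OEIS A055151, Eq. 11)."""
    if 2 * k > n:
        return 0
    return math.factorial(n) // (
        math.factorial(k) * math.factorial(k + 1) * math.factorial(n - 2 * k))

def motzkin_number(n):
    """Row sum of the triangle: the n-th Motzkin number."""
    return sum(motzkin_T(n, k) for k in range(n // 2 + 1))
```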
4. Probability of first-passage by convolution
Iterations: building the skeleton
The previous results are derived from the inverse Laplace transform of the reflectance. We now want to do a direct calculation of the trajectory distribution. A skeleton trajectory, a.k.a. zigzag or alternating random walk, starts at 0 and moves in the positive direction until it eventually reflects back towards the origin. If it does not again reverse direction before it reaches the negative half plane, we say it left the medium. A trajectory with n_p peaks is subject to (2n_p − 1) reflections before leaving the medium and is labeled by the index n_p.

[Figure 1. Trajectories in the x, y plane.]
Consider now the flux of all possible trajectories starting at x = 0 and beginning in the positive direction. The distribution of the first valley is a symmetrical function P 1 (z) with the following properties:
\int_{-\infty}^{\infty} P_1(z)\, dz = 1, \qquad \int_{-\infty}^{\infty} P_{n_p+1}(z)\, dz = \int_0^{\infty} P_{n_p}(z)\, dz. \quad (12)
A trajectory reflects back towards the origin at a height x. It reflects next time in the positive direction after a distance y. If y > x, then we say the trajectory left the medium. The probability distribution for the height of the second reflection z = x − y is
P_2(z) = \int_{-\infty}^{z} P_1(z - y)\, P_1(y)\, dy. \quad (13)
This is a particular case of the composition products considered by the Italian mathematician Vito Volterra in 1913. The probability distributions for the locations of the following reflections, n_p = 3, 4, ..., are given by iterative convolutions:
P_{n_p+1}(z) = \int_{-\infty}^{z} P_{n_p}(z - y)\, P_1(y)\, dy. \quad (14)
Only the trajectories that stay in the medium at step n_p are transferred to step n_p + 1, giving:

\int_{-\infty}^{\infty} P_{n_p+1}(z)\, dz = \int_0^{\infty} P_{n_p}(z)\, dz. \quad (15)
Using equations (12) to (15), calculating by induction and using the suitable limits of integration, we obtain the probability of first-passage after peak n p :
\int_{-\infty}^{\infty} \int_{z}^{\infty} P_{n_p}(z - y)\, P_1(y)\, dy\, dz = \frac{C_{n_p-1}}{2^{2n_p-1}}. \quad (16)
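The claim that the first-passage probabilities C_{n_p−1}/2^{2n_p−1} do not depend on the step-length distribution can be checked by direct simulation of the zigzag walk. The sketch below (sampler choices, trial counts, and the censoring cap are our own arbitrary choices) compares exponential and uniform flight lengths:

```python
import random

def first_passage_peaks(step_sampler, rng, max_peaks=1000):
    """Alternating walk starting upward from x = 0: an up-step to a peak,
    then a down-step to a valley. Returns the number of peaks when the walk
    first goes negative, or None if censored at max_peaks."""
    x = 0.0
    for n_p in range(1, max_peaks + 1):
        x += step_sampler(rng)   # flight up to the n_p-th peak
        x -= step_sampler(rng)   # flight down to the next valley
        if x < 0.0:
            return n_p
    return None

def escape_probabilities(step_sampler, n_trials, k_max, seed=12345):
    """Empirical probability of escaping after exactly n_p peaks, n_p <= k_max."""
    rng = random.Random(seed)
    counts = [0] * (k_max + 1)
    for _ in range(n_trials):
        n_p = first_passage_peaks(step_sampler, rng)
        if n_p is not None and n_p <= k_max:
            counts[n_p] += 1
    return [c / n_trials for c in counts]
```

Both samplers should reproduce 1/2, 1/8 and 1/16 for n_p = 1, 2, 3 to within Monte Carlo error, illustrating the distribution-free result.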
This explains the relation between the Kubelka-Munk reflectance and the generating function of the Catalan numbers.
Dressing the skeleton: connection with combinatorics
The enumeration of lattice paths is a topic in combinatorics, closely related to the study of random walks in probability theory. The ubiquitous presence of Catalan numbers in the joint distribution function P (λ, n p , n f ; S, S f ) suggests a connection with combinatorics. One approach to explain this connection is discretization. Since the statistics of first passage is independent of the distribution of the path length, to discretize the path we just have to integrate over λ. Therefore the discretized skeleton (zigzag) walk is mapped onto a path in a two-dimensional lattice. We expect from this transition to a lattice model to reproduce the analytical result obtained in section (6). The joint probability function is the product of the marginal probability times the conditional probability:
P(n_p, n_f; S, S_f) = P(n_f \,|\, n_p)\, P(n_p). \quad (17)
The goal is to create new steps by randomly filling the 2n_p segments of the skeleton with n_f forward random scattering events and to calculate the resulting distribution. Ultimately we want to find the related conditional probability P(n_f | n_p). There are 2n_p − 1 reflections and m_s = 2n_p + n_f steps.
With our notations we have paths from (2, 0) to 2(n_p, n_f) with the constraint that n_p ≥ 1. Therefore the number of paths is given by:

\binom{2n_p + n_f - 2}{2n_p - 2}, \quad (18)
then the number N_C of combinations is given by:

N_C(m_s, n_p) = \frac{(m_s - 2)!}{(2n_p - 2)!\,(m_s - 2n_p)!}. \quad (19)
We can now write the conditional probability of m_s at constant n_p, with r = S/(S + S_f):

P(m_s \,|\, n_p; r) = \frac{(m_s - 2)!}{(2n_p - 2)!\,(m_s - 2n_p)!}\; r^{2n_p-1}\,(1 - r)^{m_s - 2n_p}. \quad (20)
This result has been confirmed by extensive Monte Carlo calculations and is obtained by recursion in section (5). Then the joint probability is given by

P(n_p, n_f; r) = \frac{[n_f + 2(n_p - 1)]!}{n_f!\; n_p!\,(n_p - 1)!}\; \left(\frac{r}{2}\right)^{2n_p-1} (1 - r)^{n_f}. \quad (21)
This is the same as equation (10). This confirms the combinatorial nature (lattice-path enumeration) of the discrete form of the Kubelka-Munk equation.
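As a consistency check, summing Equation (21) over n_f must return the r-independent marginal C_{n_p−1}/2^{2n_p−1}, the first-passage probability of the skeleton walk. A sketch (our own) using a running-term recurrence to avoid large factorials:

```python
import math

def marginal_np(n_p, r, tol=1e-15):
    """Sum Eq. (21) over n_f >= 0; equals C_{n_p-1} / 2^(2 n_p - 1) for any r."""
    # n_f = 0 term: (2 n_p - 2)! / (n_p! (n_p - 1)!) * (r/2)^(2 n_p - 1)
    term = ((r / 2.0) ** (2 * n_p - 1) * math.factorial(2 * n_p - 2)
            / (math.factorial(n_p) * math.factorial(n_p - 1)))
    total = term
    n_f = 0
    while term > tol * total:
        # ratio of consecutive terms: (n_f + 2 n_p - 1) / (n_f + 1) * (1 - r)
        term *= (n_f + 2 * n_p - 1) / (n_f + 1) * (1.0 - r)
        total += term
        n_f += 1
    return total
```

The result is the same for any forward-scattering fraction 1 − r, as expected.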
Analytic calculation of the path distribution
In this section we integrate the scattering and attenuation of an ensemble of light rays. We show directly the connection to Eq.(4), as opposed to the two-step calculation in section (4). We use an exponentially distributed step length consistent with a uniform absorption and scattering rate. Explicit integration will serve as a foundation for exploration of higher dimensions and the effect of inhomogeneities in the media that are important for print quality and medical imaging.
Consider a statistical ensemble of rays of light moving in a one-dimensional diffusive medium on the positive axis. Again, the ray starts at the origin moving upward in the positive direction and after multiple reflections, escapes the media to the negative axis. The distribution of the number of scattering events and the path length for the escaping rays is indicative of the statistical behavior in three dimensions.
[Figure 2. A trajectory with peaks at x_{P_1}, x_{P_2}, x_{P_3}, valleys at x_{V_1}, x_{V_2}, and escape E_3.]
A ray traveling through the medium is reflected at a rate of S reflections per unit length and is attenuated at a rate of χ per unit length. The Poisson probability density for reflection of a ray after traveling a distance d is

\rho(d) = \Phi(d)\, S e^{-S d}, \quad (22)
where Φ(d) is the Heaviside step function. We will derive information about the distribution of the number of scattering events and the path length when the ray ultimately escapes. We choose to describe the current state of a ray by the direction, the total prior upward movement l, the prior number of peaks n p and the position x.
The upward movement is related to the path length by 2l − x = λ. Later we will add the distribution of forward scattering events as an independent Poisson process based on the path length. The probability density for an upward reflection at a valley at height x, with total prior upward movement l at the valley, after n_p peaks is P_{V_{n_p}}(x, l). The probability density that the ray escapes with total path length λ in the scattering medium, after the n_p-th peak, is E_{n_p}(λ). The probability densities for the peaks and valleys are not normalized because the ray may escape before the reflection.
There are constraints on the integrals in iterating from one valley to the next. The peaks are higher than the surrounding valleys. The ray must travel at least the height of the peak to reach it. The path length is an increasing function as the ray progresses. Therefore, as shown in Fig.(2),
$x_{V_n} \le x_{P_n} \ge x_{V_{n-1}}, \qquad l_n \ge x_{P_n}, \qquad l_n \ge l_{n-1}.$   (23)
The sequence of probability densities at the peaks and valleys can be calculated recursively, if tediously. The probability density at the next peak is found by integrating the probability density at the valley over the height of the previous valley. Substituting for $P_{P_n}$ and marginalizing over the peak, we obtain the valley-to-valley transfer:
$P_{V_n}(x, l) = \Phi(x) \int_0^{\infty}\! \int_{\max(x,\,x')}^{\,l + x'} P_{V_{n-1}}\!\left(x',\; l - x_P + x'\right) S^2\, e^{-S(2x_P - x' - x)}\, dx_P\, dx'.$   (24)
The functional form of the probability densities at the valleys is evaluated by iterating this expression starting from the initial condition that the ray starts with upward motion from x = 0.
$P_0(x, l) = \delta(x)\,\delta(l), \qquad E_0(l) = 0.$   (25)
The first peak integral simply involves satisfying the delta functions. Similarly, the first valley integral uses the fact that l is the total upward motion. The probability that the ray escapes is given by dropping the constraint that x is positive and integrating over the negative half space:
$E_1(l) = \Phi(l) \int_{-\infty}^{0} S^2 e^{-S(l - x)}\, dx = \Phi(l)\, S e^{-S l}.$   (26)
Iterating from the probability distribution at one valley to that at the next valley, we obtain:
$P_{n_p}(x, l) = S^{2 n_p}\, e^{-S(l - x)}\, \Phi(l - x)\, F_{n_p - 1}(x, l).$   (27)
Here $F_{n_p}$ is the total volume of the space of allowed configurations of the $(2 n_p - 2)$-step path from the origin to a valley at the point $x$ with path length $\lambda = 2l - x$.
The same form, an exponential times the volume of path configurations, will apply in higher dimensions and more complicated geometries. Monte Carlo methods can be applied to measure the path configuration volume in these situations.
$F_{n_p}(x, l) = \int_0^{l}\! \int_0^{l'} F_{n_p - 1}(x', l')\, \Phi(l - l' + x' - x)\, dx'\, dl' = \dfrac{l^{\,n_p - 2}\, (l - x)^{n_p - 1}\, \bigl(l + (n_p - 1)\,x\bigr)}{(n_p - 1)!\; n_p!}.$   (28)
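The closed form in Eq. (28) can be checked numerically against its own recursion (a check of ours, not from the paper): apply the double integral once to the order-$(n_p - 1)$ closed form on a midpoint grid and compare with the order-$n_p$ closed form.

```python
import math

def F_closed(n_p, x, l):
    # Eq. (28) closed form (valid for n_p >= 1; F_1 = 1)
    return (l ** (n_p - 2) * (l - x) ** (n_p - 1) * (l + (n_p - 1) * x)
            / (math.factorial(n_p - 1) * math.factorial(n_p)))

def F_recursed(n_p, x, l, grid=400):
    """One application of the recursion in Eq. (28), with F_{n_p-1}
    taken from the closed form and the double integral approximated
    on a midpoint grid over 0 <= x' <= l' <= l."""
    h = l / grid
    total = 0.0
    for a in range(grid):           # midpoints in l'
        lp = (a + 0.5) * h
        for b in range(grid):       # midpoints in x', restricted to [0, l']
            xp = (b + 0.5) * h
            if xp <= lp and l - lp + xp - x >= 0:   # Heaviside constraint
                total += F_closed(n_p - 1, xp, lp)
    return total * h * h

for n_p, x, l in [(2, 0.5, 1.0), (3, 0.3, 1.0), (4, 0.2, 1.5)]:
    print(n_p, round(F_recursed(n_p, x, l), 4), round(F_closed(n_p, x, l), 4))
```

For example, at $n_p = 2$, $x = 1/2$, $l = 1$ both sides give $3/8$, as can also be checked by hand.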
Integrating over the negative half-space gives the escape probability density as a function of the path length, and from it the total escape probability after the $n_p$-th peak:
$E_{n_p}(\lambda) = \int_{-\infty}^{0} P_{n_p}\!\left(x', \tfrac{\lambda}{2}\right) dx' = \dfrac{S}{2} \left(\dfrac{S \lambda}{2}\right)^{2(n_p - 1)} \dfrac{e^{-S \lambda}}{(n_p - 1)!\; n_p!}.$   (29)
Integrating over $\lambda$ gives the escape probability after $n_p$ peaks in terms of Catalan numbers. The result is independent of the scattering rate, and the formula is identical to Eq. (5) with $\chi = 0$:
$E_{n_p} = \int_0^{\infty} E_{n_p}(\lambda)\, d\lambda = \dfrac{1}{2^{2 n_p - 1}}\, C_{n_p - 1}.$   (30)
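Equation (30) lends itself to a direct Monte Carlo check (ours, not from the paper): simulate the alternating walk on the half line with exponentially distributed flight lengths and tally how many peaks occur before escape.

```python
import math
import random

def catalan(n):
    # C_n = binom(2n, n) / (n + 1)
    return math.comb(2 * n, n) // (n + 1)

def escape_peak_frequencies(trials, rate=1.0, cap=1000, seed=0):
    """Simulate the zig-zag ray on the half line: alternating up/down
    flights with Exp(rate) lengths, starting upward from x = 0, and
    tally the number of peaks n_p before the ray first drops below
    zero (escapes).  Rare walks exceeding `cap` peaks are discarded,
    which does not bias the small-n_p frequencies reported."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(trials):
        x = 0.0
        for n_p in range(1, cap + 1):
            x += rng.expovariate(rate)   # upward flight ending at a peak
            x -= rng.expovariate(rate)   # downward flight ending at a valley
            if x < 0:                    # valley below 0: the ray escapes
                counts[n_p] = counts.get(n_p, 0) + 1
                break
    return {k: v / trials for k, v in counts.items()}

freqs = escape_peak_frequencies(50_000)
for n_p in (1, 2, 3):
    exact = catalan(n_p - 1) / 2 ** (2 * n_p - 1)   # Eq. (30)
    print(n_p, round(freqs[n_p], 4), round(exact, 4))
```

By the distribution-free result proved below, any continuous step-length distribution would give the same peak-count frequencies.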
We can add in absorption because we have the distribution of the path length. The attenuation simply adds χ to S in the exponent:
$E^{\chi}_{n_p}(\lambda) = \dfrac{S}{2} \left(\dfrac{S \lambda}{2}\right)^{2(n_p - 1)} \dfrac{e^{-(S + \chi)\lambda}}{(n_p - 1)!\; n_p!}.$   (31)
Integrating over the path length again yields Eq. (5):
$E^{\chi}_{n_p} = \int_0^{\infty} E^{\chi}_{n_p}(\lambda)\, d\lambda = \left(\dfrac{S}{S + \chi}\right)^{2 n_p - 1} \dfrac{(2 n_p - 2)!}{2^{2 n_p - 1}\, (n_p - 1)!\; n_p!} = \left(\dfrac{S}{S + \chi}\right)^{2 n_p - 1} \dfrac{C_{n_p - 1}}{2^{2 n_p - 1}}.$   (32)
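The pair (31)–(32) can be cross-checked numerically. The density below uses the prefactor $\tfrac{S}{2}(S\lambda/2)^{2(n_p-1)}$, which is our reconstruction from the surrounding identities rather than verbatim text; a composite Simpson rule then reproduces the Catalan closed form.

```python
import math

def catalan(n):
    return math.comb(2 * n, n) // (n + 1)

def density(lam, n_p, S, chi):
    # Candidate form of Eq. (31); prefactor is our reconstruction.
    return (S / 2) * (S * lam / 2) ** (2 * (n_p - 1)) * \
        math.exp(-(S + chi) * lam) / (math.factorial(n_p - 1) * math.factorial(n_p))

def closed_form(n_p, S, chi):
    # Right-hand side of Eq. (32)
    return (S / (S + chi)) ** (2 * n_p - 1) * catalan(n_p - 1) / 2 ** (2 * n_p - 1)

def simpson(f, a, b, n=20_000):
    # composite Simpson rule with an even number of panels n
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

S, chi = 1.3, 0.4
for n_p in (1, 2, 3, 4):
    numeric = simpson(lambda lam: density(lam, n_p, S, chi), 0.0, 60.0 / (S + chi))
    print(n_p, round(numeric, 8), round(closed_form(n_p, S, chi), 8))
```

The upper limit $60/(S+\chi)$ truncates a tail of order $e^{-60}$, far below the quadrature error.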
The joint probability for $n_p$ and $n_f$ is obtained by adding an independent Poisson process for forward scattering:
$P(n_f, n_p) = \int_0^{\infty} \rho(n_f \mid S_f \lambda)\, E^{\chi}_{n_p}(\lambda)\, d\lambda = \dfrac{\bigl(n_f + 2(n_p - 1)\bigr)!}{2^{2 n_p - 1}\, n_f!\, (n_p - 1)!\; n_p!}\; \dfrac{S^{2 n_p - 1}\, (S_f)^{n_f}}{(S + S_f + \chi)^{n_f + 2 n_p - 1}}.$   (33)
This result is identical to equations (10) and (21).
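As a consistency check of Eq. (33) (ours, not from the paper): summing it over $n_f$ must recover the marginal escape probability of Eq. (32), since the forward-scattering process is independent. Successive terms are built from their ratio to avoid huge factorials.

```python
import math

def P_joint(n_f, n_p, S, S_f, chi):
    # Eq. (33)
    num = math.factorial(n_f + 2 * (n_p - 1)) * S ** (2 * n_p - 1) * S_f ** n_f
    den = (2 ** (2 * n_p - 1) * math.factorial(n_f)
           * math.factorial(n_p - 1) * math.factorial(n_p)
           * (S + S_f + chi) ** (n_f + 2 * n_p - 1))
    return num / den

def marginal_over_nf(n_p, S, S_f, chi, terms=200):
    # Sum Eq. (33) over n_f, building successive terms from the ratio
    # t_{k+1}/t_k = (k + 2 n_p - 1)/(k + 1) * S_f/(S + S_f + chi)
    q = S_f / (S + S_f + chi)
    t = P_joint(0, n_p, S, S_f, chi)
    total = t
    for k in range(terms):
        t *= (k + 2 * n_p - 1) / (k + 1) * q
        total += t
    return total

def E_chi(n_p, S, chi):
    # Eq. (32): marginal escape probability after the n_p-th peak
    cat = math.comb(2 * (n_p - 1), n_p - 1) // n_p
    return (S / (S + chi)) ** (2 * n_p - 1) * cat / 2 ** (2 * n_p - 1)

S, S_f, chi = 1.0, 0.7, 0.3
for n_p in (1, 2, 3):
    print(n_p, round(marginal_over_nf(n_p, S, S_f, chi), 10),
          round(E_chi(n_p, S, chi), 10))
```

For $n_p = 1$ the sum is a geometric series and both sides equal $S/\bigl(2(S+\chi)\bigr)$.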
First passage events are distribution-free
The position of an alternating walk alternates between peaks and valleys. First passage for an alternating walk occurs only at a valley, where the step number $m$ is even. We start with the finite set of alternating walks $A_n(c_n)$ generated by permutations of a set of $n$ step sizes $c_n$. The set $c_n$ is an element of the set $C_n$ of all sets of lengths.
The first passage events are the subsets $F_m$ of all allowed walks that are positive before step $m$ and first become negative at step $m$. It is possible that a walk never becomes negative, so we add the event $F_0$ of walks that are positive for the first $n$ steps. The set of $n + 1$ first passage events is a partition of the finite sample space. We will show that the cardinality of $F_m$ is independent of the set of $n$ step sizes $c_n$ as long as no sub-walk returns to the origin. We define a boundary subset $B_n$ of $C_n$, of measure zero, consisting of the sets of lengths for which some sub-walk constructed from $c_n$ returns exactly to the origin.
We analyze the changes in event membership as we adjust values in $c_n$ continuously by modifying step sizes. For a walk constructed from a set of lengths not in the boundary set, let $\delta$ be the closest approach to the origin of any non-empty sub-walk. At step $m > 0$ the absolute value of the position must be at least $\delta$. If $\delta > 0$ then a step size change with magnitude smaller than $\delta$ to any length $c_k \in c_n$ will not cause any change in first passage event membership. First passage event cardinality is therefore locally constant, and so is constant within connected subsets of $C_n - B_n$. The set $C_n - B_n$ is not connected, so we examine changes in event cardinality when crossing the boundary. Any set of lengths can be reached by changing one length at a time, and so if the cardinality of an event does not change when crossing $B_n$, then the event cardinality is constant on $C_n - B_n$.
The sequences of lengths in $B_n$ are the only elements of $C_n$ where walks can leave or enter first passage events under small changes to a length. When changing a length in a walk $w_m$ and moving the walk through the boundary point $w^*_m$, the membership of $w_m$ in an event can change. The set of walks generated from permutations or an overall sign change of the steps in the sub-walk $w^*_m$ also return to the origin. The problem is to show cancellation of the changes in event membership among the walks in this set as lengths are changed so that $w_m$ moves through the boundary point $w^*_m$.
The key step in our approach is to consider uniquely defined pairs of walks $w^*$ and $w'^*$ from the boundary set. Walk $w^*$ begins with a positive critical sub-walk $w^*_j$ as in Fig. 3. The paired walk $w'^*$ is identical after step $j$ but begins with the time-reversed sub-walk $w'^*_j$. The sets of positions of the paired walks are the same, but they are in the reverse order for the first $j$ steps. The step sizes in $w'^*_m$ are the same as the step sizes in $w^*_m$, but the first $j$ steps occur in the reverse order and with the opposite sign. All the critical walks generated from $c^*_n \in B_n$ that could change the cardinality of $F_m$ can be uniquely paired in this way.
For the first passage problem, the boundary of $F_m$ consists of walks that are non-negative for the first $m - 1$ steps and return exactly to zero at one of the first $m$ steps. Walks that return precisely to the origin on step $m$, and are positive before that on steps $1, \dots, m - 1$, will enter or leave the event with a small change in a length. Similarly, walks that pass zero on step $m$ but return precisely to the origin on step $j < m$ change membership in $F_m$ with small length changes.
Suppose a walk from the boundary of $F_m$ begins with such a $j$-step sub-walk $w^*_j$, which is positive for $k < j$ and returns to $0$ at step $j$. Consider changing a length $c_k \in w^*_j$ through the critical value $c^*_k$. A small increase in a length that is a downward step in $w^*_j$ will cause the walk to be in $F_j$, while decreasing the same step will cause the walk to be in $F_m$. Similarly, a small increase in a length that is an upward step in $w^*_j$ will cause the walk to be in $F_m$, while decreasing the same step will cause the walk to be in $F_j$. Alternatively, if the walk touches zero at step $m$, then increasing a downward length will cause it to be in $F_m$ while the paired walk will leave $F_m$.
If a step of length $c^*_k$ in sub-walk $w^*_j$ has, say, a negative sign, then it appears in $w'^*_j$ with a positive sign. When $c^*_k$ is changed to a lower value then walk $w'$ is in $F_m$ and $w$ is not. Similarly, when $c_k$ is changed to a higher value, then walk $w$ is in $F_m$ and $w'$ is not. Thus as $c_k$ is changed and the set of lengths crosses $B_n$, one of each pair of walks leaves the event $F_m$ and the other joins it. Although the pair of walks switch which one is in the event $F_m$, the total contribution of the two walks to the cardinality of the event is the same. The cardinality of $F_m$ is therefore unchanged by crossing $B_n$. Examining the complete set of pairs of walks with time-reversed sub-walks thus shows that the cardinality of first passage events, and thus the probability of first passage at step $m$, is independent of the set of $n$ real lengths, except for the boundary set $B_n$ of measure $0$ in $C_n$. Averaging this invariant probability over $C_n$ now gives the result that the distribution of the first passage step for alternating walks is step-size distribution-free.
A critical point is that the sub-walk must have even length for alternating walks, so that the walks beginning with $w$ and $w'$ are both alternating walks in our sample space. First passage always occurs on an even-numbered step for alternating walks, so that is not a problem here. In other types of events on $A_n$ the requirement that the sub-walks have even length is important. The argument for first passage statistics for symmetric walks carries through the same as for alternating walks, with the exception that the location of the first passage, and hence the length of the relevant sub-walks, need not be even.
To calculate first passage probabilities for an alternating walk, any set of lengths suffices. Make a convenient choice, like any subset of integer powers of $2$, where each length is larger than the sum of all smaller lengths. First passage of an alternating walk occurs only in a valley, which occurs on even-numbered steps $m = 2 m_p$.
Given a set of $n = 2 n_p$ lengths selected from integer powers of $2$, the fraction of walks with first passage at step $m = 2 m_p < n$ is
$p^{f}_{m_p} = \dfrac{C_{m_p - 1}}{2^{2 m_p - 1}}$   (34)
and the fraction that remains nonnegative is
$p^{+}_{m_p} = \dfrac{(2 m_p - 1)\, C_{m_p - 1}}{2^{2 m_p - 1}}.$   (35)
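The distribution-free claim makes these fractions easy to test empirically (a quick Monte Carlo of ours, with i.i.d. Uniform(0,1) step sizes standing in for an arbitrary continuous length distribution):

```python
import math
import random

def first_passage_valley(steps):
    """Index m_p of the valley at which the alternating walk first goes
    negative, or None if it stays nonnegative for all n_p valleys."""
    x = 0.0
    for m_p, (up, down) in enumerate(zip(steps[0::2], steps[1::2]), start=1):
        x += up - down
        if x < 0:
            return m_p
    return None

def catalan(n):
    return math.comb(2 * n, n) // (n + 1)

rng = random.Random(1)
n_p, trials = 5, 100_000
hits, survived = {}, 0
for _ in range(trials):
    steps = [rng.random() for _ in range(2 * n_p)]  # i.i.d. Uniform(0,1) lengths
    m = first_passage_valley(steps)
    if m is None:
        survived += 1
    else:
        hits[m] = hits.get(m, 0) + 1

for m_p in (1, 2, 3):
    exact = catalan(m_p - 1) / 2 ** (2 * m_p - 1)                  # Eq. (34)
    print("first passage at valley", m_p, round(hits[m_p] / trials, 4), round(exact, 4))
exact_pos = (2 * n_p - 1) * catalan(n_p - 1) / 2 ** (2 * n_p - 1)  # Eq. (35)
print("stayed nonnegative:", round(survived / trials, 4), round(exact_pos, 4))
```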
The calculation will proceed by induction on $n_p$. The theorem is true for $n_p = 1$ because the probability of first passage in the first valley is $1/2$, as is the probability that the walk stays positive: either the first (upward) step is larger than the second step, or it is smaller.
Suppose the theorem is true for all $m_p \le n_p$. Consider alternating walks generated from a set $c_{2(n_p + 1)}$ of $2(n_p + 1)$ lengths selected from integer powers of $2$. Divide the alternating walks generated into a complete set of disjoint subsets where all walks in a subset have the same last two steps. For each of these subsets, the first $2 n_p$ steps are all permutations of the same set of lengths. By induction, the fraction of these in $F_{2 m_p}$ for $m_p \le n_p$ is given by the theorem. Similarly, the fraction that stay positive until the last step is given by the theorem.
First passage can occur after peak $n_p + 1$ only if the walk stayed positive for the first $n_p$ valleys and step $2 n_p + 2$ is the largest element of the set. The probability of this is
$p^{f}_{n_p + 1} = \dfrac{p^{+}_{n_p}}{2 n_p + 2}.$   (36)
The walk will stay positive only if it is positive for the first $n_p$ valleys and it does not have a first passage at valley $n_p + 1$, so
$p^{+}_{n_p + 1} = p^{+}_{n_p} - p^{f}_{n_p + 1},$
in agreement with (35). The theorem is true for $n_p + 1$, and so is proved by induction.
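The two identities used in this induction can also be verified exactly with rational arithmetic (a check of ours):

```python
import math
from fractions import Fraction

def catalan(n):
    return math.comb(2 * n, n) // (n + 1)

def p_first(n_p):   # Eq. (34)
    return Fraction(catalan(n_p - 1), 2 ** (2 * n_p - 1))

def p_plus(n_p):    # Eq. (35)
    return Fraction((2 * n_p - 1) * catalan(n_p - 1), 2 ** (2 * n_p - 1))

# Exact check of the induction step, Eq. (36), and the closing identity.
for n_p in range(1, 12):
    assert p_first(n_p + 1) == p_plus(n_p) / (2 * n_p + 2)
    assert p_plus(n_p + 1) == p_plus(n_p) - p_first(n_p + 1)
print("induction identities hold exactly for n_p = 1..11")
```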
Figure 2. Example path with first passage at step 6.
Figure 3. Example of pairing critical walks for the event "first passage on step 8." Critical walk (a) is paired with critical walk (b) with the sub-walk reversed. When the sixth step length is shortened, (c) shows walk (a) leaving the event and (d) shows walk (b) joining the event.
arXiv:2202.09223
History Data Driven Distributed Consensus in Networks
Venkatraman Renganathan (Department of Automatic Control - LTH, Lund University, Lund, Sweden)
Angela Fontan (Division of Decision and Control Systems, KTH Royal Institute of Technology, Stockholm, Sweden)
Karthik Ganapathy (Department of Mechanical Engineering, The University of Texas at Dallas, Richardson, TX, USA)
Keywords: History, Memory, Data-driven, Distributed Consensus, Networked System
Abstract. The association of weights in a distributed consensus protocol quantifies the trust that an agent has in its neighbors in a network. An important problem in such networked systems is the uncertainty in the estimation of trust between neighboring agents, coupled with the losses arising from mistakenly associating wrong amounts of trust with different neighboring agents. We introduce a probabilistic approach which uses the historical data collected in the network to determine the level of trust between each pair of agents. Specifically, using the finite history of the shared data between neighbors, we obtain a configuration which represents the confidence estimate of every neighboring agent's trustworthiness. Finally, we propose a History-Data-Driven (HDD) distributed consensus protocol which translates the computed configuration data into weights to be used in the consensus update. The approach of using historical data in a distributed consensus setting marks the novel contribution of our paper.
INTRODUCTION
We study the problem of consensus in a multi-agent system in the presence of untrustworthy agents in this paper. Many cooperative tasks involving networked agents require them to utilize distributed consensus protocols to coordinate agreement on certain quantities of interest, with applications such as formation control in robotics (Fax and Murray (2004); Ren and Beard (2008)), agreement seeking in opinion dynamics (Hegselmann and Krause (2002); Blondel et al. (2009) ;Fontan and Altafini (2021)), or cyber-networks comprising of many interconnected smart entities which relies on distributed consensus protocols for efficient operations (Renganathan et al. (2021)). However, Pasqualetti et al. (2012) showed that the distributed nature of networks opens up many attack points for malicious attackers rendering them vulnerable. This work considers the situation where well-behaving agents (called "cooperative" in our notation) in a network seek to achieve consensus in the presence of "untrustworthy" agents (called "potentially non-cooperative" in our notation) using the inference from the past interactions with their neighbors.
A related problem is that of consensus in unreliable networks, which has been largely studied in the literature, see e.g., Lamport et al. (2019); Agmon and Peleg (2004); Sundaram and Hadjicostis (2008); LeBlanc et al. (2013); Saldaña et al. (2017); Dibaji et al. (2018). Specifically, resilient consensus protocols, such as the W-MSR protocol by LeBlanc et al. (2013), have been developed in the recent past to guarantee resiliency by intelligently constructing a nonlinear consensus update. In general, the update rule of these distributed consensus protocols depends upon the current time step information obtained from all the neighboring agents in the network. An exception is the protocol for resilient consensus proposed in Saldaña et al. (2017), named SW-MSR, which extends the classical W-MSR algorithm by introducing a sliding window approach that allows the agents to store the values received from their neighbors at the previous T time steps. While the resilient consensus literature imposes assumptions on the connectivity and on the total number of non-cooperative (also called non-reliable or malicious) agents in the network, the recent work by Yemini et al. (2021) departs from such assumptions and uses the notion of trust in order to maintain consensus in a networked system in the presence of malicious agents.
Our aim is to design a distributed consensus protocol that enables each agent to estimate the trustworthiness of its neighbors, represented by a (normalized) non-negative value in [0, 1], where 0 (resp., 1) indicates that the corresponding agents do not trust (resp., fully trust) each other, with the idea that an agreement should be reached only between agents whose trust in each other is nonzero. Similar to Yemini et al. (2021), in this work we do not impose any structural or connectivity assumption on the network or any assumption on the total number of potentially non-cooperative agents, but we consider the observed history over the previous T time steps to estimate the trust between agents in the network. Such an approach offers a paradigm shift away from memory-less updates in distributed consensus protocols, which lack the debugging capability of identifying when and where an intentional attack or a fault happened in the network, and hence the retrospective ability to analyze anomalies. On the other hand, it is not practical for every agent in a network to have infinite book-keeping abilities to store the shared values of its neighbors and analyze them for anomalies. However, given that a finite memory resource is made available to the agents in a network, distributed consensus algorithms can be reinforced with retrospective capabilities that enhance the quality of the decisions they make and mimic the trust-based decision-making behavior of humans in a distributed setting.
Under the local information model setting, the protocol we propose is related to the bounded confidence models in opinion dynamics (see for instance Hegselmann and Krause (2002)) where each agent updates its state (analogous to its opinion) based only on the states of agents that are within a certain confidence range of its own, enforcing the idea that only trustworthy agents (here intended as agents with similar opinions) can influence each other. Moreover, inspired by Lorenz (2009);Liang et al. (2013); Morarescu and Girard (2011), we assume that the confidence bounds are heterogeneous (i.e., agent-dependent) and time-dependent. The problem of consensus in networks with random weighting matrices was studied in Tahbaz-Salehi and Jadbabaie (2008,2006). We refer to a closely aligned idea that appeared in Yu and Vorobeychik (2019b,a), where malicious nodes were identified in an uncertain network with high confidence and removed. We consider extending a similar idea as Yu and Vorobeychik (2019a) for designing a distributed consensus protocol using an history data-driven approach. Specifically, at each time step, we use the available finite historical data to estimate the first two moments of an unknown distribution governing the true nature (called "configuration" in our notation) of an agent's neighbors.
Statement Of Contributions:
We propose a novel history data-driven distributed consensus protocol for networks. Specifically, our main contributions are as follows:
(1) We model the true nature of the neighboring agents of an agent in a network as a random vector (which we term the "configuration" of neighbors), and we learn the parameters governing its true but unknown distribution from the collected historical data.
(2) We translate the trustworthiness that results from the neighbor configuration into weights and propose a new History-Data-Driven (HDD) distributed consensus protocol for networks.
(3) We demonstrate by means of numerical simulation that our proposed design effectively models the neighbor configuration from the historical data, and arrives at a trust-based consensus¹.
¹ Notion introduced in Definition 2.
The rest of the paper is organized as follows: The preliminaries of the consensus protocol and the definition of a neighbor configuration are established in section 2. In section 3, the empirical estimation of the configuration parameters from the past historical data is discussed. The proposed HDD distributed consensus protocol is presented in section 4 along with the effect of parameter variations.
Our proposed algorithm is then demonstrated in section 5. Finally, the paper is closed in section 6 with a summary and research directions for the future.
NOTATION & PRELIMINARIES
We denote the set of real numbers, integers, non-negative real numbers, and non-negative integers by $\mathbb{R}$, $\mathbb{Z}$, $\mathbb{R}_{\ge 0}$, $\mathbb{Z}_{\ge 0}$, respectively. The operator $\setminus$ denotes set subtraction. The cardinality of a set $M$ is denoted by $|M|$ and its $i$-th element by $\{M\}_i$. The $i$-th element of a vector $x$ is denoted by $[x]_i$ or simply $x_i$, and the Euclidean norm of $x$ is denoted by $\|x\|_2$ or simply $\|x\|$. A vector in $\mathbb{R}^n$ with all its elements being ones is denoted by $\mathbf{1}_n$. The $j$-th column of a matrix $A$ is denoted by $A_j$, and the element in the $i$-th row and $j$-th column of $A$ is denoted by $A_{ij}$. The uniform distribution between $a, b \in \mathbb{R}$, $a < b$, is denoted by $U[a, b]$.
PROBLEM FORMULATION
Consensus Dynamics of a Network
Consider a network having $N$ agents whose connectivity is modeled via an undirected and connected graph $G = (V, E)$, where $V$ represents the set of agents, with $|V| = N$. A set of time-invariant communication links amongst the agents is represented using $E \subset V \times V$. We associate with each agent $i \in V$ a state $x_i(t) \in \mathbb{R}$ at time $t \in \mathbb{Z}_{\ge 0}$. Let the set of inclusive neighbors be defined as $J_i = N_i \cup \{i\}$, where $N_i = \{j \in V : (j, i) \in E\}$ is the neighbor set of agent $i$, whose states are available to agent $i$ via communication links. The degree of $i$ is denoted as $d_i = |N_i|$, and every agent is assumed to have access to its own state at any time $t$. At any time $t$, each agent updates its own state based on its current state and the states of its neighboring agents according to a prescribed memory-less update rule
$x_i(t + 1) = f_i\bigl(x_j(t)\bigr), \quad j \in J_i, \; i \in V.$   (1)
Typical distributed consensus protocols of the form (1) involve associating a weight with each inclusive neighbor $j \in J_i$ and using it in the consensus update. In this work, we consider weighted-averaging update protocols, namely $x(t + 1) = W(t)\, x(t)$, with $x(t) = [x_1(t)\ \dots\ x_N(t)]^{\top}$; that is,
$x_i(t + 1) = \sum_{j \in J_i} w_{ij}(t)\, x_j(t), \quad i \in V,$   (2)
where $W(t)$ is an element-wise non-negative, time-varying weighting matrix whose entries $w_{ij}(t) \ge 0$ model the trustworthiness associated by agent $i$ with its inclusive neighboring agents $j \in J_i$ at each time $t$. Olshevsky and Tsitsiklis (2009) showed that, under assumptions on the graph (such as connectivity of $G$) and on the weights of $W(t)$ (e.g., weights chosen according to a convex combination, making $W(t)$ a stochastic matrix), an asymptotic consensus value is guaranteed by (2), that is,
$\exists\, c \in \mathbb{R}$ such that $\lim_{t \to \infty} x(t) = c\, \mathbf{1}_N$.
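As a concrete illustration of (2) in the classical cooperative setting (a generic textbook choice of weights, not the HDD protocol developed in this paper), consider time-invariant Metropolis weights on a 4-agent path graph:

```python
# Sketch (not the paper's HDD protocol): plain averaging consensus
# x(t+1) = W x(t) with Metropolis weights on an undirected path graph.
edges = [(0, 1), (1, 2), (2, 3)]           # path graph on N = 4 agents
N = 4
deg = [0] * N
for i, j in edges:
    deg[i] += 1
    deg[j] += 1

W = [[0.0] * N for _ in range(N)]
for i, j in edges:                          # Metropolis rule keeps W symmetric and
    w = 1.0 / (1 + max(deg[i], deg[j]))    # doubly stochastic -> average consensus
    W[i][j] = W[j][i] = w
for i in range(N):
    W[i][i] = 1.0 - sum(W[i])              # make each row sum to one

x = [1.0, 4.0, 2.0, 7.0]
for _ in range(200):
    x = [sum(W[i][j] * x[j] for j in range(N)) for i in range(N)]
print([round(v, 4) for v in x])            # all entries near the initial average 3.5
```

Because this $W$ is doubly stochastic, the consensus value $c$ is the average of the initial states.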
In this work we consider the situation in which some agents may not follow the update rule (2). To this end, we introduce the notion of cooperative and (potentially) non-cooperative agents.
Definition 1. An agent $i \in V$ is said to be cooperative if it updates its state based on (2). It is said to be (potentially) non-cooperative², otherwise.
Definition 2. An agent $i \in V$ is said to be in a trust-based consensus with a set of identified trusted neighbors $\widetilde{N}_i \subseteq N_i$ if $\lim_{t \to \infty} |x_i(t) - x_j(t)| = 0$ for all $j \in \widetilde{N}_i$.
The intuition is that if the cooperative agents manage to correctly identify and distrust non-cooperative agents, effectively trusting only (a subset of) their neighbors, then the sequence $\{x_i(t)\}_{t \ge 0}$ is convergent for all cooperative agents $i$, i.e., $x_i(t) \to x_i^* \in \mathbb{R}$, and either the agents reach an agreement, i.e., $x_i^* = x_j^*$ for all $i, j \in V$, or clustering, i.e., $x_i^* = x_j^*$ for all $i, j$ belonging to the same cluster. However, a smart non-cooperative agent, if undetected, may act as a leader and be followed by a set of cooperative agents (in which case the corresponding sequences $\{x_i(t)\}_{t \ge 0}$ need not be convergent).
Availability of Historical Data
When a memory-less distributed protocol like (2) is used by the cooperative agents $i \in V$ in a setting where some agents might be non-cooperative, the resulting asymptotic consensus can be easily manipulated by smart adversaries. Under this setting, the mechanism we propose for each agent with uncertain information on the nature of its neighbors is to observe the neighbors' shared data for a certain period of time, arrive at an estimate of the neighbors' trustworthiness, and subsequently use it in its update to arrive at a consensus. To facilitate a tractable problem formulation and to make the resulting consensus algorithm suitable for dynamic implementation, we consider a finite historical data of length $T \in \mathbb{Z}_{\ge 0}$, updated in a rolling-horizon fashion³. At all time steps $t$, every agent $i \in V$ is assumed to have access to the history of its own values and of its neighboring agents' values $x_j(t)$, $j \in N_i$, for the past $T$ time steps. That is, with $\kappa_{t,T} = \{t - l\}_{l=0}^{T-1}$, we have
$X^T_{i,j}(t) = \{x_j(k) \mid k \in \kappa_{t,T}\}$ for $j \in N_i$, $\quad X^T_{i,i}(t) = \{x_i(k) \mid k \in \kappa_{t,T}\}$, $\quad X^T_{i,N_i}(t) = \{X^T_{i,j}(t) \mid j \in N_i\}$.
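Operationally, each rolling-horizon history $X^T_{i,j}(t)$ can be maintained with a fixed-length buffer that discards samples older than $T$ steps; a minimal sketch (the class name is ours):

```python
from collections import deque

class HistoryBuffer:
    """Rolling-horizon storage of one neighbor's shared values X^T_{i,j}(t)."""
    def __init__(self, T):
        self.buf = deque(maxlen=T)   # samples older than T steps drop out automatically

    def record(self, value):
        self.buf.append(value)

    def history(self):
        return list(self.buf)

T = 5
h = HistoryBuffer(T)
for t in range(8):
    h.record(float(t))
print(h.history())   # last T = 5 values: [3.0, 4.0, 5.0, 6.0, 7.0]
```

An agent would keep one such buffer per inclusive neighbor, updated once per consensus step.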
Thus, the main purpose of this work is (i) to design a protocol that allows each cooperative agent $i \in V$ to estimate the trustworthiness of its neighbors at each time step $t$, given the history $X^T_{i,N_i}(t)$ and $X^T_{i,i}(t)$ for the past $T$ time steps; and (ii) to study the role of the estimated trustworthiness in solving the trust-based consensus problem for the cooperative agents, despite the presence of non-cooperative agents in the network.
SET MEMBERSHIP BASED EMPIRICAL ESTIMATION OF CONFIGURATION
In this section, we describe how to learn the trustworthiness of neighbors from their $T$-step historical data. For each agent $i \in V$, the configuration of the neighbors $N_i$ at time $t$, denoted by $\pi_t^i \in [0,1]^{d_i}$, encodes the degree of trustworthiness of every neighbor $j \in N_i$. A neighbor $j \in N_i$ is said to be completely trustworthy or completely untrustworthy if $[\pi_t^i]_j$ equals 1 or 0, respectively; any value in $[0,1]$ defines its degree of trustworthiness. Similarly, we define $\bar\pi_t^i = \mathbf{1}_{d_i} - \pi_t^i$ to be the configuration representing the degree of non-cooperativeness of the neighbors at time $t$. Note that $\pi_t^i$ is a random vector whose $j$th entry, corresponding to the neighbor $j \in N_i$, is supported on the compact interval $[0,1]$. Further, $\pi_t^i \sim \mathbb{P}_t^i$, with $\mathbb{P}_t^i$ denoting the true but unknown distribution of $\pi_t^i$, supported on the compact set $[0,1]^{d_i}$. Let $\mu_t^i \in \mathbb{R}^{d_i}$ and $\Sigma_t^i \in \mathbb{R}^{d_i \times d_i}$ denote the true mean and covariance, respectively, associated with $\mathbb{P}_t^i$. Although $\mathbb{P}_t^i$ is not readily available in practice, it can be estimated from data: using the $T$-step history, it is possible to form an empirical distribution $\hat{\mathbb{P}}_t^i$. Let us denote the mean and covariance of $\hat{\mathbb{P}}_t^i$ by $\hat\mu_t^i$ and $\hat\Sigma_t^i$, respectively. Here, $[\hat\mu_t^i]_j$ is agent $i$'s estimated trustworthiness at time $t$ of the neighboring agent $j \in N_i$, given the past $T$ time steps of historical data. We propose a set-membership-based approach to estimate the parameters of the empirical configuration distribution $\hat{\mathbb{P}}_t^i$ from the historical data $X_{i,i}^T(t)$ and $X_{i,N_i}^T(t)$. We base the following discussion on the presumption that a neighbor $j \in N_i$ is believed by agent $i$ to be more trustworthy if $j$'s values lie in the desired vicinity of agent $i$'s values throughout the considered past.
The $\epsilon$-Neighborhood Based Set Membership
To define a set-membership-based estimation, we require a set of confidence neighborhoods for all of the past $T$ time steps. Thus, for all $k \in \kappa_{t,T}$, the confidence neighborhood around $x_i(k)$ is defined as
$$B_{x_i(k)}(\epsilon_{i,k}) = \{y \in \mathbb{R} \mid \|y - x_i(k)\|_2 \le \epsilon_{i,k}\}, \quad (3)$$
where $\epsilon_{i,k} > 0$ is the confidence bound for agent $i$ at time $k$. To value the recent past more than the distant past, we assume that at each time step $t$ agent $i \in V$ is free to choose a decreasing sequence of confidence bounds $\epsilon_{i,k}$, $\forall k \in \kappa_{t,T}$, as follows:
$$\epsilon_{i,t-(T-1)} > \cdots > \epsilon_{i,t-2} > \epsilon_{i,t-1} > \epsilon_{i,t} > 0. \quad (4)$$
Using the confidence neighborhoods $B_{x_i(k)}(\epsilon_{i,k})$ and the information sets $X_{i,i}^T(t)$, $X_{i,N_i}^T(t)$, we define the set-membership counter for all time steps $k \in \kappa_{t,T}$ as follows,
$$N_k^i = \left\{ j \in N_i \mid x_j(k) \in B_{x_i(k)}(\epsilon_{i,k}) \right\}. \quad (5)$$
Here, $N_k^i \subseteq N_i$ accounts for the neighbors $j \in N_i$ who shared their values in the vicinity of agent $i$ at time step $k$ of the past history. It is possible that at a time $k \in \kappa_{t,T}$ the set $N_k^i$ turns out to be empty, or equal to $N_i$.
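The confidence neighborhoods (3) and the membership counter (5) can be sketched as follows; the histories and bounds below are illustrative toy values of our own, not data from the paper:

```python
import numpy as np

# Decreasing confidence bounds eps_{i,k}, eq. (4): the recent past is valued more.
eps = np.array([0.8, 0.6, 0.4, 0.2])
x_i = np.array([0.0, 0.1, 0.2, 0.3])          # agent i's own history over kappa_{t,T}
x_nbrs = {                                     # neighbors' histories X^T_{i,N_i}(t)
    "j1": np.array([0.1, 0.2, 0.25, 0.35]),   # stays close to agent i at every step
    "j2": np.array([2.0, 2.0, 2.0, 2.0]),     # far away at every step
}

def membership_sets(x_i, x_nbrs, eps):
    """N_k^i = {j in N_i : |x_j(k) - x_i(k)| <= eps_{i,k}}, eq. (5)."""
    return [
        {j for j, xj in x_nbrs.items() if abs(xj[k] - x_i[k]) <= eps[k]}
        for k in range(len(x_i))
    ]

N = membership_sets(x_i, x_nbrs, eps)
print(N)   # j1 lies inside every ball B_{x_i(k)}(eps_{i,k}); j2 never does
```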
Estimating Parameters of Configuration Distribution
Now, we illustrate how each agent i ∈ V estimates the trustworthiness of its neighbors at each time instant t.
First, a frequency counter and a discounted importance vector are defined for each agent as follows. A frequency counter $C_j^i(t)$ records, at each time step $t$, the time indices $k \in \kappa_{t,T}$ at which the neighbor $j \in N_i$ belonged to $N_k^i$:
$$C_j^i(t) = \left\{ k \in \kappa_{t,T} \mid j \in N_k^i \right\}, \quad j \in N_i.$$
A discounted importance vector $d_j^i \in \mathbb{R}^T$ qualitatively captures how a neighboring agent $j \in N_i$ behaved with respect to agent $i$, valuing the recent past more than the distant past through a discount factor $\nu_{i,t} \in (0,1)$:
$$[d_j^i]_k = \begin{cases} \nu_{i,t}^{\,t-k}, & \text{if } k \in C_j^i(t), \\ 0, & \text{if } k \notin C_j^i(t). \end{cases}$$
Finally, the estimated mean $\hat\mu_t^i$ and the estimated covariance $\hat\Sigma_t^i$ at time $t$ are computed as:
$$[\hat\mu_t^i]_j = \frac{1}{T} \sum_{k \in \kappa_{t,T}} [d_j^i(t)]_k, \quad j \in N_i, \quad (6)$$
$$\hat\Sigma_t^i = \frac{1}{T-1} \sum_{k \in \kappa_{t,T}} \left( D_k^i - \hat\mu_t^i \right) \left( D_k^i - \hat\mu_t^i \right)^\top, \quad (7)$$
where $D_k^i$ denotes the $k$th column of the variability matrix 4 $D^i \in \mathbb{R}^{d_i \times T}$, given by
$$D_{jk}^i = \frac{\|x_i(k) - x_j(k)\|}{1 + \|x_i(k) - x_j(k)\|}, \quad j \in N_i, \; k \in \kappa_{t,T}. \quad (8)$$
Then an estimate 5 of the trustworthy configuration is $\hat\pi_t^i = \hat\mu_t^i$, and subsequently $\hat{\bar\pi}_t^i = \mathbf{1}_{d_i} - \hat\pi_t^i$ is an estimate of the non-cooperative configuration. In the next section, we elucidate how an agent $i \in V$ can use the inferred trustworthiness of its neighbors $j \in N_i$ to update its value.
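A minimal sketch of the estimates (6)-(8), continuing the toy example above; the membership flags, discount factor, and histories are assumptions of ours, and the variability entries follow the bounded-distance reading $d/(1+d)$ of eq. (8):

```python
import numpy as np

T, nu = 4, 0.9                            # history length, discount factor nu_{i,t}
ks = np.arange(T)                         # k = t-T+1, ..., t re-indexed as 0..T-1
in_ball = {"j1": [True, True, True, True],    # whether j is in N_k^i, from eq. (5)
           "j2": [False, True, False, False]}

mu_hat = {}
for j, flags in in_ball.items():
    # discounted importance [d^i_j]_k = nu^{t-k} when k is in C_j^i(t), else 0;
    # the most recent step gets exponent 0, the oldest gets exponent T-1
    d = np.where(flags, nu ** (T - 1 - ks), 0.0)
    mu_hat[j] = d.sum() / T               # eq. (6): estimated trust of neighbor j

# eq. (8): bounded variability D_{jk} = |x_i(k)-x_j(k)| / (1 + |x_i(k)-x_j(k)|)
x_i = np.array([0.0, 0.1, 0.2, 0.3])
x_j = np.array([0.1, 0.2, 0.25, 0.35])
diff = np.abs(x_i - x_j)
D_row = diff / (1.0 + diff)               # one row of D^i, values in [0, 1)
print(mu_hat, D_row.round(3))
```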
DESIGN OF AN HISTORICAL DATA-DRIVEN DISTRIBUTED CONSENSUS PROTOCOL
In this section, we use the obtained trustworthiness information about an agent's neighbors to design a distributed consensus protocol. After defining the HDD protocol, we discuss the effect of various parameter variations on the consensus it obtains.
An HDD Distributed Consensus Protocol
For every collaborative agent $i \in V$, $\hat\mu_t^i$ denotes the estimated trustworthiness of its neighbors given their past historical data. Further, every collaborative agent $i$ has to trust itself completely at all time steps. Therefore, at each time step $t$, we form the augmented trust vector $z_i^\dagger(t) \in \mathbb{R}^{|J_i|}$ as
$$z_i^\dagger(t) = \begin{bmatrix} \hat\mu_t^i \\ 1 \end{bmatrix}, \quad \text{since } J_i = N_i \cup \{i\}. \quad (9)$$
Then, every collaborative agent $i \in V$ updates its state using the following proposed History-Data-Driven (HDD) distributed consensus protocol:

4 Several choices exist for defining the matrix $D^i$ other than (8). 5 This style of inferring trustworthiness is potentially vulnerable to smarter adversaries, as they can manipulate the estimated parameters $\hat\mu_t^i$ and $\hat\Sigma_t^i$ to drive them away from their respective true values $\mu_t^i$ and $\Sigma_t^i$. Future research will seek to address this using the distributionally robust stochastic program (DRSP) model described in Delage and Ye (2010).
$$x_i(t+1) = \sum_{j \in J_i} \underbrace{\frac{[z_i^\dagger(t)]_j}{\|z_i^\dagger(t)\|_1}}_{:=\, w_{ij}(t)} x_j(t), \quad (10)$$
where the weights $w_{ij}(t) \in [0,1]$, $\forall i \in V$, $j \in J_i$, and $\sum_{j \in J_i} w_{ij}(t) = 1$. Moreover, $w_{ij}(t) = 0$ if and only if $j \notin J_i$ or $j \notin N_k^i$ for all $k \in \kappa_{t,T}$. Remark 1. The weights $w_{ij}(t)$ given by (10) translate the trustworthiness information about neighboring agents into weights for the distributed consensus update rule. The HDD protocol is in fact a nonlinear consensus update, like the W-MSR protocol, since the weight $w_{ij}(t)$ computed by agent $i$ for its neighbor $j \in N_i$, under an informed choice of the parameters $T$, $\epsilon_{i,k}$ and $\nu_{i,t}$, may turn out to be zero based on the inference from the historical data, meaning that at time $t$ agent $i$ neglects neighbor $j$'s contribution. Remark 2. Following the proof of Proposition 1 in Morarescu and Girard (2011), it is possible to prove that, no matter the choice of $T \ge 1$ and $\nu_{i,t}$, the sequence $\{x_i(t)\}_{t \ge 0}$ is convergent for all cooperative agents $i$, that is, $x_i(t) \to x_i^*$, if, for each cooperative agent $i$, the confidence bounds $\{\epsilon_{i,k}\}_{k \in \kappa_{t,T}}$ at each instant $t$ are chosen so that $\epsilon_{i,t-T+1}$ is time-decaying and eq. (4) is satisfied 6. Possible choices are, for instance, $\epsilon_{i,t-T+1} = R_i e^{-\rho_i (t-T+1)}$ or $\epsilon_{i,t-T+1} = R_i \rho_i^{\,t-T+1}$, where $R_i \in \mathbb{R}_{\ge 0}$ and $\rho_i \in (0,1)$.
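One HDD update step, eqs. (9)-(10), can be written out numerically; the trust estimates and neighbor values below are toy numbers of our own, not from the paper's experiments:

```python
import numpy as np

mu_hat = np.array([0.86, 0.20])     # estimated trust of two neighbors j1, j2
x_nbrs = np.array([1.0, 5.0])       # neighbors' current values x_j(t)
x_self = 0.0                        # agent i's own value x_i(t)

z = np.append(mu_hat, 1.0)          # eq. (9): agent trusts itself fully
w = z / np.abs(z).sum()             # eq. (10): weights w_ij(t), sum to 1
x_next = w @ np.append(x_nbrs, x_self)
print(w.round(3), round(x_next, 3))
```

Note how the untrusted neighbor j2 (trust 0.20) pulls the update far less than its extreme value 5.0 would under uniform weighting.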
Effects of Parameter Variations
The design parameters of our algorithm are $T \in \mathbb{Z}_{\ge 0}$ ($T$ finite), $\{\nu_{i,t}\}_{i \in V, t \in \mathbb{R}_+}$, and $\{\epsilon_{i,k}\}_{i \in V, k \in \kappa_{t,T}}$. The parameter $\nu_{i,t} \in (0,1)$ can be regarded as the forgetting factor of agent $i \in V$ at time $t$: it determines how much an agent is willing to remember of its neighbors' past interactions, given the history length $T$ (defined respecting the available memory constraints). For instance, $\nu_{i,t}$ closer to 1 indicates that the agent emphasizes its past interactions with its neighbors, so its memory of them fades rather slowly over time; $\nu_{i,t}$ closer to 0 indicates that the agent forgets quickly. The confidence bound $\epsilon_{i,k}$ at time $k$ reflects agent $i$'s freedom to choose the desired vicinity around its own values within which it values its neighbors. For instance, a cautious agent $i \in V$ would tend to keep $\epsilon_{i,k}$ small even in its distant past, while a relaxed agent may choose a generous $\epsilon_{i,k}$, $\forall k \in \kappa_{t,T}$. Given that $N_k^i$ for an agent $i \in V$ depends directly upon $\{\epsilon_{i,k}\}_{k \in \kappa_{t,T}}$, which in turn defines $\hat\mu_t^i$, it is clear that the resulting consensus is directly impacted by the confidence bounds an agent chooses according to its behavioural traits. Although the efficacy of the proposed update protocol is limited by the memory constraint defining the parameter $T$, the freedom in the choice of the other design parameters makes it both an interesting and powerful consensus protocol. Remark 3. The collected historical data can also be used to predict the neighbors' values using machine learning techniques; if a neighbor shares a value close to the predicted one, agent $i$ can allocate a higher trust to value its contribution more, thereby defining a data-driven predictive consensus protocol.
Another variation of the HDD protocol would be to remove neighbors whose trustworthiness falls below a specified trust threshold and subsequently use only the remaining neighbors' values. Future work will seek to address the above variations and to investigate adaptive designs of the discount factor $\nu_{i,t}$ and the confidence bounds $\epsilon_{i,k}$, $\forall k \in \kappa_{t,T}$.
A NUMERICAL EXAMPLE
In this section, we present the simulations performed to demonstrate our proposed HDD distributed consensus protocol. We considered an undirected graph $G = (V, E)$ with $|V| = N = 13$ agents. The first 10 agents were assumed to be cooperative, denoted by $V_c = \{1, \dots, 10\}$, and the rest, $V_{nc} = V \setminus V_c$, to be non-cooperative. The cooperative nodes $i \in V_c$ were randomly connected with probability $p = 0.4$ (here, $p$ denotes the probability of an edge between two nodes), while each non-cooperative node $i \in V_{nc}$ was connected to all cooperative nodes. All agents were given a random fixed history of data generated for the considered history length $T$. The HDD protocol was run for a total of $T_t = 200$ time steps, with the new state trajectories of neighboring agents being used to update the historical data in a rolling-horizon fashion. Each agent $i \in V$ was given random confidence bounds (sorted decreasingly) for all $t$, where each $\epsilon_{i,k}$ was drawn from $U[\underline\epsilon, \bar\epsilon]$ with the lower limit set to the constant value $\underline\epsilon = 0.01$ and the upper limit varied as $\bar\epsilon \in \{0.5, 1, 1.5\}$ to observe different behaviours. Every agent $i \in V$ was given the same discount factor $\nu_{i,t} = \nu \in (0,1)$ at a given time step $t$. The HDD protocol was executed by varying one of the parameters $T$, $\nu$ while keeping the rest fixed.
The results of our simulation are shown in Figure 1. Every non-cooperative agent is assumed to follow a random state-update rule. All sub-figures of Figure 1 show the discount-factor variations with $\nu \in \{0.05, 0.50, 0.95\}$. With low values of the discount factor we observed clustering behaviour between agents, while with higher values of the discount factor the usual consensus convergence is observed. This is due to the fact that higher values of $\nu$ enable the agents to remember the past interactions of their neighbors to a greater extent. The effect of varying the confidence bounds is shown in sub-figures 1a, 1b and 1c, respectively for $\bar\epsilon = 0.5, 1.0, 1.5$. Higher values of the confidence bounds encouraged the agents to cooperate with each other, while lower values resulted in delayed cooperation and in clustering behaviour between agents. Finally, when the history length was reduced to $T = 5$, the agents converged quickly thanks to the small memory and a large enough confidence bound. If the agents were to safeguard themselves against non-cooperative agents, their best bet would be to use small confidence bounds; however, such a strategy might lead to clustering behaviour. This illustrates the definite trade-off that the agents have to observe if they plan on safely interacting with their neighbors. A detailed investigation of the effects of the parameters on the resulting HDD protocol weights and the final consensus value is available in the appendix of Renganathan et al. (2022). The code used to obtain the simulation results is made publicly available at https://github.com/venkatramanrenganathan/HDDConsensus.
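The network generation described above can be sketched as follows; this is our own reimplementation with an arbitrary seed, not the released code:

```python
import numpy as np

# 13 agents: the first 10 cooperative, connected among themselves with edge
# probability p = 0.4; each of the 3 non-cooperative agents is connected to
# all cooperative ones.
rng = np.random.default_rng(0)
N, Nc, p = 13, 10, 0.4
A = np.zeros((N, N), dtype=int)            # adjacency matrix of G = (V, E)
for i in range(Nc):
    for j in range(i + 1, Nc):
        if rng.random() < p:               # random edge among cooperative agents
            A[i, j] = A[j, i] = 1
for i in range(Nc, N):                     # non-cooperative agents see everyone
    A[i, :Nc] = A[:Nc, i] = 1
assert (A == A.T).all() and A.trace() == 0  # undirected, no self-loops
print(A[:Nc, Nc:].sum())                   # 30 edges cooperative <-> non-coop
```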
CONCLUSION & FUTURE OUTLOOKS
We proposed a novel historical-data-driven distributed consensus protocol for uncertain networks. Our approach formulates the uncertainty about the trustworthiness of an agent's neighbors as a random vector termed the configuration, and learns the parameters defining its unknown but true distribution via a history-data-driven approach. Subsequently, the trustworthiness of all neighbors of an agent is inferred, leading to the proposed HDD distributed consensus protocol. Our simulation results demonstrated the effectiveness of the proposed idea. As future work, we seek to investigate the moment uncertainty, along with the losses due to mistakenly associating wrong trust with neighbors given their historical data, using distributionally robust optimization techniques. Other promising directions are investigating adaptive designs for the confidence bounds and discount factors, and designing a history-data-driven predictive consensus algorithm.
(c) HDD protocol (10) with $T = 15$, $\{\epsilon_{i,k}\}_{k \in \kappa_{t,T}} \sim U[0.01, 1.50]$. (d) HDD protocol (10) with $T = 5$, $\{\epsilon_{i,k}\}_{k \in \kappa_{t,T}} \sim U[0.01, 1.00]$.

Fig. A.2. Effect of parameter variations for the HDD protocol (10): matrix $W(t)$. The color map indicates cooperative agents (i.e., $i \in V_c$). Each panel shows the elements $w_{ij}(t)$ for $i \in V_c$ (i.e., cooperative agents) and $j = 2, 11, 12, 13$ (top-left, top-right, bottom-left, bottom-right, respectively) at time $t = 200$, for increasing values of $\nu \in \{0.05, 0.1, \dots, 0.95\}$, with confidence bounds $\{\epsilon_{i,k}\}_{k \in \kappa_{t,T}} \sim U[\underline\epsilon, \bar\epsilon]$.

Fig. A.3. An $\epsilon$-neighborhood based set membership for agent 1, namely $B_{x_1(k)}(\epsilon_{i,k})$ with $\epsilon_{i,k} = \epsilon > 0$, $\forall k \in \kappa_{t,T}$, corresponding to the past $T$ time steps of historical data. In our work, we propose to have the balls shrink as time moves forward, along with the other parameters.
V. Renganathan & A. Fontan contributed equally. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program under grant agreement No 834142 (Scalable Control).

For instance, an agent $i \in V$ can act non-cooperatively by applying a random update function $f_i(\cdot)$ other than (2) at all time steps. 3 Future work will seek to understand the behaviour with a history that grows over time.

6 Future research will explore this direction and seek to address provable convergence to finite-time trust-based consensus.
(a) HDD protocol (10) with $T = 15$, $\{\epsilon_{i,k}\}_{k \in \kappa_{t,T}} \sim U[0.01, 0.50]$. (b) HDD protocol (10) with $T = 15$, $\{\epsilon_{i,k}\}_{k \in \kappa_{t,T}} \sim U[0.01, 1.00]$.
Appendix A. PARAMETER VARIATIONS ON THE HDD PROTOCOL

A basic illustration of our approach with constant $\epsilon_{i,k}$, $k \in \kappa_{t,T}$, is shown in Fig. A.3. Now, consider the same example presented in Section 5 and shown in Fig. 1. The resulting trust-based consensus (or clustering) value at time $t = 200$ is plotted in Fig. A.1. When the forgetting factor was small, in addition to the confidence bounds $\epsilon_t$ being small, we observed a "clustering" behavior: this is due to the fact that the HDD protocol then values only the most recent past. For instance, for $\nu < 0.1$, $\nu^{t-k} \approx 0$ for $k = t-T+1, \dots, t-2$, that is, the agents "forget quickly". Instead, when the forgetting factor is close to 1, the states tend to converge to consensus, thanks to the ability of the agents to remember past events (such as remembering agent $j$ in its $\epsilon_k$-neighborhood for a certain time $k \in \{t-T+1, \dots, t\}$). Fig. A.2 shows the elements of the $j$th ($j = 2, 11, 12, 13$) column of $W(t)$ at $t = 200$, for increasing values of $\nu$. Remember that each element $w_{ij}$ ($i \in V_c$, $j = 2, 11, 12, 13$) represents the trust that agent $i$ has in its neighbor $j$; the intuition is that if there exists a value of $\nu$ such that $w_{ij}(200) = 0$ for all $i \in V_c$ and $j \in V_{nc}$, then the cooperative agents correctly decide not to trust the non-cooperative agents. For instance, Fig. A.2 shows that the non-cooperative agents 11 and 12 are "detected" for most values of $\nu$ (except by agent 7, in yellow, which almost always believes the non-cooperative neighbors $j \in V_{nc}$, no matter what values of confidence bounds and discount factor are used). One could think that $\nu \approx 1$ corresponds to an optimal choice; this is however not the case if a non-cooperative agent adopts a smart behavior (see agent 13 and the bottom-right panel of all sub-figures in Fig. A.2) and obtains the trust of all the other agents.
Finally, it is interesting to notice that in order to achieve cooperation, an agent needs to trust its neighbors' states and lower the certainty in its own state (see the element $w_{22}(200)$ depicted in blue in the top-left panel of all four sub-figures in Fig. A.2). Specifically, a cooperative agent (like agent 2) under smaller confidence bounds is initially stubborn for small discount factors and then relaxes its certainty in itself for higher discount factors, resulting in consensus. On the other hand, when the confidence bounds increased, the rate at which it relaxed its certainty increased and, as a result, faster cooperation was observed. Under the effect of different history lengths, depicted in the top-right and bottom-right sub-figures of Fig. A.2, we see that with a shorter history length the cooperative agents relax their certainties much faster, leading to faster cooperation, since their memory is smaller. Note that if the cooperative agents relax their certainty and cooperate faster, it may come at the expense of potentially starting to believe non-cooperative neighbors. Future work will seek to design an adaptive sequence of confidence bounds and discount factors at each time step to rectify this phenomenon and encourage agents to include neighbors with more distant opinions.

Fig. A.1. Trust-based consensus (or clustering) value at time $t = 200$: clustering and consensus behaviour is consistently observed on all four subplots, for the lower and higher discount-factor settings, respectively.
Agmon, N. and Peleg, D. (2004). Fault-tolerant gathering algorithms for autonomous mobile robots. 1070-1078.
Blondel, V.D., Hendrickx, J.M., and Tsitsiklis, J.N. (2009). On Krause's multi-agent consensus model with state-dependent connectivity. IEEE Transactions on Automatic Control, 54(11), 2586-2597.
Delage, E. and Ye, Y. (2010). Distributionally robust optimization under moment uncertainty with application to data-driven problems. Operations Research, 58(3), 595-612.
Dibaji, S.M., Ishii, H., and Tempo, R. (2018). Resilient randomized quantized consensus. IEEE Transactions on Automatic Control, 63(8), 2508-2522.
Fax, J. and Murray, R. (2004). Information flow and cooperative control of vehicle formations. IEEE Transactions on Automatic Control, 49(9), 1465-1476.
Fontan, A. and Altafini, C. (2021). The role of frustration in collective decision-making dynamical processes on multiagent signed networks. IEEE Transactions on Automatic Control, to appear.
Hegselmann, R. and Krause, U. (2002). Opinion dynamics and bounded confidence models, analysis and simulation. Journal of Artificial Societies and Social Simulation, 5(3), 1-33.
Lamport, L., Shostak, R., and Pease, M. (2019). The Byzantine generals problem. In Concurrency: the Works of Leslie Lamport, 203-226.
LeBlanc, H.J., Zhang, H., Koutsoukos, X., and Sundaram, S. (2013). Resilient asymptotic consensus in robust networks. IEEE Journal on Selected Areas in Communications, 31(4), 766-781.
Liang, H., Yang, Y., and Wang, X. (2013). Opinion dynamics in networks with heterogeneous confidence and influence. Physica A: Statistical Mechanics and its Applications, 392(9), 2248-2256.
Lorenz, J. (2009). Heterogeneous bounds of confidence: Meet, discuss and find consensus! Complexity, 15(4), 43-52.
Fig. 1. Effect of parameter variations for the HDD protocol (10): states' trajectories $x(t)$. The color map indicates cooperative agents (i.e., $i \in V_c$), while non-cooperative agents (i.e., $i \in V_{nc}$) are shown in grey. Each panel shows the evolution of the agents' states $x_i(t)$, $i \in V$, for $\nu = 0.05, 0.5, 0.95$, with the confidence bounds $\{\epsilon_{i,k}\}_{k \in \kappa_{t,T}} \sim U[\underline\epsilon, \bar\epsilon]$ represented by shaded areas. (a),(b),(c): Effect of varying the confidence bounds; here $T = 15$, $\underline\epsilon = 0.01$ and $\bar\epsilon \in \{0.5, 1.0, 1.5\}$. (d): Effect of varying the history length $T$; here $T = 5$, $\underline\epsilon = 0.01$ and $\bar\epsilon = 1$.
Morarescu, I.C. and Girard, A. (2011). Opinion dynamics with decaying confidence: Application to community detection in graphs. IEEE Transactions on Automatic Control, 56(8), 1862-1873.
Olshevsky, A. and Tsitsiklis, J.N. (2009). Convergence speed in distributed consensus and averaging. SIAM Journal on Control and Optimization, 48(1), 33-55.
Pasqualetti, F., Bicchi, A., and Bullo, F. (2012). Consensus computation in unreliable networks: A system theoretic approach. IEEE Transactions on Automatic Control, 57(1), 90-104.
Ren, W. and Beard, R.W. (2008). Distributed Consensus in Multi-vehicle Cooperative Control. Springer, London.
Renganathan, V., Fathian, K., Safaoui, S., and Summers, T. (2021). Spoof resilient coordination in distributed and robust robotic networks. IEEE Transactions on Control Systems Technology, 1-8.
Renganathan, V., Fontan, A., and Ganapathy, K. (2022). History data-driven distributed consensus in networks. URL https://github.com/venkatramanrenganathan/HDDConsensus.
Saldaña, D., Prorok, A., Sundaram, S., Campos, M.F.M., and Kumar, V. (2017). Resilient consensus for time-varying networks of dynamic agents. In 2017 American Control Conference (ACC), 252-258.
Sundaram, S. and Hadjicostis, C.N. (2008). Distributed function calculation via linear iterations in the presence of malicious agents - part II: Overcoming malicious behavior. In 2008 American Control Conference, 1356-1361.
Tahbaz-Salehi, A. and Jadbabaie, A. (2006). On consensus over random networks. In 44th Annual Allerton Conference.
Tahbaz-Salehi, A. and Jadbabaie, A. (2008). A necessary and sufficient condition for consensus over random networks. IEEE Transactions on Automatic Control, 53(3), 791-795.
Yemini, M., Nedic, A., Goldsmith, A.J., and Gil, S. (2021). Characterizing trust and resilience in distributed consensus for cyberphysical systems. IEEE Transactions on Robotics, 1-21.
Yu, S. and Vorobeychik, Y. (2019a). Distributionally robust removal of malicious nodes from networks. arXiv preprint arXiv:1901.11463.
Yu, S. and Vorobeychik, Y. (2019b). Removing malicious nodes from networks. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, 314-322.
Visibility graphs for robust harmonic similarity measures between audio spectra

Delia Fano Yela, Dan Stowell, Mark Sandler
Centre for Digital Music, Queen Mary University of London, E1 4NS, London, UK

arXiv:1903.01976

Abstract: Graph theory is emerging as a new source of tools for time series analysis. One promising method is to transform a signal into its visibility graph, a representation which captures many interesting aspects of the signal. Here we introduce the visibility graph for audio spectra. Such visibility graph captures the harmonic content whilst being resilient to broadband noise. We propose to use a structural distance between two graphs as a novel harmonic-biased similarity measure. We present experiments demonstrating the utility of this distance measure for real and synthesised audio data. The source code is available online.
I. INTRODUCTION
Graphs are a tool of growing interest in the signal processing community for data representation and analysis. Their structure offers a new perspective, often unveiling non-trivial properties of the data they represent. In particular, time series analysis has greatly benefited from graph representations, as they provide a mapping able to deal with the non-linearities and multi-scaling issues present in multiple applications [2], [14], [13], [3].
A popular mapping from time series to complex networks is the visibility graph [6]. Every node in such a graph represents a datum of the time series, and two nodes are connected if they fulfil a visibility criterion analogous to the visibility between points on a landscape. The visibility between data depends only on their relative height and location, creating a graph structure capturing the links between data. The success of this simple visibility mapping is partly due to its powerful properties. Visibility graphs preserve characteristics of the time series such as periodicity [12], and are invariant to several transformations of the time series, such as vertical and horizontal rescaling. The visibility graph was introduced as a time series analysis tool [6], [7] and has been successfully employed in several applications such as financial series analysis [11].
Here we introduce visibility graphs applied to magnitude spectra. Such a graph preserves all the properties of visibility graphs, as its construction remains the same. Therefore, similarly to time series, the visibility graph of a spectrum may reveal hidden structures in the signal that are not apparent in the magnitude domain. In particular, we focus on musical audio signals, and we propose the spectral visibility graph degree as a novel representation for audio analysis.
In the spectrum of audio signals, peaks often correspond to harmonic events, while percussive or burst-like events have a broadband nature. Broadband content can be a nuisance in music analysis tasks when the target is the harmonic content of the signal. In particular, tasks that require distances between harmonic content face a challenge when a broadband event overpowers the targeted harmonic one [4]. Conversely, we will show that the representation we propose has properties which preserve the salience of harmonic peaks in the presence of broadband events. We therefore propose this representation for robust harmonic similarity measures.
In the experiments section we demonstrate that conventional distance metrics fail to recognise the harmonic content in the spectrum in the presence of broadband noise, whereas the proposed visibility representation remains faithful to the harmonic content. Furthermore, we show how real-world scenarios could also benefit from such a visibility representation, in a final source-specific query task retrieving vocals within a musical mixture.
II. VISIBILITY GRAPHS
A graph consists of a non-empty finite set of elements called nodes and a finite set of edges joining pairs of nodes together. If the set of edges consists of ordered pairs of distinct nodes, the graph is called a digraph and it is said to be directed. On the other hand, if the connection between nodes is symmetric, the graph is said to be undirected [1]. The visibility graph described in [6] is associated to a time series, although valid for any ordered sequence. Every datum is defined as a node in the graph, and two nodes are joined by an edge if they are visible to each other. The visibility between two points (t_a, y_a) and (t_b, y_b) of a given time series y = f(t) of length N is determined by the following geometrical criterion:
y_c < y_a + (y_b − y_a) (t_c − t_a) / (t_b − t_a)

where (t_c, y_c) is any intermediate point such that t_a < t_c < t_b.
In other words, two points of a given time series are said to 'see' each other if one can draw a straight line joining them without intercepting any intermediate data height. This visibility is referred to as 'natural' visibility, as other kinds exist [10]. Here we will use the defined 'natural' visibility and simply refer to it as 'visibility'. Since such visibility is symmetric (both points either see each other or do not), the visibility graph is an undirected graph. Visibility graphs are always connected (i.e. every node has at least one edge), as every datum always sees at least its neighbours. Note that the visibility transformation is not reversible, so it finds its greater utility as an analysis tool.
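As a concrete illustration, the visibility criterion above can be transcribed directly into a short routine. This is our own naive sketch in Python/NumPy (the pairwise check is unoptimized; function and argument names are ours, not from the paper's released implementation):

```python
import numpy as np

def visibility_adjacency(y, t=None):
    """Naive 'natural' visibility graph of a sequence.

    A[i, j] = 1 iff points (t[i], y[i]) and (t[j], y[j]) see each other,
    i.e. every intermediate point lies strictly below the chord joining them.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    t = np.arange(n, dtype=float) if t is None else np.asarray(t, dtype=float)
    A = np.zeros((n, n), dtype=int)
    for a in range(n):
        for b in range(a + 1, n):
            # Height of the chord from a to b at every intermediate abscissa.
            chord = y[a] + (y[b] - y[a]) * (t[a + 1:b] - t[a]) / (t[b] - t[a])
            if np.all(y[a + 1:b] < chord):   # empty range => adjacent points see each other
                A[a, b] = A[b, a] = 1
    return A
```

For example, `visibility_adjacency([3, 1, 2])` connects all three pairs (the chord from 3 to 2 passes above the intermediate 1), while in the monotone sequence `[1, 2, 3]` the endpoints are blocked by the middle point.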
We can represent a visibility graph in the form of a square binary matrix A ∈ B^{N×N} (i, j = 1, 2, 3, ..., N) such that:

A(i, j) = 1 ⇔ nodes i and j are visible,
A(i, j) = 0 otherwise.
This matrix A is referred to as 'adjacency' matrix. Since the visibility graph is undirected, the corresponding adjacency matrix will be symmetric.
The degree k(i) of a node i is defined as the count of its edges, in other words, the number of nodes connected to it. In the case of visibility graphs, the degree of a node indicates the number of visible nodes or data points. For example, in Figure 1, the first value of the sequence only sees its neighbour and so its degree will be equal to 1. However, the maximum data point of the sequence in fifth position has a wider view and therefore has a larger degree value.
The degree of a node can easily be obtained from the adjacency matrix as it corresponds to the sum of either the row or column (indifferent in this symmetric case) storing the edges of that node. We can define a degree vector k ∈ N N containing the degrees of all nodes i = 1, 2, 3, ..., N of the visibility graph with adjacency matrix A as follows:
k(i) = Σ_{j=1}^{N} A(i, j)
We also define the degree distribution p, indicating how often the different degree values appear in the degree vector (i.e. histogram). If the values are normalised by the total number of nodes in the graph, p will represent the probability of the different degree values of existing in that graph.
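In code, both quantities come straight from the adjacency matrix. A minimal sketch (our helper names, not from the paper's released implementation):

```python
import numpy as np

def degree_vector(A):
    """k(i) = number of edges of node i = sum of row i of the binary adjacency matrix."""
    return np.asarray(A).sum(axis=1)

def degree_distribution(A):
    """Normalised degree histogram: p[d] = fraction of nodes having degree d."""
    k = degree_vector(A)
    counts = np.bincount(k, minlength=int(k.max()) + 1)
    return counts / counts.sum()
```

For the path-graph adjacency [[0,1,0],[1,0,1],[0,1,0]] this gives k = [1, 2, 1] and p = [0, 2/3, 1/3].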
Visibility graphs are invariant to horizontal and vertical translation as the absolute value of the data points have no effect on the visibility (only their relative values matter). For instance, as illustrated in Figure 1, the visibility of a signal with and without a DC offset is equal and so the degree value for each node remains the same in both cases. Furthermore, rescaling of both horizontal and vertical axes also has no effect on the visibility. If the signal is time stretched, the relative position of the points remains the same and so does the visibility.
III. SPECTRA VISIBILITY GRAPHS FOR AUDIO SIGNALS
Inspired by the invariant properties of visibility graphs, we propose to employ such a mapping for magnitude spectra, introducing visibility graphs to spectral analysis. We define the visibility graph of a given magnitude spectrum s̃ = f(ω), where s ∈ C^F is the complex spectrum and ω is frequency, following the construction of visibility graphs for time series.
Every frequency bin corresponds to a node, and two nodes will be connected together if the associated frequency bins (ω_a, s̃_a) and (ω_b, s̃_b) see each other, fulfilling the visibility criterion:

s̃_c < s̃_a + (s̃_b − s̃_a) (ω_c − ω_a) / (ω_b − ω_a)

where (ω_c, s̃_c) is any intermediate frequency bin such that ω_a < ω_c < ω_b.
Similarly to time series visibility graphs, we can analogously construct its associated adjacency matrix A and find the degree k and degree distribution p vectors, such that for f = 1, 2, ..., F frequency bins in the spectrum
k(f ) = F j=1 A(f, j)
Similarly to the degree vector of time series, this degree vector remains invariant under several transformations of the spectrum, including vertical and horizontal translation as well as vertical and horizontal rescaling.
In the case of audio signals, a horizontal rescaling of the spectrum would correspond to a change in pitch, and a vertical translation to the presence of uniform broadband noise. Being resilient to such transformations is a major advantage in the audio analysis of applications where the relation between peaks (i.e. harmonic content) is the subject of interest. Therefore, we propose the degree vector k as an alternative representation for magnitude spectra.
Taking a step further, let S ∈ C^{F×T} be the spectrogram of an audio time signal y, and S̃ its magnitude, where F is the number of frequency bins and T the number of time frames. Here, the proposed representation K ∈ N^{F×T} will take a matrix form such that every column t = 1, 2, ..., T will correspond to the degree vector k_t of the visibility graph of frame t of S̃ (Figure 2). More precisely, taking A_t ∈ B^{F×F} as the adjacency matrix of the visibility graph of the magnitude spectrum of time frame t (i.e. column t) of the spectrogram S̃, we define the degree matrix K associated to S̃ such that:
K(f, t) = Σ_{j=1}^{F} A_t(f, j)
where f = 1, 2, ..., F and t = 1, 2, ..., T .
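Treating each time frame independently, the degree matrix K can be sketched as follows (our own naive per-column computation; in practice the faster visibility algorithm discussed in Section IV would replace the inner routine):

```python
import numpy as np

def visibility_degrees(x):
    """Degree vector of the natural visibility graph of one magnitude spectrum."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = np.zeros(n, dtype=int)
    for a in range(n):
        for b in range(a + 1, n):
            chord = x[a] + (x[b] - x[a]) * (np.arange(a + 1, b) - a) / (b - a)
            if np.all(x[a + 1:b] < chord):   # bins a and b see each other
                k[a] += 1
                k[b] += 1
    return k

def degree_matrix(S_mag):
    """K(f, t): visibility degree of frequency bin f in time frame t (F x T input)."""
    S_mag = np.asarray(S_mag, dtype=float)
    return np.column_stack([visibility_degrees(col) for col in S_mag.T])
```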
Even though spectral peaks tend to take high values in the proposed representation, their prominence will depend on their surroundings. In other words, peaks close to each other will have less height than sparse ones, such as the harmonics of a musical note. Looking at Figure 1, one may notice how the height at position 4 loses prominence in the degree domain, going from being the second maximum to being equal to lesser heights (7 and 10); this is explained by its proximity to the maximum peak at 5. On the other hand, the heights at positions 2 and 8 (equally spaced from the maximum), surrounded by smaller heights, gain relevance in the degree domain. Therefore, one can think of the transformation into the degree domain, and so into the proposed representation, as a sort of compression that enhances sparse peaks (i.e. harmonics), as visible in Figure 2.
As an audio analysis tool, the structure and properties of the proposed mapping directly relate to harmonic content analysis, and so we propose to examine the common case where both harmonic and broadband events overlap. In such scenario, the harmonic energy in the spectrum will remain prominent up to a certain signal-to-noise ratio (SNR), taking the harmonic event as the signal of interest and the broadband as noise. If the broadband event overpowers the harmonic content, it will overcast the harmonic contribution in the magnitude spectrum, complicating the analysis of its harmonic content.
A common task in audio analysis is the search for similar harmonic content between spectra (e.g. time frames in a spectrogram). In the presence of powerful additive broadband noise, most distance metrics fail to recognise the similarity of the harmonic content as they treat all spectral energy as equivalent. Such scenario relates to a vertical translation of the magnitude spectrum and so the harmonic event spectrum with and without additive broadband noise should present a comparable visibility graph and degree vector. Therefore, unlike in the magnitude spectrum (e.g. Figure 2.A), the harmonic peaks in the proposed representation (e.g. Figure 2.B) will remain salient in presence of additive broadband events, and so, one can now use standard distance metrics to reliably measure harmonic similarity. Hence we propose the spectral visibility graph degree as a novel domain for robust harmonic similarity measure in audio signals.
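This robustness can be checked directly: adding a uniform noise floor is a vertical translation of the magnitude spectrum, so the degree vector is strictly unchanged while magnitude-domain distances grow with the floor. A small self-contained sketch (the toy spectrum values are ours for illustration):

```python
import numpy as np

def visibility_degrees(x):
    """Visibility-graph degree of each bin of a magnitude spectrum (naive check)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = np.zeros(n, dtype=int)
    for a in range(n):
        for b in range(a + 1, n):
            chord = x[a] + (x[b] - x[a]) * (np.arange(a + 1, b) - a) / (b - a)
            if np.all(x[a + 1:b] < chord):
                k[a] += 1
                k[b] += 1
    return k

spec = np.array([0.1, 2.0, 0.3, 0.2, 1.5, 0.1, 0.4, 1.0, 0.2])  # toy harmonic-like spectrum
noisy = spec + 0.8                                              # uniform broadband floor

d_mag = np.linalg.norm(spec - noisy)        # magnitude-domain distance grows with the floor
k_clean = visibility_degrees(spec)
k_noisy = visibility_degrees(noisy)
d_deg = np.linalg.norm(k_clean - k_noisy)   # degree-domain distance: exactly 0
```

Here d_mag = 0.8 · sqrt(9) = 2.4, while the degree vectors coincide, so d_deg = 0.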
IV. EXPERIMENTS
To evaluate the proposed representation of audio signals for harmonic similarity measures we performed two experiments, one with synthesised data and a second one with real musical recordings. In both experiments the task is to find the correct nearest neighbour of a given harmonic event. We use three different representations of the audio signals: the magnitude spectrum, the spectral visibility graph degree and the spectral visibility graph degree distribution. Our proposed representation is the spectral visibility graph degree; however, we included the degree distribution in the experiments as it has an additional pitch invariance that could benefit the task (i.e. the absolute location of the peaks is ignored). Our goal is to compare these representations using different distance metrics and conclude on which is more appropriate for harmonic similarity measurements. We use the mean reciprocal rank (MRR) as the evaluation metric, as we know beforehand which is the correct nearest neighbour.
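For reference, the evaluation metric can be sketched as follows (our helper, with rank 1 meaning the correct item is the nearest neighbour):

```python
import numpy as np

def mean_reciprocal_rank(distances, targets):
    """MRR = mean over queries of 1/rank, where the rank of the correct match
    targets[q] is 1 + the number of candidates strictly closer to query q."""
    rr = []
    for d, t in zip(np.asarray(distances, dtype=float), targets):
        rank = int((d < d[t]).sum()) + 1
        rr.append(1.0 / rank)
    return float(np.mean(rr))
```

With two queries whose correct matches sit at ranks 1 and 2, the MRR is (1 + 1/2)/2 = 0.75.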
The basic computation¹ of the visibility graph has a computational complexity of O(n²). For high frequency resolution spectra, such an approach is not ideal in terms of computation time. Therefore, here we used an alternative visibility algorithm based on a 'Divide & Conquer' approach that significantly reduces the computation time, with a computational complexity of O(n log n) in the average case [8]. Python source code for our implementation and our experiments is freely available online².
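A sketch of the divide-and-conquer idea of [8] (our own transcription, not the reference implementation): the maximum of a segment blocks visibility between points on its two sides, so one linear sweep per side finds everything the maximum sees, and the two sides are then solved independently:

```python
import numpy as np

def visibility_adjacency_dc(y):
    """Divide-and-conquer natural visibility graph (average O(N log N), cf. [8]).

    Key fact: two nodes on opposite sides of a segment's maximum can never see
    each other, so it suffices to find what the maximum sees (one linear sweep
    per side, tracking the extremal chord slope) and recurse on each side.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    A = np.zeros((n, n), dtype=int)

    def solve(lo, hi):                          # inclusive index range
        if hi <= lo:
            return
        m = lo + int(np.argmax(y[lo:hi + 1]))   # position of the segment maximum
        best = -np.inf                          # right sweep: i is visible iff its
        for i in range(m + 1, hi + 1):          # chord slope beats all earlier ones
            s = (y[i] - y[m]) / (i - m)
            if s > best:
                A[m, i] = A[i, m] = 1
                best = s
        best = np.inf                           # left sweep: slopes must decrease
        for i in range(m - 1, lo - 1, -1):
            s = (y[i] - y[m]) / (i - m)
            if s < best:
                A[m, i] = A[i, m] = 1
                best = s
        solve(lo, m - 1)
        solve(m + 1, hi)

    solve(0, n - 1)
    return A
```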
In the first experiment we used part of the synthesised data from [5]: 12 synthesised instruments with the same MIDI score of 14 notes (A2 to G4) sampled at 44100 Hz. Each instrument signal was divided into the distinct MIDI notes and then individually transformed into the magnitude frequency domain with a Fourier transform of size 16384 ('clean' spectra). Only the first 2000 bins of the magnitude spectra were kept for the rest of the analysis. Random normal noise was then added to the note signals at different SNR values and the result transformed to the frequency domain ('noisy' spectra). The pair-wise distances between all spectra, both clean and noisy, were then computed and sorted in ascending order. For every clean track, the rank of its noisy version was found and used to compute the MRR. This procedure is repeated for the spectral visibility graph degree representation as well as for the degree distribution.
The average MRR across all notes of all instruments for different SNR values is plotted in Figure 3. As expected, the proposed method (orange line) achieves the best results when the SNR is low. However, we see a small dip in performance relative to the raw spectrum when using the Euclidean distance in the higher SNR cases. This can be explained by the bigger difference in value between the degree peaks of the clean and noisy signals than in the spectrum case. Even though the peaks remain prominent in the noisy case, the number of nodes the 'peak node' sees is reduced compared to the clean peak degree, as there are new data heights induced by the noise. In the case of high SNR, the noise does not overpower the harmonic content and so it does not introduce too much of a difference in the Euclidean distance. However, the location of the peaks is better preserved in the proposed representation, and so it always presents the best results whilst using the cosine distance metric.
In the second experiment we use the publicly available Demixing Secrets Dataset (DSD100), containing the stems and mixtures of 100 songs sampled at 44100 Hz [9]. In this case the query will be clean vocal frames and the goal is to find their corresponding frames in the mixture. The magnitude spectrogram for both the vocal and mixture tracks is calculated, with a window size of 2046 samples with 50% overlap, and only the first 500 frequency bins are considered in the following (i.e. low-pass filter cut-off at around 10 kHz). Based on the spectrogram energy of the vocal stem, we select the frames with vocal activity and use them as query frames. The pair-wise distance between the clean vocal query frames and all the frames in the mixture spectrogram is then calculated and sorted. The rank of the corresponding mixture frame containing the clean vocal query is then found and stored to calculate the MRR. This procedure is repeated for the spectral visibility graph degree representation as well as for the degree distribution. Figure 4 shows the results for experiment 02. The proposed representation is, in both cases (Euclidean and cosine distance), visibly much more suitable than the magnitude spectrogram and the degree distribution for the given task. The fact that the degree distribution representation always achieved the worst results shows that the location of the harmonic peaks is a crucial piece of information for this type of harmonic similarity task. Even though the degree distribution was not advantageous in this case, there may be other audio analysis tasks for which it is useful.
V. CONCLUSION
Here we introduced the visibility graph for magnitude spectra. We propose to use the spectral visibility graph degree as an alternative representation for magnitude spectra. Such a representation presents properties valuable in audio analysis. Here we focused on the translation invariance of the proposed representation, as it directly relates to a harmonic event in the presence of broadband noise. We further demonstrated its use for robust similarity measures of both synthetic and real harmonic events. Even though we have demonstrated one application of the proposed representation, we expect such graph-based approaches for audio analysis to find other useful applications in the future.
Fig. 1. Illustration of the visibility invariance to vertical translation.
Fig. 2. The spectrogram (A) and the proposed representation (B) of 10 seconds of track 51 of the dataset DSD100. Both representations are normalised by their own maximum and compressed by a factor of 0.6. The spectral visibility graph degree enhances the harmonic components of the signal.
Fig. 3. The average mean reciprocal rank (MRR) amongst all notes of all instruments in experiment 01: 12 synthesised instruments playing 14 notes, clean and with additive random noise. Pair-wise similarity between all signals in the frequency magnitude, degree and degree distribution domains. The clean notes act as queries and the expected closest neighbour is their noisy version.
Fig. 4. Mean reciprocal rank (MRR) of all mixtures in experiment 02: dataset Dev DSD100, vocal stems and their corresponding mixtures. Pair-wise similarity between the clean vocals and the mixture signals in the magnitude, degree and degree distribution domains for each track. The clean vocal time frames act as queries and the expected closest neighbour is that time frame in the mixture.
¹ The original visibility graph Fortran 90/95 implementation can be found at http://www.maths.qmul.ac.uk/~lacasa/Software.html
² Available at https://github.com/delialia/vgspectra
ACKNOWLEDGMENT
This work was funded by EPSRC grant EP/L019981/1. Dan Stowell was supported by EPSRC Early Career research fellowship EP/L020505/1.
REFERENCES
[1] J. Bang-Jensen and G. Z. Gutin. Digraphs: Theory, Algorithms and Applications. Springer Science & Business Media, 2008.
[2] D. Barber and A. T. Cemgil. Graphical models for time-series. IEEE Signal Processing Magazine, 27(6):18-28, 2010.
[3] A. S. Campanharo, M. I. Sirer, R. D. Malmgren, F. M. Ramos, and L. A. N. Amaral. Duality between time series and networks. PloS One, 6(8):e23378, 2011.
[4] D. Fano Yela, S. Ewert, D. FitzGerald, and M. B. Sandler. Interference reduction in music recordings combining kernel additive modelling and non-negative matrix factorization. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), New Orleans, USA, 2017.
[5] D. Fano Yela, S. Ewert, K. O'Hanlon, and M. B. Sandler. Shift-invariant kernel additive modelling for audio source separation. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Calgary, Canada, 2018.
[6] L. Lacasa, B. Luque, F. Ballesteros, J. Luque, and J. C. Nuno. From time series to complex networks: The visibility graph. Proceedings of the National Academy of Sciences, 105(13):4972-4975, 2008.
[7] L. Lacasa, V. Nicosia, and V. Latora. Network structure of multivariate time series. Scientific Reports, 5:15508, 2015.
[8] X. Lan, H. Mo, S. Chen, Q. Liu, and Y. Deng. Fast transformation from time series to visibility graphs. Chaos: An Interdisciplinary Journal of Nonlinear Science, 25(8):083105, 2015.
[9] A. Liutkus, F.-R. Stöter, Z. Rafii, D. Kitamura, B. Rivet, N. Ito, N. Ono, and J. Fontecave. The 2016 signal separation evaluation campaign. In International Conference on Latent Variable Analysis and Signal Separation, pages 323-332. Springer, 2017.
[10] B. Luque, L. Lacasa, F. Ballesteros, and J. Luque. Horizontal visibility graphs: Exact results for random time series. Physical Review E, 80(4):046103, 2009.
[11] N. Musmeci, V. Nicosia, T. Aste, T. Di Matteo, and V. Latora. The multiplex dependency structure of financial markets. Complexity, 2017, 2017.
[12] A. Nuñez, L. Lacasa, E. Valero, J. P. Gómez, and B. Luque. Detecting series periodicity with horizontal visibility graphs. International Journal of Bifurcation and Chaos, 22(07):1250160, 2012.
[13] A. M. Nuñez, L. Lacasa, J. P. Gomez, and B. Luque. Visibility algorithms: A short review. In New Frontiers in Graph Theory. InTech, 2012.
[14] D. Stowell and M. D. Plumbley. Segregating event streams and noise with a Markov renewal process model. The Journal of Machine Learning Research, 14(1):2213-2238, 2013.
A PARALLEL PREPROCESSING FOR THE OPTIMAL ASSIGNMENT PROBLEM BASED ON DIAGONAL SCALING

Meisam Sharify, Stéphane Gaubert, and Laura Grigori

19 Apr 2011 (arXiv:1104.3830)

Keywords: large scale optimal assignment problem; entropy maximization; matrix diagonal scaling; parallel computing; Sinkhorn iteration; Newton method. AMS subject classifications: 90C27, 54C70, 65Y05, 90C06, 49M15.
We present a preprocessing method, which is suitable for parallel computation, to solve large optimal assignment problems. We think of the optimal assignment problem as a limit of a deformation of an entropy maximization problem. We show that the matrix maximizing the entropy converges, as the deformation parameter goes to infinity, to a matrix whose nonzero entries are precisely the ones belonging to optimal assignments. For every value of the deformation parameter, the matrix of maximal entropy can be computed by Sinkhorn iteration. This leads to a parallel preprocessing for the optimal assignment problem, which allows to delete entries that do not belong to optimal assignments, so that the reduced problem becomes executable on a sequential machine.
1. Introduction. One of the most classical problems in combinatorial optimization is the optimal assignment problem. Several applications of this problem arise in different fields of applied science, such as bioinformatics for the protein structure alignment problem [Hol93, LCL04], VLSI design [HCLH90], image processing and computer vision [CWC + 96], and the pivoting problem in the solution of large linear systems of equations [ON96, DK00, LD03]. Thus, this problem has received considerable attention and several algorithms have been proposed to solve it.
The first polynomial time algorithm to solve this problem was proposed by H. W. Kuhn in 1955 [Kuh55]. It works in O(n^4) time, which was improved to O(n^3) by Edmonds and Karp [EK70] (see also [DK69]). In the sparse case, Fredman and Tarjan [FT87] proposed an improved algorithm which uses Fibonacci heaps for the shortest path computations. It runs in O(n(m + n log n)) time. Several other algorithms have also been developed. We refer the interested reader to the recent book of Burkard et al. [BDM09].
In this paper we exploit the connection between the optimal assignment problem and entropy maximization. The latter is well studied in the field of convex optimization [FRT97].
The main idea is to think of the optimal assignment problem as the limit of a deformation of an entropy maximization problem. More precisely, given an n × n nonnegative matrix A = (a ij ), let us look for an n × n bistochastic matrix X = (x ij ) maximizing the relative entropy
J p (X) := − 1≤i,j≤n x ij (log(x ij /a p ij ) − 1) , (1.1)
Here, p is the deformation parameter. We will show in Section 2 that when p goes to infinity, the unique solution X(p) = (x ij (p)) of the entropy maximization problem converges to a point X(∞) which is of maximal entropy among the ones in the convex hull of the matrices representing optimal permutations. In particular, if there is only one optimal permutation, X(p) converges to the matrix representing this optimal permutation. In Section 3 we prove that, for X(p) as the solution to equation (1.1) for some value of p and for X(∞) as the solution when p → ∞, we have
|x_ij(p) − x_ij(∞)| = O(exp(−cp))

for some constant c > 0. This shows exponential convergence to the optimal solution as p increases.
The maximal entropy matrix X(p) can be computed by any matrix scaling algorithm such as Sinkhorn iteration [SK67] or Newton method [KR07]. Subsequently, these iterative methods can be used to develop new algorithms to solve the optimal assignment problem and related combinatorial optimization problems.
An interesting application of this new approach, is the solution of large scale dense optimal assignment problems. Several efforts have been made to solve this problem [BT09,LO94]. A well-known application arises from the approximation algorithms and heuristics for solving the Asymmetric Traveling Salesman Problem or the Vehicle Routing Problem. There are also some applications in object recognition and computer vision. An application to cosmology (reconstruction of the early universe) can be found in the work of Brenier et al. [BFH + 03]. For a survey on the applications of large dense linear assignment problems, we refer the reader to [BT09]. Models of large dense random assignment problems are also considered in [MPV, Ch. VII] from the point of view of statistical physics.
Here, we investigate a preprocessing algorithm which can be used to solve large scale optimal assignment problems. This preprocessing is based on an iterative method that eliminates the entries not belonging to an optimal assignment. This reduces the initial problem to a much smaller problem in terms of memory requirements. This is illustrated in Figures 1.1 and 1.2.
The idea of this algorithm is to take p large enough, then apply a diagonal scaling algorithm to the Hadamard power A^(p) := (a_ij^p) until convergence to a bistochastic matrix X, and finally delete the small entries of X. Here, the entrywise pth power in A^(p) leads to numerical overflow for large values of p. However, we shall show that it is possible to implement this iteration in a numerically stable way. The present algorithm assumes the existence of at least one matching, since otherwise the Sinkhorn iteration may not converge. However, we note that matrix balancing (Sinkhorn iteration) can also be used to detect the existence of a perfect matching, as shown by Linial, Samorodnitsky and Wigderson [LSW00].
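A minimal sketch of this preprocessing (our illustration, not the paper's implementation; the parameter values and the threshold are arbitrary choices). Working with log(A^(p)) = p log A and log-sum-exp updates keeps the Sinkhorn iteration stable even when the entries of A^(p) would overflow; the matrix is assumed to have total support:

```python
import numpy as np

def preprocess_assignment(A, p=20.0, iters=200, tol=1e-3):
    """Scale the Hadamard power A^(p) to a bistochastic matrix by Sinkhorn
    iteration in the log domain, then drop entries below `tol`.
    Returns a boolean mask of candidate optimal-assignment entries."""
    def lse(M, axis):
        # Numerically stable log-sum-exp along one axis.
        m = np.max(M, axis=axis, keepdims=True)
        return np.squeeze(m + np.log(np.sum(np.exp(M - m), axis=axis, keepdims=True)),
                          axis=axis)

    with np.errstate(divide="ignore"):
        B = p * np.log(np.asarray(A, dtype=float))   # log of A^(p); zeros -> -inf
    u = np.zeros(B.shape[0])
    v = np.zeros(B.shape[1])
    for _ in range(iters):
        u = -lse(B + v[None, :], axis=1)             # make row sums of X equal 1
        v = -lse(B + u[:, None], axis=0)             # make column sums of X equal 1
    X = np.exp(B + u[:, None] + v[None, :])          # bistochastic (up to tolerance)
    return X > tol
```

For a matrix with a clearly dominant diagonal, the surviving entries are exactly the diagonal, i.e. the unique optimal permutation.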
We consider two variants of the algorithm, one using the Sinkhorn iteration as the diagonal scaling algorithm and the other using the Newton iteration. The advantage of the Sinkhorn iteration is that it can be efficiently implemented in parallel [ADRU08, DRU08]. Thus we show that for very large dense optimal assignment problems, whose data cannot be stored in one machine, the parallel Sinkhorn iteration can be used to reduce the size of the problem, which can then be solved by any classical method. On the other hand, the advantage of the Newton method is its speed of convergence to a bistochastic matrix.
For both variants, we present several numerical results on various full and sparse matrices from the Matlab gallery and The University of Florida Sparse Matrix Collection. We show that the Sinkhorn iteration can be used to decrease the size of dense matrices efficiently, by up to 99% in a small number of iterations. For the Newton iteration, we show that it is efficient not only for dense matrices but also for sparse symmetric matrices. Note also that the present approach yields approximate dual variables, which provide an approximate optimality certificate for the assignment which is found (Section 4.1.1).
In the last section, we introduce an iterative method which is based on a modification of the Sinkhorn scaling algorithm, in which the deformation parameter is slowly increased (this procedure is reminiscent of simulated annealing, the parameter p playing the role of the inverse of the temperature). We prove that this iteration, which we refer to as deformed-Sinkhorn iteration, converges to a matrix whose entries that belong to the optimal permutations are nonzero, while all the other entries are zero. An estimation of the rate of convergence is also presented, but this appears to be mostly of theoretical interest since in practice, the convergence of this variant appears to be slow.
2. Entropy maximization and matrix scaling. The diagonal scaling problem can be generally defined as finding diagonal matrices D r and D c with positive diagonal entries such that the scaled matrix D r AD c has prescribed row and column sums. Due to the variety of its applications, this problem has been well studied [MS69,Bru74,SK67]. A comparison of the proposed algorithms to solve this problem, can be found in [SZ90]. A remarkable special case arises when the row and column sums of the matrix X = D r AD c are required to be identically one, so that X is bistochastic. Then, the following theorem provides a sufficient condition for the existence of a diagonal scaling.
Theorem 2.1 (Sinkhorn [SK67]). Let A be an n×n nonnegative matrix with total support (every positive entry belongs to a diagonal). Then there exist diagonal matrices D r and D c such that D r AD c is bistochastic. Moreover, if A is fully indecomposable, then D r and D c are unique up to a constant factor. Now, consider the following optimization problem, which consists in finding an n × n bistochastic matrix X = (x ij ) maximizing the following relative entropy
max_{X ∈ B_n} J_p(X),   J_p(X) := Σ_{ij} x_ij b_ij + p^{−1} S(X),   b_ij = log a_ij,   (2.1)

where S(X) := − Σ_{ij} x_ij log x_ij
is the entropy function, p > 0 is a parameter, and B_n denotes the set of n × n bistochastic matrices. The convention 0 × (−∞) = 0 is understood when interpreting the product x_ij b_ij. We shall assume that the matrix A := (a_ij) has total support, so that the diagonal matrices D_r and D_c are known to exist. We denote by G(A) := {(i, j) | a_ij > 0} the pattern (set of non-zero entries) of the matrix A.
The general relation between the entropy maximization and scaling problems is well known, see e.g. [Sch89] for an overview. We shall need in particular the following result.
Proposition 2.2 (Corollary of [BLN94, Th. 3.1]). Let A be a matrix with total support. Then, the solution X(p) of the entropy maximization problem indicated in Equation (2.1) is unique and it is characterized by the existence of two positive vectors, U and V, such that x_ij = a_ij^p u_i v_j for all i, j. Thus, the characterization of the proposition shows that X is obtained from the pth Hadamard power A^(p) := (a_ij^p) by a diagonal scaling. The previous proposition is a special case of Theorem 3.1 of [BLN94], which is established in a more general infinite dimensional setting (for p = 1; but the result for an arbitrary p follows trivially from it). We shall need in the sequel a few elements of the proof, which we next include.
First, the function J_p is upper semi-continuous and B_n is compact; hence the maximum of J_p over B_n is attained. If there is at least one permutation σ such that Σ_i b_{iσ(i)} > −∞, the associated permutation matrix X = (x_{ij}), with x_{ij} = 1 if j = σ(i) and x_{ij} = 0 otherwise, satisfies J_p(X) > −∞; since the maximum of J_p is attained, its value must then be finite. Moreover, since the objective function is strictly concave and the feasible set is convex, the point of maximum X(p) is unique.
We claim that X(p) has the same pattern (set of positions of non-zero entries) as the matrix A.
To see this, let Y be a bistochastic matrix with the same pattern as A, i.e. y_{ij} > 0 iff a_{ij} > 0. Assume by contradiction that X(p) does not have the same pattern as A, so that x_{ij}(p) = 0 and y_{ij} > 0 for some (i, j). Because the right derivative of the function t → −t log t at t = 0^+ is infinite, the right derivative of t → J_p(X(p) + t(Y − X(p))) at t = 0^+ is easily seen to be infinite, and so J_p(X(p) + t(Y − X(p))) > J_p(X(p)) and X(p) + t(Y − X(p)) ∈ B_n hold for t small enough, contradicting the optimality of X(p). Hence, the claim is established.
Consider now the Lagrange function
L(X, U, V) = J_p(X) + Σ_i u_i (Σ_j x_{ij} − 1) + Σ_j v_j (Σ_i x_{ij} − 1),
where U = (u i ) and V = (v j ) are vectors of Lagrange multipliers. The stationarity condition implies that if X is an optimal solution of the entropy maximization problem indicated in Equation 2.1, then there must exist two vectors of multipliers U and V such that, for all (i, j) ∈ G(A),
∂L/∂x_{ij} = b_{ij} − p^{−1}(1 + log x_{ij}) + u_i + v_j = 0.
It follows that
x_{ij}(p) = exp(p(b_{ij} + u_i + v_j) − 1),   ∀(i, j) ∈ G(A),
showing that X is obtained from the pth Hadamard power A^{(p)} := (a_{ij}^p) by a diagonal scaling.
Using the latter characterization of X(p), we observe that:
J_p(X(p)) = −Σ_i log u_i − Σ_j log v_j.
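The characterization above can be reproduced numerically: the maximizer of (2.1) is the bistochastic diagonal scaling of the Hadamard power A^{(p)}, which any balancing scheme computes. A minimal numpy sketch (the matrix A and the helper name `sinkhorn` are illustrative, not from the paper's code):

```python
import numpy as np

def sinkhorn(B, iters=2000):
    """Alternately normalize rows and columns of a positive matrix B."""
    X = B.astype(float)
    for _ in range(iters):
        X = X / X.sum(axis=1, keepdims=True)  # rows sum to 1
        X = X / X.sum(axis=0, keepdims=True)  # columns sum to 1
    return X

A = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 1.0],
              [2.0, 1.0, 5.0]])
p = 2.0
Xp = sinkhorn(A ** p)  # scaling of the Hadamard power (a_ij^p), as in Proposition 2.2
```

Every iterate remains an exact diagonal scaling of A^{(p)}, so the ratio matrix Xp / A^{(p)} has the rank-one form (u_i v_j) of Proposition 2.2.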
We now study the convergence of X(p) as p tends to infinity. We shall consider the face F of the polytope of bistochastic matrices consisting of the optimal solutions of the linear programming formulation of the optimal assignment problem
max_{X ∈ B_n} Σ_{ij} x_{ij} b_{ij} = max_{σ ∈ S_n} Σ_i b_{iσ(i)}.
Theorem 2.3. As p tends to infinity, the matrix X(p) converges to the unique matrix X * maximizing the entropy among the ones that belong to the face F consisting of the convex hull of optimal permutation matrices. In particular, if the solution of the optimal assignment problem is unique, then X(p) converges to the associated bistochastic matrix.
Proof. Since X(p) is the point of maximum of J p ,
J_p(X(p)) = Σ_{ij} x_{ij}(p) b_{ij} + p^{−1} S(X(p)) ≥ J_p(X*) = Σ_{ij} x*_{ij} b_{ij} + p^{−1} S(X*) = max_{σ ∈ S_n} Σ_i b_{iσ(i)} + p^{−1} S(X*).
Consider a sequence (p_k)_{k≥1} tending to infinity, and assume that X(p_k) converges to some matrix Z, which must belong to B_n. Setting p = p_k in the previous inequality and letting k tend to infinity, we get Σ_{ij} z_{ij} b_{ij} ≥ max_{σ ∈ S_n} Σ_i b_{iσ(i)}, which shows that Z belongs to the face F. Observe that
p_k^{−1} (S(X(p_k)) − S(X*)) = (J_{p_k}(X(p_k)) − J_{p_k}(X*)) + Σ_{ij} x*_{ij} b_{ij} − Σ_{ij} x_{ij}(p_k) b_{ij}
is the sum of two nonnegative terms, because X(p_k) is a point of maximum of J_{p_k}, and X* ∈ F is a convex combination of matrices representing optimal permutations. It follows that S(X(p_k)) − S(X*) ≥ 0, and so, if Z is any accumulation point of X(p_k) as k tends to infinity, S(Z) − S(X*) ≥ 0, showing that Z is of maximal entropy among the matrices in F. Since the entropy function is strictly concave, X* is the only point with the latter property, and so every accumulation point of X(p_k) equals X*, showing that X(p) converges to X* as p → ∞. Corollary 2.4. If there is only one optimal permutation, then X(p) converges to the corresponding permutation matrix.
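Corollary 2.4 is easy to observe numerically: with a unique optimal permutation, the scaling of A^{(p)} is already close to the corresponding permutation matrix for moderate p. A small sketch (the matrix, the brute-force search and the tolerances are illustrative):

```python
import numpy as np
from itertools import permutations

def scaled_power(A, p, iters=3000):
    """Bistochastic scaling of the Hadamard power A^(p) by alternate normalization."""
    X = (A ** p).astype(float)
    for _ in range(iters):
        X = X / X.sum(axis=1, keepdims=True)
        X = X / X.sum(axis=0, keepdims=True)
    return X

A = np.array([[3.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 4.0]])   # the identity is the unique optimal permutation

n = A.shape[0]
# brute-force optimal assignment: maximize prod_i a_{i,sigma(i)}
best = max(permutations(range(n)),
           key=lambda s: np.prod([A[i, s[i]] for i in range(n)]))
P = np.eye(n)[list(best)]        # permutation matrix of the optimum
X10 = scaled_power(A, 10.0)      # X(p) for p = 10, already close to P
```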
3. The speed of convergence. We have already shown in Theorem 2.3 that the maximal entropy solution X(p) converges, as p tends to infinity, to a matrix X(∞) which is a convex combination of optimal permutation matrices. In particular, X(p) converges to an optimal permutation matrix if the optimal permutation is unique. The question is how fast this convergence is; it is answered by the following theorem.
Theorem 3.1. Assume that the matrix A has total support and that log a_{ij} ∈ Q for all (i, j) such that a_{ij} > 0. Then there exists a positive constant c such that, for all i, j ∈ [n],
|x_{ij}(p) − x_{ij}(∞)| = O(exp(−cp)).
To establish Theorem 3.1, recall that a real Puiseux series in the variable t is an expression of the form
f = Σ_{k ≥ k̄} c_k t^{k/r}   (3.1)
where r ∈ N is positive, k̄ ∈ Z, c_k ∈ R for all k, and the sum is taken over all k ∈ Z such that k ≥ k̄. We denote by R_cvg{{t}} the set of real Puiseux series that are absolutely convergent for all t of small enough positive modulus.
Lemma 3.2. For all i, j ∈ [n], there exists a Puiseux series f of the form (3.1) such that
x_{ij}(p) = f(exp(−p)) = Σ_{k ≥ k̄} c_k exp(−pk/r),
the latter series being absolutely convergent for all large enough p.
In order to establish this result, we shall use some tools from the theory of real ordered fields, for which we refer the reader to [BPR06, chapter 2].
Let us consider the following statement: if a nonnegative matrix A has total support, then there exists a unique nonnegative matrix X with row and column sums 1, and there exist diagonal matrices D and D ′ with positive diagonal entries such that
A = DXD ′ .
According to Sinkhorn's theorem [SK67] and to Proposition 2.2, this statement is true when the entries of A, X, D, D′ belong to the field of real numbers. Moreover, this statement belongs to the first order theory of the real closed field (R, +, ×, 0, 1, >). By Tarski's theorem [Tar51], a first order statement that is valid in one real closed field is valid in every real closed field. In particular, the above statement holds over the field of convergent real Puiseux series, which is known to be a real closed field. Indeed, the fact that formal Puiseux series constitute a real closed field is standard; the proof that the same is true for convergent Puiseux series can be found in [BK76, § 10].
Thus, for a matrix A(t) ∈ R_cvg{{t}}^{n×n} with total support, there exist diagonal matrices D(t), D′(t) ∈ R_cvg{{t}}^{n×n} together with a unique bistochastic matrix X(t) ∈ R_cvg{{t}}^{n×n} such that A(t) = D(t) X(t) D′(t).
We now choose the matrix A(t) = (a_{ij}(t)) with a_{ij}(t) = t^{−log a_{ij}}, where log a_{ij} ∈ Q, so that a_{ij}(exp(−p)) = a_{ij}^p. Then the entries of the corresponding matrix X(t) have the form
x_{ij}(t) = Σ_{k = k̄_{ij}}^{+∞} c_{ijk} t^{k/r_{ij}},
and this series is convergent for a suitably small positive t. Make now the substitution t = exp(−p). We deduce that for all suitably large p,
x_{ij}(p) = Σ_{k = k̄_{ij}}^{+∞} c_{ijk} exp(−pk/r_{ij}).   (3.2)
Since x_{ij}(p) has a finite limit as p → ∞, understanding that k̄_{ij} is the first index k for which the coefficient c_{ijk} is non-zero, we necessarily have k̄_{ij} ≥ 0, so that x_{ij}(∞) can be identified with the constant term of the latter series. Setting c = min_{i,j} (k̄_{ij} + 1)/r_{ij}, we get
|x_{ij}(p) − x_{ij}(∞)| = O(exp(−cp)),
which proves Theorem 3.1.
Remark 3.3.
The assumption that log a ij ∈ Q in Theorem 3.1 is inconvenient. It could be avoided by replacing the field of converging Puiseux series by a field of converging generalized Dirichlet series, along the lines of [Mar]. However, this would require working out the convergence issues which are not treated in [Mar].
Remark 3.4. The formulation (2.1) is somewhat reminiscent of interior point methods, in which the entropy S(X) = −Σ_{ij} x_{ij} log x_{ij} is replaced by a log-barrier function (the latter would be Σ_{ij} log x_{ij} in the present setting). The path p → X(p) is analogous to the central path and, like the central path, converges to a face containing optimal solutions. However, the entropy S(X) does not satisfy the axioms of the theory of self-concordant barriers on which the analysis of interior point methods is based. Indeed, the speed of convergence in O(exp(−cp)) appears to be of a totally different nature by comparison with the speed of O(1/p) observed in interior point methods [NN94].
Example 3.5. The constant c appearing in Theorem 3.1 can be small if there are several nearly optimal permutations, and then a large value of p may be needed to approximate X(∞). However, in such cases, a much smaller value of p turns out to be enough for the method described in the next sections, whose aim is to eliminate a priori the entries not belonging to (nearly) optimal permutations. This is illustrated by the following matrix, in which the identity permutation is optimal and the transposition (1, 2) is nearly optimal. The convergence of X(p) to X(∞) is illustrated in Figure 3.1. Observe that the graph of log x_{12}(p) as a function of p is approximately piecewise affine. In fact, each piece corresponds to a monomial of the Puiseux series expansion (3.2) (see [Vir01] for an explanation of this fact). The path p → X(p) converges quickly to the face containing the two nearly optimal permutations, and slowly to the unique optimal permutation. Remark 3.6. Finding an explicit formula for the speed of convergence c appears to be an interesting combinatorial problem (which is beyond the scope of this paper).
4. Preprocessing for the optimal assignment problem. For a fixed p > 0, the solution of the entropy maximization problem (2.1) can be computed by any scaling algorithm, such as Sinkhorn iteration or the Newton method. Using Theorem 3.1, it can be seen that if the original matrix has only one optimal permutation, the order of magnitude of all the entries belonging to the optimal permutation will be 1 ± O(exp(−cp)), while the order of magnitude of all other entries is O(exp(−cp)). As an example, consider the following 5 by 5 random matrix, whose bold entries belong to the optimal permutation. Thus, for sufficiently large values of p, when X(p) is an ǫ-bistochastic matrix, meaning that some distance between X(p) and a bistochastic matrix is less than ǫ, one may delete all the small entries below a threshold t, chosen consistently with ǫ, while keeping all the others. In this way the size of the original problem, in terms of memory requirements, is reduced to a much smaller one.
For a column (row) stochastic matrix, that is, a matrix in which every column (row) sums to one, the distance to the set of bistochastic matrices is measured by max_i |r_i − 1|, where r_i denotes the ith row (column) sum.
Determining the coarsest accuracy ǫ and the largest threshold t needed to find an optimal permutation would require knowing the maximal entropy solution X(∞) characterized in Theorem 2.3. This information is in general not available. However, the worst case can be considered to be the one where X(∞) is uniform, with all entries equal to 1/n (and n! optimal permutations). Since we need to preserve the optimal permutations, this leads to the conservative choice ǫ = t = 1/n, which we adopted in the present experimental results. The choice of the value of p will be discussed in Section 4.1.2. This leads to Algorithm 1.
Algorithm 1 An optimal assignment preprocessing for fixed p
input: A, p
n ← size(A, 1)
ǫ, t ← 1/n
comment: Prescaling
if max(A)/min(A) > e then
    m ← 1/log(max(A)/min(A)),   c ← e^{log(min(A))/log(max(A)/min(A))}
    A ← (1/c) A^{(m)}
else
    A ← (1/min(A)) A
end if
B ← A^{(p)}
comment: Main loop
repeat
    apply one iteration of any diagonal scaling algorithm to B, so B ← D B D′, where D, D′ are diagonal matrices
until B is ǫ-bistochastic
delete all the entries of B which are less than the threshold t
The naive computation of A^{(p)} is numerically unstable for large values of p. This can be avoided by the prescaling step in Algorithm 1, where we set max(A) = max_{ij} a_{ij} and min(A) = min_{a_{ij}>0} a_{ij}. After this prescaling, all the nonzero scaled entries lie in the interval [1, e]. In the case when max(A)/min(A) > e, the prescaling has another interesting property: the scaled matrix is invariant under entrywise powers of the input matrix. In other words, if we apply the prescaling to the matrix A^{(q)}, for any q ≥ 1, the matrix obtained after the prescaling step turns out to be independent of the choice of q. When max(A)/min(A) ≤ e, the entries of A are already located in the interval min(A)·[1, e], and we do not perform the previous prescaling, since the denominator in the formula defining m would be small when max(A) is close to min(A). We shall also see in Section 4.1.1 that the iterations can be implemented robustly for large values of p by working in log-coordinates. Next, we provide more details on the proposed algorithm.
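For concreteness, the whole of Algorithm 1 (prescaling, balancing, thresholding) fits in a few lines of numpy; this sketch uses a plain Sinkhorn loop as the inner scaling method, and the function name `preprocess` as well as the test matrix are ours:

```python
import numpy as np

def preprocess(A, p, max_it=10000):
    """Algorithm 1 sketch: scale A^(p) until eps-bistochastic, then drop small entries."""
    n = A.shape[0]
    eps = t = 1.0 / n
    lo, hi = A[A > 0].min(), A.max()
    if hi / lo > np.e:                        # prescaling: nonzero entries into [1, e]
        m = 1.0 / np.log(hi / lo)
        c = np.exp(np.log(lo) / np.log(hi / lo))
        B = (A ** m) / c
    else:
        B = A / lo
    B = B ** p
    for _ in range(max_it):
        B = B / B.sum(axis=1, keepdims=True)  # row scaling
        B = B / B.sum(axis=0, keepdims=True)  # column scaling
        if np.max(np.abs(B.sum(axis=1) - 1.0)) <= eps:
            break
    B[B < t] = 0.0                            # delete entries below the threshold
    return B

A = np.array([[10.0, 1.0, 1.0],
              [ 1.0, 8.0, 1.0],
              [ 1.0, 1.0, 9.0]])
Bp = preprocess(A, 20.0)   # only the entries of the optimal (diagonal) permutation survive
```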
4.1. Sinkhorn iteration.
A simple way to compute the diagonal matrices D, D′ is the Sinkhorn iteration [SK67]. This algorithm starts from a given matrix A, divides every row by its sum, then every column of the new matrix by its sum, and so on, until the matrix obtained in this way converges to a bistochastic matrix. The advantage of this algorithm is that it can be efficiently implemented in parallel [ADRU08] and that it can be applied to any non-negative matrix which has at least one nonzero permutation. The disadvantage is that it is generally slower than other methods.
Recall first that the open cone C = {x ∈ R^n : x_i > 0, ∀i} consisting of the positive vectors of R^n is equipped with Hilbert's projective metric, defined by
d(x, x′) = log max_{i,j} (x_i x′_j)/(x′_i x_j).
Note that d(x, x′) is zero if and only if the vectors x and x′ are proportional. The global bound of Proposition 4.1 is applicable only to positive matrices, and it can be coarse in practice. Recently, Knight [Kni08] provided a local rate of convergence: for the classical Sinkhorn iteration, the local rate of convergence on a fully indecomposable matrix is bounded by σ_2², where σ_2 is the second singular value of the bistochastic matrix to which the iteration converges. Hence, the following result allows us to estimate the local convergence rate of the Sinkhorn iteration as p → ∞.
Proposition 4.2. Assume that there is only one optimal permutation. Then there is a constant c > 0 such that
1 − O(exp(−cp)) ≤ σ_2(X(p)) ≤ 1 as p → ∞.
Assume now that the matrix X(∞) is fully indecomposable (which implies that there are several optimal permutations). Then, σ 2 (X(p)) → σ 2 (X(∞)) < 1 as p → ∞ .
Proof. By the perturbation theorem of Mirsky [Mir60], for any unitarily invariant norm ‖·‖ and any n × n matrices X and X̃ with singular values σ_1 ≥ σ_2 ≥ ... ≥ σ_n and σ̃_1 ≥ σ̃_2 ≥ ... ≥ σ̃_n respectively, we have
‖diag(σ_i − σ̃_i)‖ ≤ ‖X − X̃‖.
So, for X(p) and X(∞),
|σ_2(X(p)) − σ_2(X(∞))| ≤ ‖X(p) − X(∞)‖_2 ≤ O(exp(−cp)),
where the constant c depends on the coefficients of the Puiseux series and possibly on the dimension of X(p). Thus, if the original matrix has only one optimal permutation, then σ_2(X(∞)) = 1, which implies that 1 − O(exp(−cp)) ≤ σ_2(X(p)). Moreover, according to the Birkhoff-von Neumann theorem [Bir46], for any norm ‖·‖ which is invariant under permutation of the coordinates and for any bistochastic matrix X, ‖X‖ = 1, and subsequently 1 − O(exp(−cp)) ≤ σ_2(X(p)) ≤ 1. When X(∞) is fully indecomposable, since the product of two fully indecomposable matrices is also fully indecomposable, M = X(∞) X(∞)^T is fully indecomposable. Note also that for all 1 ≤ i ≤ n, m_{ii} = Σ_{j=1}^n x_{ij}² > 0, which implies that M is primitive. Then, according to the Perron-Frobenius theorem, all the eigenvalues of M distinct from ρ(M) have modulus strictly smaller than ρ(M) = 1, which yields σ_2(X(∞)) < 1.
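The quantities θ(A) and κ(A) of Proposition 4.1, and the Birkhoff contraction property behind it, are straightforward to probe numerically (the matrix and the test vectors below are random and purely illustrative):

```python
import numpy as np

def hilbert_d(x, y):
    """Hilbert's projective metric d(x, y) on the open positive cone."""
    r = np.log(x) - np.log(y)
    return float(r.max() - r.min())

rng = np.random.default_rng(1)
n = 4
A = rng.random((n, n)) + 0.5          # a positive matrix

# theta(A) = max over i,j,r,l of a_ir a_jl / (a_jr a_il)
theta = max(A[i, r] * A[j, l] / (A[j, r] * A[i, l])
            for i in range(n) for j in range(n)
            for r in range(n) for l in range(n))
kappa = (np.sqrt(theta) - 1.0) / (np.sqrt(theta) + 1.0)

x = rng.random(n) + 0.1
y = rng.random(n) + 0.1
contracted = hilbert_d(A @ x, A @ y)  # Birkhoff: at most kappa * d(x, y)
```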
4.1.1. Logarithmic p-Sinkhorn iteration and approximate optimality certificates. As discussed before, computing the pth Hadamard power of A may cause numerical difficulties. To avoid this problem a prescaling has been proposed, after which all the matrix entries are in the interval [1, e]. A theoretical disadvantage of this prescaling is that the increase of p is limited, since we need e^p < ℓ, where ℓ is the largest number in the numerical range. However, we next give a log-coordinate implementation of the Sinkhorn iteration which avoids this limitation. This will provide, as a by-product, a certificate allowing one to check the approximate optimality of a permutation.
Let A ∈ R^{n×n} be a real non-negative matrix which has total support. For a given p, consider the following iteration for a sequence of vectors U_k, V_k ∈ R^n:
V_0 = 𝟙,   (4.1)
U_{k+1} = I(A^{(p)} V_k),   (4.2)
V_{k+1} = I(A^{(p)T} U_{k+1}),   (4.3)
where 𝟙 is the vector [1, 1, ..., 1]^T of dimension n and I is the operator which inverts the entries of a vector entrywise. Proposition 4.3. For a nonnegative matrix A which has total support, the iteration defined by Equations (4.1), (4.2) and (4.3) coincides with the Sinkhorn iteration.
Proof. Let W_k and Z_k respectively be the column-scaled and row-scaled matrices defined by
W_k = diag(U_k) A^{(p)} diag(V_k),   Z_k = diag(U_{k+1}) A^{(p)} diag(V_k).
Also, let C denote the column scaling operator, in which every column of a matrix is divided by its sum, and let R be the similar operator for rows. It is easy to verify that R(DB) = R(B) and C(BD) = C(B) for any diagonal matrix D. According to the definitions,
Z_k = R(A^{(p)} diag(V_k)) = R(diag(U_k) A^{(p)} diag(V_k)) = R(W_k);
a similar statement, W_{k+1} = C(Z_k), can be proved for W_{k+1}, which completes the proof.
Let ū^k = (ū_i^k) = p^{−1} log U_k and v̄^k = (v̄_i^k) = p^{−1} log V_k (entrywise). Then the logarithmic form of the iteration reads
ū_i^{k+1} = −p^{−1} log Σ_j exp(p(log a_{ij} + v̄_j^k)),
v̄_i^{k+1} = −p^{−1} log Σ_j exp(p(log a_{ji} + ū_j^{k+1})).
Let
x̂_{ij} = log a_{ij} + v̄_j^k − max_j (log a_{ij} + v̄_j^k),
ŷ_{ji} = log a_{ji} + ū_j^{k+1} − max_j (log a_{ji} + ū_j^{k+1}),
for which x̂_{ij}, ŷ_{ji} ≤ 0. The logarithmic iteration can then be reformulated using x̂_{ij} and ŷ_{ji} as
ū_i^{k+1} = −max_j (log a_{ij} + v̄_j^k) − p^{−1} log Σ_j exp(p x̂_{ij}),   (4.4)
v̄_i^{k+1} = −max_j (log a_{ji} + ū_j^{k+1}) − p^{−1} log Σ_j exp(p ŷ_{ji}).   (4.5)
The last iteration can be computed for a sufficiently large p without numerical difficulties. We note that a related trick was used by Malajovich and Zubelli [MZ01] in a different context. Proposition 4.4 (Approximate optimality certificate). Let ū, v̄ and x̂ be produced by the p-Sinkhorn iteration. Also, let ζ_i := p^{−1} log Σ_j exp(p x̂_{ij}), and let Val(OAP) denote the logarithm of the value of an optimal permutation. Then
Val(OAP) ≤ −Σ_{i=1}^n ū_i − Σ_{j=1}^n v̄_j − Σ_{i=1}^n ζ_i.   (4.6)
Proof. Observe that at each step of the Sinkhorn iteration,
log a_{ij} + v̄_j^k ≤ −ū_i^{k+1} − ζ_i,   1 ≤ i ≤ n.
Let σ denote an optimal permutation. Choosing j = σ(i) in the previous inequality and summing over 1 ≤ i ≤ n, we get (4.6).
In practice, this proposition will be used to check the validity of the preprocessing, by comparing the logarithm of the value of the permutation which is eventually found with the upper bound (4.6).
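The log-coordinate steps (4.4)-(4.5) and the certificate (4.6) can be sketched with a standard log-sum-exp implementation; the example matrix and the brute-force comparison are illustrative:

```python
import numpy as np
from itertools import permutations

def log_p_sinkhorn(logA, p, iters=300):
    """Iterations (4.4)-(4.5) in the variables u = p^{-1} log U, v = p^{-1} log V."""
    n = logA.shape[0]
    v = np.zeros(n)
    for _ in range(iters):
        M = logA + v[None, :]                     # log a_ij + v_j
        mx = M.max(axis=1)
        zeta = np.log(np.exp(p * (M - mx[:, None])).sum(axis=1)) / p
        u = -mx - zeta                            # (4.4)
        Mt = logA.T + u[None, :]                  # log a_ji + u_j
        mxt = Mt.max(axis=1)
        v = -mxt - np.log(np.exp(p * (Mt - mxt[:, None])).sum(axis=1)) / p   # (4.5)
    # one final half-step, so log a_ij + v_j <= -u_i - zeta_i holds exactly
    M = logA + v[None, :]
    mx = M.max(axis=1)
    zeta = np.log(np.exp(p * (M - mx[:, None])).sum(axis=1)) / p
    u = -mx - zeta
    bound = -u.sum() - v.sum() - zeta.sum()       # right-hand side of (4.6)
    return u, v, bound

A = np.array([[3.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 4.0]])
logA = np.log(A)
u, v, bound = log_p_sinkhorn(logA, p=50.0)

n = A.shape[0]
# logarithm of the value of an optimal permutation, by brute force
val = max(sum(logA[i, s[i]] for i in range(n)) for s in permutations(range(n)))
```

The certificate val ≤ bound holds for any u, v produced this way; at convergence and for large p the gap is small.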
4.1.2. Experimental results.
The experiments presented here were obtained by using the Sinkhorn iteration as the diagonal scaling method in Algorithm 1. We used Matlab version 7.10.0. The detailed Matlab implementation of the algorithm is presented below.
Finding the best value of p is somewhat tricky, since increasing p slows down the convergence while at the same time lowering the percentage of remaining entries. This fact can also be seen in Figures 4.1 and 4.2.

A=A.^m; end
A=A.^(p);
d=(1/n)+1; it=0;
while (d> 1/n) %main loop
    A=diag(sparse((A*ones(n,1)).^(-1)))*A;
    A=A*diag(sparse((A'*ones(n,1)).^(-1)));
    d=max(abs(sum(A')-1));
    it=it+1;
end;
[indx,indy]=find(A>t);
A=sparse(indx,indy,1,n,n).*A;
end
In the following experiments we set the parameter p to 100, which leads to a reasonable decrease in the size of the problem and generally does not lead to slow convergence; however, it could be any reasonably large value. Recall that the convergence is measured by max_i |r_i − 1|, where r_i denotes the ith row (column) sum for a column (row) stochastic matrix. Table 4.1 displays the results for dense matrices from the gallery of test matrices of Matlab. For these experiments the dimension is 5000. The columns, from left to right, are: gallery name, number of nonzeros, number of iterations, the logarithm of the value of the optimal assignment, and the percentage of entries remaining after deleting small entries. The same results are also presented for a random matrix, referred to as "rand" (the random function of Matlab), and for an Euclidean random matrix, referred to as "Euclidean". The latter, which is of interest in statistical physics, is a matrix whose entries are functions of random points in a Euclidean space [Par02]. More precisely, we draw at random 2n points x_1, ..., x_n; y_1, ..., y_n uniformly in the unit cube of R^3. Then we consider the matrix A = (a_{ij}), where a_{ij} = exp(−d(x_i, y_j)) and d is the Euclidean distance. In this way, a permutation σ which maximizes Π_{i=1}^n a_{iσ(i)} is the same permutation which minimizes the total distance between the two sets of points. As Table 4.1 shows, for more than 58% of the cases the algorithm converges very fast (in less than 80 iterations), and for 82% of the cases it converges in less than 500 iterations (less than one tenth of the dimension of the input matrix). Also, for more than 41% of the cases the original problem is reduced to a new problem which has less than 2% of the original entries, and in 82% of the cases it reduces to a new problem with less than 30% of the input entries.
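The Euclidean random instances used above are generated as follows (a small sketch; the size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.random((n, 3))                # n random points in the unit cube of R^3
y = rng.random((n, 3))                # n more random points
D = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=2)   # pairwise distances d(x_i, y_j)
A = np.exp(-D)                        # a_ij = exp(-d(x_i, y_j))
```

Maximizing Π_i a_{iσ(i)} over permutations σ then amounts to minimizing Σ_i d(x_i, y_{σ(i)}).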
Since the Sinkhorn iteration can be implemented in parallel, this method can be efficiently applied to large dense optimal assignment problems as a parallel preprocessing that reduces the size of the original problem.
We also tested several sparse matrices from The University of Florida Sparse Matrix Collection. The results show that using the Sinkhorn iteration as the diagonal scaling method in Algorithm 1 generally leads to slow convergence for sparse matrices.
4.2. Newton iteration.
Solving the diagonal matrix scaling problem by Newton iteration was first considered in the work of Khachiyan and Kalantari [KK92] for positive semidefinite symmetric matrices. They considered the more general problem of finding a positive zero of the mapping
f(x) = b + Ax − x^{−1},
where A is a given matrix of dimension n, b is a fixed n-dimensional vector, and x^{−1} denotes the entrywise inverse of x. They proposed a path-following Newton algorithm of complexity O(√n L), where L is the binary length of the input. Recently, Knight and Ruiz considered a Newton algorithm for nonnegative matrices [KR07]. For a symmetric matrix A, they formulated the diagonal matrix scaling problem as that of finding a vector x such that
f(x) = D(x) A x − 𝟙 = 0, where D(x) = diag(x).
If A is nonsymmetric, then the following matrix is considered as the input of the algorithm:
S = [ 0  A ; A^T  0 ].
They showed that the Newton iteration can be written as
A_k x_{k+1} = A x_k + D(x_k)^{−1} 𝟙,  where A_k = A + D(x_k)^{−1} D(A x_k).
Thus in each iteration a linear system of equations must be solved, for which they used the Conjugate Gradient method. In the nonsymmetric case the latter linear system is singular; however, it is proved that the system is consistent whenever A has support (A ≥ 0 has support if it has a positive diagonal). Our experiments, presented later, show that the method works fast for dense nonsymmetric matrices. However, with the default tuning parameters it does not work fast in the sparse nonsymmetric case. More details and the exact implementation of this method can be found in [KR07]. Here, we used the latter method in Algorithm 1 to find the scaling matrices, and we again set the parameter p to 100, as for the Sinkhorn iteration. In the following tables, No. it. denotes the total number of operations, each of which takes O(n²) time; this includes all the iterations of the Conjugate Gradient method for each Newton step. Tables 4.2 and 4.3 show the results for dense symmetric and nonsymmetric matrices of dimension 5000. In both cases the algorithm converges rapidly, in a small number of iterations. The percentage of remaining entries is reasonably smaller than in the original problem. In fact, in more than 38% of the cases the original problem is reduced to a much smaller problem which has less than 2% of the original entries, and in 72% of the cases it reduces to a problem with less than 30% of the original entries.
Tables 4.4 and 4.5 show the result of this algorithm on several sparse symmetric and nonsymmetric matrices from The University of Florida Sparse Matrix Collection. These results show that the algorithm generally works very well for sparse symmetric matrices while the convergence for sparse nonsymmetric matrices is not fast.
5. Deformed Sinkhorn iteration.
In the previous sections we computed X(p) for a fixed value of p. However, it is natural to develop a "path following method" in which the value of p is gradually increased in the course of the Sinkhorn balancing iterations. In this section we propose such an algorithm. We prove that if the matrix A has support (A has support if it has a positive diagonal), and if the growth of p is moderate enough, then the sequence of matrices produced by the algorithm converges to a point of the face generated by the optimal permutations. 5.1. Definition. Let A ∈ R^{n×n} be a real non-negative matrix. Consider the following iteration, a standard Sinkhorn iteration deformed by a sequence p_m which tends to infinity:
U_{m+1} = I(A^{(p_{m+1})} V_m),   V_{m+1} = I(A^{(p_{m+1})T} U_{m+1}).
Let W_{m+1} and Z_m respectively be the column-scaled and row-scaled matrices defined by
W_{m+1} = diag(U_{m+1}) A^{(p_{m+1})} diag(V_{m+1}),   Z_m = diag(U_{m+1}) A^{(p_{m+1})} diag(V_m).   (5.1)
Proof of Proposition 5.1. We only prove the last property, since the others are straightforward:
Z_m = R(A^{(p_{m+1})} diag(V_m)) = R(A^{(p_m)} diag(V_m) • A^{(p_{m+1}−p_m)}) = R((diag(U_m) A^{(p_m)} diag(V_m)) • A^{(p_{m+1}−p_m)}) = R(W_m • A^{(p_{m+1}−p_m)}).
According to the previous proposition, we define the following iteration, which we refer to as the deformed Sinkhorn iteration:
W_0 = C(A^{(p_0)});   W_m = C(Z_{m−1}),   c_m = Z_{m−1}^T 𝟙,   (5.2)
Z_m = R(W_m • A^{(p_{m+1}−p_m)}),   r_m = (W_m • A^{(p_{m+1}−p_m)}) 𝟙.   (5.3)
Here, r_m and c_m are respectively the vectors of row sums and column sums.
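A direct transcription of (5.2)-(5.3) with the logarithmic schedule p_m = a log(m+1) of Theorem 5.4 can be sketched as follows (the 2 × 2 matrix and the constant a are illustrative; here a log θ(A) ≈ 0.28 < 2):

```python
import numpy as np

def deformed_sinkhorn(A, a=1.0, steps=5000):
    """Deformed Sinkhorn iteration (5.2)-(5.3) with p_m = a*log(m+1)."""
    n = A.shape[0]
    p_prev = 0.0                              # p_0 = a*log(1) = 0
    W = np.ones((n, n)) / n                   # W_0 = C(A^(p_0))
    for m in range(1, steps + 1):
        p_next = a * np.log(m + 1.0)          # p_m
        Z = W * A ** (p_next - p_prev)        # Hadamard deformation
        Z = Z / Z.sum(axis=1, keepdims=True)  # row scaling R
        W = Z / Z.sum(axis=0, keepdims=True)  # column scaling C
        p_prev = p_next
    return W

A = np.array([[1.2, 1.0],
              [1.0, 1.1]])                    # theta(A) = 1.32, identity is optimal
W = deformed_sinkhorn(A)                      # mass slowly concentrates on the diagonal
```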
5.2. Convergence to optimal assignment. For an input matrix A = (a_{ij}), assume that the deformed Sinkhorn iteration converges to a bistochastic matrix. Define the weight of a permutation σ with respect to A to be ω_σ(A) = Π_i a_{iσ(i)}. If A has support, it has at least one optimal permutation σ_opt with nonzero weight. It is evident that σ_opt is an optimal permutation for all the matrices W_m and Z_m produced by the deformed Sinkhorn iteration. Observe that for all permutations σ and π, the ratio ω_σ(A)/ω_π(A) is invariant under multiplication of A by diagonal matrices. So it follows from Equation (5.1) that
γ_m = ω_σ(Z_m)/ω_π(Z_m) = γ_{m−1} (ω_σ(A)/ω_π(A))^{p_{m+1}−p_m} = (ω_σ(A)/ω_π(A))^{p_{m+1}}.
Thus, for every non-optimal permutation σ, the ratio ω_σ(Z_m)/ω_{σ_opt}(Z_m) converges to zero as p_m → ∞. Since at each iteration the weight ω_{σ_opt}(Z_m) of the optimal permutation is bounded above by 1, the weights of all non-optimal permutations converge to zero, which yields the following lemma.
Lemma 5.2. Assume that, as p_m → ∞, the deformed Sinkhorn iteration converges to a matrix Z. If the original matrix A has support, then all the permutations of Z have zero weight, except the optimal permutations of the original matrix A.
By the Birkhoff-von Neumann theorem, a square bistochastic matrix is a convex combination of permutation matrices. Hence, every nonzero entry of a bistochastic matrix belongs to a permutation with nonzero weight. This statement together with the previous lemma yields the following theorem.
Theorem 5.3. For a non-negative matrix A which has a support, as p m → ∞, if the deformed Sinkhorn iteration converges to a matrix X, then all the nonzero entries of X belong to an optimal permutation of the original matrix.
5.3. Convergence to a bistochastic matrix for positive matrices. Recall that the rate of convergence of the classical Sinkhorn iteration is bounded above by κ(A)², where κ(A) = (θ(A)^{1/2} − 1)/(θ(A)^{1/2} + 1). The following theorem presents the main result of this section:
Theorem 5.4. Let A be a positive matrix. If p_m = a log(m + 1), where 0 < a log θ(A) < 2, then the deformed Sinkhorn iteration converges to a bistochastic matrix, and subsequently to a solution of the optimal assignment problem for the original matrix A.
The proof relies on the next lemmas. For a matrix A, θ(A) = θ(A T ), and for two diagonally equivalent matrices such as A and B, θ(A) = θ(B).
Lemma 5.5. For positive matrices A and B, a diagonal matrix D, and the Hilbert projective metric d(x, x′), the following properties hold:
1. d(Ax, Ax′) ≤ κ(A) d(x, x′);
2. d((A • B)x, x′) ≤ log(max(B)/min(B)) + d(Ax, x′);
3. κ(AD • B) = κ(A • BD) = κ((A • B)D) = κ(D(A • B)) = κ(A • B).
Proof.
The proof is straightforward.
Corollary 5.6. κ(A) is invariant under the R and C operators.
Lemma 5.7. Let W_m and Z_m be the matrices in Equations (5.2), (5.3) at iteration m. The following properties hold:
1. κ(Z_m) = κ(A^{(p_{m+1})});
2. κ(W_m) = κ(A^{(p_m)}).
Proof. Straightforward, by induction on m.
The next lemma is similar to Lemma 2 in [FL89], where the classical Sinkhorn iteration is considered. Since 0 < α < 1, one readily checks that the sequence l_m decreases with m and converges to zero. If β_{m+1} ≤ l_m for every m, then lim_{m→∞} β_m ≤ lim_{m→∞} l_m = 0, and the result is established. Assume now that β_{m+1} > l_m for some m. Define δ_k := β_{k+1} − l_k for all k ≥ m. Observe that
δ_{k+1} = f_k(β_k) − f_k(l_k) = κ²(A^{(p_k)})(β_k − l_k) = κ²(A^{(p_k)}) δ_k + κ²(A^{(p_k)})(l_{k−1} − l_k).
Using the fact that κ²(A^{(p_r)}) ≤ 1 holds for all r, an immediate induction yields (5.4). Letting k → ∞ in (5.4), we get lim sup_{k→∞} δ_k ≤ l_m. Since this holds for all m, it follows that lim sup_{k→∞} δ_k ≤ 0, and so lim sup_{k→∞} β_{k+1} = lim sup_{k→∞} (δ_k + l_k) ≤ lim sup_{k→∞} δ_k + lim_{k→∞} l_k = 0. Hence, β_k converges to zero.
This completes the proof of Theorem 5.4, since lim_{m→∞} d(c_m, 𝟙) = 0 yields lim_{m→∞} d(r_m, 𝟙) = 0.
6. Conclusion. We considered the connection between the entropy maximization problem and the optimal assignment problem. This connection allows us to propose an algorithm which can be used as a preprocessing in the solution of large scale optimal assignment problems, reducing the size of the input problem in terms of memory requirements.
Two variants of the algorithm have been implemented. The first variant, which is based on Sinkhorn iteration, shows a generally reasonable convergence for dense matrices, with a reduction of up to 99% of the input problem. However the algorithm works slowly for sparse matrices. This version of the algorithm can be efficiently used as a parallel preprocessing to reduce the size of the input problem in very large dense optimal assignment problems.
Another variant of the algorithm, implemented by using the Newton iteration, shows fast convergence for all dense matrices and sparse symmetric matrices. However the convergence speed for sparse nonsymmetric matrices is slow.
The last part of the paper concerns a new iterative method which we refer to as the deformed Sinkhorn iteration. It is proved that this iteration converges to the solution of the optimal assignment problem if the input matrix is positive and has only one optimal permutation. For positive matrices with more than one optimal permutation, the iteration converges to a matrix in which every nonzero entry belongs to at least one optimal permutation.
Fig. 1.1. Euclidean random assignment problem (Section 4.1.2). Fig. 1.2. Reduced problem.
For p = 10, we have the following matrix, the significant entries of which indicate precisely the optimal and nearly optimal permutations:
Fig. 3.1. The variation of log_10 x_12(p) as a function of p.
By applying the Sinkhorn iteration to A^{(50)}, the following matrix can be computed:
4E−27    1.5E−08  1.0E+00  7.4E−26  4.7E−06
4.8E−02  9.4E−01  4.6E−56  4.0E−32  7.9E−28
2.5E−13  4.6E−19  9.3E−12  1.0E+00  1.0E−02
1.5E−23  1.2E−02  6.2E−27  4.3E−31  9.8E−01
9.5E−01  4.1E−02  6.2E−07  1.0E−34  2.3E−06
d(x, x′) is zero if and only if the vectors x and x′ are proportional. We refer to [BR97, § 6] for more background. In particular, if A is a positive matrix, a theorem of Birkhoff shows that the map x ↦ Ax is a contraction in Hilbert's projective metric, with contraction rate

    κ(A) := sup{ d(Ay, Ay′)/d(y, y′) : y, y′ ∈ C, y, y′ non proportional } = (θ(A)^{1/2} − 1)/(θ(A)^{1/2} + 1),

where

    θ(A) = exp sup{ d(Ay, Ay′) : y, y′ ∈ C } = max_{i,j,r,l} (a_{ir} a_{jl})/(a_{jr} a_{il}).

The following result is a consequence of this theorem.

Proposition 4.1 (Franklin and Lorenz [FL89]). For a positive matrix A, the global rate of convergence of the Sinkhorn iteration is bounded above by κ(A)².
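These quantities are easy to evaluate numerically; the sketch below (function names are mine) computes Hilbert's projective metric d(x, x′) = log max_{i,j} x_i x′_j/(x_j x′_i) between positive vectors, the oscillation θ(A), and Birkhoff's contraction rate κ(A), so the contraction property can be checked on small examples.

```python
from math import log, sqrt

def hilbert_d(x, y):
    """Hilbert's projective metric between two positive vectors."""
    n = len(x)
    return log(max(x[i] * y[j] / (x[j] * y[i])
                   for i in range(n) for j in range(n)))

def theta(A):
    """Oscillation theta(A): largest cross-ratio of entries of a positive matrix."""
    n, m = len(A), len(A[0])
    return max(A[i][r] * A[j][l] / (A[j][r] * A[i][l])
               for i in range(n) for j in range(n)
               for r in range(m) for l in range(m))

def birkhoff_rate(A):
    """Birkhoff's contraction rate kappa(A) = (sqrt(theta) - 1)/(sqrt(theta) + 1)."""
    t = sqrt(theta(A))
    return (t - 1.0) / (t + 1.0)

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]
```

On a 2-by-2 positive matrix one can verify directly that d(Ax, Ay) ≤ κ(A) d(x, y), with κ(A) strictly below 1.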
which illustrate the percentage of the remaining entries and the required number of Sinkhorn iterations, for several values of p, for the "lotkin" 1000 by 1000 matrix from the gallery of Matlab.

    ... /(log(Max)-log(Min));
    c=exp(log(Min)/(log(Max)-log(Min)));
    A=(1/c)*(A.^m);
    else
    m=1/log(Max);
Fig. 4.1. The number of iterations as a function of p.
Fig. 4.2. The percentage of remaining entries as a function of p.
Proposition 5.1. For a diagonal matrix D, real matrices B, C and the matrices W_m, Z_m in the iteration, the following properties hold:
1. R(C ∘ (DB)) = R(C ∘ B), where ∘ denotes the Hadamard product;
2. W_m = C(Z_{m−1});
3. Z_m = R(W_m ∘ A^{(p_{m+1}−p_m)}).
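Property 1 holds because left multiplication by the diagonal matrix D only rescales the rows of B, and row normalization cancels any row scaling. Assuming R and C denote row and column normalization (dividing each row, respectively column, by its sum), this can be checked numerically with the following sketch (helper names are mine):

```python
def rownorm(A):
    """R: divide each row by its sum."""
    return [[a / sum(row) for a in row] for row in A]

def colnorm(A):
    """C: divide each column by its sum."""
    n, m = len(A), len(A[0])
    s = [sum(A[i][j] for i in range(n)) for j in range(m)]
    return [[A[i][j] / s[j] for j in range(m)] for i in range(n)]

def hadamard(A, B):
    """Entrywise (Hadamard) product."""
    return [[a * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def diag_times(d, B):
    """DB for a diagonal matrix D given by its diagonal entries d."""
    return [[d[i] * b for b in row] for i, row in enumerate(B)]
```

For instance, with C = [[1, 2], [3, 4]], B = [[5, 1], [2, 7]] and diagonal (2, 5), the matrices R(C ∘ (DB)) and R(C ∘ B) coincide entrywise.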
Lemma 5.8. Let r_m, c_m be the vectors defined in Equations (5.2), (5.3) at iteration m, and let M = max(A)/min(A). Then

    d(r_m, 𝟙) ≤ (p_{m+1} − p_m) log M + (p_m − p_{m−1}) κ(A^{(p_m)}) log M + κ(A^{(p_m)}) κ(A^{(p_{m−1})}) d(r_{m−1}, 𝟙),
    d(c_m, 𝟙) ≤ (p_m − p_{m−1}) log M + (p_m − p_{m−1}) κ(A^{(p_{m−1})}) log M + κ²(A^{(p_{m−1})}) d(c_{m−1}, 𝟙).

Proof. Let 𝟙/V denote the entrywise inverse of a given vector V. We have

    r_m = (W_m ∘ A^{(p_{m+1}−p_m)}) 𝟙
        = (Z_{m−1} diag(𝟙/c_m) ∘ A^{(p_{m+1}−p_m)}) 𝟙
        = (Z_{m−1} ∘ A^{(p_{m+1}−p_m)}) diag(𝟙/c_m) 𝟙
        = (Z_{m−1} ∘ A^{(p_{m+1}−p_m)}) (𝟙/c_m),

so

    d(r_m, 𝟙) = d((Z_{m−1} ∘ A^{(p_{m+1}−p_m)})(𝟙/c_m), Z_{m−1} 𝟙)
              ≤ (p_{m+1} − p_m) log M + κ(Z_{m−1}) d(c_m, 𝟙)
              = (p_{m+1} − p_m) log M + κ(A^{(p_m)}) d(c_m, 𝟙).

Also

    d(c_m, 𝟙) = d((W_{m−1}^T ∘ A^{(p_m − p_{m−1})T})(𝟙/r_{m−1}), W_{m−1}^T 𝟙)
              ≤ (p_m − p_{m−1}) log M + κ(W_{m−1}^T) d(𝟙/r_{m−1}, 𝟙)
              = (p_m − p_{m−1}) log M + κ(W_{m−1}) d(r_{m−1}, 𝟙)
              = (p_m − p_{m−1}) log M + κ(A^{(p_{m−1})}) d(r_{m−1}, 𝟙),

and therefore

    d(r_m, 𝟙) ≤ (p_{m+1} − p_m) log M + (p_m − p_{m−1}) κ(A^{(p_m)}) log M + κ(A^{(p_m)}) κ(A^{(p_{m−1})}) d(r_{m−1}, 𝟙).

The second statement is established in a similar way.

Lemma 5.9. Assume that p_m = a log(m + 1), where 0 < a log θ(A) < 2. Then lim_{m→∞} d(c_m, 𝟙) = 0.

Proof. Since

    d(c_m, 𝟙) ≤ a log((m + 1)/m) log M + a log((m + 1)/m) κ(A^{(p_{m−1})}) log M + κ²(A^{(p_{m−1})}) d(c_{m−1}, 𝟙)
              < (2a log M)/m + κ²(A^{(p_{m−1})}) d(c_{m−1}, 𝟙),

let β_1 := d(c_1, 𝟙), and define the sequence β_m by β_m := f_m(β_{m−1}), where f_m(t) := (2a log M)/m + κ²(A^{(p_{m−1})}) t. Since every function f_m is nondecreasing, an immediate induction shows that d(c_m, 𝟙) ≤ β_m for all m ≥ 1, and so it suffices to show that lim_m β_m = 0. Let l_m be the fixed point of f_m.

    … κ(A^{(p_r)}) δ_m + l_m − l_k,  ∀k ≥ m + 1.   (5.4)

Since 1 − κ²(A^{(p_r)}) ∼ 4r^{−α}, we have ∏_{r=m}^{∞} κ(A^{(p_r)}) = 0.
Table 4.1. Sinkhorn iteration for dense matrices from the gallery of test matrices of Matlab and for random and random Euclidean distance matrices

Gallery    | nnz      | No. it. | Val(OAP)     | Rem. En.(%)
cauchy     | 25000000 | 79      | 4.54725E+00  | 47.95
minij      | 25000000 | 473     | 1.25025E+07  | 26.57
moler      | 25000000 | 304     | 4.99950E+07  | 28.43
orthog     | 25000000 | 304     | 4.99950E+07  | 28.43
pei        | 25000000 | 1       | 5.50000E+04  | 00.02
prolate    | 25000000 | 42      | 2.00000E+03  | 00.66
randcorr   | 25000000 | 1       | 5.00000E+03  | 00.02
toeppd     | 25000000 | 1       | 1.24767E+07  | 00.02
chebvand   | 24997500 | 2       | 5.00000E+03  | 38.67
circul     | 25000000 | 1       | 2.50000E+07  | 19.48
cycol      | 25000000 | 3       | 1.73422E+04  | 13.23
lotkin     | 25000000 | 73      | 5.54715E+00  | 48.59
rand       | 25000000 | 2       | 4.99837E+03  | 28.38
Euclidean  | 25000000 | 417     | 4.77693E+03  | 01.49
chebspec   | 25000000 | 1084    | 5.33411E+07  | 01.98
lehmer     | 25000000 | 3537    | 5.00000E+03  | 18.58
gcdmat     | 25000000 | 11174   | 1.25025E+07  | 00.06
Table 4.2. Newton iteration for dense symmetric matrices

Gallery    | nnz      | No. it. | Val(OAP)      | Rem. En.(%)
cauchy     | 25000000 | 156     | -4.10569E+04  | 47.95
fiedler    | 24995000 | 175     | 3.91202E+04   | 35.73
gcdmat     | 25000000 | 152     | 3.75911E+04   | 00.06
lehmer     | 25000000 | 166     | 0.00000E+00   | 18.58
minij      | 25000000 | 167     | 3.75911E+04   | 26.57
moler      | 25000000 | 167     | 4.45149E+04   | 28.43
orthog     | 25000000 | 164     | -1.9561E+04   | 48.10
pei        | 25000000 | 151     | 1.19895E+04   | 00.02
prolate    | 25000000 | 155     | -4.58145E+03  | 00.66
randcorr   | 25000000 | 151     | 0.00000E+00   | 00.02
toeppd     | 25000000 | 151     | 3.91132E+04   | 00.02
Table 4.3. Newton iteration for dense nonsymmetric matrices

Gallery    | nnz      | No. it. | Val(OAP)      | Rem. En.(%)
chebspec   | 25000000 | 251     | 4.03274E+04   | 01.98
chebvand   | 24997500 | 166     | -1.19254E-03  | 38.67
circul     | 25000000 | 161     | 4.25860E+04   | 19.48
cycol      | 25000000 | 162     | 6.19386E+03   | 11.81
lotkin     | 25000000 | 257     | -4.10477E+04  | 48.59
rand       | 25000000 | 164     | -1.63137E+00  | 28.39
Euclidean  | 25000000 | 314     | -2.30779E+02  | 01.49
Table 4.4. Newton iteration for sparse symmetric matrices

Gallery        | n       | nnz      | No. it. | Val(OAP)      | Rem. En.(%)
2cubes sphere  | 101492  | 1647264  | 155     | 1.29645E+06   | 95.91
Andrews        | 60000   | 760154   | 151     | 1.45202E+05   | 07.89
apache2        | 715176  | 4817870  | 155     | 6.65166E+06   | 26.18
boneS01        | 127224  | 5516602  | 153     | 1.13622E+06   | 02.31
denormal       | 89400   | 1156224  | 153     | -2.88379E+05  | 07.73
Dubcova3       | 146689  | 3636643  | 159     | -8.55189E+03  | 46.57
ecology1       | 1000000 | 4996000  | 153     | 3.61494E+06   | 20.02
filter3D       | 106437  | 2707179  | 161     | -7.01011E+05  | 79.95
finan512       | 74752   | 596992   | 151     | 1.03471E+05   | 19.67
G2 circuit     | 150102  | 726674   | 153     | 6.58486E+05   | 41.77
GaAsH6         | 61349   | 3381809  | 162     | 2.32268E+05   | 28.82
gas sensor     | 66917   | 1703365  | 160     | -4.89303E+05  | 90.37
H2O            | 67024   | 2216736  | 153     | 3.08149E+05   | 03.02
helm2d03       | 392257  | 2741935  | 153     | 5.01026E+05   | 14.31
Lin            | 256000  | 1766400  | 153     | 1.60526E+06   | 14.49
nasasrb        | 54870   | 2677324  | 161     | 8.56473E+05   | 62.37
offshore       | 259789  | 4242673  | 161     | 4.84144E+06   | 99.87
parabolic fem  | 525825  | 3674625  | 153     | -4.83938E+05  | 71.46
qa8fm          | 66127   | 1660579  | 153     | -5.51168E+05  | 03.98
rail 79841     | 79841   | 553921   | 151     | -8.54968E+05  | 15.09
s3dkq4m2       | 90449   | 4427725  | 161     | 5.21115E+04   | 73.77
shallow water2 | 81920   | 327680   | 151     | 1.95771E+06   | 25.00
ship 003       | 121728  | 3777036  | 161     | 3.05969E+06   | 85.85
shipsec8       | 114919  | 3303553  | 164     | 1.94819E+06   | 82.96
t3dh e         | 79171   | 4352105  | 156     | -1.28870E+06  | 27.32
thermomech TK  | 102158  | 711558   | 151     | 4.85968E+05   | 15.49
tmt sym        | 726713  | 5080961  | 158     | 1.00529E+06   | 71.46
G3 circuit     | 1585478 | 7660826  | 153     | 6.72048E+06   | 72.19
SiO2           | 155331  | 11283503 | 153     | 7.14208E+05   | 17.34
thermal2       | 1228045 | 8580313  | 154     | 1.63908E+06   | 80.32
Table 4.5. Newton iteration for sparse nonsymmetric matrices

Gallery    | n      | nnz     | No. it. | Val(OAP)      | Rem. En.(%)
af23560    | 23560  | 460598  | 2248    | 8.74776E+04   | 70.32
bayer04    | 20545  | 85537   | 183275  | -5.45190E+04  | 80.21
bbmat      | 38744  | 1771722 | 2234    | 4.73786E+04   | 32.83
ecl32      | 51993  | 380415  | 23389   | -2.73185E+05  | 81.66
g7jac200sc | 59310  | 717620  | 47245   | 3.93891E+04   | 86.40
gemat11    | 4929   | 33108   | 2780    | 4.07095E+03   | 84.70
graham1    | 9035   | 335472  | 4014    | -1.84675E+04  | 51.59
hcircuit   | 105676 | 513072  | 34980   | -3.83585E+05  | 88.59
hydr1      | 5308   | 22680   | 73772   | 5.25311E+03   | 78.65
jpwh 991   | 991    | 6027    | 151     | 1.47688E+03   | 16.44
lhr71c     | 70304  | 1528092 | 2227871 | -7.63013E+04  | 83.56
mahindas   | 1258   | 7682    | 3485    | -6.49190E+01  | 31.71
onetone1   | 36057  | 335552  | 23601   | 1.13220E+05   | 87.97
onetone2   | 36057  | 222596  | 24122   | 1.13220E+05   | 85.64
orani678   | 2529   | 90158   | 5073    | -1.57076E+02  | 05.20
sherman3   | 5005   | 20033   | 168     | -2.62102E+04  | 85.63
sherman5   | 3312   | 20793   | 1696    | 6.67064E+03   | 29.55
Acknowledgment. The authors thank Jean-Charles Gilbert for his comments on an early version of this manuscript.
References

P. Amestoy, I. S. Duff, D. Ruiz, and B. Uçar. A parallel matrix scaling algorithm. In High Performance Computing for Computational Science - VECPAR 2008, volume 5336 of Lecture Notes in Computer Science, pages 301-313. Springer Berlin / Heidelberg, 2008.
R. Burkard, M. Dell'Amico, and S. Martello. Assignment problems. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2009.
Y. Brenier, U. Frisch, M. Henon, G. Loeper, S. Matarrese, R. Mohayaee, and A. Sobolevskii. Reconstruction of the early universe as a convex optimization problem. Mon. Not. Roy. Astron. Soc., 346:501-524, 2003.
G. Birkhoff. Three observations on linear algebra. Univ. Nac. Tucumán. Revista A., 5:147-151, 1946.
T. Bewley and E. Kohlberg. The asymptotic theory of stochastic games. Math. Oper. Res., 1(3):197-208, 1976.
J. M. Borwein, A. S. Lewis, and R. D. Nussbaum. Entropy minimization, DAD problems, and doubly stochastic kernels. J. Funct. Anal., 123(2):264-307, 1994.
S. Basu, R. Pollack, and M.-F. Roy. Algorithms in real algebraic geometry, volume 10 of Algorithms and Computation in Mathematics. Springer-Verlag, Berlin, second edition, 2006.
R. B. Bapat and T. E. S. Raghavan. Nonnegative matrices and applications, volume 64 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, 1997.
R. A. Brualdi. The DAD theorem for arbitrary row sums. Proc. Amer. Math. Soc., 45:189-194, 1974.
L. Buš and P. Tvrdík. Towards auction algorithms for large dense assignment problems. Comput. Optim. Appl., 43(3):411-436, 2009.
Y.-Q. Cheng, V. Wu, R. T. Collins, A. R. Hanson, and E. M. Riseman. Maximum-weight bipartite matching technique and its application in image feature matching. In Proc. SPIE Visual Comm. and Image Processing, 1996.
E. A. Dinic and M. A. Kronrod. An algorithm for solving the assignment problem. Dokl. Akad. Nauk SSSR, 189:23-25, 1969.
I. S. Duff and J. Koster. On algorithms for permuting large entries to the diagonal of a sparse matrix. SIAM J. Matrix Anal. Appl., 22(4):973-996, 2000.
I. S. Duff, D. Ruiz, and B. Uçar. Computing a class of bipartite matchings in parallel. Presentation at SIAM 13th Conference on Parallel Processing for Scientific Computing (PP08), Atlanta, GA, USA, March 2008.
J. Edmonds and R. M. Karp. Theoretical improvements in algorithmic efficiency for network flow problems. In Combinatorial Structures and their Applications (Proc. Calgary Internat. Conf., Calgary, Alta., 1969), pages 93-96. Gordon and Breach, New York, 1970.
J. Franklin and J. Lorenz. On the scaling of multidimensional matrices. Linear Algebra Appl., 114/115:717-735, 1989.
S.-C. Fang, J. R. Rajasekera, and H.-S. J. Tsao. Entropy optimization and mathematical programming. International Series in Operations Research & Management Science, 8. Kluwer Academic Publishers, Boston, MA, 1997.
M. L. Fredman and R. E. Tarjan. Fibonacci heaps and their uses in improved network optimization algorithms. J. Assoc. Comput. Mach., 34(3):596-615, 1987.
C.-Y. Huang, Y.-S. Chen, Y.-L. Lin, and Y.-C. Hsu. Data path allocation based on bipartite weighted matching. Design Automation Conference, pages 499-504, 1990.
L. Holm. Protein structure comparison by alignment of distance matrices. Journal of Molecular Biology, 233(1):123-138, September 1993.
L. Khachiyan and B. Kalantari. Diagonal matrix scaling and linear programming. SIAM J. Optim., 2(4):668-672, 1992.
P. A. Knight. The Sinkhorn-Knopp algorithm: convergence and applications. SIAM J. Matrix Anal. Appl., 30(1):261-275, 2008.
P. A. Knight and D. Ruiz. A fast algorithm for matrix balancing. In Andreas Frommer, Michael W. Mahoney, and Daniel B. Szyld, editors, Web Information Retrieval and Linear Algebra Algorithms, number 07071 in Dagstuhl Seminar Proceedings, Dagstuhl, Germany, 2007. Internationales Begegnungs- und Forschungszentrum für Informatik (IBFI), Schloss Dagstuhl, Germany.
H. W. Kuhn. The Hungarian method for the assignment problem. Naval Res. Logist. Quart., 2:83-97, 1955.
Y. H. Lin, H. C. Chang, and Y. L. Lin. A study on tools and algorithms for 3-D protein structures alignment and comparison. In Int. Computer Symposium, pages 1000-1005, December 2004.
X. S. Li and J. W. Demmel. SuperLU DIST: a scalable distributed-memory sparse direct solver for unsymmetric linear systems. ACM Transactions on Mathematical Software, 29(2), 2003.
Y. Lee and J. B. Orlin. On very large scale assignment problems. In Large scale optimization (Gainesville, FL, 1993), pages 206-244. Kluwer Acad. Publ., Dordrecht, 1994.
N. Linial, A. Samorodnitsky, and A. Wigderson. A deterministic strongly polynomial algorithm for matrix scaling and approximate permanents. Combinatorica, 20(4):545-568, 2000.
T. Markwig. A field of generalised Puiseux series for tropical geometry. To appear in Rend. Semin. Mat. Torino (2009).
L. Mirsky. Symmetric gauge functions and unitarily invariant norms. Quart. J. Math. Oxford Ser. (2), 11:50-59, 1960.
M. Mezard, G. Parisi, and M. Virasoro. Spin Glass Theory and Beyond (World Scientific Lecture Notes in Physics, Vol. 9). World Scientific Publishing Company.
M. V. Menon and H. Schneider. The spectrum of a nonlinear operator associated with a matrix. Linear Algebra and Appl., 2:321-334, 1969.
G. Malajovich and J. P. Zubelli. Tangent Graeffe iteration. Numer. Math., 89(4):749-782, 2001.
Y. Nesterov and A. Nemirovskii. Interior-point polynomial algorithms in convex programming, volume 13 of SIAM Studies in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1994.
M. Olschowka and A. Neumaier. A new pivoting strategy for Gaussian elimination. Linear Algebra Appl., 240:131-151, 1996.
G. Parisi. Euclidean random matrices, the glass transition and the boson peak. The European Physical Journal E: Soft Matter and Biological Physics, 9(3):213-218, November 2002.
M. H. Schneider. Matrix scaling, entropy minimization, and conjugate duality. I. Existence conditions. Linear Algebra Appl., 114/115:785-813, 1989.
R. Sinkhorn and P. Knopp. Concerning nonnegative matrices and doubly stochastic matrices. Pacific J. Math., 21:343-348, 1967.
M. H. Schneider and S. A. Zenios. A comparative study of algorithms for matrix balancing. Oper. Res., 38:439-455, May 1990.
A. Tarski. A decision method for elementary algebra and geometry. University of California Press, Berkeley and Los Angeles, Calif., second edition, 1951.
O. Viro. Dequantization of real algebraic geometry on logarithmic paper. In European Congress of Mathematics, Vol. I (Barcelona, 2000), volume 201 of Progr. Math., pages 135-146. Birkhäuser, Basel, 2001.
doi: 10.1088/1751-8113/44/21/212001
arXiv: 1012.3786
A note on the optimality of decomposable entanglement witnesses and completely entangled subspaces

R. Augusiak¹, J. Tura², and M. Lewenstein¹,³,⁴

¹ ICFO-Institut de Ciències Fotòniques, Parc Mediterrani de la Tecnologia, 08860 Castelldefels, Spain
² Centre de Formació Interdisciplinària Superior, Universitat Politècnica de Catalunya, Pau Gargallo 5, 08028 Barcelona, Spain
³ ICREA-Institució Catalana de Recerca i Estudis Avançats, Lluis Companys 23, 08010 Barcelona, Spain
⁴ Kavli Institute for Theoretical Physics, University of California, Santa Barbara, California 93106-4030

7 Apr 2011

Abstract. Entanglement witnesses (EWs) constitute one of the most important entanglement detectors in quantum systems. Nevertheless, their complete characterization, in particular with respect to the notion of optimality, is still missing, even in the decomposable case. Here we show that for any qubit-qunit decomposable EW (DEW) W the three statements are equivalent: (i) the set of product vectors obeying ⟨e, f|W|e, f⟩ = 0 spans the corresponding Hilbert space, (ii) W is optimal, (iii) W = Q^Γ with Q denoting a positive operator supported on a completely entangled subspace (CES) and Γ standing for the partial transposition. While implications (i) ⇒ (ii) and (ii) ⇒ (iii) are known, here we prove that (iii) implies (i). This is a consequence of a more general fact saying that product vectors orthogonal to any CES in ℂ² ⊗ ℂⁿ span, after partial conjugation, the whole space. On the other hand, already in the case of the ℂ³ ⊗ ℂ³ Hilbert space, there exist DEWs for which (iii) does not imply (i). Consequently, either (i) does not imply (ii), or (ii) does not imply (iii), and the above transparent characterization obeyed by qubit-qunit DEWs does not hold in general.
Introduction
Entanglement witnesses (EW) [1,2] provide one of the best known methods of entanglement detection in composite (bipartite and multipartite) quantum systems (see the recent review [3] for other methods). These are Hermitian operators which, on one hand, have nonnegative mean values in all separable states and, on the other hand, they must have negative mean values in some entangled states.
The particular importance of EWs in the detection of entanglement stems from several facts. First of all, we know that they give rise to a necessary and sufficient condition for separability [1] (see also [4] for the multipartite case). Precisely, a given ̺ is separable if and only if ⟨W⟩_̺ ≡ Tr(W̺) ≥ 0 for all EWs or, equivalently, ̺ is entangled if and only if ⟨W⟩_̺ < 0 for at least one such W. It is clearly not feasible to check the "if" part of this criterion; nevertheless, it still gives a strong necessary condition for separability. Then, as was first stressed in Ref. [2], since EWs are Hermitian operators, they correspond to quantum observables, and therefore the above criterion is applicable in experiments (see e.g. [5,6]). Finally, there is a whole body of works indicating their quantitative meaning (see e.g. [7,8,9,10,11]). More precisely, mean values of entanglement witnesses not only serve as entanglement detectors, but can also tell us how entangled a state is.
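A textbook example behind these statements is the two-qubit swap operator V, defined by V|e, f⟩ = |f, e⟩: it satisfies ⟨e, f|V|e, f⟩ = |⟨e|f⟩|² ≥ 0 on every product vector, yet its mean value in the singlet state is −1, so it is an (in fact decomposable) entanglement witness. A small pure-Python check of these two properties (helper names are mine):

```python
def kron(e, f):
    """|e, f> = |e> tensor |f> as a flat list of coefficients."""
    return [a * b for a in e for b in f]

def inner(v, w):
    """<v|w>, conjugate-linear in the first argument."""
    return sum(a.conjugate() * b for a, b in zip(v, w))

def apply_swap(v):
    """Two-qubit swap operator V: |i, j> -> |j, i> on basis coefficients."""
    return [v[0], v[2], v[1], v[3]]

def swap_mean(v):
    """Mean value <v| V |v>."""
    return inner(v, apply_swap(v))
```

On a product vector |e, f⟩ the mean value equals |⟨e|f⟩|², which is manifestly nonnegative, while the singlet (|01⟩ − |10⟩)/√2 yields −1, exhibiting the negative eigenvalue required of a witness.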
Despite the extensive literature cited above, as well as Refs. [12,13,14,15,16,17,18,19,20,21,22], which aim at studying their properties and providing methods of construction, it seems that much can, and should, still be said about EWs. In particular, a complete characterization and classification of EWs is far from satisfactory. The structure of the so-called optimal EWs (in the sense of Ref. [12]; see also below for the definition), even in the decomposable case, is still unknown. The importance of this problem stems from the fact that the above separability criterion can be restated using optimal EWs only. This is because every EW which is not optimal can be optimized [12]. Therefore it is of great importance to characterize the set of EWs with respect to their optimality. Early attempts to achieve this goal were discussed already in [12].
The main purpose of this note is to make progress on the above problems. We investigate a few notions connected to the optimality of decomposable entanglement witnesses. In particular, we show that in the case of qubit-qunit Hilbert spaces a more exhaustive characterization with respect to optimality can be given. Precisely, for all qubit-qunit decomposable entanglement witnesses the three statements are equivalent: (i) W is optimal, (ii) W = Q^Γ with Q being a positive operator supported on a completely entangled subspace (CES) and Γ denoting the partial transposition map ‡, (iii) the Hilbert space ℂ² ⊗ ℂⁿ is spanned by product vectors obeying ⟨e, f|W|e, f⟩ = 0. We achieve this goal by showing that product vectors orthogonal to any CES of ℂ² ⊗ ℂⁿ span, after partial conjugation (PC) §, the space ℂ² ⊗ ℂⁿ. This means that (ii) implies (iii), which together with the already proven facts that (iii) implies (i) and (i) implies (ii) [12] gives the above equivalence. This fact also solves, at least in this particular case, the long-standing question of whether (i) implies (iii).
‡ Note that we do not specify the subsystem on which the transposition map is applied, since our results are independent of this choice. However, for convenience, in all the proofs the transposition map is applied to the lower-dimensional subsystem.
§ By the partial conjugation of a product vector |e, f⟩ we mean the complex conjugation of either |e⟩ or |f⟩. Since our results do not depend on the choice of the subsystem subject to PC, we do not state explicitly on which subsystem it acts. Nevertheless, for convenience, in all the proofs it is applied to the lower-dimensional subsystem.
Then we study DEWs acting on higher-dimensional Hilbert spaces and show that already in the simplest case of 3 ⊗ 3 the above equivalence appears to be false. Specifically, depending on the rank of Q, (ii) does not always imply (iii). This in turn implies that either not all witnesses admitting the form W = Q Γ are optimal ((ii) does not imply (i)), or not all OEWs have the property that product vectors satisfying e, f |W |e, f = 0 span the corresponding Hilbert space ((i) does not imply (iii)).
It should be noticed that in the case of indecomposable EWs (IEWs), examples of witnesses for which (i) does not imply (iii) are already known. A particular example of such a witness comes from the Choi map [23]. The latter is extremal in the convex set of positive maps [24] and therefore gives an optimal EW (see e.g. Ref. [22]). On the other hand, the product vectors from ℂ³ ⊗ ℂ³ at which this witness has zero mean value span only a seven-dimensional subspace of ℂ³ ⊗ ℂ³ (see Refs. [18,21]). Recently, using the theory of convex cones, the geometrical properties of such witnesses have been studied in Ref. [21].
The paper is organized as follows. In Sec. 2 we recall all the necessary notions and present, in a concise way, all we need about optimality of DEW. Then, in Sec. 3, we present our main results. We conclude in Sec. 4.
Preliminaries
For further reference, let us now recall some definitions and facts regarding decomposable entanglement witnesses. We give the definitions of separable states, entanglement witnesses, and optimal and decomposable entanglement witnesses. Then we briefly review what is known about the relations between optimality and decomposability of EWs.
In what follows we will be concerned with finite-dimensional product Hilbert spaces ℂ^m ⊗ ℂ^n, henceforward denoted shortly by H_{m,n}. By D_{m,n} and D^sep_{m,n} we denote, respectively, the set of all density matrices and of separable density matrices acting on H_{m,n}. In the case of equal local dimensions m = n, we use a single subscript m. Finally, M_m(ℂ) will denote the set of m × m matrices with complex entries.
Following Ref. [25], we call a density matrix ̺ acting on H m,n separable if it can be written as
    ̺ = Σ_i p_i |a_i⟩⟨a_i| ⊗ |b_i⟩⟨b_i|,   p_i ≥ 0,   Σ_i p_i = 1,   (1)
where |a_i⟩ and |b_i⟩ denote some pure states from ℂ^m and ℂ^n, respectively. In 1996, based on the Hahn-Banach separation theorem (cf. [26]), an important fact regarding the separability problem was proven [1]. Namely, a state ̺ acting on H_{m,n} is entangled if and only if there exists a Hermitian operator W ∈ M_m(ℂ) ⊗ M_n(ℂ) such that Tr(W̺) < 0 and at the same time Tr(Wσ) ≥ 0 for all σ ∈ D^sep_{m,n}. This fact gives rise to the following definition. Any Hermitian operator W acting on H_{m,n} is called an entanglement witness if it has the properties: (i) its mean value ⟨W⟩_σ in any σ ∈ D^sep_{m,n} is nonnegative, (ii) there exists an entangled state σ such that ⟨W⟩_σ < 0. Notice that both conditions can be rephrased as follows: (i) ⟨e, f|W|e, f⟩ ≥ 0 for any pair of vectors |e⟩ ∈ ℂ^m and |f⟩ ∈ ℂ^n, (ii) W has at least one negative eigenvalue. Now, via the Choi-Jamiołkowski isomorphism [27,28], the theory of positive maps induces the following partition of EWs [29,12]. An entanglement witness W is called decomposable (DEW) if it can be written as W = aP + (1 − a)Q^Γ with P, Q ≥ 0 and a ∈ [0, 1]. EWs that do not admit this form are called indecomposable. Notice that decomposable witnesses detect only states which have nonpositive partial transposition (NPT). For the detection of entangled states with positive partial transposition we need to use indecomposable entanglement witnesses.
Let us now pass to the notion of optimality. To this aim we introduce
    D_W = {̺ ∈ D_{m,n} | ⟨W⟩_̺ < 0},   (2)
that is, the set of all entangled states detected by W . Following Ref. [12], we say that given two EWs
W_i (i = 1, 2), W_1 is finer than W_2 if D_{W_2} ⊆ D_{W_1}.
Then, we say that W is optimal if there does not exist any other entanglement witness which is finer than W. It was shown in Ref. [12] that W_1 is finer than W_2 if and only if there exist a positive number ε and a positive operator P such that W_1 can be expressed as W_1 = (1 − ε)W_2 + εP. This immediately implies that W is optimal iff for any ε > 0 and P ≥ 0 the operator W′ = (1 + ε)W − εP is not an EW. The only candidates for positive operators that can be subtracted from W according to the above recipe must obey P P_W = 0, with
P_W = { |e, f⟩ ∈ H_{m,n} | ⟨e, f|W|e, f⟩ = 0 }.  (3)
This implies a sufficient criterion for the optimality of EWs: if the set of product vectors P_W spans the Hilbert space H_{m,n}, the witness W is optimal. Finally, applying the above facts to the general form of a DEW, one concludes that if a decomposable EW is optimal, it has to be of the form

W = Q^Γ,  Q ≥ 0,  (4)

where supp(Q) does not contain any product vector, or, in other words, supp(Q) is a completely entangled subspace (CES) of H_{m,n}.
Optimality and product vectors in subspaces orthogonal to completely entangled subspaces
From the preceding section we know that, regarding the optimality of decomposable EWs, two facts hold: (i) if a DEW W is optimal, then it has to be of the form (4), and (ii) if the set P_W corresponding to W spans H_{m,n}, then W is optimal. One could then ask whether the opposite statements are also true; in other words, whether optimality of W is equivalent to the form (4), or to the fact that P_W spans H_{m,n}. First we show that, in the case of the Hilbert space H_{2,n}, the fact that a DEW W can be written as in Eq. (4) implies that P_W spans H_{2,n}. This immediately implies that both of the above equivalences hold. On the other hand, we show that already in the ℂ³ ⊗ ℂ³ case there are witnesses admitting the form (4) for which P_W does not span H_3. Consequently, one of the above equivalences cannot hold: either not all DEWs of the form (4) are optimal, or optimality does not imply that P_W spans H_3.
Before we start with our proofs, let us notice that, since we deal only with witnesses admitting the form (4), the question about the properties of P_W can be seen as a question about the properties of product vectors orthogonal to completely entangled subspaces. This is a consequence of a simple property of the transposition map, namely that its dual map is again the transposition map, which allows one to conclude that ⟨e, f|Q^Γ|e, f⟩ = ⟨e*, f|Q|e*, f⟩ for any product vector |e, f⟩ ∈ H_{m,n}. This, together with the positivity of Q, implies that |e, f⟩ belongs to P_W iff |e*, f⟩ ∈ ker(Q). Thus, in what follows we can ask a slightly more general question, namely whether the partially conjugated product vectors orthogonal to a given CES span the corresponding Hilbert space. For instance, we will show that for any CES V of ℂ² ⊗ ℂⁿ, the product vectors belonging to V⊥ span V⊥, while their partial conjugations span H_{2,n}. Notice that completely entangled subspaces were recently investigated, e.g., in [30,31,32,33]. In particular, it was shown that the maximal dimension of a CES in H_{m,n} is (m − 1)(n − 1). This translates into the upper bound on the rank of Q in (4), i.e., r(Q) ≤ (m − 1)(n − 1).

We still need to introduce some more terminology. We say that a positive operator Q is supported on H_{m,n} if Q_A ≡ Tr_B(Q) and Q_B ≡ Tr_A(Q) (notice that both are positive) have ranks m and n, respectively. Otherwise, if either Q_A or Q_B contains some vectors in its kernel, the operator Q acts effectively on a Hilbert space of smaller dimension. This can be translated to subspaces of H_{m,n}: we say that a given V is supported on H_{m,n} if the latter is the "smallest" Hilbert space of which V can be a subspace; in other words, the projector onto V is supported on H_{m,n}.
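The partial-conjugation identity invoked above is easy to verify numerically. The sketch below (our own code; Γ is taken as transposition on the first subsystem, matching the conjugation of |e⟩) checks it for a random positive operator and a random product vector.

```python
import numpy as np

rng = np.random.default_rng(1)
dA, dB = 3, 4

def partial_transpose_A(M, dA, dB):
    """Partial transposition on the first factor of C^dA (x) C^dB."""
    return M.reshape(dA, dB, dA, dB).transpose(2, 1, 0, 3).reshape(dA * dB, dA * dB)

# a random positive operator Q on C^dA (x) C^dB
G = rng.normal(size=(dA * dB, dA * dB)) + 1j * rng.normal(size=(dA * dB, dA * dB))
Q = G @ G.conj().T

# a random product vector |e, f>
e = rng.normal(size=dA) + 1j * rng.normal(size=dA)
f = rng.normal(size=dB) + 1j * rng.normal(size=dB)

# <e, f|Q^Gamma|e, f> versus <e*, f|Q|e*, f>
lhs = np.vdot(np.kron(e, f), partial_transpose_A(Q, dA, dB) @ np.kron(e, f))
rhs = np.vdot(np.kron(e.conj(), f), Q @ np.kron(e.conj(), f))
```

Both sides agree (and are real, since Q^Γ is Hermitian), which is exactly the statement that |e, f⟩ ∈ P_W iff |e*, f⟩ ∈ ker(Q) when W = Q^Γ with Q ≥ 0.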
Finally, let V be a subspace of some Hilbert space H. By V⊥ we denote the subspace of H consisting of all vectors orthogonal to V (the orthogonal complement of V in H). Also, the notation
ℂⁿ ∋ |f⟩ = Σ_i f_i |i⟩ ≡ (f_0, f_1, …, f_{n−1})  (5)
will be frequently used.
Decomposable witnesses acting on H_{2,n}
Let us first concentrate on the simplest case of m = 2. It follows from the previous discussion that the maximal dimension of a completely entangled subspace of ℂ² ⊗ ℂⁿ is n − 1. For pedagogical purposes let us start our considerations with this case. Then we will pass to the cases of the remaining possible dimensions.
Lemma 1. Let V be an (n − 1)-dimensional CES of ℂ² ⊗ ℂⁿ. Then there exists a nonsingular n × n matrix A such that the family of product vectors

|e(α), f(α)⟩ ≡ (1, α) ⊗ A(1, α, …, α^{n−1})  (α ∈ ℂ)  (6)

belongs to V⊥. Moreover, the vectors |e(α), f(α)⟩ (α ∈ ℂ) span V⊥, while the vectors |e*(α), f(α)⟩ span ℂ² ⊗ ℂⁿ.
Proof. Let |Ψ_i⟩ (i = 1, …, n − 1) denote the linearly independent vectors spanning V. All of them have to be entangled, as otherwise there would exist a product vector in V. This means that they can be expressed as

|Ψ_i⟩ = |0⟩|ψ_0^(i)⟩ + |1⟩|ψ_1^(i)⟩  (i = 1, …, n − 1),  (7)

with nonzero vectors |ψ_0^(i)⟩, |ψ_1^(i)⟩ ∈ ℂⁿ. Moreover, both sets {|ψ_0^(i)⟩} and {|ψ_1^(i)⟩} consist of linearly independent vectors, since otherwise in both cases it is possible to find a product vector in V.
Let us now look for the product vectors |e, f⟩ orthogonal to V, where we take |e⟩ = (1, α) ∈ ℂ² with α ∈ ℂ and an arbitrary |f⟩ = (f_0, …, f_{n−1}) ∈ ℂⁿ. The orthogonality conditions with respect to the |Ψ_i⟩ (i = 1, …, n − 1) give the set of n − 1 linear homogeneous equations
⟨Ψ_i|e, f⟩ = 0  (i = 1, …, n − 1)  (8)

for the n variables f_i. In order to solve it, we can fix one of the variables, say f_0 = 1, getting a system of n − 1 inhomogeneous equations for n − 1 variables. It can easily be solved, and the solution is given by
f_i(α) = R_i(α)/R(α)  (i = 1, …, n − 1),  (9)

where R_i and R are polynomials in α of degree at most n − 1. Moreover, since the vectors |ψ_1^(i)⟩ (i = 1, …, n − 1) are linearly independent, the degree of the polynomial R is exactly n − 1. Consequently, the product vectors in V⊥ we look for take the generic form

|e(α), f(α)⟩ = (1, α) ⊗ (R(α), R_1(α), …, R_{n−1}(α))  (α ∈ ℂ).  (10)

Note that we have multiplied everything by R(α) above, so that the expression (10) is valid also for R(α) = 0, while the expression (9) holds only when R(α) ≠ 0. Nevertheless, by continuity, or by a local change of basis, one shows that the vectors (10) for α being a root of R are also orthogonal to the |Ψ_i⟩. For further purposes, let us denote by V⊥_sep the subspace of V⊥ spanned by all the vectors (10).
The assumption that V does not contain any product vector implies that the polynomials R, R_i are all linearly independent. To see this explicitly, assume that only k < n of them are linearly independent. Then there have to exist n − k vectors |ξ_i⟩ (i = 1, …, n − k) orthogonal to the subspace of ℂⁿ spanned by |f(α)⟩ (α ∈ ℂ). Moreover, for any |h⟩ ∈ ℂ², the vectors |h⟩|ξ_i⟩ (i = 1, …, n − k) are orthogonal to V⊥_sep. In what follows we show that among the latter there exists at least one product vector which is orthogonal to V⊥ and thus has to be in V, contradicting the assumption that V is a CES.

For this purpose, notice that the vectors (10) span a (k + 1)-dimensional subspace of V⊥. As a result, there exists a set of n − k vectors |ω_i⟩ ∈ V⊥ (i = 1, …, n − k) which are orthogonal to all |e(α), f(α)⟩. Now, we take the following product vector

|η⟩ = (|0⟩ + γ|1⟩) ⊗ Σ_{i=1}^{n−k} b_i |ξ_i⟩  (11)

with γ ∈ ℂ and b_i ∈ ℂ being some parameters to be determined. Obviously, |η⟩ is already orthogonal to V⊥_sep. The orthogonality conditions ⟨ω_i|η⟩ = 0 (i = 1, …, n − k) give a system of n − k homogeneous equations for the n − k variables b_i of the form (M_1 + γM_2)|b⟩ = 0, with |b⟩ = (b_1, …, b_{n−k}) and M_i being some matrices. It has a nontrivial solution only if det(M_1 + γM_2) vanishes. The latter is a polynomial in γ of degree at most n − k, and the corresponding equation is obviously soluble over the complex field. Consequently, we obtain a product vector belonging to V, in contradiction with the assumption that V is a CES. Thus R, R_i (i = 1, …, n − 1) are linearly independent. This in turn means that there exists a nonsingular transformation A : ℂⁿ → ℂⁿ such that |f(α)⟩ = A(1, α, …, α^{n−1}) for any α ∈ ℂ.
On the other hand, it is easy to see that vectors
(1, α) ⊗ (1, α, …, α^{n−1})  (α ∈ ℂ)  (12)

span an (n + 1)-dimensional subspace of ℂ² ⊗ ℂⁿ, while their PCs, that is,

(1, α*) ⊗ (1, α, …, α^{n−1})  (α ∈ ℂ),  (13)

span the whole ℂ² ⊗ ℂⁿ. In the first case this is because, among the 2n monomials in α appearing in Eq. (12), only n + 1 are linearly independent. In the second case, α* is linearly independent of any polynomial in α, and thus the vectors (13) involve 2n linearly independent functions. Therefore, since A is of full rank, the vectors |e(α), f(α)⟩ (α ∈ ℂ) span V⊥, while the vectors |e*(α), f(α)⟩ span the whole H_{2,n}. This finishes the proof.

Let us now pass to the remaining cases with respect to the dimension of V.
Lemma 2. Let V be a k-dimensional CES of ℂ² ⊗ ℂⁿ with k < n − 1. Then there exists a nonsingular transformation A such that the vectors

(1, α) ⊗ A(R(α, β), β_1 R(α, β), …, β_{n−k−1} R(α, β), R_1(α, β), …, R_k(α, β))  (α, β_1, …, β_{n−k−1} ∈ ℂ)  (14)

span V⊥, while their PCs span ℂ² ⊗ ℂⁿ. Here β ≡ (β_1, …, β_{n−k−1}), and R(α, β) and R_i(α, β) are polynomials of degree at most k in α and of first degree in each β_i (i = 1, …, n − k − 1).
Proof. We can follow the same reasoning as in the proof of Lemma 1. Now we have k entangled vectors |Ψ_i⟩ spanning V, which can be written as in Eq. (7). For the same reason as before, both sets {|ψ_0^(i)⟩}_{i=1}^{k} and {|ψ_1^(i)⟩}_{i=1}^{k} consist of linearly independent vectors. Therefore we can always find a nonsingular transformation Ã : ℂⁿ → ℂⁿ such that Ã|ψ_1^(i)⟩ = |n − k − 1 + i⟩ (i = 1, …, k).
Let us now consider the locally transformed subspace Ṽ = (𝟙_2 ⊗ Ã)V, which is also a CES, and look for the separable vectors belonging to Ṽ⊥ of the form

(1, α) ⊗ (1, β_1, …, β_{n−k−1}, f_1, …, f_k),  (15)

where the β_i ∈ ℂ are free parameters and the f_i (i = 1, …, k) are to be determined.
The orthogonality conditions with respect to the k vectors spanning Ṽ, i.e., |Ψ̃_i⟩ = (𝟙_2 ⊗ Ã)|Ψ_i⟩, lead us to the following inhomogeneous linear equations (writing |ψ̃_0^(i)⟩ ≡ Ã|ψ_0^(i)⟩):

Σ_{j=1}^{k} f_j [⟨ψ̃_0^(i)|n − k − 1 + j⟩ + αδ_ij] = x_i(α, β)  (i = 1, …, k),  (16)
where x i (α, β) are polynomials of the first degree in α and all βs.
Following the same reasoning as in the proof of Lemma 1, one obtains the product vectors orthogonal to the |Ψ̃_i⟩ in the form

|e(α), f(α, β)⟩ = (1, α) ⊗ (R, β_1 R, …, β_{n−k−1} R, R_1, …, R_k)  (α, β_i ∈ ℂ),  (17)
where R and the R_i are polynomials of degree at most k in α and of first degree in the βs (for brevity we omit the arguments of R and R_i in (17)). Moreover, the highest power of α in R is exactly k (the leading term comes from the αδ_ij part of the system (16)).
Let us now show that the polynomials R, β_i R (i = 1, …, n − k − 1), and R_i (i = 1, …, k) are linearly independent. For this purpose, assume that only m < n of them are linearly independent. Clearly m ≥ n − k, as the monomials 1 and β_i (i = 1, …, n − k − 1) are linearly independent by the very definition, and therefore we can write m = n − k + l with 0 ≤ l < k. Consequently, there exist k − l vectors |ξ̃_i⟩ ∈ ℂⁿ orthogonal to the subspace spanned by |f(α, β)⟩ (α, β_i ∈ ℂ).
On the other hand, since R is of degree k in α and the above m polynomials are of degree at most k in α, they, together with the n − k polynomials αR(α, β) and αβ_i R(α, β) (i = 1, …, n − k − 1), constitute a set of 2(n − k) + l linearly independent polynomials. This implies that the vectors (17) span an at least [2(n − k) + l]-dimensional subspace of Ṽ⊥. In the worst-case scenario, i.e., when this dimension is exactly 2(n − k) + l, there are k − l linearly independent vectors |ω̃_i⟩ ∈ Ṽ⊥ orthogonal to all vectors (17). Then, following the same reasoning as in the proof of Lemma 1, we can show that there are product vectors in Ṽ, which contradicts the fact that Ṽ is a CES.
In conclusion, all the polynomials R, β_i R (i = 1, …, n − k − 1), and R_i (i = 1, …, k) are linearly independent. As a result, these n polynomials, together with the n − k polynomials αR and αβ_i R (i = 1, …, n − k − 1), constitute a set of 2n − k linearly independent functions, and therefore the continuous set of product vectors |e(α), f(α, β)⟩ in Eq. (17) spans Ṽ⊥. Also, for the same reason as before, the partially conjugated vectors

(1, α*) ⊗ (R, β_1 R, …, β_{n−k−1} R, R_1, …, R_k)  (18)

span H_{2,n}. Finally, putting A = (Ã^{−1})†, we see that the vectors (14) span V⊥, while their PCs span H_{2,n}. This completes the proof.
The above lemmas, together with previously known results, allow us to prove the following theorem.

Theorem 1. Let W be a decomposable witness acting on H_{2,n}. Then the following statements are equivalent:

(i) W = Q^Γ, where Q ≥ 0 and supp(Q) is a CES in H_{2,n};
(ii) P_W spans ℂ² ⊗ ℂⁿ;
(iii) W is optimal.
Let us illustrate the above discussion with a simple example. Consider a witness W = Q^Γ with Q supported on the (n − 1)-dimensional subspace V of ℂ² ⊗ ℂⁿ spanned by the vectors

|Ψ_i⟩ = (1/√2)(|0, i⟩ − |1, i − 1⟩)  (i = 1, …, n − 1).  (19)

The subspace V does not contain any product vector because, as one can check directly, there is no product vector orthogonal to

V⊥ = span{ |00⟩, |1, n − 1⟩, (1/√2)(|0, i⟩ + |1, i − 1⟩) (i = 1, …, n − 1) }.

The separable vectors spanning V⊥ are then given by (12) and, as already mentioned, they span ℂ² ⊗ ℂⁿ.
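This example is easy to verify numerically. The following sketch (code and helper names ours, with n = 4) checks that the vectors (12) are orthogonal to V and that their partial conjugations (13) span ℂ² ⊗ ℂⁿ.

```python
import numpy as np

n = 4

def ket(x, y):
    """|x, y> in C^2 (x) C^n."""
    v = np.zeros(2 * n, dtype=complex)
    v[x * n + y] = 1.0
    return v

# V spanned by |Psi_i> = (|0, i> - |1, i-1>)/sqrt(2), Eq. (19)
basis_V = [(ket(0, i) - ket(1, i - 1)) / np.sqrt(2) for i in range(1, n)]

rng = np.random.default_rng(3)
alphas = rng.normal(size=10) + 1j * rng.normal(size=10)
prod = [np.kron(np.array([1, a]), a ** np.arange(n)) for a in alphas]          # Eq. (12)
pc = [np.kron(np.array([1, np.conj(a)]), a ** np.arange(n)) for a in alphas]   # Eq. (13)

max_overlap = max(abs(np.vdot(Psi, v)) for Psi in basis_V for v in prod)
rank_pc = np.linalg.matrix_rank(np.array(pc))   # expected: 2n = 8
```

The orthogonality is exact (α^i − α · α^{i−1} = 0 componentwise), and a handful of sampled α already gives the full rank 2n.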
Decomposable witnesses acting on H_3
Here we show that the simple characterization proved in Theorem 1 for ℂ² ⊗ ℂⁿ decomposable witnesses does not hold for some witnesses acting already on ℂ³ ⊗ ℂ³. Precisely, we will see that for witnesses (4) with r(Q) = 1, 2 the analog of the above theorem still holds, while there are witnesses with r(Q) = 3, 4 such that the separable vectors from the corresponding P_W do not span H_3.

Let us start from the case r(Q) = 1. Here we have a slightly more general fact (see also Ref. [22] for a proof of optimality via extremality). Then we will consider the case r(Q) = 2.
Lemma 3. Let W = |ψ⟩⟨ψ|^Γ, where |ψ⟩ is an entangled pure state from H_m. Then the statements (i), (ii), and (iii) from Theorem 1 (accordingly reformulated) are equivalent.
Proof. As previously, implications (ii) ⇒ (iii) and (iii) ⇒ (i) follow from Ref. [12]. Below we prove that (i) implies (ii).
The Schmidt decomposition of |ψ reads
|ψ⟩ = Σ_{i=0}^{s−1} √μ_i |ii⟩,  (20)
where μ_i ≥ 0 and s ≤ m denotes the Schmidt rank of |ψ⟩. Without any loss of generality we can assume that s = m. Then, by a local full-rank transformation, we can bring |ψ⟩ to the maximally entangled state |ψ⁺_m⟩. The product vectors orthogonal to the latter are of the form |e⟩|f⟩, where |e⟩ is an arbitrary vector from ℂ^m and |f⟩ ∈ ℂ^m is any vector orthogonal to |e*⟩. This class of vectors, after PC, spans ℂ^m ⊗ ℂ^m (cf. Ref. [38]).
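A quick numerical sketch of this last step (our own code, m = 3): product vectors with the second factor orthogonal to the conjugate of the first are orthogonal to |ψ⁺⟩, and their partial conjugations span all of ℂ³ ⊗ ℂ³.

```python
import numpy as np

m = 3
psi_plus = np.eye(m).reshape(m * m).astype(complex) / np.sqrt(m)   # sum_i |ii>/sqrt(m)

rng = np.random.default_rng(4)
overlaps, pc = [], []
for _ in range(20):
    e = rng.normal(size=m) + 1j * rng.normal(size=m)
    f = rng.normal(size=m) + 1j * rng.normal(size=m)
    f = f - e.conj() * np.vdot(e.conj(), f) / np.vdot(e, e)   # project out |e*>
    overlaps.append(abs(np.vdot(psi_plus, np.kron(e, f))))    # <psi+|e, f> = <e*|f>/sqrt(m)
    pc.append(np.kron(e.conj(), f))                           # partial conjugation |e*, f>

max_overlap = max(overlaps)
rank_pc = np.linalg.matrix_rank(np.array(pc))   # expected: m*m = 9
```

The orthogonality follows from ⟨ψ⁺|e, f⟩ = Σ_i e_i f_i /√m = ⟨e*|f⟩/√m, which the projection step sets to zero.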
Note that, with a bit more effort, the above lemma can be generalized to any witness W = |ψ⟩⟨ψ|^Γ acting on ℂ^m ⊗ ℂⁿ.

Lemma 4. Let V be a CES of H_3 with dim V = 2. Then the product vectors from V⊥, when partially conjugated, span H_3.

Proof. Let |Ψ_i⟩ (i = 1, 2) be two linearly independent vectors spanning V. Clearly, we can assume that at least one of them, say |Ψ_1⟩, is of Schmidt rank two. By local invertible operations it can be brought to |Ψ̃_1⟩ = |00⟩ + |11⟩.
Let us now look for product vectors orthogonal to the (correspondingly transformed) subspace, of the form (1, α, β) ⊗ (f_0, f_1, f_2) (α, β ∈ ℂ). From the orthogonality conditions with respect to the transformed vectors |Ψ̃_i⟩ (i = 1, 2) one infers that they take the form

(1, α, β) ⊗ (−αR(α, β), R(α, β), R_1(α, β)),  (21)

with R and R_1 being polynomials in α and β. Let us now show that the three polynomials R, αR, and R_1 are linearly independent (the first two already are). To this end we follow the approach used in the previous lemmas. Assume that R_1 is linearly dependent on R and αR. Then there exists a vector |ξ⟩ ∈ ℂ³ orthogonal to every (−αR(α, β), R(α, β), R_1(α, β)) (α, β ∈ ℂ), and consequently any vector |h⟩|ξ⟩ with arbitrary |h⟩ ∈ ℂ³ is orthogonal to the vectors (21). On the other hand, one immediately sees that the latter span a five-dimensional subspace of V⊥. Since dim V⊥ = 7, there exist two vectors |ω_i⟩ ∈ V⊥ (i = 1, 2) orthogonal to all vectors in (21). It is then clear that among the two-parameter class |h⟩|ξ⟩ there exists at least one vector orthogonal to both |ω_i⟩ (i = 1, 2), implying the existence of a product vector in V. This, however, contradicts the assumption that V is a CES.

Since R, αR, and R_1 are thus linearly independent, the partially conjugated vectors

(1, α*, β*) ⊗ (−αR(α, β), R(α, β), R_1(α, β))  (α, β ∈ ℂ)  (22)

certainly span H_3.
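The following worked instance (our own example, not from the text) makes Lemma 4 concrete: for V = span{|00⟩ + |11⟩, |01⟩ + |12⟩}, which is a two-dimensional CES (every nonzero combination has Schmidt rank 2), solving the orthogonality conditions for (1, a, b) ⊗ f gives f proportional to (a², −a, 1), independently of b, i.e., exactly the form (21) with R = −a, R_1 = 1.

```python
import numpy as np

def ket3(x, y):
    v = np.zeros(9, dtype=complex)
    v[3 * x + y] = 1.0
    return v

basis_V = [ket3(0, 0) + ket3(1, 1), ket3(0, 1) + ket3(1, 2)]

rng = np.random.default_rng(5)
overlaps, pc = [], []
for _ in range(15):
    a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
    e = np.array([1, a, b])
    f = np.array([a ** 2, -a, 1])          # solves <Psi_i|e, f> = 0 for both i
    overlaps.append(max(abs(np.vdot(Psi, np.kron(e, f))) for Psi in basis_V))
    pc.append(np.kron(e.conj(), f))        # partial conjugation of the first factor

max_overlap = max(overlaps)
rank_pc = np.linalg.matrix_rank(np.array(pc))   # expected: 9, i.e., all of H_3
```

The partially conjugated family indeed spans the whole nine-dimensional space, as the lemma asserts.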
Based on Lemma 3 and Lemma 4, we can now formulate an analog of Theorem 1 for some of the DEWs acting on H_3.

Theorem 2. Let W be a decomposable witness acting on H_3. Then the following conditions are equivalent:

(i) W = Q^Γ with Q ≥ 0 such that r(Q) = 1, 2 and supp(Q) is a CES;
(ii) P_W spans H_3;
(iii) W is optimal.
Proof. The implications (ii) ⇒ (iii) and (iii) ⇒ (i) were proven in Ref. [12]. The implication (i) ⇒ (ii) follows from Lemmas 3 and 4.
Still, the cases dim V = 3, 4 remain untouched. As we will see shortly, it is possible to provide examples of three- and four-dimensional CESs supported on H_3 such that their complements V⊥ contain product vectors which, when partially conjugated, do not span H_3. Since five-dimensional subspaces of H_3 generically contain six product vectors (cf. [33,35]), the existence of such three-dimensional CESs, for which the product vectors from their complements do not span H_3 after partial conjugation, is surprising and interesting. It implies that there are DEWs (4) with r(Q) = 3 such that P_W, even if containing continuous classes of product vectors, does not span H_3. Among such EWs one may look for analogs of the aforementioned Choi-like witnesses (optimal witnesses whose P_W does not span the corresponding Hilbert space), already known to exist among the indecomposable EWs [23]. Still, however, we cannot prove their optimality. On the other hand, it is also possible to provide examples of witnesses (4) (thus also CESs) with r(Q) = 3, 4 such that their P_W do span H_3.
In the first, three-dimensional case let us consider the subspace V_1 spanned by the following (unnormalized) vectors:

|01⟩ + |10⟩,  |02⟩ + |20⟩,  |1⟩(a|1⟩ + b|2⟩) + |2⟩(a_2|1⟩ + b_2|2⟩).  (23)

For complex a, b, a_2, b_2 satisfying ab_2 ≠ a_2 b, V_1 does not contain any product vector. Then, under the conditions (a_2 + b)² = 4ab_2 and b_2 ≠ 0, one shows that all the product vectors in V_1⊥ are of the form

(1, α, λα) ⊗ (1, −α, −λα)  (24)

and

(0, 1, α) ⊗ (0, b + b_2 α, −a − a_2 α),  (25)

where λ = −(b + a_2)/2b_2. A direct check shows that both classes, after partial conjugation, span only a seven-dimensional subspace of H_3. Finally, since there are no PPT entangled states of rank three acting on H_3, any positive Q with r(Q) = 3 supported on this subspace has to be NPT, thus giving rise to a proper witness.
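These claims can be checked numerically. In the sketch below (the particular parameter values are our own choice) we take a = b = 1, a_2 = 3, b_2 = 4, which satisfy ab_2 ≠ a_2 b and (a_2 + b)² = 4ab_2, so that λ = −1/2; the code verifies the orthogonality of both classes (24) and (25) to V_1 and that the partially conjugated vectors span only a seven-dimensional subspace.

```python
import numpy as np

a, b, a2, b2 = 1.0, 1.0, 3.0, 4.0
lam = -(b + a2) / (2 * b2)              # = -1/2 for these values

def ket3(x, y):
    v = np.zeros(9, dtype=complex)
    v[3 * x + y] = 1.0
    return v

basis_V1 = [ket3(0, 1) + ket3(1, 0),
            ket3(0, 2) + ket3(2, 0),
            a * ket3(1, 1) + b * ket3(1, 2) + a2 * ket3(2, 1) + b2 * ket3(2, 2)]

rng = np.random.default_rng(6)
alphas = rng.normal(size=12) + 1j * rng.normal(size=12)

prod = ([np.kron(np.array([1, al, lam * al]), np.array([1, -al, -lam * al]))
         for al in alphas]
        + [np.kron(np.array([0, 1, al]), np.array([0, b + b2 * al, -a - a2 * al]))
           for al in alphas])
pc = ([np.kron(np.array([1, al, lam * al]).conj(), np.array([1, -al, -lam * al]))
       for al in alphas]
      + [np.kron(np.array([0, 1, al]).conj(), np.array([0, b + b2 * al, -a - a2 * al]))
         for al in alphas])

max_overlap = max(abs(np.vdot(Psi, v)) for Psi in basis_V1 for v in prod)
rank_pc = np.linalg.matrix_rank(np.array(pc))   # expected: 7 < 9
```

The rank deficit (7 instead of 9) is exactly the failure of P_W to span H_3 discussed above.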
In the four-dimensional case, the problem of the existence of EWs (4) for which P_W does not span the corresponding Hilbert space is closely related to the results of [33,35]. In particular, five-dimensional subspaces of H_3 generically contain six product vectors (five of them linearly independent), which obviously cannot span H_3 after partial conjugation. To provide a particular example of an EW (4) with r(Q) = 4, one may consider a CES orthogonal to some unextendible product basis (UPB)¶ [36] (see also Ref. [34]). To this end, let us take one of the five-element UPBs in H_3 given in [36], called PYRAMID:
|ψ_i⟩ = |φ_i⟩|φ_{2i mod 5}⟩  (i = 0, …, 4)  (26)

with

|φ_i⟩ = N [cos(2πi/5)|0⟩ + sin(2πi/5)|1⟩ + h_+|2⟩],  (27)

where h_± = (1/2)√(√5 ± 1) and N = 2/√(5 + √5). The subspace orthogonal to these vectors is spanned by the orthogonal vectors of Schmidt rank two given by

η|10⟩ + (η|0⟩ + 2h_−²|2⟩)|1⟩,   −η|10⟩ + |1⟩(η|0⟩ + 2h_−²|2⟩),
(−h_−|0⟩ + 4h_−²|2⟩)|0⟩ − h_−|11⟩,   |0⟩(h_−|0⟩ + 4h_−²|2⟩) + h_−|11⟩,  (28)
where η = 1/2h_+. Taking a convex combination of the projectors onto these vectors, denoted P_i (i = 1, 2, 3, 4), with equal weights, we obviously get a PPT entangled state. However, by appropriately changing the weights we get a positive operator Q which is NPT. For instance, we can consider the following one-parameter family:

Q(r) = r(P_1 + P_2) + (1/2)(1 − 2r)(P_3 + P_4)  (0 ≤ r ≤ 1/2).  (29)

It is easy to check that Q(r) is NPT except for r = 1/4.
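The PYRAMID vectors (26)-(27) can be checked directly. The snippet below (our own code) verifies that the five product states are normalized and pairwise orthogonal, so that their span leaves a four-dimensional orthogonal complement in H_3.

```python
import numpy as np

h_plus = 0.5 * np.sqrt(np.sqrt(5) + 1)
N = 2 / np.sqrt(5 + np.sqrt(5))

# |phi_i> = N [cos(2 pi i/5)|0> + sin(2 pi i/5)|1> + h_+|2>], Eq. (27)
phi = [N * np.array([np.cos(2 * np.pi * i / 5),
                     np.sin(2 * np.pi * i / 5),
                     h_plus])
       for i in range(5)]

# |psi_i> = |phi_i>|phi_{2i mod 5}>, Eq. (26)
psi = [np.kron(phi[i], phi[(2 * i) % 5]) for i in range(5)]

gram = np.array([[np.dot(psi[i], psi[j]) for j in range(5)] for i in range(5)])
rank_span = np.linalg.matrix_rank(np.array(psi))   # expected: 5
```

Orthogonality works because ⟨φ_i|φ_j⟩ = N²[cos(2π(i − j)/5) + h_+²] vanishes precisely when |i − j| mod 5 ∈ {2, 3}, and for any i ≠ j at least one of the two tensor factors falls in that case.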
In spite of the above examples, it is still possible to provide three- and four-dimensional CESs such that the product vectors in their complements do span H_3 after PC. Note that for three-dimensional CESs of H_3 this is generically the case. Let us then consider the following subspace:

V_2 = span{ |01⟩ − |10⟩, |02⟩ − |20⟩, |12⟩ − |21⟩, |02⟩ + |20⟩ − |11⟩ }.  (30)

Note that V_2 contains the antisymmetric subspace of H_3, while the fourth vector spanning it (which is of Schmidt rank three) belongs to the symmetric subspace of H_3. It is clear that V_2 is supported on H_3 and does not contain any product vector. To see this explicitly, assume that some |e, f⟩ can be written as a linear combination of these vectors. Application of the swap operator to |e, f⟩ gives |f, e⟩. On the other hand, the swap changes the sign of the first three vectors spanning V_2, so one sees that |02⟩ + |20⟩ − |11⟩ would have to be proportional to |e, f⟩ + |f, e⟩, which contradicts the fact that it has Schmidt rank three.
It is now easy to see that the product vectors

(1, α, α²/2) ⊗ (1, α, α²/2)  (α ∈ ℂ)  (31)

are orthogonal to V_2, and their PCs span H_3.

¶ Following [36], we say that a set of product vectors from some product Hilbert space H is an unextendible product basis if the vectors are orthogonal and there does not exist any other product vector in H orthogonal to all of them. Dropping the orthogonality condition, we get a nonorthogonal UPB.
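A quick numerical confirmation (our own code): the symmetric vectors (31) are orthogonal to V_2, and their partial conjugations span all of ℂ³ ⊗ ℂ³.

```python
import numpy as np

def ket3(x, y):
    v = np.zeros(9, dtype=complex)
    v[3 * x + y] = 1.0
    return v

# V_2 of Eq. (30): antisymmetric subspace plus one symmetric Schmidt-rank-3 vector
basis_V2 = [ket3(0, 1) - ket3(1, 0),
            ket3(0, 2) - ket3(2, 0),
            ket3(1, 2) - ket3(2, 1),
            ket3(0, 2) + ket3(2, 0) - ket3(1, 1)]

rng = np.random.default_rng(7)
alphas = rng.normal(size=12) + 1j * rng.normal(size=12)
prod = [np.kron(np.array([1, al, al ** 2 / 2]), np.array([1, al, al ** 2 / 2]))
        for al in alphas]                                       # Eq. (31)
pc = [np.kron(np.array([1, al, al ** 2 / 2]).conj(), np.array([1, al, al ** 2 / 2]))
      for al in alphas]                                         # partial conjugation

max_overlap = max(abs(np.vdot(Psi, v)) for Psi in basis_V2 for v in prod)
rank_pc = np.linalg.matrix_rank(np.array(pc))   # expected: 9
```

Symmetry kills the overlaps with the antisymmetric vectors, and α²/2 + α²/2 − α·α = 0 takes care of the fourth one; the nine monomials ᾱ^j α^k (j, k = 0, 1, 2) appearing in the PCs are linearly independent, hence the full rank.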
Conclusion
Let us briefly summarize the obtained results and sketch possible lines of further research. Entanglement witnesses provide one of the most relevant tools in the theory of entanglement, and their characterization is therefore of great interest. In this note we have focused on the simpler case of decomposable entanglement witnesses and investigated a couple of issues related to the notion of optimality. In the ℂ² ⊗ ℂⁿ case a more complete characterization can be given to DEWs. Together with Ref. [12], our results show that a given DEW W is optimal iff the corresponding P_W spans H_{2,n}, and that the latter holds iff W = Q^Γ with positive Q supported on some CES. Interestingly, such a transparent characterization fails already for DEWs acting on H_3. Precisely, although for all such DEWs with r(Q) = 1, 2 the above equivalences still hold, there exist DEWs with r(Q) = 3, 4 such that the product vectors from the corresponding P_W do not span H_3. This means that either not all witnesses of the form (4) with Q supported on a CES are optimal, or that optimality of a DEW W does not necessarily mean that its P_W spans the corresponding Hilbert space.
Obviously, the obtained results do not complete the characterization of DEWs, even in the two-qutrit case. In particular, a complete analysis of the cases r(Q) = 3, 4 is missing. Even if for r(Q) = 3 the P_W of a DEW (4) generically spans H_3, it is possible to find examples of DEWs, such as the one provided above, for which this is not the case. One task would be to characterize such witnesses and check whether some of them are optimal. This would prove that also in the case of DEWs optimality does not imply that P_W spans the Hilbert space on which W acts. Let us recall that the existence of indecomposable EWs having this property is already known [23,18,21].
On the other hand, in the case of r(Q) = 4 it follows from, e.g., [33,35] that almost all four-dimensional subspaces of H_3 have only six product vectors in their complement, meaning that generic P_W of DEWs (4) with r(Q) = 4 do not span H_3. Nevertheless, there exist CESs, for instance the one presented above, such that the product vectors in their complement do span H_3. Again, it seems interesting to characterize these CESs.
Finally, one could ask the same questions for higher-dimensional Hilbert spaces H_{m,n}, and a similar analysis is missing in the case of indecomposable entanglement witnesses.
References

[1] Horodecki M, Horodecki P and Horodecki R 1996 Phys. Lett. A 223 1
[2] Terhal B M 2000 Phys. Lett. A 271 319
[3] Gühne O and Tóth G 2009 Phys. Rep. 474 1
[4] Horodecki M, Horodecki P and Horodecki R 2001 Phys. Lett. A 283 1
[5] Barbieri M, De Martini F, Di Nepi G, Mataloni P, D'Ariano G M and Macchiavello C 2003 Phys. Rev. Lett. 91 227901
[6] Bourennane M, Eibl M, Kurtsiefer C, Gaertner S, Weinfurter H, Gühne O, Hyllus P, Bruß D, Lewenstein M and Sanpera A 2004 Phys. Rev. Lett. 92 087902
[7] Brandão F G S L and Vianna R O 2006 Int. J. Quant. Inf. 4 331
[8] Brandão F G S L 2005 Phys. Rev. A 72 022310
[9] Eisert J, Brandão F G S L and Audenaert K M R 2007 New J. Phys. 9 46
[10] Gühne O, Reimpell M and Werner R F 2007 Phys. Rev. Lett. 98 110502
[11] Gühne O, Reimpell M and Werner R F 2008 Phys. Rev. A 77 052317
[12] Lewenstein M, Kraus B, Cirac J I and Horodecki P 2000 Phys. Rev. A 62 052310
[13] Lewenstein M, Kraus B, Horodecki P and Cirac J I 2001 Phys. Rev. A 63 044304
[14] Acín A, Bruss D, Lewenstein M and Sanpera A 2001 Phys. Rev. Lett. 87 040401
[15] Tóth G and Gühne O 2005 Phys. Rev. Lett. 94 060501
[16] Tóth G 2005 Phys. Rev. A 71 010301(R)
[17] Sarbicki G 2008 J. Phys. A 41 375303
[18] Korbicz J K, Almeida M L, Bae J, Lewenstein M and Acín A 2008 Phys. Rev. A 78 062105
[19] Sperling J and Vogel W 2009 Phys. Rev. A 79 022318
[20] Chruściński D and Kossakowski A 2009 Comm. Math. Phys. 290 1051
[21] Sarbicki G, arXiv:0905.0778v2
[22] Skowronek L, Størmer E and Życzkowski K 2009 J. Math. Phys. 50 062106
[23] Choi M-D 1975 Linear Algebra Appl. 12 95
[24] Choi M-D and Lam T-Y 1977 Math. Ann. 231 1
[25] Werner R F 1989 Phys. Rev. A 40 4277
[26] Taylor A E and Lay D C 1980 Introduction to Functional Analysis 2nd edn (New York: John Wiley & Sons)
[27] Choi M-D 1975 Linear Algebra Appl. 10 285
[28] Jamiołkowski A 1972 Rep. Math. Phys. 3 275
[29] Woronowicz S L 1976 Rep. Math. Phys. 10 165
[30] Wallach N 2002 Contemp. Math. 305 291
[31] Parthasarathy K R 2004 Proc. Indian Acad. Sci. (Math. Sci.) 114 365
[32] Cubitt T, Montanaro A and Winter A 2008 J. Math. Phys. 49 022107
[33] Walgate J and Scott A J 2008 J. Phys. A 41 375305
[34] Leinass J M, Myrheim J and Sollid P Ø 2010 Phys. Rev. A 81 062330
[35] Leinass J M, Myrheim J and Sollid P Ø 2010 Phys. Rev. A 81 062329
[36] Bennett C H, DiVincenzo D P, Mor T, Shor P W, Smolin J A and Terhal B M 1999 Phys. Rev. Lett. 82 5385
[37] Sanpera A, Bruß D and Lewenstein M 2001 Phys. Rev. A 63 050301
[38] Breuer H-P 2006 Phys. Rev. Lett. 97 080501